url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/10433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10433/comments | https://api.github.com/repos/huggingface/transformers/issues/10433/events | https://github.com/huggingface/transformers/issues/10433 | 817,820,447 | MDU6SXNzdWU4MTc4MjA0NDc= | 10,433 | About the speed when return_dict is set to True | {
"login": "ridiculouz",
"id": 56992804,
"node_id": "MDQ6VXNlcjU2OTkyODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/56992804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ridiculouz",
"html_url": "https://github.com/ridiculouz",
"followers_url": "https://api.github.com/users/ridiculouz/followers",
"following_url": "https://api.github.com/users/ridiculouz/following{/other_user}",
"gists_url": "https://api.github.com/users/ridiculouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ridiculouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ridiculouz/subscriptions",
"organizations_url": "https://api.github.com/users/ridiculouz/orgs",
"repos_url": "https://api.github.com/users/ridiculouz/repos",
"events_url": "https://api.github.com/users/ridiculouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ridiculouz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, it shouldn't!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | Hi!
I just want to know whether or not the forward pass of models like roberta or bert is slower when return_dict=True. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10433/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10432/comments | https://api.github.com/repos/huggingface/transformers/issues/10432/events | https://github.com/huggingface/transformers/issues/10432 | 817,804,871 | MDU6SXNzdWU4MTc4MDQ4NzE= | 10,432 | Adding Longformer Encoder Decoder support for T5 | {
"login": "huu4ontocord",
"id": 8900094,
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huu4ontocord",
"html_url": "https://github.com/huu4ontocord",
"followers_url": "https://api.github.com/users/huu4ontocord/followers",
"following_url": "https://api.github.com/users/huu4ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/huu4ontocord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huu4ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huu4ontocord/subscriptions",
"organizations_url": "https://api.github.com/users/huu4ontocord/orgs",
"repos_url": "https://api.github.com/users/huu4ontocord/repos",
"events_url": "https://api.github.com/users/huu4ontocord/events{/privacy}",
"received_events_url": "https://api.github.com/users/huu4ontocord/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"So it looks like using sliding chunk mult is not the way to go. I can't figure out what's happening to the attn_scores and how it is shaped to be able to apply the position bias to it.\r\n\r\n\r\n```\r\n # POSITION_BIAS here: stack 2*one_sided_attn_window_size+1 worth of bias in the last dimension\r\n position_bias2 = self._sliding_chunks_query_key_matmul(\r\n position_bias.new_ones(size=position_bias.size()), position_bias, self.one_sided_attn_window_size\r\n )\r\n```",
"Thanks, @ontocord! It would be great if we can get an LED based on T5. \r\nWe gave it a try but the PR is still WIP. Check here: https://github.com/allenai/longformer/pull/149\r\nIIRC, the key idea is in this function: https://github.com/allenai/longformer/blob/t5/longformer/longformer.py#L144-L157\r\nIf this is not helpful enough, please let me know and I can explain it in more detail later. \r\n",
"@ibeltagy, what do you think of something like this? I think it works!!\r\nThe relative position tensor is over the window_overlap (128), and not the attention_window (512)\r\n\r\n```\r\n relative_position = torch.tensor([[i-window_overlap for i in range(2*window_overlap+1)]])\r\n relative_position_bucket = self._relative_position_bucket(\r\n relative_position, # shape (query_length, key_length)\r\n bidirectional=True,\r\n num_buckets=self.relative_attention_num_buckets,\r\n )\r\n relative_position_bucket = relative_position_bucket.to(self.relative_attention_bias.weight.device)\r\n values = self.relative_attention_bias(relative_position_bucket) # shape (query_length, key_length, num_heads)\r\n position_bias = values.permute([0, 2, 1]).unsqueeze(0) # shape (1, num_heads, query_length, key_length)\r\n\r\n```\r\n\r\n\r\nAnd the test:\r\n```\r\n from transformers import AutoTokenizer, pipelines\r\n model = T5ForConditionalGeneration.from_pretrained('t5-small-long')\r\n tokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\r\n tokenizer.model_max_length=1000000000\r\n #print (tokenizer)\r\n p = pipelines.pipeline(\"text2text-generation\", model=model, tokenizer=tokenizer, device=0)\r\n print (p(\"\"\"question: Where was Lincoln born? context: \r\nAbraham Lincoln (/ˈlɪŋkən/; February 12, 1809 – April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.\r\n\r\nLincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the Kansas–Nebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.\r\n\r\nAs the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called \"Copperheads\") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, democracy and freedom.\r\n\"\"\"))\r\n\r\n```\r\n\r\n\r\n[{'generated_text': 'Indiana'}]\r\n\r\n...\r\nBut asking the question in t5-long: Who hated Lincoln? 
I get: \r\n\r\n[{'generated_text': 'anti-war Democrats (called \"Copperheads\") despised him, and irre'}]\r\n\r\n\r\nBut asking in t5-small, I get:\r\n\r\n{'generated_text': 'Anti-war Democrats'}]\r\n\r\nI think there's something going on with the relative_position still (maybe in the extra column?)\r\n\r\nI've updated the code on my repository so you can see.\r\n\r\n\r\n",
"> ` relative_position = torch.tensor([[i-window_overlap for i in range(2*window_overlap+1)]])`\r\n> The relative position tensor is over the window_overlap (128), and not the attention_window (512)\r\n\r\nFor an `attention_window = 512`, the relative positions need to be from -256 to 256. What you have here is -128 to 128.\r\nI am not sure how the -128 to 128 works, it will give you a tensor with dimensions that don't fit here `attn_scores += diagonal_mask + position_bias2`. \r\n\r\n\r\n> And the test:\r\n\r\nI would recommend a unit test with input seqlen < 512, then assert that the hidden states you get from `t5-small-long` perfectly match those from `t5-small`. This helps with debugging because if hidden stats don't match, you can step through both models to find the discrepancy.\r\n\r\n\r\n\r\n",
"@ibeltagy , my mistake. Yes the overlap window is 256, not 128. I meant the code should refer to window_overlap, which made it work. The code you referenced in https://github.com/allenai/longformer/blob/t5/longformer/longformer.py#L144-L157 refers to the whole attention_window*2 which would cause issues.\r\n\r\n` relative_position = torch.tensor([[i-self.attention_window for i in range(2*self.attention_window+1)]])`\r\n\r\nThere are still bugs, so I'll do the step through of each hidden_state per your suggestion. Thanks again!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | # 🚀 Adding Longformer Encoder Decoder support for T5
LED is great for doing long-form encoder-decoder work on documents, but it is based only on BART. T5 has certain advantages, such as being designed for multiple tasks (QA, summarization, etc.) and having relative positioning.
T5 uses relative positioning, which maps well to doing sliding chunks and should not require additional training to learn new relative position buckets. Adding LED support will permit any already trained T5 model to be used efficiently on long documents.
I've started incorporating LED features into the encoder portion of T5 but have some questions about the position_bias and implementation details of t5 and LED. With some help understanding how sliding window multiplication works in LED and how relative position is organized, I think I can finish the implementation.
In particular, T5 passes a position_bias that, along with the mask, is added in each layer. This bias is added to each score before performing a softmax.
I've surmised that I can add the position_bias to the mask in the longformer self attention, and then that should mostly be the same as the original t5 self attention.
T5's position_bias is in the shape of (batch_size, n_heads, seq_length, key_length). But the mask used for LED is in the form of (batch_size, seq_length), which is then mapped to n_heads and then through sliding multiplication to stack the mask. I permute the position_bias, and then run it through sliding multiplication to stack the bias so that the position bias can be added to the mask.
I tried a test with an attention_window size of 512 and exactly 512 tokens, which should make it equivalent to t5 self attention. But something seems to be off.
The encoder produces a tensor that surprisingly can be decoded by the decoder, which is encouraging, but it's not producing an answer for QA, for example.
I noticed that t5 doesn't use sqrt(key_value_proj_dim) normalization, and has an extra mapping through tensor o. I tried with and without the sqrt but it didn't work either way.
Am I getting something mixed up with the position_bias?
@ibeltagy @patrickvonplaten @sgugger any help would be much appreciated. Happy to contribute this as a PR when completed.
Current code: https://github.com/ontocord/t5_led/blob/main/t5_ext.py
relevant portion:
```
def forward_long(
self,
hidden_states,
mask=None,
position_bias=None,
layer_head_mask=None,
is_index_masked=None,
is_index_global_attn=None,
is_global_attn=None,
output_attentions=False,
compute_relative_attention_bias=False,
query_states = None,
query_mask = None,
layer_id=0,
):
"""
:class:`LEDEncoderSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to
`attention_window` happens in :meth:`LEDEncoderModel.forward` to avoid redoing the padding on each layer.
The `mask` is changed in :meth:`LEDEncoderModel.forward` from 0, 1, 2 to:
* -10000: no attention
* 0: local attention
* +10000: global attention
"""
batch_size, seq_length = hidden_states.shape[:2]
if position_bias is None:
if not self.has_relative_attention_bias or not compute_relative_attention_bias:
position_bias = torch.zeros(
(1, self.n_heads, seq_length, seq_length), device=hidden_states.device, dtype=hidden_states.dtype
)
else:
position_bias = self.compute_bias(seq_length, seq_length, False) # (batch_size, n_heads, seq_length, key_length)
position_bias = position_bias.permute(0, 2, 1, 3)
print ("ccompute bias 2", position_bias.size())
hidden_states = hidden_states.transpose(0, 1)
if query_states is None:
query_states = hidden_states
# project hidden states
if query_mask is not None:
query_vectors = self.q(query_states) * query_mask.unsqueeze(-1).expand(-1, -1, query.shape[-1])
else:
query_vectors = self.q(query_states)
key_vectors = self.k(hidden_states)
value_vectors = self.v(hidden_states)
seq_len, batch_size, embed_dim = hidden_states.size()
assert (
embed_dim == self.embed_dim
), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}"
# normalize query - T5 does not do the sqrt???
query_vectors /= math.sqrt(self.key_value_proj_dim)
query_vectors = query_vectors.view(seq_len, batch_size, self.n_heads, self.key_value_proj_dim).transpose(0, 1)
key_vectors = key_vectors.view(seq_len, batch_size, self.n_heads, self.key_value_proj_dim).transpose(0, 1)
attn_scores = self._sliding_chunks_query_key_matmul(
query_vectors, key_vectors, self.one_sided_attn_window_size
)
# values to pad for attention probs
remove_from_windowed_mask = (mask != 0)[:, :, None, None]
# cast to fp32/fp16 then replace 1's with -inf
float_mask = remove_from_windowed_mask.type_as(query_vectors).masked_fill(
remove_from_windowed_mask, -10000.0
)
# POSITION_BIAS here: stack 2*one_sided_attn_window_size+1 worth of bias in the last dimension
position_bias2 = self._sliding_chunks_query_key_matmul(
position_bias.new_ones(size=position_bias.size()), position_bias, self.one_sided_attn_window_size
)
# diagonal mask with zeros everywhere and -inf inplace of padding
diagonal_mask = self._sliding_chunks_query_key_matmul(
float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size
)
# pad local attention probs and add the position bias
attn_scores += diagonal_mask + position_bias2
assert list(attn_scores.size()) == [
batch_size,
seq_len,
self.n_heads,
self.one_sided_attn_window_size * 2 + 1,
], f"local_attn_probs should be of size ({batch_size}, {seq_len}, {self.n_heads}, {self.one_sided_attn_window_size * 2 + 1}), but is of size {attn_scores.size()}"
# compute local attention probs from global attention keys and contact over window dim
if is_global_attn:
# compute global attn indices required through out forward fn
(
max_num_global_attn_indices,
is_index_global_attn_nonzero,
is_local_index_global_attn_nonzero,
is_local_index_no_global_attn_nonzero,
) = self._get_global_attn_indices(is_index_global_attn)
# calculate global attn probs from global key
global_key_attn_scores = self._concat_with_global_key_attn_probs(
query_vectors=query_vectors,
key_vectors=key_vectors,
max_num_global_attn_indices=max_num_global_attn_indices,
is_index_global_attn_nonzero=is_index_global_attn_nonzero,
is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,
is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero,
)
# concat to local_attn_probs
# (batch_size, seq_len, n_heads, extra attention count + 2*window+1)
attn_scores = torch.cat((global_key_attn_scores, attn_scores), dim=-1)
# free memory
del global_key_attn_scores
attn_probs = F.softmax(attn_scores, dim=-1, dtype=torch.float32) # use fp32 for numerical stability
if layer_head_mask is not None:
assert layer_head_mask.size() == (
self.n_heads,
), f"Head mask for a single layer should be of size {(self.n_heads,)}, but is {layer_head_mask.size()}"
attn_probs = layer_head_mask.view(1, 1, -1, 1) * attn_probs
# softmax sometimes inserts NaN if all positions are masked, replace them with 0
attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :attn_probs.size()[1], None, None], 0.0)
attn_probs = attn_probs.type_as(attn_scores)
# free memory
del attn_scores
# apply dropout
attn_probs = F.dropout(attn_probs, p=self.dropout, training=self.training)
value_vectors = value_vectors.view(seq_len, batch_size, self.n_heads, self.key_value_proj_dim).transpose(0, 1)
# compute local attention output with global attention value and add
if is_global_attn:
# compute sum of global and local attn
attn_output = self._compute_attn_output_with_global_indices(
value_vectors=value_vectors,
attn_probs=attn_probs,
max_num_global_attn_indices=max_num_global_attn_indices,
is_index_global_attn_nonzero=is_index_global_attn_nonzero,
is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,
)
else:
# compute local attn only
attn_output = self._sliding_chunks_matmul_attn_probs_value(
attn_probs, value_vectors, self.one_sided_attn_window_size
)
assert attn_output.size() == (batch_size, seq_len, self.n_heads, self.key_value_proj_dim), "Unexpected size"
attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous()
# compute value for global attention and overwrite to attention output
# TODO: remove the redundant computation
if is_global_attn:
global_attn_output, global_attn_probs = self._compute_global_attn_output_from_hidden(
hidden_states=hidden_states,
max_num_global_attn_indices=max_num_global_attn_indices,
layer_head_mask=layer_head_mask,
is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,
is_index_global_attn_nonzero=is_index_global_attn_nonzero,
is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero,
is_index_masked=is_index_masked,
)
# get only non zero global attn output
nonzero_global_attn_output = global_attn_output[
is_local_index_global_attn_nonzero[0], :, is_local_index_global_attn_nonzero[1]
]
# overwrite values with global attention
attn_output[is_index_global_attn_nonzero[::-1]] = nonzero_global_attn_output.view(
len(is_local_index_global_attn_nonzero[0]), -1
)
# The attention weights for tokens with global attention are
# just filler values, they were never used to compute the output.
# Fill with 0 now, the correct values are in 'global_attn_probs'.
attn_probs[is_index_global_attn_nonzero] = 0
attn_output = attn_output.transpose(0, 1)
# t5 runs the attn_output through o, and expects attn_output to be (batch_size, seq_length, dim)
attn_output = self.o(attn_output)
present_key_value_state = None
outputs = (attn_output,) + (present_key_value_state,) + (position_bias,)
if output_attentions:
outputs = outputs + (attn_weights,)
return outputs + (global_attn_probs,) if (is_global_attn and output_attentions) else outputs
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10432/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10431/comments | https://api.github.com/repos/huggingface/transformers/issues/10431/events | https://github.com/huggingface/transformers/pull/10431 | 817,783,925 | MDExOlB1bGxSZXF1ZXN0NTgxMjIxODM3 | 10,431 | Fix conda-build | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | Fix the tokenizer version so that conda can correctly build packages | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10431",
"html_url": "https://github.com/huggingface/transformers/pull/10431",
"diff_url": "https://github.com/huggingface/transformers/pull/10431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10431.patch",
"merged_at": 1614388830000
} |
https://api.github.com/repos/huggingface/transformers/issues/10430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10430/comments | https://api.github.com/repos/huggingface/transformers/issues/10430/events | https://github.com/huggingface/transformers/issues/10430 | 817,699,483 | MDU6SXNzdWU4MTc2OTk0ODM= | 10,430 | Inference with Finetuned BERT Model outputting odd results | {
"login": "singhn27",
"id": 1694751,
"node_id": "MDQ6VXNlcjE2OTQ3NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1694751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singhn27",
"html_url": "https://github.com/singhn27",
"followers_url": "https://api.github.com/users/singhn27/followers",
"following_url": "https://api.github.com/users/singhn27/following{/other_user}",
"gists_url": "https://api.github.com/users/singhn27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singhn27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singhn27/subscriptions",
"organizations_url": "https://api.github.com/users/singhn27/orgs",
"repos_url": "https://api.github.com/users/singhn27/repos",
"events_url": "https://api.github.com/users/singhn27/events{/privacy}",
"received_events_url": "https://api.github.com/users/singhn27/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nis it possible to ask this question on the [forum](https://discuss.huggingface.co/) rather than here? Since this question is a perfect use case for that.\r\n\r\nThank you. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux-4.14.203-116.332.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@LysandreJik @patrickvonplaten
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Trained HuggingFace Transformers model BertForSequenceClassification on custom dataset with PyTorch backend.
2. Used provided convert_graph_to_onnx.py script to convert model (from saved checkpoint) to ONNX format.
3. Loaded the model with ONNXRuntime
4. Instantiated BertTokenizer.from_pretrained('bert-based-uncased') and fed in various input text to encode_plus method.
5. Fed outputs of this to the ONNXRuntime session.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The expected behavior is that the output of sess.run on the aforementioned inputs should be an array of dimension (1, 100) (corresponding to 100 classes) with each value between 0 and 1 and all entries summing to 1. We get the correct dimension; however, we get values between about -3.04 and 7.14 (unsure what these values refer to).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10430/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10429/comments | https://api.github.com/repos/huggingface/transformers/issues/10429/events | https://github.com/huggingface/transformers/issues/10429 | 817,694,937 | MDU6SXNzdWU4MTc2OTQ5Mzc= | 10,429 | Trainer's load_best_model_at_end argument results in error with DistributedDataParallel | {
"login": "abhishek0318",
"id": 22981267,
"node_id": "MDQ6VXNlcjIyOTgxMjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22981267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishek0318",
"html_url": "https://github.com/abhishek0318",
"followers_url": "https://api.github.com/users/abhishek0318/followers",
"following_url": "https://api.github.com/users/abhishek0318/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishek0318/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishek0318/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishek0318/subscriptions",
"organizations_url": "https://api.github.com/users/abhishek0318/orgs",
"repos_url": "https://api.github.com/users/abhishek0318/repos",
"events_url": "https://api.github.com/users/abhishek0318/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishek0318/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you explain a bit more the code you are running as well as the exact command you are using for launch? We can't help if we can't reproduce your bug and running:\r\n```\r\npython -m torch.distributed.launch --nproc_per_node 2 examples/text-classification/run_glue.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --task_name mrpc --output_dir test/mrpc \\\r\n --load_best_model_at_end \\\r\n --do_train \\\r\n --do_eval \\\r\n --evaluation_strategy epoch \\\r\n --overwrite_output_dir \r\n```\r\nfor instance does not reproduce it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0
- Platform: Linux
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (CUDA Version: 11.2)
- Tensorflow version (GPU?): NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, DistributedDataParallel
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
training_args = TrainingArguments(
output_dir=os.path.join(output_dir, 'results'),
overwrite_output_dir=True,
num_train_epochs=num_train_epochs,
per_device_train_batch_size=per_device_train_batch_size,
per_device_eval_batch_size=per_device_eval_batch_size,
warmup_steps=warmup_steps,
weight_decay=weight_decay,
logging_dir=os.path.join(output_dir, 'logs'),
logging_steps=100,
learning_rate=learning_rate,
evaluation_strategy="epoch",
max_grad_norm=max_grad_norm,
metric_for_best_model="eval_loss",
report_to=['tensorboard'],
local_rank=local_rank)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set load_best_model_at_end=True when using DistributedDataParallel (python -m torch.distributed.launch ...), and the following stack trace appears after training is complete.
2. If you don't use DistributedDataParallel or don't set load_best_model_at_end to True, then this works as expected and there is no error.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
OSError: Can't load config for 'checkpoint-115'. Make sure that: - 'checkpoint-115' is a correct model identifier listed on 'https://huggingface.co/models' - or 'checkpoint-115' is the correct path to a directory containing a config.json file
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
No error.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10429/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10428/comments | https://api.github.com/repos/huggingface/transformers/issues/10428/events | https://github.com/huggingface/transformers/pull/10428 | 817,692,475 | MDExOlB1bGxSZXF1ZXN0NTgxMTQyOTIw | 10,428 | [run_seq2seq.py] restore functionality: saving to test_generations.txt | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah, I re-read it closer, you're correct. I just remembered the part about the `test_generations.txt` but didn't bother to check out the full story. My bad. I will study it and follow up once I understand it better. ",
"I went back to `finetune_trainer.py` from December and checked that it was just saving `test_generations.txt` at the very end once. I can't find any code where it did generate this at every checkpoint. @kingpalethe, please correct me if I'm wrong.\r\n\r\nIf there were to be a `save_checkpoint` callback then it could generate one for each saved check point. So it probably needs to be requested via a feature request Issue.\r\n\r\nSo the current PR is still is a good idea to support those who relied on this particular filename. But I'm not attached to it.",
"I don't think it matters much, we just have to be quick if people using the new script start to rely on the new name.",
"OK, let's restore the original name. "
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | This PR restores the original functionality that for some reason was modified.
Fixes: https://github.com/huggingface/transformers/issues/10381
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10428/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10428",
"html_url": "https://github.com/huggingface/transformers/pull/10428",
"diff_url": "https://github.com/huggingface/transformers/pull/10428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10428.patch",
"merged_at": 1614442910000
} |
https://api.github.com/repos/huggingface/transformers/issues/10427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10427/comments | https://api.github.com/repos/huggingface/transformers/issues/10427/events | https://github.com/huggingface/transformers/pull/10427 | 817,677,097 | MDExOlB1bGxSZXF1ZXN0NTgxMTI5MDgz | 10,427 | [examples] better model example | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | As a continued effort to make examples easy to read and synchronizing them all to use the same look and feel, this PR tries to improve `run_seq2seq.py` as a model and then future PRs will sync other examples with it.
* [x] makes the helper methods work for rank0 internally - simplifying the caller
* [x] abstracts the helper `trainer.state.save_to_json` into a simple method
* [x] automatically aggregates all metrics into `all_metrics.json` w/o requiring any extra code on the caller side
Anything else?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10427/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10427",
"html_url": "https://github.com/huggingface/transformers/pull/10427",
"diff_url": "https://github.com/huggingface/transformers/pull/10427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10427.patch",
"merged_at": 1614387662000
} |
https://api.github.com/repos/huggingface/transformers/issues/10426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10426/comments | https://api.github.com/repos/huggingface/transformers/issues/10426/events | https://github.com/huggingface/transformers/pull/10426 | 817,675,353 | MDExOlB1bGxSZXF1ZXN0NTgxMTI3NjQ2 | 10,426 | [WIP] CLIP | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"Awesome effort. Is the current version already compatible with (now merged) #10594 ?",
"Thanks @dribnet \r\n\r\nI haven't added a feature extractor class for CLIP yet. We first need to finish the `ImageFeatureExtractor` (#01608)\r\nThen the `ClipFeatureExtractor` can inherit from that. This PR will be ready to merge by the end of next week.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"continuing this in #11445 "
] | 1,614 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
This PR adds OpenAI's CLIP model.
original repo: https://github.com/openai/CLIP
initial demo: https://colab.research.google.com/drive/1hwiCuKvw7hwSlE8yv7J1dh280PlYgPef?usp=sharing | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10426/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10426/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10426",
"html_url": "https://github.com/huggingface/transformers/pull/10426",
"diff_url": "https://github.com/huggingface/transformers/pull/10426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10426.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10425/comments | https://api.github.com/repos/huggingface/transformers/issues/10425/events | https://github.com/huggingface/transformers/issues/10425 | 817,631,722 | MDU6SXNzdWU4MTc2MzE3MjI= | 10,425 | RAG and retrieved documents | {
"login": "calderma",
"id": 18285670,
"node_id": "MDQ6VXNlcjE4Mjg1Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/18285670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calderma",
"html_url": "https://github.com/calderma",
"followers_url": "https://api.github.com/users/calderma/followers",
"following_url": "https://api.github.com/users/calderma/following{/other_user}",
"gists_url": "https://api.github.com/users/calderma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calderma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calderma/subscriptions",
"organizations_url": "https://api.github.com/users/calderma/orgs",
"repos_url": "https://api.github.com/users/calderma/repos",
"events_url": "https://api.github.com/users/calderma/events{/privacy}",
"received_events_url": "https://api.github.com/users/calderma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,614 | 1,614 | 1,614 | NONE | null | I pretrained a RAG model using the "finetune_rag.py" script and it generates pretty good results for my (knowledge intensive) use case, certainly better than the straight finetuned BART model I was using before. I am using my own custom datasource generated from use_own_knowledge_dataset.py. One curious thing that is happening is that when I try to find what documents were retrieved during the generation process, I always get the same documents. I'm using the basic code snippet from issue #8104 and no matter what input I give, it returns the same few documents, no matter how unrelated they are to the input. The generated results are still very good, so I'm not sure if there's an issue with how I'm retrieving the documents or if it really is always grabbing the same few docs for some reason, potentially an issue with my data. Any help or pointers with this would be greatly appreciated. Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10425/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10424/comments | https://api.github.com/repos/huggingface/transformers/issues/10424/events | https://github.com/huggingface/transformers/pull/10424 | 817,569,644 | MDExOlB1bGxSZXF1ZXN0NTgxMDQwNzM2 | 10,424 | Refactor checkpoint name in BERT and MobileBERT | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | COLLABORATOR | null | # What does this PR do?
Linked to #10193, this PR gives an example on how to refactor the checkpoint names in one private constant and use the `# Copied from` syntax to make the task-specific models that are copies of each other properly watched by our tooling.
It adds the option in check-copies to:
- put multiple statements behind the with: so for instance `with Bert->MobileBert, bert->mobilebert, BERT->MOBILEBERT`
- have the option to do all possible casings (like in the example above) by just adding `all-casing`: `with Bert->MobileBert all-casing`.
It also fixes an existing bug when the first line of the function/class copied from was empty.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10424/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10424",
"html_url": "https://github.com/huggingface/transformers/pull/10424",
"diff_url": "https://github.com/huggingface/transformers/pull/10424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10424.patch",
"merged_at": 1614788477000
} |
https://api.github.com/repos/huggingface/transformers/issues/10423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10423/comments | https://api.github.com/repos/huggingface/transformers/issues/10423/events | https://github.com/huggingface/transformers/issues/10423 | 817,565,907 | MDU6SXNzdWU4MTc1NjU5MDc= | 10,423 | [examples] add --max_train_samples --max_val_samples --max_test_samples cl args to all scripts | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @stas00,\r\nCan I work on this?",
"Yes please! Thank you, @bhadreshpsavani ",
"I was just thinking about this and why do we not have this functionality in the Trainer in first place? then perhaps none of this will be needed.\r\n\r\nI'm asking here: https://github.com/huggingface/transformers/issues/10437\r\n\r\nPerhaps this task will become redundant then. Please wait a little bit.",
"Cool!",
"Hi @stas00,\r\n`templates/adding_a_new_example_script` is still remaining right?",
"That's correct! Thank you for remembering it!\r\n\r\nfor max cl args and also the metrics please! Thank you!",
"Hi @stas00,\r\nSince its just a template there no way to test the changes right?",
"It sounds right. Use your internal compiler.\r\n\r\nProbably once it's written, it can be run through the cookie-cutter and then tested? But I think it shouldn't be too difficult to test it visually.\r\n\r\nIf you want to try the cookie-cutter, the doc is here: \r\nhttps://github.com/huggingface/transformers/tree/master/templates/adding_a_new_example_script\r\n"
] | 1,614 | 1,615 | 1,615 | CONTRIBUTOR | null | As a part of an effort to make all examples have the same look and feel this issue requests to sync the support for these 3 cl args in `run_seq2seq.py`:
```
--max_train_samples 5 --max_val_samples 5 --max_test_samples 5
```
into:
1. all other `examples/*/run_*.py`
2. `templates/adding_a_new_example_script`
Part B. The metrics should now be updated to include the actual number of samples that were run. Here is an example for train:
https://github.com/huggingface/transformers/blob/f52a15897b46ffa40af5c96d3726f0e18e91879b/examples/seq2seq/run_seq2seq.py#L586-L590
and the same for eval/test.
I'd say this can probably be refactored too. Let me check with Sylvain.
The way it's currently used is to limit the number of dataset entries w/o needing to change the dataset, example:
```
run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --do_eval --do_predict --do_train \
--evaluation_strategy=steps --predict_with_generate --task summarization --dataset_name xsum \
--max_train_samples 60 --max_val_samples 10 --n_test 10
```
All the code that currently takes care of it can be found inside https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py
This issue is open to anybody in the community who would like to tackle it.
Thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10423/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10422/comments | https://api.github.com/repos/huggingface/transformers/issues/10422/events | https://github.com/huggingface/transformers/pull/10422 | 817,522,054 | MDExOlB1bGxSZXF1ZXN0NTgxMDAxMTcw | 10,422 | Layoutlm tf | {
"login": "atahmasb",
"id": 25216362,
"node_id": "MDQ6VXNlcjI1MjE2MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/25216362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atahmasb",
"html_url": "https://github.com/atahmasb",
"followers_url": "https://api.github.com/users/atahmasb/followers",
"following_url": "https://api.github.com/users/atahmasb/following{/other_user}",
"gists_url": "https://api.github.com/users/atahmasb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atahmasb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atahmasb/subscriptions",
"organizations_url": "https://api.github.com/users/atahmasb/orgs",
"repos_url": "https://api.github.com/users/atahmasb/repos",
"events_url": "https://api.github.com/users/atahmasb/events{/privacy}",
"received_events_url": "https://api.github.com/users/atahmasb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Very nice! Can you let us know when you want us to review/give feedback/help? Thanks!",
 Very nice! Can">
"> Very nice! Can you let us know when you want us to review/give feedback/help? Thanks!\r\n\r\nThanks! I need to upload the TF model file to the hub and run another check to make sure it gives the same results as the PT version. I had verified that, but I made a few changes, so I am going to run the checks one more time. I'll tag you when it's done.",
"@LysandreJik I've uploaded TF models to the model hub under :\r\n- atahmasb/tf-layoutlm-base-uncased\r\n- atahmasb/tf-layoutlm-large-uncased\r\n\r\nI appreciate if you and the team could take a look at the code and give me feedback.\r\nThere are some tests that are failing, I haven't figured the issues out but the code is up for review. Maybe someone could look into the logs and guide me on how to fix them.\r\nMeanwhile I'll see if I can make the tests pass.",
"> Hi @atahmasb, this looks great! I've only left a few comments, everything looks good. I'll do a deeper review once all the tests pass, as things are bound to change until they do, but the idea here is sound!\r\n> \r\n> Do you need any help to make the tests pass?\r\n\r\nI am going to try one more time to resolve them today, if I can't then I'll ask for help",
"@LysandreJik all tests passed! It's ready for a deeper review please ",
"Thanks for letting me know! It seems GitHub botched your rebase, as it is showing 53 files changed. Could you close this PR and open a new one (no need to do anything on your branch) so that we may see the diff better?\r\n\r\nThanks! ",
"> Thanks for letting me know! It seems GitHub botched your rebase, as it is showing 53 files changed. Could you close this PR and open a new one (no need to do anything on your branch) so that we may see the diff better?\r\n> \r\n> Thanks!\r\n\r\nsure, will do"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds TF version of LayoutLM for issue [(10312)](https://github.com/huggingface/transformers/issues/10312)
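
Once this is merged, the TF checkpoints mentioned in the comments should be loadable along these lines (a sketch only; the `TFLayoutLMModel` class name is assumed from the PyTorch naming convention, and the `atahmasb/*` repos are the test checkpoints uploaded for this PR):

```python
from transformers import LayoutLMTokenizer, TFLayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("atahmasb/tf-layoutlm-base-uncased")
```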
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10422/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10422",
"html_url": "https://github.com/huggingface/transformers/pull/10422",
"diff_url": "https://github.com/huggingface/transformers/pull/10422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10422.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10421/comments | https://api.github.com/repos/huggingface/transformers/issues/10421/events | https://github.com/huggingface/transformers/pull/10421 | 817,514,809 | MDExOlB1bGxSZXF1ZXN0NTgwOTk1MjI4 | 10,421 | updated metrics saving and logging | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I want to mention one thing here while testing the file I found that,\r\nFor files `run_clm.py, run_mlm.py, run_plm.py, run_ner.py, run glue.py`\r\nThe logs are as expected like this\r\n```\r\n02/26/2021 20:31:22 - INFO - __main__ - ***** eval metrics *****\r\n02/26/2021 20:31:22 - INFO - __main__ - HasAns_exact = 0.0\r\n02/26/2021 20:31:22 - INFO - __main__ - HasAns_f1 = 0.0\r\n02/26/2021 20:31:22 - INFO - __main__ - HasAns_total = 8\r\n02/26/2021 20:31:22 - INFO - __main__ - NoAns_exact = 100.0\r\n02/26/2021 20:31:22 - INFO - __main__ - NoAns_f1 = 100.0\r\n02/26/2021 20:31:22 - INFO - __main__ - NoAns_total = 6\r\n02/26/2021 20:31:22 - INFO - __main__ - best_exact = 42.857142857142854\r\n02/26/2021 20:31:22 - INFO - __main__ - best_exact_thresh = 0.0\r\n02/26/2021 20:31:22 - INFO - __main__ - best_f1 = 42.857142857142854\r\n02/26/2021 20:31:22 - INFO - __main__ - best_f1_thresh = 0.0\r\n02/26/2021 20:31:22 - INFO - __main__ - epoch = 1.43\r\n02/26/2021 20:31:22 - INFO - __main__ - exact = 42.857142857142854\r\n02/26/2021 20:31:22 - INFO - __main__ - f1 = 42.857142857142854\r\n02/26/2021 20:31:22 - INFO - __main__ - total = 14\r\n```\r\nBut for other files like `run_qa.py, run_qa_beam_search.py, run_swags.py`\r\nThe logs were like below,\r\n```\r\n***** eval metrics *****\r\n HasAns_exact = 0.0\r\n HasAns_f1 = 0.0\r\n HasAns_total = 8\r\n NoAns_exact = 100.0\r\n NoAns_f1 = 100.0\r\n NoAns_total = 6\r\n best_exact = 42.8571\r\n best_exact_thresh = 0.0\r\n best_f1 = 42.8571\r\n best_f1_thresh = 0.0\r\n epoch = 1.43\r\n exact = 42.8571\r\n f1 = 42.8571\r\n total = 14\r\n```\r\nWithout a timestamp, log level, and function name. \r\n\r\nWhen I run the command `python -m unittest discover -s examples -t examples -v` it was giving proper logs\r\n\r\n",
"And also for consistency let's add this bit that is currently in `run_seq2seq.py`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/98569d4ba237d84714f6c15e2c301fd22d42d2b1/examples/seq2seq/run_seq2seq.py#L643-L644\r\n\r\nthis allows the user to load all metrics in one call.\r\n\r\nIt can be part of this PR, or a separate one if you'd like to make this completed faster. And I can make a separate issue to add it.\r\n\r\nThat is if @sgugger you're in agreement with that syncing proposal.",
"I'm fine with it.\r\nOne thing that's striking me is that all those calls are inside an `if trainer.is_world_process_zero()`. Shouldn't we refactor that bit in the `log_metrics`/`save_metrics` method?",
"Sure @sgugger and @stas00 \r\nI will make changes in this PR to get the changes available it faster",
"It won't make much of a difference at the moment since there is other code that is running under `if trainer.is_world_process_zero()` - if we make that code refactored too then absolutely yes - it would make that part of the scripts so much simpler.",
"we could probably do something about:\r\n```\r\n trainer.save_metrics(\"eval\", metrics)\r\n all_metrics.update(metrics)\r\n```\r\n\r\nso that it's not done separately, by \r\n1. either having `save_metrics` always update `all_results.json` with every call\r\n2. or simply have trainer store the metrics internally and then just have one call to flush it to the disk at the end of the run\r\n\r\nsuggestion 1 will require read+write but the cool thing is that it's totally automated and requires no extra calls later.",
"@bhadreshpsavani, I suggest we \r\n\r\n1. finish this PR w/o introducing new changes we are discussing, since they aren't yet thought out well/agreed upon yet.\r\n2. Then we tweak `run_seq2seq.py` to do things better, have it as a model, \r\n3. and then sync to other scripts? \r\n\r\nhow does that sound?\r\n\r\nTo be clear, perhaps leave out this suggestion for now https://github.com/huggingface/transformers/pull/10421#issuecomment-786806023 if we are going to refactor it anyway - unless you already did it, then please keep it in.",
"Okay @stas00,\r\nI have added [this](https://github.com/huggingface/transformers/pull/10421#pullrequestreview-599815674) changes about logger and it is working perfectly. I will commit it in this PR for that three files.",
"OK, so the only missing step is to update the template `templates/adding_a_new_example_script`",
"@bhadreshpsavani, this is now merged: https://github.com/huggingface/transformers/pull/10427 and can be replicated to other scripts (and one template)\r\n\r\nPlease feel free to add it to this PR or open a new one - whatever works the best for you.\r\n\r\nThank you! ",
"@stas00, I will first try to make changes to this PR",
"in the `run_glue.py` we have code like this for saving test result,\r\n```python\r\noutput_test_file = os.path.join(training_args.output_dir, f\"test_results_{task}.txt\")\r\nif trainer.is_world_process_zero():\r\n with open(output_test_file, \"w\") as writer:\r\n logger.info(f\"***** Test results {task} *****\")\r\n writer.write(\"index\\tprediction\\n\")\r\n for index, item in enumerate(predictions):\r\n if is_regression:\r\n writer.write(f\"{index}\\t{item:3.3f}\\n\")\r\n else:\r\n item = label_list[item]\r\n writer.write(f\"{index}\\t{item}\\n\")\r\n```\r\nI didn't change it because it was different than our general save_metrics() methods. \r\nIs there any way we can generalize it?\r\n",
"Hi @stas00 and @sgugger,\r\n\r\nI was trying to make changes in the same PR and after doing rebase from master I needed to merge the new commits to my branch to push my changes. Let me know if this PR is not fine or I need to make another PR with all these changes and delete this one.\r\n\r\nThis is the first time I used `git rebase upstream/master` so I might have done it incorrectly.",
"Yes the rebase has made new files appear in the diff that are irrelevant to your work, so it would be great if you could close this PR and open a new one (no need to do anything else than that like creating a new branch, it's just git being annoying here).\r\n\r\nFor your earlier question, leave the part in `run_glue` that doesn't refactor nicely as it is I would say.",
"As Sylvain said you can make a new PR branch, but you can also fix this PR by rolling back to the last good commit before the failed rebase:\r\n\r\n```\r\ngit reset --soft 4e529f1\r\ngit commit\r\ngit push -f\r\n```\r\n\r\nand then rebase\r\n\r\nBTW, If you want to use an automated rebase process please consider this little script: https://github.com/stas00/git-tools/tree/master/git-rebase\r\n",
"Sure @stas00,\r\nI will use an automatic rebase script next time.\r\nFor simplicity, I have created another [PR](https://github.com/huggingface/transformers/pull/10436) with all the changes and tested the changes.\r\nI closing this PR. \r\n"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
I have updated the redundant metrics-saving and logging code in the example scripts.
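
Concretely, the repeated per-script blocks are replaced with the `Trainer` helpers, roughly like this (a sketch; the exact split names and surrounding code vary per script):

```python
metrics = train_result.metrics
trainer.log_metrics("train", metrics)   # pretty-print the metrics to the logger
trainer.save_metrics("train", metrics)  # write train_results.json under output_dir
```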
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #10337
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10421/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10421",
"html_url": "https://github.com/huggingface/transformers/pull/10421",
"diff_url": "https://github.com/huggingface/transformers/pull/10421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10421.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10420/comments | https://api.github.com/repos/huggingface/transformers/issues/10420/events | https://github.com/huggingface/transformers/issues/10420 | 817,405,736 | MDU6SXNzdWU4MTc0MDU3MzY= | 10,420 | Unable to convert Facebook/mbart-many-to-many model to onxx | {
"login": "sankarsiva123",
"id": 58412261,
"node_id": "MDQ6VXNlcjU4NDEyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/58412261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sankarsiva123",
"html_url": "https://github.com/sankarsiva123",
"followers_url": "https://api.github.com/users/sankarsiva123/followers",
"following_url": "https://api.github.com/users/sankarsiva123/following{/other_user}",
"gists_url": "https://api.github.com/users/sankarsiva123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sankarsiva123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sankarsiva123/subscriptions",
"organizations_url": "https://api.github.com/users/sankarsiva123/orgs",
"repos_url": "https://api.github.com/users/sankarsiva123/repos",
"events_url": "https://api.github.com/users/sankarsiva123/events{/privacy}",
"received_events_url": "https://api.github.com/users/sankarsiva123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I do not think mBART can be converted to ONNX as of now.",
"Hi Thanks for the information.\r\nFacebook/many-to-many takes 9s seconds for translation on cpu , is there a way to reduce the inference time ?",
"Hi @sankarsiva123, have you tried HF's API inference ?\r\n\r\n9s per inference seems a bit off: https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt?text=Hello+there+%21+\r\nWe do run some optimizations there as HF's hosted API but still it seems like you could have better inference times than 9s.\r\n\r\nMaybe it depends on what you are sending it ? Are you using GPU or CPU ?",
"Hi, @Narsil Yeah, I tried HF's API inference, it is pretty much fast.\r\nI am using CPU, I tried both in google colab, and in my local, it is taking around 9s.\r\nAm I missing something while using the model, so my inference time is high than normal?\r\nAlso pls let me know is there a way to reduce inference time?\r\n\r\n\r\n\r\n",
"Can you time your inner loop without the tokenizer ? (Just making sure it's not that).\r\n\r\nOtherwise you see to use generate, which is the right way to go.\r\nI don't know colab's CPU nor yours, but it could definitely be the problem (or the pytorch version you're rolling which might have not been optimized for your CPU instruction set.)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | When I tried to convert the facebook/mbart-many-to-many model, I was unable to do so; I am getting an error.
Please help me convert this model to ONNX. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10420/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10419/comments | https://api.github.com/repos/huggingface/transformers/issues/10419/events | https://github.com/huggingface/transformers/pull/10419 | 817,402,457 | MDExOlB1bGxSZXF1ZXN0NTgwOTAxMjM1 | 10,419 | [LED] Correct Docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10419",
"html_url": "https://github.com/huggingface/transformers/pull/10419",
"diff_url": "https://github.com/huggingface/transformers/pull/10419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10419.patch",
"merged_at": 1614351208000
} |
https://api.github.com/repos/huggingface/transformers/issues/10418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10418/comments | https://api.github.com/repos/huggingface/transformers/issues/10418/events | https://github.com/huggingface/transformers/issues/10418 | 817,396,368 | MDU6SXNzdWU4MTczOTYzNjg= | 10,418 | Slow evaluation using Trainer with TPUs in Colab | {
"login": "finiteautomata",
"id": 167943,
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finiteautomata",
"html_url": "https://github.com/finiteautomata",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The notebook won't execute on TPU, you need to spawn a function on multiple processes for this (`xm.spawn(train_function)`). That function should contain all the training code including the `Trainer`, but `Trainer.train` by itself won't spawn multiple processes.\r\n\r\nThe recommended way to train on TPU is to follow the steps in the [examples](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus) to run the scripts.",
"Thanks for your answer @sgugger. Is there any plan to add an easier way to use TPUs in Colab?",
"I don't know of any easier way than launching the training function (in PyTorch). If you come across an easy example, please let me know and we will try to make the `Trainer` as easy to use.",
"Ok, sorry. I think I misunderstood. I thought that I should create a separate module for the training function because of the same reason that `multiprocessing` has issues with jupyter environments\r\n\r\nI tried moving everything to a function and using `xmp.spawn(train_nli, args=())`, but I get this error which is not quite clear:\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nException Traceback (most recent call last)\r\n<ipython-input-4-d4081c64cb6f> in <module>()\r\n 5 \r\n----> 6 xmp.spawn(train_nli, args=())\r\n\r\n2 frames\r\n/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)\r\n 393 join=join,\r\n 394 daemon=daemon,\r\n--> 395 start_method=start_method)\r\n 396 \r\n 397 \r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)\r\n 155 \r\n 156 # Loop on join until it returns True or raises an exception.\r\n--> 157 while not context.join():\r\n 158 pass\r\n 159 \r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py in join(self, timeout)\r\n 110 raise Exception(\r\n 111 \"process %d terminated with exit code %d\" %\r\n--> 112 (error_index, exitcode)\r\n 113 )\r\n 114 \r\n\r\nException: process 7 terminated with exit code 1\r\n```\r\n\r\nAny ideas? \r\n\r\n(Everything is on the same notebook as before)",
"Ok, I followed this notebook ([T5 on TPU](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)) and I managed to solve that error by using **`start_method=\"fork\"`** on `xmp.spawn`. Thanks for your help @sgugger!\r\n\r\n```python\r\ndef train_nli(index):\r\n # All the training code here\r\n ...\r\n \r\nxmp.spawn(train_nli, args=(), start_method=\"spawn\")\r\n```\r\n\r\nThe notebook with the full code is [here](https://colab.research.google.com/drive/1dVEfoxGvMAKd0GLnrUJSHZycGtyKt9mr#scrollTo=k-e4NqfrtrJy)"
] | 1,614 | 1,618 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0a0+7a178a8 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: TPU
- Using distributed or parallel set-up in script?: NO
@sgugger @patrickvonplaten
Model I am using (Bert, XLNet ...): BERT
I'm having very slow eval times using the `Trainer` API in conjunction with `XLA` in Google Colab. While the training epochs are running at a good speed, evaluating after each epoch it takes a very long time. I've tried restricting dataset size and tokenization max length with no success.
I'm not sure how to check whether it's using `XLA` during evaluation.
The task I am working on is NLI, using `multi-nli` from `datasets`
## To reproduce
Execute this notebook
https://colab.research.google.com/drive/1dVEfoxGvMAKd0GLnrUJSHZycGtyKt9mr?usp=sharing
## Expected behavior
Evaluation speed should be approximately the same as training.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10418/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10417/comments | https://api.github.com/repos/huggingface/transformers/issues/10417/events | https://github.com/huggingface/transformers/pull/10417 | 817,384,747 | MDExOlB1bGxSZXF1ZXN0NTgwODg2NTIy | 10,417 | Dont use sigmoid when num_labels==1 | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, I don't really understand this - could give a bit more context?",
"@patrickvonplaten Please see this https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1515\r\nor any `XForSequenceClassification` model",
"This is a very big breaking change. What do you think of the proposed approach here? https://github.com/huggingface/transformers/pull/8328\r\nI think it allows what you're looking for, but in backwards-compatible way.\r\n\r\nThe PR is old and the diff isn't very readable, but if that's something that could fit your use case I can update it to the latest code.",
"@LysandreJik The way it's done in the pipeline here is absolutely incorrect. If the model is trained using MSELoss when num_labels = 1, it means that it is a regression problem and in that case, we should return raw values, not sigmoid. \r\nReturning raw values can be an option but for now, this fix is important as the values returned for num_labels=1 in this pipeline is incorrect: it should return raw value for regression, not sigmoid.",
"@LysandreJik Also, I didn't understand what would this PR break? ",
"Thank you for your feedback. It was done this way in order to enable inference on [DialogRPT](https://github.com/golsun/DialogRPT). It was the first model that performed sequence classification with a single label, so we defined it this way as you can see in this issue https://github.com/huggingface/transformers/issues/7493.\r\n\r\nI understand that this is an issue for most cases, so being able to return raw values is important. However, we must find a way to do it in a backwards compatible way. We can't just change the code and break all models that rely on that pipeline.\r\n\r\n> @LysandreJik Also, I didn't understand what would this PR break?\r\n\r\nWell, for one, all of the DialogRPT models on the [hub](https://huggingface.co/models?search=dialogrpt).",
"I have updated PR #8328 for better readability. If this suits your use-case, I'll try to have it merged ASAP so as not to be blocking for you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,651 | 1,619 | MEMBER | null | It seems like we use MSELoss when num_labels==1 in the config, i.e. a single-column regression problem. But the text_classification pipeline treats this as a classification problem and applies a sigmoid. This PR fixes that issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10417",
"html_url": "https://github.com/huggingface/transformers/pull/10417",
"diff_url": "https://github.com/huggingface/transformers/pull/10417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10417.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10416/comments | https://api.github.com/repos/huggingface/transformers/issues/10416/events | https://github.com/huggingface/transformers/pull/10416 | 817,344,217 | MDExOlB1bGxSZXF1ZXN0NTgwODUyNjA4 | 10,416 | Add BERTForMultiLabel Classification or Regression | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"moving to new pr"
] | 1,614 | 1,651 | 1,617 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10416/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10416",
"html_url": "https://github.com/huggingface/transformers/pull/10416",
"diff_url": "https://github.com/huggingface/transformers/pull/10416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10416.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10415/comments | https://api.github.com/repos/huggingface/transformers/issues/10415/events | https://github.com/huggingface/transformers/issues/10415 | 817,291,312 | MDU6SXNzdWU4MTcyOTEzMTI= | 10,415 | Bug when combining grouped beam search and constrained prefix decoding | {
"login": "mnschmit",
"id": 2377507,
"node_id": "MDQ6VXNlcjIzNzc1MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2377507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnschmit",
"html_url": "https://github.com/mnschmit",
"followers_url": "https://api.github.com/users/mnschmit/followers",
"following_url": "https://api.github.com/users/mnschmit/following{/other_user}",
"gists_url": "https://api.github.com/users/mnschmit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnschmit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnschmit/subscriptions",
"organizations_url": "https://api.github.com/users/mnschmit/orgs",
"repos_url": "https://api.github.com/users/mnschmit/repos",
"events_url": "https://api.github.com/users/mnschmit/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnschmit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @mnschmit,\r\n\r\nthanks for your bug report! Yes, you're right -> I think we should indeed replace `num_beams` by `num_beams // num_beam_groups`. Do you want to open a PR to fix it? :-) Otherwise, I can do it as well"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-5.8.0-38-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using: my own modified scripts
## To reproduce
Steps to reproduce the behavior: run this simple script
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained('t5-small')
inp = 'The <extra_id_0> walks in <extra_id_1> park'
enc_inp = tokenizer(inp, return_tensors='pt')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
def prefix_allowed_tokens_fn(batch_id, input_ids):
return [2] # dummy value
out = model.generate(
**enc_inp,
num_beams=2,
num_beam_groups=2,
diversity_penalty=0.2,
prefix_allowed_tokens_fn=prefix_allowed_tokens_fn
)
```
This produces the following error:
```
Traceback (most recent call last):
File "debugging/grouped_beam_search.py", line 14, in <module>
out = model.generate(
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1041, in generate
return self.group_beam_search(
File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 2161, in group_beam_search
next_token_scores = logits_processor(
File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_logits_process.py", line 89, in __call__
scores = processor(input_ids, scores)
File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_logits_process.py", line 458, in __call__
for batch_id, beam_sent in enumerate(input_ids.view(-1, self._num_beams, input_ids.shape[-1])):
RuntimeError: shape '[-1, 2, 1]' is invalid for input of size 1
```
## Expected behavior
No error.
As far as I can tell, the `PrefixConstrainedLogitsProcessor` still receives the original number of beams even when grouped beam search is used. But it should be the number of subbeams. So replacing `num_beams` with `num_beams // num_beam_groups` in the constructor of `PrefixConstrainedLogitsProcessor` in method `_get_logits_processor` in file `generation_utils.py` should fix it.
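
A minimal sketch of the change I have in mind inside `_get_logits_processor` (the surrounding code is paraphrased and may not match the file exactly):

```python
if prefix_allowed_tokens_fn is not None:
    processors.append(
        # Use the per-group beam width so grouped beam search reshapes inputs correctly.
        PrefixConstrainedLogitsProcessor(prefix_allowed_tokens_fn, num_beams // num_beam_groups)
    )
```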
What do you think? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10415/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10414/comments | https://api.github.com/repos/huggingface/transformers/issues/10414/events | https://github.com/huggingface/transformers/pull/10414 | 817,284,602 | MDExOlB1bGxSZXF1ZXN0NTgwODAyMjUw | 10,414 | Add Ray Tune hyperparameter search integration test | {
"login": "krfricke",
"id": 14904111,
"node_id": "MDQ6VXNlcjE0OTA0MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krfricke",
"html_url": "https://github.com/krfricke",
"followers_url": "https://api.github.com/users/krfricke/followers",
"following_url": "https://api.github.com/users/krfricke/following{/other_user}",
"gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krfricke/subscriptions",
"organizations_url": "https://api.github.com/users/krfricke/orgs",
"repos_url": "https://api.github.com/users/krfricke/repos",
"events_url": "https://api.github.com/users/krfricke/events{/privacy}",
"received_events_url": "https://api.github.com/users/krfricke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh, that makes sense. Does this happen in the repo? Is this something I can help with, or do you have to configure it?",
"I'm currently working on these scheduled tests, I'll enable these while I do so. Thanks!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
Currently, only Optuna HP search is tested in integration tests. This PR duplicates and adjusts the test for the Ray backend.
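
The added test exercises roughly the following call (a sketch; the search space and trial count here are illustrative, not the exact values used in the test):

```python
from ray import tune

def ray_hp_space(trial):
    # Illustrative search space; the real test may tune different hyperparameters.
    return {
        "learning_rate": tune.loguniform(1e-5, 1e-3),
        "num_train_epochs": tune.choice([1, 2, 3]),
    }

best_run = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    hp_space=ray_hp_space,
    n_trials=2,
)
```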
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@amogkam
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10414/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10414",
"html_url": "https://github.com/huggingface/transformers/pull/10414",
"diff_url": "https://github.com/huggingface/transformers/pull/10414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10414.patch",
"merged_at": 1614352713000
} |
https://api.github.com/repos/huggingface/transformers/issues/10413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10413/comments | https://api.github.com/repos/huggingface/transformers/issues/10413/events | https://github.com/huggingface/transformers/pull/10413 | 817,255,751 | MDExOlB1bGxSZXF1ZXN0NTgwNzc4MTkz | 10,413 | Update run_mlm.py | {
"login": "ayxlin",
"id": 79697459,
"node_id": "MDQ6VXNlcjc5Njk3NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/79697459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayxlin",
"html_url": "https://github.com/ayxlin",
"followers_url": "https://api.github.com/users/ayxlin/followers",
"following_url": "https://api.github.com/users/ayxlin/following{/other_user}",
"gists_url": "https://api.github.com/users/ayxlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayxlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayxlin/subscriptions",
"organizations_url": "https://api.github.com/users/ayxlin/orgs",
"repos_url": "https://api.github.com/users/ayxlin/repos",
"events_url": "https://api.github.com/users/ayxlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayxlin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10413",
"html_url": "https://github.com/huggingface/transformers/pull/10413",
"diff_url": "https://github.com/huggingface/transformers/pull/10413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10413.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10412/comments | https://api.github.com/repos/huggingface/transformers/issues/10412/events | https://github.com/huggingface/transformers/issues/10412 | 817,241,151 | MDU6SXNzdWU4MTcyNDExNTE= | 10,412 | Trainer: Make `best_model_checkpoint` path in `trainer_state.json` relative to `args.output_dir` | {
"login": "tanmay17061",
"id": 32801726,
"node_id": "MDQ6VXNlcjMyODAxNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32801726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanmay17061",
"html_url": "https://github.com/tanmay17061",
"followers_url": "https://api.github.com/users/tanmay17061/followers",
"following_url": "https://api.github.com/users/tanmay17061/following{/other_user}",
"gists_url": "https://api.github.com/users/tanmay17061/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanmay17061/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanmay17061/subscriptions",
"organizations_url": "https://api.github.com/users/tanmay17061/orgs",
"repos_url": "https://api.github.com/users/tanmay17061/repos",
"events_url": "https://api.github.com/users/tanmay17061/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanmay17061/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think this would a very welcome change indeed, so please work on that if you have time and if it's something you'd like to do!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
An enhancement of `best_model_checkpoint` for more robustness.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Currently, `Trainer.state.best_model_checkpoint` holds the absolute path to the best checkpoint when `Trainer.args.load_best_model_at_end=True` is passed.
It would be useful if the `Trainer.state.best_model_checkpoint` value were relative to `Trainer.args.output_dir`.
## Motivation
**Absolute paths hinder portability** of the trained models.
For example, if a user wants to continue a _previous training_ run using the `resume_from_checkpoint` argument of `Trainer.train`, not having the `output_dir` exactly the same as in the _previous training_ (e.g. renaming any directory in the path) can break the `load_best_model_at_end` functionality, because the previously stored absolute paths are no longer valid.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I can raise a PR if this is a useful change to have!
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
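A minimal sketch of the idea (hypothetical helper names, not the actual `Trainer` code; shown only to illustrate the proposal):
```python
import os

def to_relative_checkpoint(best_model_checkpoint: str, output_dir: str) -> str:
    # Store only the part relative to output_dir so the run stays portable
    return os.path.relpath(best_model_checkpoint, start=output_dir)

def to_absolute_checkpoint(relative_checkpoint: str, output_dir: str) -> str:
    # Resolve back to a full path at load/resume time
    return os.path.join(output_dir, relative_checkpoint)
```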
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10412/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10411/comments | https://api.github.com/repos/huggingface/transformers/issues/10411/events | https://github.com/huggingface/transformers/issues/10411 | 817,235,401 | MDU6SXNzdWU4MTcyMzU0MDE= | 10,411 | Problem using add_special_tokens | {
"login": "hhou435",
"id": 59219579,
"node_id": "MDQ6VXNlcjU5MjE5NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/59219579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hhou435",
"html_url": "https://github.com/hhou435",
"followers_url": "https://api.github.com/users/hhou435/followers",
"following_url": "https://api.github.com/users/hhou435/following{/other_user}",
"gists_url": "https://api.github.com/users/hhou435/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hhou435/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hhou435/subscriptions",
"organizations_url": "https://api.github.com/users/hhou435/orgs",
"repos_url": "https://api.github.com/users/hhou435/repos",
"events_url": "https://api.github.com/users/hhou435/events{/privacy}",
"received_events_url": "https://api.github.com/users/hhou435/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:3.4.0
- Platform:windows
- Python version:3.7.0
- PyTorch version (GPU?):1.7.0
- Tensorflow version (GPU?):2.4.1
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@n1t0, @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Hi, I want to add some special tokens to the BERT tokenizer, and these tokens are already part of the vocabulary.

So I use `add_special_tokens`:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('t5')
tokenizer.add_special_tokens({'extra_id_0':'[extra_id_0]'},)
```
But something went wrong
```
Traceback (most recent call last):
File "E:/github/Update_model/update_t5.py", line 10, in <module>
tokenizer.add_special_tokens({'extra_id_0':'[extra_id_0]'},)
File "D:\Anaconda\envs\hc\lib\site-packages\transformers\tokenization_utils_base.py", line 948, in add_special_tokens
assert key in self.SPECIAL_TOKENS_ATTRIBUTES, f"Key {key} is not a special token"
AssertionError: Key extra_id_0 is not a special token
```
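(For reference — the assertion only accepts keys that are existing special-token attributes; below is a hedged sketch of a call that passes that check using the standard `additional_special_tokens` key. It is illustrative only, and `bert-base-uncased` is just a placeholder checkpoint.)
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# "additional_special_tokens" is one of the allowed SPECIAL_TOKENS_ATTRIBUTES keys
tokenizer.add_special_tokens({"additional_special_tokens": ["[extra_id_0]"]})
```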
And I have another question.
I want to show the special tokens in the Hosted Inference API when generating text.
But I don't know how to do this.
Thanks!
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10411/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10410/comments | https://api.github.com/repos/huggingface/transformers/issues/10410/events | https://github.com/huggingface/transformers/pull/10410 | 817,200,481 | MDExOlB1bGxSZXF1ZXN0NTgwNzMyMzYx | 10,410 | [WIP] RAG end-to-end retriever training (with ray workers) | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq @patrickvonplaten \r\n\r\nI have already started doing the above changes you have mentioned. Apart from that, I changed the following elements of the codebase. \r\n\r\n1. I used a dedicated RAY worker to compute embeddings for the dataset with an updated ctx encoder in this version. However, the process of add_faiss_index gets very slow when running inside a ray worker (I think it is due to the need for multiprocessing threads). I tried to increase the number of CPU cores, but it is still very slow. The computing of embeddings is an embarrassingly parallel task, where we can share the dataset between GPUs and compute them very fast. Nevertheless, it is hard to work with RAY when it comes to multiple GPUs. So I utilize the DDP process to compute embeddings using N number of dedicated GPUs that only do the embeddings calculation task.\r\n\r\n2. Then I did a minor thing. Pytorch lightning has removed the DDP accelerators in their latest installation. Nevertheless, we can easily use the **on_sanity_check_start** callback to initialize the index when using RAY. I feel it is a lot cleaner. \r\n\r\n__________________________________________________________________________________________________\r\n\r\nAs per my experiments, at the moment end-to-end training process is stable. I would love to double-check the following parts with your help. \r\n\r\n\r\n1. Re-loading of the updated index for the workers.\r\n2. Re-initialization of the retrieval index.\r\n\r\nApart from them, I see **add_faiss_index** can get hours when the dataset consists of more than a million passages. My custom dataset has 8 million datasets. Is it normal, or should we able to improve it? If we can improve its speed, this whole process can be very engineering friendly. \r\n\r\nPlease let me know your ideas. I will quickly do the updated PR. \r\n\r\n\r\n",
"For the records, we are discussing the indexing speed difference here: https://github.com/huggingface/datasets/issues/2046",
"@lhoestq \r\n\r\nI and @elliott-wen updated the codebase. Now the embedding update happens with a parallel process and we use the stale gradients to update the entire model (pretty-much similar to REALM). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@shamanez do you need any help to get the test working?",
"Hi, @patrickvonplaten I did change the code and got a stable end-to-end trainable RAG. I also did the changes you have mentioned. But I had to update the code with new pytorch lightning, especially they do not use plugging now. So is that ok to upload the code with a new end-to-end fine-tune script and lightning base?\r\n\r\n@patrickvonplaten @lhoestq \r\n\r\nI also added all the details to a blog and I am happy to share it with you two. It includes all the changes I did in the RAG. (I also included your names since you guys helped me a lot :) )",
"Hi, that's good news !\r\n\r\n> is that ok to upload the code with a new end-to-end fine-tune script and lightning base?\r\n\r\nI think it could be a good idea to make the code compatible with the latest pytorch-lightning yes :)\r\nEspecially since many things we used in lightning_base don't work anymore, and that we can now hope pytorch-lightning to not do such radical changes again.\r\npinging @patrickvonplaten to confirm it's ok",
"@lhoestq Thanks. \r\n\r\nBTW I read this new paper named [Retrieval Augmentation Reduces Hallucination in Conversation](https://arxiv.org/abs/2104.07567), which kinds of highlights the importance of RAG-like models in LM modeling. So I do believe end-to-end fine-tuning can allow users to experiment with different components of the RAG architecture.\r\n\r\n\r\nSince you guys helped me a lot in this process, is it okay to include your names in the blog? Here's a link to the unpublished draft blog post.\r\n\r\n\r\n\r\nhttps://medium.com/@shamanesiriwardhana/end-to-end-rag-fine-tuning-with-huggingface-pytorch-lightning-and-ray-4b4385322552\r\n\r\n\r\n\r\nPlease let me know your thoughts.\r\n\r\n",
"Good job with this blog post draft ! Sure it's fine to mention us, thanks",
"Thanks.\n\nOn Sat, May 8, 2021, 04:42 Quentin Lhoest ***@***.***> wrote:\n\n> Good job with this blog post draft ! Sure it's fine to mention us, thanks\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/10410#issuecomment-834610009>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGVY24QYGUQ6SR5T473TMQJ67ANCNFSM4YID3MKQ>\n> .\n>\n",
"> Hi, that's good news !\r\n> \r\n> > is that ok to upload the code with a new end-to-end fine-tune script and lightning base?\r\n> \r\n> I think it could be a good idea to make the code compatible with the latest pytorch-lightning yes :)\r\n> Especially since many things we used in lightning_base don't work anymore, and that we can now hope pytorch-lightning to not do such radical changes again.\r\n> pinging @patrickvonplaten to confirm it's ok\r\n\r\nSo is that ok we create a folder in researc_project naming, RAG-end-to-end-Retriever training\r\n",
"closing this with a new pull request. \r\n\r\nhttps://github.com/huggingface/transformers/pull/11655",
"@lhoestq @patrickvonplaten could you please let me know if there is anything to change in the recent pull request. ",
"Hey @shamanez,\r\n\r\nSorry for being so inactive here! I reviewed your newly opened PR :-)",
"Hey, it is totally fine. Thanks million times :)."
] | 1,614 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
As mentioned in this [issue](https://github.com/huggingface/transformers/issues/9646), this PR adds the ability to fine-tune the Retriever in the original RAG implementation.
This PR first updates the ctx_encoder and then initializes the index using RAY workers.
@lhoestq
@amogkam | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10410/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10410",
"html_url": "https://github.com/huggingface/transformers/pull/10410",
"diff_url": "https://github.com/huggingface/transformers/pull/10410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10410.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10409/comments | https://api.github.com/repos/huggingface/transformers/issues/10409/events | https://github.com/huggingface/transformers/pull/10409 | 817,161,253 | MDExOlB1bGxSZXF1ZXN0NTgwNzAwMjY0 | 10,409 | [ci, flax] non-existing models are unlikely to pass tests | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10409",
"html_url": "https://github.com/huggingface/transformers/pull/10409",
"diff_url": "https://github.com/huggingface/transformers/pull/10409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10409.patch",
"merged_at": 1614332136000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10408/comments | https://api.github.com/repos/huggingface/transformers/issues/10408/events | https://github.com/huggingface/transformers/issues/10408 | 817,109,555 | MDU6SXNzdWU4MTcxMDk1NTU= | 10,408 | Question about the `decoder_input_ids` in `LEDForConditionalGeneration` forward method | {
"login": "yww211",
"id": 32888325,
"node_id": "MDQ6VXNlcjMyODg4MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32888325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yww211",
"html_url": "https://github.com/yww211",
"followers_url": "https://api.github.com/users/yww211/followers",
"following_url": "https://api.github.com/users/yww211/following{/other_user}",
"gists_url": "https://api.github.com/users/yww211/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yww211/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yww211/subscriptions",
"organizations_url": "https://api.github.com/users/yww211/orgs",
"repos_url": "https://api.github.com/users/yww211/repos",
"events_url": "https://api.github.com/users/yww211/events{/privacy}",
"received_events_url": "https://api.github.com/users/yww211/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @yww211, \r\n\r\nactually there was a bug in the docstring, that I found thanks to you :-) The attached PR corrects this mistake.\r\n\r\nTo answer your question, you should not shift the `decoder_input_ids` explicitly by one when passing them & in fact you have to pass the `decoder_input_ids` if you want to use the forward method."
] | 1,614 | 1,614 | 1,614 | NONE | null | https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337
Hi,
I have a question about the `LEDForConditionalGeneration` forward args.
The `decoder_input_ids` argument is documented as: `decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper.`
From the forward method of `LEDForConditionalGeneration`, I can see that when `decoder_input_ids` is not passed, it is generated by [shifting the `labels` value one token to the right inside the forward method](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337).
So my question is: if I want to explicitly pass `decoder_input_ids` to the forward method, do I need to explicitly shift it one token to the right myself, as the [code](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337) does, before the forward pass?
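(For context, a rough sketch of what that shift does — illustrative, following the pattern in the linked code rather than copying it exactly:)
```python
import torch

def shift_tokens_right(labels: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    # Prepend the decoder start token and drop the last label token
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    # Positions labeled -100 (ignored in the loss) are replaced by the pad token
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```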
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10407/comments | https://api.github.com/repos/huggingface/transformers/issues/10407/events | https://github.com/huggingface/transformers/pull/10407 | 816,981,680 | MDExOlB1bGxSZXF1ZXN0NTgwNTUwODc5 | 10,407 | offline mode for firewalled envs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> The constant should be documented in the install page, in the section about caching models I think (https://huggingface.co/transformers/installation.html#caching-models).\r\n\r\nGreat idea, @sgugger. Please kindly check the doc I added is good when you get a chance.\r\n\r\nAnd also the kind of test I had to add is unorthodox too, so please see if it works for you. The original version couldn't have worked.\r\n\r\nThank you!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | This PR implements the proposal from https://github.com/huggingface/transformers/issues/10379 to enable transformers to cache everything it needs and then run in the offline mode - e.g. in a firewalled environment.
This PR:
* [x] adds `is_offline_mode()` helper function that returns `True` when env var `TRANSFORMERS_OFFLINE` is set to `1/YES/ON`
* [x] automatically sets `local_files_only=True` in all 3 `from_pretrained()` methods
* [x] handles `ntlk` download dynamically in `run_seq2seq.py`
* [x] adds offline test (thanks to @lhoestq for the idea for mocking no network in the test)
* [x] adds doc
This is to match the recently added `HF_DATASETS_OFFLINE=1` in `datasets` (https://github.com/huggingface/datasets/pull/1976). Tested that both work well together.
So now we can run with the network:
```
python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
and then with the same filesystem w/o the network or w/ a firewalled network:
```
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
and the latter succeeds since step 1 had all the data pre-fetched and cached.
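(A minimal sketch of what such a helper can look like — illustrative only, the merged implementation may differ:)
```python
import os

def is_offline_mode() -> bool:
    # Treat 1/YES/ON (case-insensitive) as "offline"
    return os.environ.get("TRANSFORMERS_OFFLINE", "").upper() in {"1", "YES", "ON"}
```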
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10407",
"html_url": "https://github.com/huggingface/transformers/pull/10407",
"diff_url": "https://github.com/huggingface/transformers/pull/10407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10407.patch",
"merged_at": 1614994069000
} |
https://api.github.com/repos/huggingface/transformers/issues/10406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10406/comments | https://api.github.com/repos/huggingface/transformers/issues/10406/events | https://github.com/huggingface/transformers/pull/10406 | 816,936,839 | MDExOlB1bGxSZXF1ZXN0NTgwNTE0MDE4 | 10,406 | Ray Tune Integration Bug Fixes | {
"login": "amogkam",
"id": 8068268,
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amogkam",
"html_url": "https://github.com/amogkam",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"repos_url": "https://api.github.com/users/amogkam/repos",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@amogkam There is an `s` missing in line 202 in `src/transformers/integrations.py` in `{kwargs['keep_checkpoint_num']}` which should be `{kwargs['keep_checkpoints_num']}`, which is causing the logger to crash instead of just a warning. Thanks for the fixes btw!\r\n"
] | 1,614 | 1,615 | 1,614 | COLLABORATOR | null | # What does this PR do?
Fixes resource allocation and checkpointing bugs with the Ray Tune `hyperparameter_search` integration.
@sgugger @richardliaw @krfricke | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10406/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10406",
"html_url": "https://github.com/huggingface/transformers/pull/10406",
"diff_url": "https://github.com/huggingface/transformers/pull/10406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10406.patch",
"merged_at": 1614384368000
} |
https://api.github.com/repos/huggingface/transformers/issues/10405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10405/comments | https://api.github.com/repos/huggingface/transformers/issues/10405/events | https://github.com/huggingface/transformers/issues/10405 | 816,852,408 | MDU6SXNzdWU4MTY4NTI0MDg= | 10,405 | Problem running T5 (configuration) with text classification | {
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, \r\n\r\neven though T5 can be used very well for text-classification it remains a text-to-text only model. So you can only load the model via\r\n\r\n```python\r\nfrom transformers import AutoModelForConditionalGeneration\r\nmodel = AutoModelForConditionalGeneration.from_pretrained(\"t5-small\")\r\n```",
"Got it, thanks!",
"@patrickvonplaten Hi does `from transformers import AutoModelForConditionalGeneration` still work? Returns me an error when i try to use it",
"Should work yes :-)",
"@patrickvonplaten I just upgraded transformers to the latest version (4.16) and when i run this:\r\n\r\n```python\r\nfrom transformers import AutoModelForConditionalGeneration\r\n```\r\n\r\nI get this error:\r\n```\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n/tmp/ipykernel_20/1334627133.py in <module>\r\n----> 1 from transformers import AutoModelForConditionalGeneration\r\n\r\nImportError: cannot import name 'AutoModelForConditionalGeneration' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)\r\n```\r\n\r\nIf this is supposed to work I can open an issue (let me know who I should tag). See [kaggle notebook example](https://www.kaggle.com/xhlulu/transformers-automodelforconditionalgeneration)"
] | 1,614 | 1,645 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: single gpu
### Who can help
Perhaps @patrickvonplaten, @patil-suraj could help?
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I'm trying to run the T5 base model. It seems that I use the correct model path (i.e., t5-base) and it finds and downloads the model, but crashes when it tries to instantiate it. The problem seems to be around the configuration class not being found. This is what I get:
```
File "../../../models/tr-4.3.2/run_puppets.py", line 279, in main
model = AutoModelForSequenceClassification.from_pretrained(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 1362, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForSequenceClassification.
Model type should be one of ConvBertConfig, LEDConfig, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, MBartConfig, BartConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, LayoutLMConfig, BertConfig, XLNetConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, FunnelConfig, DebertaConfig, GPT2Config, OpenAIGPTConfig, ReformerConfig, CTRLConfig, TransfoXLConfig, MPNetConfig, TapasConfig.
```
I dug around a bit and I may have a hunch as to why this happens. The config file is there: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/configuration_t5.py#L32
but it's not recorded here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L514
So the check here fails: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L1389
And the ValueError is raised.
I hope this is it. It looks like an easy fix :) Thanks!
PS: I'm running the same scripts/files with other models without problems. This seems to be something specific to T5.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10405/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10404/comments | https://api.github.com/repos/huggingface/transformers/issues/10404/events | https://github.com/huggingface/transformers/issues/10404 | 816,817,899 | MDU6SXNzdWU4MTY4MTc4OTk= | 10,404 | Model Hub: Search by model size | {
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Definitely a good idea",
"And since I started talking about model cards... :) I think it would be cool if you guys actually imposed some format. I think the original paper/idea had a format. Now \"model cards\" stands for \"whatever the researcher had time to fill in that day\" :) A few fields of interest: model size, training data, NLP tasks, language(s), paper, _maybe_ something about model of inspiration (e.g., TinyBERT is a modification of BERT by...). ",
"I agree, there should at least be a template in my opinion. I hate to find models on the hub which don't provide any information. Moreover, all model cards look different, there's not really a structure.",
"There is a template we link to in the second question of https://huggingface.co/docs (=> https://github.com/huggingface/model_card), though we should make it more built-in/central at some point.",
"It would also be nice if the template also included details on tokenisation, what algorithm was used (BPE, Unigram, Word Piece) and the parameters (vocab size etc).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
It would be great if the model cards for models included the model size (i.e., the number of parameters) and the model hub then allowed searching for models by size.
## Motivation
Depending on the task/problem/context, smaller or larger models are more beneficial. It's hard to keep up with all the models out there. For example, if I'm interested in distilled/compressed/smaller BERTs, I may be able to remember DistilBERT, MobileBERT but maybe forget about SqueezeBERT, TinyBERT, etc. A search by size would make all these smaller models visible.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10404/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10403/comments | https://api.github.com/repos/huggingface/transformers/issues/10403/events | https://github.com/huggingface/transformers/pull/10403 | 816,806,207 | MDExOlB1bGxSZXF1ZXN0NTgwNDA1OTQ1 | 10,403 | Sagemaker Model Parallel tensoboard writing fix | {
"login": "mansimane",
"id": 23171195,
"node_id": "MDQ6VXNlcjIzMTcxMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23171195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansimane",
"html_url": "https://github.com/mansimane",
"followers_url": "https://api.github.com/users/mansimane/followers",
"following_url": "https://api.github.com/users/mansimane/following{/other_user}",
"gists_url": "https://api.github.com/users/mansimane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansimane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansimane/subscriptions",
"organizations_url": "https://api.github.com/users/mansimane/orgs",
"repos_url": "https://api.github.com/users/mansimane/repos",
"events_url": "https://api.github.com/users/mansimane/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansimane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,616 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes # 10402
https://github.com/huggingface/transformers/issues/10402
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10403",
"html_url": "https://github.com/huggingface/transformers/pull/10403",
"diff_url": "https://github.com/huggingface/transformers/pull/10403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10403.patch",
"merged_at": 1614344695000
} |
https://api.github.com/repos/huggingface/transformers/issues/10402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10402/comments | https://api.github.com/repos/huggingface/transformers/issues/10402/events | https://github.com/huggingface/transformers/issues/10402 | 816,750,677 | MDU6SXNzdWU4MTY3NTA2Nzc= | 10,402 | SageMaker Model Parallel: cluttered tensorboard plots | {
"login": "mansimane",
"id": 23171195,
"node_id": "MDQ6VXNlcjIzMTcxMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23171195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansimane",
"html_url": "https://github.com/mansimane",
"followers_url": "https://api.github.com/users/mansimane/followers",
"following_url": "https://api.github.com/users/mansimane/following{/other_user}",
"gists_url": "https://api.github.com/users/mansimane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansimane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansimane/subscriptions",
"organizations_url": "https://api.github.com/users/mansimane/orgs",
"repos_url": "https://api.github.com/users/mansimane/repos",
"events_url": "https://api.github.com/users/mansimane/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansimane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Thank you for opening an issue and for offering a code sample!\r\n\r\nCould you open a PR with your code changes?\r\n\r\nThank you!",
"@LysandreJik Please find the PR here: https://github.com/huggingface/transformers/pull/10403/files ",
"Cool, thanks for fixing! Just merged the PR."
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: master
- Platform: SageMaker
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y
### Who can help
Models:All
Library:SageMaker Model parallel
## Information
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: MLM
## Expected behavior
Currently, the SM trainer class inherits the `is_world_process_zero` logic from the main trainer class. In the main trainer class, this is derived from `self.args.local_rank` or `dist.get_rank()`, which are not unique under SMMP. This causes multiple processes to write TensorBoard summaries, which is why there are loops in the TensorBoard graphs. `is_world_process_zero` can be implemented in `SageMakerTrainer` as below, which makes sure that only a single process writes TensorBoard summaries.
```python
def is_world_process_zero(self) -> bool:
"""
Whether or not this process is the global main process (when training in a distributed fashion on several
machines, this is only going to be :obj:`True` for one process).
"""
if self.is_model_parallel_enabled:
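            # With SMP there are several overlapping process groups (global, per-node,
            # model-parallel, data-parallel); only the process that is rank 0 in all of
            # them should be treated as the world process zero that writes summaries.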
return smp.rank() == 0 and smp.local_rank() == 0 and smp.mp_rank() == 0 and smp.dp_rank() == 0
else:
            return super().is_world_process_zero()
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10402/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10401/comments | https://api.github.com/repos/huggingface/transformers/issues/10401/events | https://github.com/huggingface/transformers/pull/10401 | 816,710,607 | MDExOlB1bGxSZXF1ZXN0NTgwMzI0Mzcy | 10,401 | Fix run_glue evaluation when model has a label correspondence | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | COLLABORATOR | null | # What does this PR do?
The `run_glue` script uses the id-to-label correspondence stored in a given model, but when using
```
AutoModelForSequenceClassification.from_pretrained(xxx, num_labels=x)
```
that correspondence is reset. This PR fixes that, along with a few other bugs in the script. To confirm that MNLI evaluation does take the correspondence from the model config into account,
```bash
python examples/text-classification/run_glue.py --model_name_or_path roberta-large-mnli --task_name mnli --max_seq_length 128 --output_dir ~/tmp/test-mnli --do_eval
```
gives 90.6%/90.1% accuracy (matched/mismatched) after this PR, vs 4.28%/4.86% accuracy on current master. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10401",
"html_url": "https://github.com/huggingface/transformers/pull/10401",
"diff_url": "https://github.com/huggingface/transformers/pull/10401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10401.patch",
"merged_at": 1614285038000
} |
https://api.github.com/repos/huggingface/transformers/issues/10400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10400/comments | https://api.github.com/repos/huggingface/transformers/issues/10400/events | https://github.com/huggingface/transformers/issues/10400 | 816,694,608 | MDU6SXNzdWU4MTY2OTQ2MDg= | 10,400 | [Deepspeed] getting multiple prints of: Avoid using `tokenizers` before the fork if possible | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@chrissyjsartt, you probably accidentally subscribed/set to \"Watching\" the transformers repository which will now send you every comment on every Issue or PR. \r\n\r\nSo urgently go to https://github.com/watching and \"Unwatch\" this or any other repositories you may have set to Watch. Then you will stop getting these notifications.",
"@LysandreJik replied elsewhere to set `TOKENIZERS_PARALLELISM=false` and to read https://github.com/huggingface/tokenizers/issues/187#issuecomment-635692450 for the explanation of why this is needed.\r\n\r\nBut this could make things slow, so trying `=true` first is a better idea - if it doesn't hang then all is good.\r\n\r\nAlso Anthony shared:\r\n> If the `tokenizer` wasn't used to encode before forking the process, it shouldn't happen. So just a new `encode_batch` somewhere before the fork happens can be enough to trigger this."
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | on master when running with DeepSpeed I started getting multiple dumps of:
```
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
```
This script:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: " --deepspeed examples/tests/deepspeed/ds_config.json
```
prints it 15 times.
There are not actually 15 forks, so it is probably triggered by threads. The problem doesn't happen with DDP or DP.
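(For reference — the warning itself points at the knob; a sketch of setting it explicitly before any tokenizer use or fork, with whether it should end up `true` or `false` here being part of the question:)
```python
import os

# Must be set before the tokenizer is used and before any process fork happens
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```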
Thank you.
@LysandreJik, @n1t0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10400/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10399/comments | https://api.github.com/repos/huggingface/transformers/issues/10399/events | https://github.com/huggingface/transformers/pull/10399 | 816,582,416 | MDExOlB1bGxSZXF1ZXN0NTgwMjE5NDI2 | 10,399 | Make Barthez tokenizer tests a bit faster | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | COLLABORATOR | null | # What does this PR do?
Currently, CI is pretty slow because of this:
```
93.44s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_add_special_tokens
77.20s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_pretokenized_inputs
77.00s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_maximum_encoding_length_single_input
76.66s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_maximum_encoding_length_pair_input
75.77s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_internal_consistency
```
This is caused by the BarthezTokenizer conversion from slow to fast being pretty slow, so this PR saves the fast tokenizer to make those tests faster. To be even more efficient, a new sentencepiece model with a mask token and a pad token should be added and used here. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10399/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10399",
"html_url": "https://github.com/huggingface/transformers/pull/10399",
"diff_url": "https://github.com/huggingface/transformers/pull/10399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10399.patch",
"merged_at": 1614271345000
} |
https://api.github.com/repos/huggingface/transformers/issues/10398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10398/comments | https://api.github.com/repos/huggingface/transformers/issues/10398/events | https://github.com/huggingface/transformers/issues/10398 | 816,552,299 | MDU6SXNzdWU4MTY1NTIyOTk= | 10,398 | Does the synonym replacement tasks need Transformer? | {
"login": "Kiode",
"id": 19408781,
"node_id": "MDQ6VXNlcjE5NDA4Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/19408781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kiode",
"html_url": "https://github.com/Kiode",
"followers_url": "https://api.github.com/users/Kiode/followers",
"following_url": "https://api.github.com/users/Kiode/following{/other_user}",
"gists_url": "https://api.github.com/users/Kiode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kiode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kiode/subscriptions",
"organizations_url": "https://api.github.com/users/Kiode/orgs",
"repos_url": "https://api.github.com/users/Kiode/repos",
"events_url": "https://api.github.com/users/Kiode/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kiode/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,614 | 1,614 | 1,614 | NONE | null | Or is a traditional language processing toolkit (like WordNet) enough? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10398/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10397/comments | https://api.github.com/repos/huggingface/transformers/issues/10397/events | https://github.com/huggingface/transformers/pull/10397 | 816,513,248 | MDExOlB1bGxSZXF1ZXN0NTgwMTYxNjU1 | 10,397 | Ignore unexpected weights from PT conversion | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | Some weights resulting from the conversion from a PyTorch model to a TensorFlow model are throwing an unnecessary warning.
To see for yourself, the following code throws a warning before the PR:
```py
from transformers import BertForPreTraining, BertConfig, TFBertForPreTraining
pt = BertForPreTraining(BertConfig())
pt.save_pretrained("here")
tf = TFBertForPreTraining.from_pretrained("here", from_pt=True)
```
Fix https://github.com/huggingface/transformers/issues/10348 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10397/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10397",
"html_url": "https://github.com/huggingface/transformers/pull/10397",
"diff_url": "https://github.com/huggingface/transformers/pull/10397.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10397.patch",
"merged_at": 1614267747000
} |
https://api.github.com/repos/huggingface/transformers/issues/10396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10396/comments | https://api.github.com/repos/huggingface/transformers/issues/10396/events | https://github.com/huggingface/transformers/issues/10396 | 816,490,214 | MDU6SXNzdWU4MTY0OTAyMTQ= | 10,396 | how to freeze specific layers of TFbert model and just train a classifier? | {
"login": "nerses0",
"id": 24301163,
"node_id": "MDQ6VXNlcjI0MzAxMTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/24301163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nerses0",
"html_url": "https://github.com/nerses0",
"followers_url": "https://api.github.com/users/nerses0/followers",
"following_url": "https://api.github.com/users/nerses0/following{/other_user}",
"gists_url": "https://api.github.com/users/nerses0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nerses0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nerses0/subscriptions",
"organizations_url": "https://api.github.com/users/nerses0/orgs",
"repos_url": "https://api.github.com/users/nerses0/repos",
"events_url": "https://api.github.com/users/nerses0/events{/privacy}",
"received_events_url": "https://api.github.com/users/nerses0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I googled it, this should be working:\r\n\r\nhttps://colab.research.google.com/drive/1EAVhQGdVvXbCu8gGq0lZ9dOnN4jJtvAj?usp=sharing",
"@NielsRogge thanks for the help! ",
"> I googled it, this should be working:\r\n> \r\n> https://colab.research.google.com/drive/1EAVhQGdVvXbCu8gGq0lZ9dOnN4jJtvAj?usp=sharing\r\n\r\n\r\nThank you very much, this is very useful to me"
] | 1,614 | 1,635 | 1,614 | NONE | null | Could someone help me freeze, say, the first 3 layers of transformers.TFDistilBertModel.from_pretrained('distilbert-base-multilingual-cased')? I've tried the recommendations for the TF model from [#400](https://github.com/huggingface/transformers/issues/400) but they seem to freeze all layers at once.
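A minimal sketch of the kind of thing I'm after (this assumes the `model.distilbert.transformer.layer` attribute layout of `TFDistilBertModel`, which may differ between versions):
```python
from transformers import TFDistilBertModel

model = TFDistilBertModel.from_pretrained("distilbert-base-multilingual-cased")

# freeze the embeddings and only the first 3 transformer blocks;
# the remaining blocks stay trainable after compile()
model.distilbert.embeddings.trainable = False
for block in model.distilbert.transformer.layer[:3]:
    block.trainable = False

# sanity check: list what is still trainable
print([w.name for w in model.trainable_weights])
```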
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10395/comments | https://api.github.com/repos/huggingface/transformers/issues/10395/events | https://github.com/huggingface/transformers/issues/10395 | 816,437,271 | MDU6SXNzdWU4MTY0MzcyNzE= | 10,395 | RobertaTokenizerFast does not add special tokens | {
"login": "marrrcin",
"id": 6958772,
"node_id": "MDQ6VXNlcjY5NTg3NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6958772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marrrcin",
"html_url": "https://github.com/marrrcin",
"followers_url": "https://api.github.com/users/marrrcin/followers",
"following_url": "https://api.github.com/users/marrrcin/following{/other_user}",
"gists_url": "https://api.github.com/users/marrrcin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marrrcin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrrcin/subscriptions",
"organizations_url": "https://api.github.com/users/marrrcin/orgs",
"repos_url": "https://api.github.com/users/marrrcin/repos",
"events_url": "https://api.github.com/users/marrrcin/events{/privacy}",
"received_events_url": "https://api.github.com/users/marrrcin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for opening an issue. Indeed, I can reproduce. The conversion to IDs happens in the `encode_batch` method in `tokenizers` directly, which doesn't return the special tokens even though it correctly receives `add_special_tokens=True`. The tokenizer object in transformers also seems to have the correct `special_tokens_map`.\r\n\r\n@n1t0, is this an issue from `tokenizers` side? If we're not correctly passing something to the Rust tokenizer when instantiating a it from files, happy to look into it.",
"Discussed the issue with @n1t0 and the issue comes from the fact that the special tokens must be added to the tokenizer via a [post-processor](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html?highlight=post#module-tokenizers.processors).\r\n\r\nIf it isn't done, then tokenizers cannot have their special tokens. The slow tokenizers having them anyway is linked to their initialization and not to the tokenizer you generated using the `tokenizers` library.\r\n\r\nHere is a gist made from your colab showing how the post-processor should be used: https://gist.github.com/LysandreJik/04c7cfe3d2656ae1c4c388ce9cdd3ea4",
"Thanks @LysandreJik for the reply!\r\nSo it's more like a misconfiguration of the training pipeline on my side, not a bug per se?",
"Yes, I believe that is so. Tokenizers created with `tokenizers` need to have their post-processors/pre-tokenizers and other components defined to work correctly, otherwise it yields unexpected results as we have just seen!",
"Closing, but still seems odd that the behaviour for exact same files is different between those tokenizers..."
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | I'm not sure whether this should be a part of tokenizers or transformers, because it uses both. Classes that don't work are from `transformers` so I'm posting it here.
## Environment info
- `transformers` version: 4.3.3
- Platform: Colab
- PyTorch version (GPU?): n/a
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
- tokenizers: @n1t0, @LysandreJik
## Information
### Reproduction code
https://colab.research.google.com/drive/1iYLBLzXRkQpdPyVlIdi_qNCzfbD1uwGs?usp=sharing
When loading a tokenizer trained with `tokenizers` into transformers, e.g.
```python
tfast = RobertaTokenizerFast.from_pretrained("./workdir/tokenizer", model_max_length=10)
```
it does not add special tokens
```python
tfast("asd", add_special_tokens=True)
```
```
{'input_ids': [400, 72], 'attention_mask': [1, 1]}
```
"Slow" version behaves correctly:
```python
tslow = RobertaTokenizer.from_pretrained("./workdir/tokenizer", model_max_length=10)
tslow("asd", add_special_tokens=True)
```
```
{'input_ids': [0, 400, 72, 2], 'attention_mask': [1, 1, 1, 1]}
```
## Expected behavior
Both tokenizers produce the same output.
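For reference, a rough sketch of the post-processor step that turned out to be missing from my training pipeline (see the gist linked in the comments). The token strings/IDs below are the standard RoBERTa ones and may need adjusting for a custom vocab:
```python
from transformers import RobertaTokenizerFast
from tokenizers.processors import RobertaProcessing

tfast = RobertaTokenizerFast.from_pretrained("./workdir/tokenizer", model_max_length=10)

# attach the template that wraps every encoding in <s> ... </s>;
# without it the fast tokenizer returns only the plain token ids
tfast.backend_tokenizer.post_processor = RobertaProcessing(
    sep=("</s>", tfast.convert_tokens_to_ids("</s>")),
    cls=("<s>", tfast.convert_tokens_to_ids("<s>")),
)

tfast("asd", add_special_tokens=True)
# expected: {'input_ids': [0, 400, 72, 2], 'attention_mask': [1, 1, 1, 1]}
```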
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10395/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10394/comments | https://api.github.com/repos/huggingface/transformers/issues/10394/events | https://github.com/huggingface/transformers/issues/10394 | 816,430,079 | MDU6SXNzdWU4MTY0MzAwNzk= | 10,394 | DeepSpeedEngine object has no attribute 'no_sync' | {
"login": "ayubSubhaniya",
"id": 20911334,
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayubSubhaniya",
"html_url": "https://github.com/ayubSubhaniya",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for your report. This issue has already been fixed in `transformers` master."
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.0
- Platform: Linux-4.14.209-160.339.amzn2.x86_64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
Library:
- deepspeed: @stas00
## Information
DeepSpeed with a single node and multiple GPUs is breaking:
```
Traceback (most recent call last):
File "training/run_training.py", line 273, in <module>
raise e
File "training/run_training.py", line 270, in <module>
remaining_args=remaining_args)
File "training/run_training.py", line 186, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 937, in train
with model.no_sync():
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'DeepSpeedEngine' object has no attribute 'no_sync'
```
```
deepspeed \
--num_gpus ${GPUS_ALLOWED} \
training/run_training.py \
--deepspeed ds_config.json \
--output_dir ${OUTPUT_BASE_PATH} \
--model_name_or_path ${MODEL_NAME_OR_PATH} \
--per_device_train_batch_size ${TRAIN_BATCH_SIZE} \
--per_device_eval_batch_size ${TEST_BATCH_SIZE} \
--gradient_accumulation_steps ${GRAD_STEP} \
--evaluation_strategy steps \
--eval_steps ${EVAL_STEP} \
--num_train_epochs ${EPOCH} \
--save_steps ${SAVE_STEP} \
--logging_steps ${LOG_STEP} \
--dataloader_num_workers ${DATALOADER_NUM_WORKERS} \
--load_best_model_at_end true \
--do_train true \
--do_eval true \
--fp16 true \
--dataloader_drop_last true \
--overwrite_output_dir true \
--use_lazy true \
--logging_first_step true || { echo 'training failed' ; exit 0; }
echo 'training successful'
```
```
ds_config.json
{
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"reduce_scatter": true,
"overlap_comm": true,
"contiguous_gradients": true,
"cpu_offload": true,
"allgather_bucket_size": 2e8,
"reduce_bucket_size": 2e8,
}
}
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10394/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10393/comments | https://api.github.com/repos/huggingface/transformers/issues/10393/events | https://github.com/huggingface/transformers/issues/10393 | 816,340,853 | MDU6SXNzdWU4MTYzNDA4NTM= | 10,393 | NER Pipeline not working | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! The fix for this was merged a few days ago: https://github.com/huggingface/transformers/pull/10184\r\n\r\nI recommend you install from source while no new version is available for Transformers (a new version should be out in ~2 weeks). Sorry for the inconvenience.",
"Ok thank you for the fast response. I always open the issues before checking master branch :sweat: ",
"No worries, better to have too much issues reported than not enough!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null |
I just followed the token classification notebook and created a pipeline from the model I trained there. Here you can see the full notebook: https://colab.research.google.com/drive/1OzfFTgZwjxdIikbQ8lVJbA2SRF3IeJ5F?usp=sharing
The only thing that I changed (apart from creating the pipeline and calling it) is the number of training steps, so the training is faster.
In the last cell you can see the error:
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
### Who can help
- pipelines: @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10393/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10392/comments | https://api.github.com/repos/huggingface/transformers/issues/10392/events | https://github.com/huggingface/transformers/pull/10392 | 816,320,566 | MDExOlB1bGxSZXF1ZXN0NTgwMDAxMzE3 | 10,392 | Remove unused variable in example for Q&A | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for the fix!",
"What is this all about why am I attached can someone help me\n\n\nOn Thu, Feb 25, 2021, 8:19 AM Lysandre Debut <[email protected]>\nwrote:\n\n> Merged #10392 <https://github.com/huggingface/transformers/pull/10392>\n> into master.\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/10392#event-4376807276>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS5YU5XRPNEC22Y7RAP4W6TTAZL7LANCNFSM4YGIRPLA>\n> .\n>\n"
] | 1,614 | 1,614 | 1,614 | MEMBER | null | This PR removes the unused `text_tokens = tokenizer.convert_ids_to_tokens(input_ids)` from the PyTorch and TensorFlow examples of Question Answering: https://huggingface.co/transformers/usage.html#extractive-question-answering | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10392/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10392",
"html_url": "https://github.com/huggingface/transformers/pull/10392",
"diff_url": "https://github.com/huggingface/transformers/pull/10392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10392.patch",
"merged_at": 1614262727000
} |
https://api.github.com/repos/huggingface/transformers/issues/10391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10391/comments | https://api.github.com/repos/huggingface/transformers/issues/10391/events | https://github.com/huggingface/transformers/issues/10391 | 816,281,773 | MDU6SXNzdWU4MTYyODE3NzM= | 10,391 | some bugs about mbart50 for spanish | {
"login": "songwang41",
"id": 6013961,
"node_id": "MDQ6VXNlcjYwMTM5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6013961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songwang41",
"html_url": "https://github.com/songwang41",
"followers_url": "https://api.github.com/users/songwang41/followers",
"following_url": "https://api.github.com/users/songwang41/following{/other_user}",
"gists_url": "https://api.github.com/users/songwang41/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songwang41/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songwang41/subscriptions",
"organizations_url": "https://api.github.com/users/songwang41/orgs",
"repos_url": "https://api.github.com/users/songwang41/repos",
"events_url": "https://api.github.com/users/songwang41/events{/privacy}",
"received_events_url": "https://api.github.com/users/songwang41/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | ```
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
article_en = "Hello World"
encoded_en = tokenizer(article_en, return_tensors="pt", padding='longest', truncation=True, max_length=1024)
generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"])
#tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
text_es = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
text_es
['El Presidente (habla en inglés): Doy las gracias al representante de la República Islámica del Irán por su declaración.']
Looks like this model definitely has some bugs for Spanish, since this is an easy translation task. The correct answer is "Hola Mundo". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10391/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10390/comments | https://api.github.com/repos/huggingface/transformers/issues/10390/events | https://github.com/huggingface/transformers/issues/10390 | 816,256,080 | MDU6SXNzdWU4MTYyNTYwODA= | 10,390 | Tokenizer not working | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"Hi! We do not maintain the conda-forge versions of transformers and tokenizers. We maintain the versions that are on the `huggingface` channel.\r\n\r\nI just tried with the `huggingface` channel and I get no such errors:\r\n\r\n```\r\nconda create --name=env1 python=3.8 jupyter transformers tokenizers -y -c huggingface && conda activate env1\r\n```\r\nSee the result:\r\n\r\n```\r\n~ (🌟) 🤗 python (env1) 10:00:33 ~\r\nPython 3.8.5 (default, Sep 4 2020, 07:30:14)\r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import AutoTokenizer\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\", do_lower_case=False, strip_accents=False)\r\nDownloading: 100%|█████████████████████████████████████████████████| 213k/213k [00:00<00:00, 2.71MB/s]\r\nDownloading: 100%|█████████████████████████████████████████████████| 436k/436k [00:00<00:00, 4.67MB/s]\r\nIgnored unknown kwargs option do_lower_case\r\n>>>\r\n```",
"I got a similar problem with [another BERT model](https://huggingface.co/ltgoslo/norbert)\r\n\r\nEverything is OK if the `tokenizer_config.json` contains only this:\r\n```\r\n{\r\n \"do_lower_case\": false\r\n}\r\n```\r\n\r\nBut as soon as another line is added:\r\n```\r\n{\r\n \"do_lower_case\": false,\r\n \"do_basic_tokenize\": false\r\n}\r\n```\r\nthe `AutoTokenizer.from_pretrained(\"ltgoslo/norbert\")` ends in \r\n`unexpected keyword argument: do_lower_case`,\r\n which is weird, since the argument obviously is valid, if given alone.\r\n\r\nI see the same problem even on the HuggingFace Model Hub itself:\r\n`Can't load tokenizer using from_pretrained, please update its configuration: PyBertNormalizer.__new__() got an unexpected keyword argument: do_lower_case`\r\n\r\nWhat is wrong with the `AutoTokenizer` + `do_basic_tokenize` combination? Locally, everything is fine if I use\r\n`tokenizer = BertTokenizer.from_pretrained(\"ltgoslo/norbert\")`\r\nor\r\n`tokenizer = AutoTokenizer.from_pretrained(\"ltgoslo/norbert\", use_fast=False)`",
"Hi @akutuzov, the `do_basic_tokenize` is a python tokenizer only attribute, what behavior do you want from it?\r\n\r\nYou get the error because the `AutoTokenizer` tries to load a fast tokenizer by default.",
"Thanks @LysandreJik , this is my impression as well. But two questions then:\r\n\r\n1. If the problem is with the `do_basic_tokenize`, why the warning says `unexpected keyword argument: do_lower_case`?\r\n2. Is it possible to tell the `AutoTokenizer` **not** to load the fast tokenizer by default for a particular model? Anything I can put in the `config.json` or `tokenizer_config.json`? Since it seems that fast tokenizers sometimes lack the functionality which is there in the python tokenizers, it would be great to have some way to enforce using the python ones. \r\n\r\nIn our case, we need `do_basic_tokenize=False`, since we would like to avoid punctuation splitting.",
"I have just encountered the same problem. It appears only with Version 4.3.x. For now you could switch back to 4.2.x like me. In version 4.2.2 there is just this output: \r\n`Ignored unknown kwargs option do_lower_case`\r\n\r\nEverythink works as expected in 4.2.2!",
"@NebelAI I am using 4.2.2. It does not work as expected: the `do_basic_tokenize` parameter is silently ignored by the `AutoTokenizer`, which instead produces a strange warning about `do_lower_case`.\r\nI see it as problematic behavior.\r\n",
"@akutuzov You are right. My problem goes in the same direction but is not identical to yours, sorry.\r\n\r\nI was capable of using `AutoTokenizer.from_pretrained(file, use_fast=True)` with `tokenizer.json` as file input for quite some time. After upgrading to 4.3.3 I was facing this weird exception you mentioned at the beginning. So my attempt only works if you are using tokenizer.json which has been created by tokenizers lib.\r\n\r\nBu still ... this error needs to be fixed.",
"Re-opening this as the issue isn't solved.",
"related to https://github.com/huggingface/transformers/issues/10121",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I believe this issue has been fixed by https://github.com/huggingface/transformers/pull/10686",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,621 | 1,621 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Ubuntu 16.04.6 LTS
- Python version: 3.8.8
### Who can help
- tokenizers: @n1t0, @LysandreJik
## Information
To reproduce:
```
conda create --name=env1 python=3.8 jupyter transformers tokenizers -y -c conda-forge
conda activate env1
```
```
~$ python
Python 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", do_lower_case=False, strip_accents=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 395, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
return cls._from_pretrained(
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/models/bert/tokenization_bert_fast.py", line 199, in __init__
self.backend_tokenizer.normalizer = pre_tok_class(**pre_tok_state)
TypeError: PyBertNormalizer.__new__() got an unexpected keyword argument: do_lower_case
```
The weirdest thing is that running that exact command in a jupyter notebook does not raise any error. Also `AutoTokenizer.from_pretrained("bert-base-cased", do_lower_case=False)` works, so it seems to be something related to strip accents.
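A workaround that appears to sidestep the crash (based on the discussion above) is to force the Python (slow) tokenizer, which accepts these kwargs directly. A sketch:
```python
from transformers import AutoTokenizer

# the slow BertTokenizer takes do_lower_case / strip_accents itself,
# so skipping the fast (Rust) tokenizer avoids the PyBertNormalizer error
tokenizer = AutoTokenizer.from_pretrained(
    "bert-base-cased",
    use_fast=False,
    do_lower_case=False,
    strip_accents=False,
)
```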
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10390/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10389/comments | https://api.github.com/repos/huggingface/transformers/issues/10389/events | https://github.com/huggingface/transformers/pull/10389 | 815,960,419 | MDExOlB1bGxSZXF1ZXN0NTc5NzAyMzgz | 10,389 | GA: only run model templates once - from fork | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10389",
"html_url": "https://github.com/huggingface/transformers/pull/10389",
"diff_url": "https://github.com/huggingface/transformers/pull/10389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10389.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10388/comments | https://api.github.com/repos/huggingface/transformers/issues/10388/events | https://github.com/huggingface/transformers/pull/10388 | 815,958,385 | MDExOlB1bGxSZXF1ZXN0NTc5NzAwNjY4 | 10,388 | GA: only run model templates once | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10388/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10388",
"html_url": "https://github.com/huggingface/transformers/pull/10388",
"diff_url": "https://github.com/huggingface/transformers/pull/10388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10388.patch",
"merged_at": 1614214081000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10387/comments | https://api.github.com/repos/huggingface/transformers/issues/10387/events | https://github.com/huggingface/transformers/issues/10387 | 815,913,391 | MDU6SXNzdWU4MTU5MTMzOTE= | 10,387 | loss.backward() TypeError seed issue for pretrained reformer model | {
"login": "cnedwards",
"id": 35178531,
"node_id": "MDQ6VXNlcjM1MTc4NTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/35178531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cnedwards",
"html_url": "https://github.com/cnedwards",
"followers_url": "https://api.github.com/users/cnedwards/followers",
"following_url": "https://api.github.com/users/cnedwards/following{/other_user}",
"gists_url": "https://api.github.com/users/cnedwards/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cnedwards/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cnedwards/subscriptions",
"organizations_url": "https://api.github.com/users/cnedwards/orgs",
"repos_url": "https://api.github.com/users/cnedwards/repos",
"events_url": "https://api.github.com/users/cnedwards/events{/privacy}",
"received_events_url": "https://api.github.com/users/cnedwards/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"duplicate of https://github.com/huggingface/transformers/issues/10370 more or less. Just need to do the same fixes as in the answer of #10370 "
] | 1,614 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Google Colab
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.0+cu101
- Using GPU in script?: no, but error also occurs on GPU
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using: ReformerForSequenceClassification
The problem arises when using: Pretrained model
The tasks I am working on is:
* [x] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
```
import torch
from transformers import ReformerForSequenceClassification, ReformerTokenizerFast
test = ReformerForSequenceClassification.from_pretrained('google/reformer-crime-and-punishment')
tokenizer = ReformerTokenizerFast.from_pretrained('google/reformer-crime-and-punishment')
input = tokenizer("this is a test", return_tensors='pt')
out = test(**input, labels = torch.zeros((1,1), dtype=torch.long))
out.loss.backward()
```
Error message:
```
TypeError Traceback (most recent call last)
<ipython-input-155-db0c2d6dca2a> in <module>()
6 out = test(**input, labels = torch.zeros((1,1), dtype=torch.long))
7
----> 8 out.loss.backward()
5 frames
/usr/local/lib/python3.7/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)
222
223 def register_hook(self, hook):
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag
133
134
/usr/local/lib/python3.7/dist-packages/torch/autograd/function.py in apply(self, *args)
87 def apply(self, *args):
88 # _forward_cls is defined by derived class
---> 89 return self._forward_cls.backward(self, *args) # type: ignore
90
91
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward(***failed resolving arguments***)
1673 head_mask=head_mask[len(layers) - idx - 1],
1674 attention_mask=attention_mask,
-> 1675 buckets=buckets,
1676 )
1677
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward_pass(self, next_attn_output, hidden_states, grad_attn_output, grad_hidden_states, attention_mask, head_mask, buckets)
1527
1528 # set seed to have correct dropout
-> 1529 torch.manual_seed(self.feed_forward_seed)
1530 # g(Y_1)
1531 res_hidden_states = self.feed_forward(next_attn_output)
/usr/local/lib/python3.7/dist-packages/torch/random.py in manual_seed(seed)
30 `0xffff_ffff_ffff_ffff + seed`.
31 """
---> 32 seed = int(seed)
33 import torch.cuda
34
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
## Expected behavior
No error.
## Workaround
Calling the seed init functions fixes the issue:
```
import torch
from transformers import ReformerForSequenceClassification, ReformerTokenizerFast
test = ReformerForSequenceClassification.from_pretrained('google/reformer-crime-and-punishment')
tokenizer = ReformerTokenizerFast.from_pretrained('google/reformer-crime-and-punishment')
for l in [m for m in test.modules()][0].reformer.encoder.layers:
l._init_feed_forward_seed()
l._init_attention_seed()
input = tokenizer("this is a test", return_tensors='pt')
out = test(**input, labels = torch.zeros((1,1), dtype=torch.long))
out.loss.backward()
```
Also note that this doesn't occur when I use a custom config.
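(Possibly related, per the duplicate linked in the comments: the layer seeds appear to be sampled only while the model is in training mode, and `from_pretrained` returns the model in eval mode, which would also explain why a freshly constructed model with a custom config works. If that is right, an even simpler workaround is:)
```python
import torch
from transformers import ReformerForSequenceClassification, ReformerTokenizerFast

test = ReformerForSequenceClassification.from_pretrained('google/reformer-crime-and-punishment')
tokenizer = ReformerTokenizerFast.from_pretrained('google/reformer-crime-and-punishment')

test.train()  # assumption: dropout seeds are only stored during a training-mode forward pass

input = tokenizer("this is a test", return_tensors='pt')
out = test(**input, labels=torch.zeros((1, 1), dtype=torch.long))
out.loss.backward()
```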
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10386/comments | https://api.github.com/repos/huggingface/transformers/issues/10386/events | https://github.com/huggingface/transformers/issues/10386 | 815,889,277 | MDU6SXNzdWU4MTU4ODkyNzc= | 10,386 | MNLI evaluation on pretrained models | {
"login": "AliHadizadeh",
"id": 43891002,
"node_id": "MDQ6VXNlcjQzODkxMDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/43891002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AliHadizadeh",
"html_url": "https://github.com/AliHadizadeh",
"followers_url": "https://api.github.com/users/AliHadizadeh/followers",
"following_url": "https://api.github.com/users/AliHadizadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/AliHadizadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AliHadizadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AliHadizadeh/subscriptions",
"organizations_url": "https://api.github.com/users/AliHadizadeh/orgs",
"repos_url": "https://api.github.com/users/AliHadizadeh/repos",
"events_url": "https://api.github.com/users/AliHadizadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/AliHadizadeh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! This may be because of labels being switched around for the MNLI task. See this thread https://github.com/huggingface/transformers/pull/10203 for more context.",
"Hello, \r\nMany thanks for your response. Yes, that seems to be the source of my issue, and now I can get the accuracy. \r\nThanks!\r\n",
"I think there is also a specific problem in `huggingface/distilbert-base-uncased-finetuned-mnli`: its labels seem wrongly coded. Using them specifically and evaluating gives me 34% accuracy.",
"Yes. But other models seem to work with the modification that I made https://github.com/huggingface/transformers/pull/10203#discussion_r582971857",
"@sgugger Can someone fix this, or remove the model from the model hub? This is a serious gotcha and cost me a couple weeks of confusion!",
"The model has been fixed a year ago, in [this commit](https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/commit/0fadb1fe60cd119b3af82e2bf9cb98a59336d7bc)",
"Thank you for clarifying @sgugger! I think we had an old copy"
] | 1,614 | 1,645 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.dev / 4.3.3 / 4.3.2
- Platform: Ubuntu 18.04/ Windows 10
- Python version: 3.6.2
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> @patil-suraj , @sgugger, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): huggingface/distilbert-base-uncased-finetuned-mnli - microsoft/deberta-v2-xxlarge-mnli - roberta-large-mnli - squeezebert/squeezebert-mnli - BERT-Base-MNLI....
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I use run_glue.py on fine-tuned models to reproduce the evaluation result (only `--do_eval`). But the accuracy is about 7%. Other tasks like MRPC or STS-B are ok when I use their fine-tuned models.
## To reproduce
Steps to reproduce the behavior:
1. Run `python run_glue.py --model_name_or_path huggingface/distilbert-base-uncased-finetuned-mnli --task_name mnli --do_eval --max_seq_length 128 --output_dir temp/distill` or any other MNLI fine-tuned model. I even tried a model that I fine-tuned myself using V2.10.0 and that again results in 6%-7% accuracy.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
python run_glue.py --model_name_or_path huggingface/distilbert-base-uncased-finetuned-mnli --task_name mnli --do_eval --max_seq_length 128 --output_dir temp/distill
02/24/2021 11:38:34 - WARNING - main - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
02/24/2021 11:38:34 - INFO - main - Training/evaluation parameters TrainingArguments(output_dir=temp/distill, overwrite_output_dir=False, do_train=False, do_eval=True, do_predict=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs\Feb24_11-38-34_Ali_Workstation, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=temp/distill, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=[], ddp_find_unused_parameters=None, dataloader_pin_memory=True, n_gpu=1)
02/24/2021 11:38:36 - WARNING - datasets.builder - Reusing dataset glue (C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4)
[INFO|configuration_utils.py:449] 2021-02-24 11:38:36,777 >> loading configuration file h***://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/config.json from cache at C:\Users\Ali/.cache\huggingface\transformers\240bd330b0e7919215436efe944c4073bfcc0bac4b7ed0a3378ab3d1793beb1a.acfb235b208288614b764ad50394132d4751a48a6c81fc382dc669e4d8a80a55
[INFO|configuration_utils.py:485] 2021-02-24 11:38:36,779 >> Model config DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"bos_token_id": 0,
"dim": 768,
"dropout": 0.1,
"eos_token_ids": 0,
"finetuning_task": "mnli",
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"initializer_range": 0.02,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"output_past": true,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.3.2",
"vocab_size": 30522
}[INFO|configuration_utils.py:449] 2021-02-24 11:38:36,923 >> loading configuration file hs://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/config.json from cache at C:\Users\Ali/.cache\huggingface\transformers\240bd330b0e7919215436efe944c4073bfcc0bac4b7ed0a3378ab3d1793beb1a.acfb235b208288614b764ad50394132d4751a48a6c81fc382dc669e4d8a80a55
[INFO|configuration_utils.py:485] 2021-02-24 11:38:36,924 >> Model config DistilBertConfig {
  "activation": "gelu",
  "architectures": [
    "DistilBertForMaskedLM"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 0,
  "dim": 768,
  "dropout": 0.1,
  "eos_token_ids": 0,
  "finetuning_task": "mnli",
  "hidden_dim": 3072,
  "id2label": {
    "0": "contradiction",
    "1": "neutral",
    "2": "entailment"
  },
  "initializer_range": 0.02,
  "label2id": {
    "contradiction": "0",
    "entailment": "2",
    "neutral": "1"
  },
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "output_past": true,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "transformers_version": "4.3.2",
  "vocab_size": 30522
}
[INFO|tokenization_utils_base.py:1688] 2021-02-24 11:38:36,928 >> Model name ‘huggingface/distilbert-base-uncased-finetuned-mnli’ not found in model shortcut name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). Assuming ‘huggingface/distilbert-base-uncased-finetuned-mnli’ is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,946 >> loading file https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/vocab.txt from cache at C:\Users\Ali/.cache\huggingface\transformers\3aa49bfb368cde995cea246a5c5ca4d75f769e74b3e6d450776805f998c78366.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,947 >> loading file https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/tokenizer.json from cache at None
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,950 >> loading file https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/added_tokens.json from cache at C:\Users\Ali/.cache\huggingface\transformers\603dca04f5c89cbdcdb8021ec21c4376c7334fa6393347c80a54c942a93e50cb.5cc6e825eb228a7a5cfd27cb4d7151e97a79fb962b31aaf1813aa102e746584b
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,951 >> loading file https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/special_tokens_map.json from cache at C:\Users\Ali/.cache\huggingface\transformers\dea17c39d149e23cb97e2a2829c6170489551d2454352fd18488f17bf90c54db.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,952 >> loading file https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/tokenizer_config.json from cache at C:\Users\Ali/.cache\huggingface\transformers\ce6fb0f339483f5ca331e9631b13bc5e9c842e64e9a40aa60defb3898b99dbed.11d9edb6b1301b5af13d33c1585ff45ff84dd55cc6915c2872f856d1ee2dc409
[INFO|modeling_utils.py:1027] 2021-02-24 11:38:38,148 >> loading weights file https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/pytorch_model.bin from cache at C:\Users\Ali/.cache\huggingface\transformers\16516ebd442e5f41cd8caf2de88c478fe8a3a0948e20eaf1fdae0bf2d4998be6.73881288e7255a28dacc8ad53661dde9248c11f6e2d10f3b6db193dddee2a2bc
[INFO|modeling_utils.py:1143] 2021-02-24 11:38:39,218 >> All model checkpoint weights were used when initializing DistilBertForSequenceClassification.
[INFO|modeling_utils.py:1152] 2021-02-24 11:38:39,221 >> All the weights of DistilBertForSequenceClassification were initialized from the model checkpoint at huggingface/distilbert-base-uncased-finetuned-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use DistilBertForSequenceClassification for predictions without further training.
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-0a88ac8e6b3bd378.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-e1993e6695981db0.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-133d62ae090971a5.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-497afbfcce3a8a9d.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-7146b31017748988.arrow
02/24/2021 11:38:39 - INFO - main - Sample 335243 of the training set: {‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘hypothesis’: “Parents are busy and it’s sometimes hard to get them out.”, ‘idx’: 335243, ‘input_ids’: [101, 2017, 2113, 2043, 2037, 3008, 2272, 1998, 2009, 1005, 1055, 2524, 2000, 2131, 2068, 2041, 1998, 1037, 2843, 1997, 3008, 2031, 3182, 2000, 2175, 1998, 1998, 2477, 2066, 2008, 1998, 2009, 1005, 1055, 2397, 2012, 2305, 2061, 102, 3008, 2024, 5697, 1998, 2009, 1005, 1055, 2823, 2524, 2000, 2131, 2068, 2041, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘label’: 0, ‘premise’: “you know when their parents come and it’s hard to get them out and a lot of parents have places to go and and things like that and it’s late at night so”}.
02/24/2021 11:38:39 - INFO - main - Sample 58369 of the training set: {‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘hypothesis’: 'Where and what is art? ', ‘idx’: 58369, ‘input_ids’: [101, 2073, 2003, 2396, 1029, 102, 2073, 1998, 2054, 2003, 2396, 1029, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘label’: 1, ‘premise’: ‘Where is art?’}.
02/24/2021 11:38:39 - INFO - main - Sample 13112 of the training set: {‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘hypothesis’: ‘The list says alcohol and injury are negatives facing staff.’, ‘idx’: 13112, ‘input_ids’: [101, 6544, 1998, 4544, 1010, 2004, 2092, 2004, 4766, 19388, 1010, 2024, 2006, 1996, 2862, 1012, 102, 1996, 2862, 2758, 6544, 1998, 4544, 2024, 4997, 2015, 5307, 3095, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘label’: 1, ‘premise’: ‘Alcohol and injury, as well as brief interventions, are on the list.’}.
[INFO|trainer.py:432] 2021-02-24 11:38:41,361 >> The following columns in the training set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
[INFO|trainer.py:432] 2021-02-24 11:38:41,362 >> The following columns in the evaluation set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
02/24/2021 11:38:41 - INFO - main - *** Evaluate ***
[INFO|trainer.py:432] 2021-02-24 11:38:41,366 >> The following columns in the evaluation set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
[INFO|trainer.py:1600] 2021-02-24 11:38:41,371 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-24 11:38:41,371 >> Num examples = 9815
[INFO|trainer.py:1602] 2021-02-24 11:38:41,372 >> Batch size = 8
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1227/1227 [00:10<00:00, 122.19it/s]
02/24/2021 11:38:52 - INFO - main - ***** Eval results mnli *****
02/24/2021 11:38:52 - INFO - main - eval_accuracy = 0.07865511971472236
02/24/2021 11:38:52 - INFO - main - eval_loss = 4.536623954772949
02/24/2021 11:38:52 - INFO - main - eval_runtime = 10.733
02/24/2021 11:38:52 - INFO - main - eval_samples_per_second = 914.471
[INFO|trainer.py:432] 2021-02-24 11:38:52,120 >> The following columns in the evaluation set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
[INFO|trainer.py:1600] 2021-02-24 11:38:52,124 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-24 11:38:52,124 >> Num examples = 9832
[INFO|trainer.py:1602] 2021-02-24 11:38:52,125 >> Batch size = 8
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1229/1229 [00:10<00:00, 121.59it/s]
02/24/2021 11:39:02 - INFO - main - ***** Eval results mnli-mm *****
02/24/2021 11:39:02 - INFO - main - eval_accuracy = 0.08482506102522376
02/24/2021 11:39:02 - INFO - main - eval_loss = 4.487601280212402
02/24/2021 11:39:02 - INFO - main - eval_runtime = 10.127
02/24/2021 11:39:02 - INFO - main - eval_samples_per_second = 970.87
```
## Expected behavior
It seems all the weights are loaded in the correct places, but the accuracy is below 10%, when it should be above 80%.
```
[INFO|modeling_utils.py:1143] 2021-02-24 11:38:39,218 >> All model checkpoint weights were used when initializing DistilBertForSequenceClassification.
[INFO|modeling_utils.py:1152] 2021-02-24 11:38:39,221 >> All the weights of DistilBertForSequenceClassification were initialized from the model checkpoint at huggingface/distilbert-base-uncased-finetuned-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use DistilBertForSequenceClassification for predictions without further training.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10386/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10385/comments | https://api.github.com/repos/huggingface/transformers/issues/10385/events | https://github.com/huggingface/transformers/issues/10385 | 815,857,133 | MDU6SXNzdWU4MTU4NTcxMzM= | 10,385 | DDP performing slightly worse in terms of loss and metrics than DP | {
"login": "johncookds",
"id": 16158793,
"node_id": "MDQ6VXNlcjE2MTU4Nzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/16158793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johncookds",
"html_url": "https://github.com/johncookds",
"followers_url": "https://api.github.com/users/johncookds/followers",
"following_url": "https://api.github.com/users/johncookds/following{/other_user}",
"gists_url": "https://api.github.com/users/johncookds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johncookds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johncookds/subscriptions",
"organizations_url": "https://api.github.com/users/johncookds/orgs",
"repos_url": "https://api.github.com/users/johncookds/repos",
"events_url": "https://api.github.com/users/johncookds/events{/privacy}",
"received_events_url": "https://api.github.com/users/johncookds/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This thread https://github.com/huggingface/transformers/issues/10223 might be useful, it sheds light over the issue you mention.",
"Hi thank you for linking, it is actually worse in terms of loss, not speed. DDP is significantly faster than DP, which I believe it is supposed to be.",
"Oh, sorry about that, I read a bit too fast. Pinging @sgugger who might know what's up.",
"I have not experienced this, so no idea of what might be causing the issue. It's also a bit vague and with no reproducer, so very hard to investigate further.",
"Thank you and I know it is very vague and will work on something that is reproducible.\r\n\r\n Do you have any thoughts as to why this could occur? From my knowledge I would think these would be identical as the gradients from all GPUs are being averaged in both and there is no batch-norm which would be implemented differently.\r\n\r\nI'd appreciate any thoughts and will work to find something more reproducible.\r\n\r\nThank you for the help.",
"This might be coming from the distributed sampler setting the random seed at each epoch, so trying to set it the same way with a run in DP might help (a bad seed could explain a small difference). It might also interfere with the random masking somehow, but that's far-fetched.\r\n\r\nI also have no idea of how much difference you observed in the loss, one thing to try for debugging would also to double check the evaluation loss is the same in DP/DDP for a given model. There might a bug in the way one of them is computed.",
"The DistributedSampler looks it sets the seed in a way that would not effect the seed, its first lines of the __iter__ method are:\r\n```\r\n def __iter__(self) -> Iterator[T_co]:\r\n if self.shuffle:\r\n # deterministically shuffle based on epoch and seed\r\n g = torch.Generator()\r\n g.manual_seed(self.seed + self.epoch)\r\n indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore\r\n```\r\nIn that case the seed is set in both cases to the default huggingface seed of 42.\r\n\r\nFor our task we don't have random masking, as we are just doing CausalLM.\r\n\r\nThe difference in eval loss for one run is around the best score being .17 for DP and .175 for DDP. So slight but consistent across many runs. There are additional task specific metrics in which it performs significantly worse, which is the main reason I are trying to fix the issue. The eval_loss is both using the default GPT2WithLMHead, no change to the way it calculates loss.",
"> The DistributedSampler looks it sets the seed in a way that would not effect the seed\r\n\r\nCorrect if you use the most recent version of PyTorch, but it was not always the case.",
"Ah, great, thanks for the clarification. Yes, using pytorch 1.7.1 so that shouldn't be an issue",
"If the following suggestion might help, one way I approach such problems is logging the loss on every step with its count and noticing when numbers start to diverge - and checking if there is a significant event happening around that time. \r\n\r\nFor example with DeepSpeed's fp16 default dynamic loss scale enabled odd things were starting to happen around step 20 - until I learned that the scheduler was getting skipped until the loss scale value was small enough and only then it'd kick in - which typically happened around step 20. I'm certain this is not relevant to your situation, but I'm just giving an example. ",
"Thank you for the suggestion! I will try this",
"Hi, after back-tracking on this we found that the padding issue hadn't been fully investigated and that did turn out to be the issue. I was able to use pytorch's all_reduce to communicate the max sequence length across gpus and pad to that amount. I'll paste the code here for the prepare_inputs method in case it helps anyone else bridge the gap between DDP and DP, its written for a bs per gpu of 1, would need a tweak to the torch.cat for larger batch sizes.\r\n```\r\n def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]:\r\n world_size = dist.get_world_size()\r\n self.gpu_group = dist.new_group(list(range(world_size)))\r\n self.world_size = world_size\r\n max_len = torch.tensor(max([len(i) for i in inputs['input_ids']]))\r\n max_len = max_len.to(self.args.device)\r\n dist.all_reduce(max_len, op=dist.ReduceOp.MAX, group=self.gpu_group)\r\n max_len = max_len.cpu()\r\n for k, v in inputs.items():\r\n if isinstance(v, torch.Tensor):\r\n if k in ['input_ids', 'labels']:\r\n v = torch.cat((v, torch.ones(1, max(0, max_len - len(v[0])))*self.tokenizer.pad_token_id), axis=1).long()\r\n #print(f'{k}-{self.args.local_rank}-{self.state.global_step}: {len(v[0])}')\r\n elif k in ['attention_mask']:\r\n v = torch.cat((v, torch.ones(1, max(0, max_len - len(v[0])))), axis=1).long()\r\n inputs[k] = v.to(self.args.device)\r\n\r\n if self.args.past_index >= 0 and self._past is not None:\r\n inputs[\"mems\"] = self._past\r\n\r\n return inputs\r\n```\r\nI'd suggest that to be a first try for people trying to debug. We found that padding materially improved our results and not by just reducing the loss. One can also use all_reduce in place of the DDP model wrapper as in here: https://pytorch.org/tutorials/intermediate/dist_tuto.html. It may be helpful for someone else investigating DDP.\r\nThanks for the suggestions and apologies for misleading on the original post.",
"Interesting, thanks for the info and the code!"
] | 1,614 | 1,615 | 1,615 | NONE | null | Hi,
I am running Transformers 4.3.2 and am testing DistributedDataParallel vs DataParallel. The Huggingface Trainer has been left untouched in both setups, and slight modifications were made to run_clm.py to fit my specific use case.
This is using GPT-2 model trained from scratch.
I am consistently seeing much faster, but slightly worse results for DistributedDataParallel and was wondering if there are any possible reasons this could occur. Convergence still occurs but the evaluation loss is often slightly worse and alternative metrics we use are also slightly worse as well. The worse results are consistent through many runs and when keeping hyper-parameters the same (learning rate, # gpus, batch size, etc.)
The DistributedDataParallel code is being launched using "-m torch.distributed.launch", as has been recommended.
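For reference, an illustrative launch command; the real invocation uses our modified `run_clm.py` plus our own dataset and hyper-parameters, so the arguments below are only an example:
```
# illustrative only; our actual script and arguments differ
python -m torch.distributed.launch --nproc_per_node=4 run_clm.py \
    --model_type gpt2 --tokenizer_name gpt2 \
    --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
    --do_train --do_eval --output_dir /tmp/ddp-test
```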
I load my data into a Datasets object from the huggingface/Datasets library.
Things I have checked:
1. Padding: padding is treated the same way in both setups.
2. Gradient averaging: some information online suggested the gradients may be summed when using DataParallel vs. averaged when using DistributedDataParallel, but looking at the code I found this was not the case.
Apologies if this is too abstract a question, but I felt I would raise it as I have not seen any discussion of a possible regression when switching to DDP.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10385/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10384/comments | https://api.github.com/repos/huggingface/transformers/issues/10384/events | https://github.com/huggingface/transformers/issues/10384 | 815,836,682 | MDU6SXNzdWU4MTU4MzY2ODI= | 10,384 | Fine-tune pretrained Wav2Vec2 on a small custom dataset | {
"login": "Omarnabk",
"id": 72882909,
"node_id": "MDQ6VXNlcjcyODgyOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Omarnabk",
"html_url": "https://github.com/Omarnabk",
"followers_url": "https://api.github.com/users/Omarnabk/followers",
"following_url": "https://api.github.com/users/Omarnabk/following{/other_user}",
"gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions",
"organizations_url": "https://api.github.com/users/Omarnabk/orgs",
"repos_url": "https://api.github.com/users/Omarnabk/repos",
"events_url": "https://api.github.com/users/Omarnabk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Omarnabk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Patrick is working on it, see #10145 ",
"I am also searching for fine-tuning of this model.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | I am wondering how to **fine-tune** a pre-trained model on a small speech/audio dataset. I have 10 hours of audio with their transcript.
I would like to fine-tune a model and then use it as described here:
https://huggingface.co/facebook/wav2vec2-large-960h
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10383/comments | https://api.github.com/repos/huggingface/transformers/issues/10383/events | https://github.com/huggingface/transformers/pull/10383 | 815,836,127 | MDExOlB1bGxSZXF1ZXN0NTc5NTk5Nzk2 | 10,383 | Run GA on every push even on forks | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | This PR updates the Github Actions YAML file so that PRs opened from forks run the GA tests on every commit.
Fixes https://github.com/huggingface/transformers/issues/10065 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10383/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10383",
"html_url": "https://github.com/huggingface/transformers/pull/10383",
"diff_url": "https://github.com/huggingface/transformers/pull/10383.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10383.patch",
"merged_at": 1614212619000
} |
https://api.github.com/repos/huggingface/transformers/issues/10382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10382/comments | https://api.github.com/repos/huggingface/transformers/issues/10382/events | https://github.com/huggingface/transformers/pull/10382 | 815,832,063 | MDExOlB1bGxSZXF1ZXN0NTc5NTk2NTE4 | 10,382 | Run GA on forks (Attempt #2) | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10382/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10382",
"html_url": "https://github.com/huggingface/transformers/pull/10382",
"diff_url": "https://github.com/huggingface/transformers/pull/10382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10382.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10381/comments | https://api.github.com/repos/huggingface/transformers/issues/10381/events | https://github.com/huggingface/transformers/issues/10381 | 815,737,788 | MDU6SXNzdWU4MTU3Mzc3ODg= | 10,381 | Option to output "test predictions" text file with each checkpoint in run_seq2seq.py | {
"login": "kingpalethe",
"id": 11775831,
"node_id": "MDQ6VXNlcjExNzc1ODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11775831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingpalethe",
"html_url": "https://github.com/kingpalethe",
"followers_url": "https://api.github.com/users/kingpalethe/followers",
"following_url": "https://api.github.com/users/kingpalethe/following{/other_user}",
"gists_url": "https://api.github.com/users/kingpalethe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingpalethe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingpalethe/subscriptions",
"organizations_url": "https://api.github.com/users/kingpalethe/orgs",
"repos_url": "https://api.github.com/users/kingpalethe/repos",
"events_url": "https://api.github.com/users/kingpalethe/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingpalethe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"May be of interest to @patil-suraj @stas00 @sgugger ",
"Yes, as I replied in the forums, this functionality was dropped - not sure why it was done, as I wasn't part part of the planning discussion.\r\n\r\nI think it was not intentional, the devs were probably unaware it was used and given that the example tests were dropped too it's not surprising it was missed. I propose the dropped examples tests are restored (which will require porting to the new script) which will expose some of the functionality that was removed with it.\r\n\r\nPractically, let's identify what else might have been removed and create separate issues besides this one and may be ask the community to help restore/backport the previously working things to the new script(s)?\r\n\r\ne.g. one such important thing is the tests that were moved to legacy, so this script is no longer being tested.\r\n\r\np.s. this should be of help restoring/porting the example tests https://github.com/huggingface/transformers/issues/10036",
"@bhadreshpsavani, please let us know if you're inspired to take care of this in:\r\nhttps://github.com/huggingface/transformers/issues/10337#issuecomment-785938863\r\nThank you.",
"Sure @stas00,\r\nI can take care of this with a separate PR or if possible in the same PR,\r\nThanks",
"Correction, as I was refactoring `run_seq2seq.py` I can see now that the code wasn't removed - it's exactly the same. Someone decided to rename the resulting file instead. So the feature hasn't been removed, just renamed.\r\n\r\nI'm not attached to either, \r\n1. the original was saving it as \"test_generations.txt\"\r\n2. the new one as \"test_preds_seq2seq.txt\"\r\n\r\nI think the original name is the most intuitive one.\r\n\r\n@sgugger, do you have an opinion here?",
"@bhadreshpsavani, so please hold a moment while we are re-modelling `run_seq2seq.py` and then I will update you when the model example is ready to be synced. Thank you!",
"PR to restore the original functionality: https://github.com/huggingface/transformers/pull/10428",
"OK, the original name has been restored as it used to be, @kingpalethe \r\n\r\nAs I mentioned in https://github.com/huggingface/transformers/pull/10428 if you'd like to request a new feature to do this on each check point please don't hesitate to make such request.",
"@stas00 thanks -- apologies, you are correct. I had hallucinated this behavior. I made a new issue: https://github.com/huggingface/transformers/issues/10439",
"All is good. \r\n\r\nand now I see that my PR made that script inconsistent with other scripts, but perhaps all scripts should use the same filename for `test_generations.txt`. I can't quite see the point of it having a different name in each script."
] | 1,614 | 1,614 | 1,614 | NONE | null | Further to this discussion:
https://discuss.huggingface.co/t/how-to-output-test-generations-txt-with-run-seq2seq-py/3825
The prior incarnation of this script would output test generations at each checkpoint, which was very useful for understanding the progress of model training.
The current script...
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py
Seems to only output this text file once, at the end of the last epoch.
If there were a way to enable the previous behavior, I am guessing it would be widely useful.
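In the meantime, here is a rough, untested sketch of a possible workaround using a `TrainerCallback` that writes generations every time a checkpoint is saved. It assumes `predict_with_generate=True`, and the `trainer`, `eval_dataset` and `tokenizer` names are only placeholders for whatever the training script already has in scope:
```python
import os

from transformers import TrainerCallback


class SaveGenerationsCallback(TrainerCallback):
    # placeholder names: trainer / eval_dataset / tokenizer are assumed to exist in the script
    def __init__(self, trainer, eval_dataset, tokenizer):
        self.trainer = trainer
        self.eval_dataset = eval_dataset
        self.tokenizer = tokenizer

    def on_save(self, args, state, control, **kwargs):
        # on_save fires right after each checkpoint directory is written
        preds = self.trainer.predict(self.eval_dataset).predictions
        texts = self.tokenizer.batch_decode(preds, skip_special_tokens=True)
        out_file = os.path.join(args.output_dir, f"checkpoint-{state.global_step}", "test_generations.txt")
        with open(out_file, "w") as writer:
            writer.write("\n".join(texts))


# trainer.add_callback(SaveGenerationsCallback(trainer, eval_dataset, tokenizer))
```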
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10380/comments | https://api.github.com/repos/huggingface/transformers/issues/10380/events | https://github.com/huggingface/transformers/issues/10380 | 815,709,748 | MDU6SXNzdWU4MTU3MDk3NDg= | 10,380 | Trainer.train() gets stuck when executed on K8 pods | {
"login": "pthalasta",
"id": 68306050,
"node_id": "MDQ6VXNlcjY4MzA2MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/68306050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pthalasta",
"html_url": "https://github.com/pthalasta",
"followers_url": "https://api.github.com/users/pthalasta/followers",
"following_url": "https://api.github.com/users/pthalasta/following{/other_user}",
"gists_url": "https://api.github.com/users/pthalasta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pthalasta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pthalasta/subscriptions",
"organizations_url": "https://api.github.com/users/pthalasta/orgs",
"repos_url": "https://api.github.com/users/pthalasta/repos",
"events_url": "https://api.github.com/users/pthalasta/events{/privacy}",
"received_events_url": "https://api.github.com/users/pthalasta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there. Unless you tell us what script you are using and how you are launching it, there is nothing we can do to help.",
"@sgugger i'm trying to execute this example script\r\nhttps://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb\r\n\r\nThis notebook works fine in a docker container but not on K8s pod\r\n\r\nScript is launched in jupyter server that is hosted on kubeflow",
"@sgugger any information on how i can solve the issue?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@LysandreJik @sgugger
## Information
Model I am using: BertForSequenceClassification
The problem arises when using:
* [ ] my own modified scripts: (give details below)
When I try to start training the model in a K8s pod (in a Kubeflow environment) with an Ubuntu 18.04 image, there is no output or error shown even after 30 minutes of runtime. GPU usage doesn't change either:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:07:00.0 Off | 0 |
| N/A 28C P0 56W / 300W | 1514MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:0A:00.0 Off | 0 |
| N/A 27C P0 56W / 300W | 1134MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
## Expected behavior
The model should train and output the results. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10379/comments | https://api.github.com/repos/huggingface/transformers/issues/10379/events | https://github.com/huggingface/transformers/issues/10379 | 815,688,966 | MDU6SXNzdWU4MTU2ODg5NjY= | 10,379 | [firewalled env] OFFLINE mode | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is done."
] | 1,614 | 1,615 | 1,615 | CONTRIBUTOR | null | This is done - we now have:
* `HF_DATASETS_OFFLINE=1`
* `TRANSFORMERS_OFFLINE=1`
Documented: [here](https://huggingface.co/transformers/master/installation.html#offline-mode)
The transformers-specific issue is here:
-------------------
Similar to `datasets` https://github.com/huggingface/datasets/issues/1939 `transformers` needs to have an OFFLINE mode where it can work w/o ever making a network call to the outside world.
This issue comes from a need to be able to run `transformers` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
We assume `DATASETS_OFFLINE=1` will already deal with datasets and metrics as I proposed at https://github.com/huggingface/datasets/issues/1939, so this issue is specific to `transformers` only.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways of going about it.
## 1. Manual
Manually download the model files, transfer them to the firewalled instance, and run:
```
TRANSFORMERS_OFFLINE=1 run_seq2seq.py --model_name_or_path ./t5-small-local ...
```
`transformers` must not make any network calls and if there is a logic to do that and something is missing it should assert that this or that action requires network and therefore it can't proceed.
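For completeness, a minimal sketch of how the local copy used in the example above could be produced beforehand on a machine that does have network access (the directory name simply mirrors the example):
```python
# run on a networked machine, then transfer ./t5-small-local to the firewalled host
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

tokenizer.save_pretrained("./t5-small-local")
model.save_pretrained("./t5-small-local")
```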
## 2. Automatic
In some clouds one can prepare data storage ahead of time in a normal networked environment that doesn't have GPUs, and then switch to the firewalled GPU instance, which can still access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:
1. on the non-firewalled instance:
```
run_seq2seq.py --model_name_or_path t5-small ...
```
which should download and cache everything.
2. and then immediately after on the firewalled instance, which shares the same filesystem:
```
TRANSFORMERS_OFFLINE=1 run_seq2seq.py --model_name_or_path t5-small ...
```
and the model should already be cached by invocation number 1, any network calls should be skipped, and if the logic is missing data it should assert rather than try to fetch anything online.
## Specifics
1. We already have `local_files_only=True` for all 3 `.from_pretrained()` calls, which makes this already possible, but it requires editing the software between invocation 1 and 2 in the Automatic scenario, which is very error-prone. Thus I propose that `TRANSFORMERS_OFFLINE=1` turn these flags to True from the outside of the system (a sketch of what that amounts to is included right after this list).
2. There are other issues to check, for example in some `examples` scripts we have:
```
with FileLock(".lock") as lock:
nltk.download("punkt", quiet=True)
```
which also issues a network call and under `TRANSFORMERS_OFFLINE=1` it should be skipped and replaced with a check that the corresponding nltk data is already available.
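To illustrate point 1 above, a minimal sketch of what `TRANSFORMERS_OFFLINE=1` would effectively have to toggle for the three loading calls (this is not the actual implementation, just the behavior it should be equivalent to):
```python
# sketch only: the env var would flip local_files_only=True for all three calls
from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained("t5-small", local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained("t5-small", local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", local_files_only=True)
```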
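And for point 2, a sketch of the kind of guard the nltk call could be replaced with, assuming the punkt data was cached ahead of time:
```python
# sketch: only hit the network when we are not in offline mode
import os

import nltk
from filelock import FileLock

if os.environ.get("TRANSFORMERS_OFFLINE") == "1":
    nltk.data.find("tokenizers/punkt")  # raises LookupError if the cached data is missing
else:
    with FileLock(".lock"):
        nltk.download("punkt", quiet=True)
```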
Thanks.
@julien-c, @sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10379/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10379/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10378/comments | https://api.github.com/repos/huggingface/transformers/issues/10378/events | https://github.com/huggingface/transformers/issues/10378 | 815,578,400 | MDU6SXNzdWU4MTU1Nzg0MDA= | 10,378 | AttributeError: 'QAModel' object has no attribute 'automatic_optimization' | {
"login": "emilzilyaev",
"id": 30962491,
"node_id": "MDQ6VXNlcjMwOTYyNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30962491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emilzilyaev",
"html_url": "https://github.com/emilzilyaev",
"followers_url": "https://api.github.com/users/emilzilyaev/followers",
"following_url": "https://api.github.com/users/emilzilyaev/following{/other_user}",
"gists_url": "https://api.github.com/users/emilzilyaev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emilzilyaev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emilzilyaev/subscriptions",
"organizations_url": "https://api.github.com/users/emilzilyaev/orgs",
"repos_url": "https://api.github.com/users/emilzilyaev/repos",
"events_url": "https://api.github.com/users/emilzilyaev/events{/privacy}",
"received_events_url": "https://api.github.com/users/emilzilyaev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! This issue seems to be with PyTorch Lightning rather than with Transformers.",
"> Hello! This issue seems to be with PyTorch Lightning rather than with Transformers.\r\n\r\nWell, I've tried to pass MT5ForConditionalGeneration directly to fit() function, but got following error:\r\n\r\nModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'automatic_optimization'",
"As you'll see in your error:\r\n\r\n```\r\n/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/model_connector.py in copy_trainer_model_properties(self, model)\r\n```\r\n\r\nthis originates from a PyTorch-Lightning error. Transformers has no `automatic_optimization` parameter, our models are plain PyTorch models.\r\n\r\nThank you.",
"thank you a lot"
] | 1,614 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: google colab
- `pytorch-lightning` version (GPU?): 1.2.0
#### Models:
MT5ForConditionalGeneration ('google/mt5-base')
Library:
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Model I am using (MT5ForConditionalGeneration ('google/mt5-base')):
The problem arises when using:
pl.LightningDataModule with T5ForConditionalGeneration
* [ ] my own modified scripts: (give details below)
```
class QAModel(pl.LightningDataModule):
    def __init__(self):
        super().__init__()
        self.model = MT5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)

    def forward(self, input_ids, attention_mask, labels=None):
        output = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels
        )
        return output.loss, output.logits

    def training_step(self, batch, batch_idx):
        input_ids = batch['input_ids']
        attention_mask = batch['attention_mask']
        labels = batch['labels']
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log('train_loss', loss, prog_bar=True, logger=True)
        return loss

    def validation_step(self, batch, batch_idx):
        input_ids = batch['input_ids']
        attention_mask = batch['attention_mask']
        labels = batch['labels']
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log('val_loss', loss, prog_bar=True, logger=True)
        return loss

    def test_step(self, batch, batch_idx):
        input_ids = batch['input_ids']
        attention_mask = batch['attention_mask']
        labels = batch['labels']
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log('train_loss', loss, prog_bar=True, logger=True)
        return loss

    def configure_optimizers(self):
        print('done')
        return AdamW(self.parameters(), lr=0.0001)


model = QAModel()
trainer.fit(model, data_module)
```
Here is my Colab notebook:
https://colab.research.google.com/drive/1wRYnuQhkO8UvE2CtsJ09dGHy4R_nVTPd?usp=sharing
Thanks a lot!
**** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10378/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10377/comments | https://api.github.com/repos/huggingface/transformers/issues/10377/events | https://github.com/huggingface/transformers/issues/10377 | 815,555,962 | MDU6SXNzdWU4MTU1NTU5NjI= | 10,377 | Training LongformerForQuestionAnswering on TriviaQA | {
"login": "SapirWeissbuch",
"id": 44585792,
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SapirWeissbuch",
"html_url": "https://github.com/SapirWeissbuch",
"followers_url": "https://api.github.com/users/SapirWeissbuch/followers",
"following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}",
"gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions",
"organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs",
"repos_url": "https://api.github.com/users/SapirWeissbuch/repos",
"events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}",
"received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @ibeltagy and @patrickvonplaten who would know better.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | Hello,
Could you please give a short explanation on how to retrain `LongformerForQuestionAnswering` in order to receive these weights: `allenai/longformer-large-4096-finetuned-triviaqa` (https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa/tree/main).
Thank you,
Sapir | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10377/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10376/comments | https://api.github.com/repos/huggingface/transformers/issues/10376/events | https://github.com/huggingface/transformers/issues/10376 | 815,534,852 | MDU6SXNzdWU4MTU1MzQ4NTI= | 10,376 | UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte | {
"login": "math-sasso",
"id": 23565626,
"node_id": "MDQ6VXNlcjIzNTY1NjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/23565626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/math-sasso",
"html_url": "https://github.com/math-sasso",
"followers_url": "https://api.github.com/users/math-sasso/followers",
"following_url": "https://api.github.com/users/math-sasso/following{/other_user}",
"gists_url": "https://api.github.com/users/math-sasso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/math-sasso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/math-sasso/subscriptions",
"organizations_url": "https://api.github.com/users/math-sasso/orgs",
"repos_url": "https://api.github.com/users/math-sasso/repos",
"events_url": "https://api.github.com/users/math-sasso/events{/privacy}",
"received_events_url": "https://api.github.com/users/math-sasso/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're using `AutoModelForTokenClassification` but you're specifying a `ckpt` file. It needs a folder with a `config.json`, and a state dict. Here it's expecting a PyTorch state dict named `pytorch_model.bin` since you're using the PyTorch version.\r\n\r\nIf you've retrieved your checkpoint from the original BERT repository, I recommend you take a look at the following documentation [Converting Tensorflow Checkpoints](https://huggingface.co/transformers/converting_tensorflow_models.html).\r\n\r\nAlso, you're using a PyTorch model's `from_pretrained` model, so I point you to the documentation of that method [here](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained).",
"@LysandreJik first of all thanks for replying.\r\n\r\nIs it the proper way to retrieve a model from a checkpoint generated by me? When I pass `self.model=bert-base` on my new env it doesnt retrieve my checkpoints, so when I call `model.precit('any text ')` it gives me wrong results [maybe because it gets original weights from bert-base instead of my finne tunned weights].\r\n\r\nSo following your suggestion, if changing `self.model` for my finne tunned model is the correct way, how do I generate `config.json` and `pytorch_model.bin` from my **trained model**",
"Well you should use the `save_pretrained` method on your model. I haven't used PyTorch Lightning but if your module is named `model` and that the transformers is the `model` attribute of that module, it would be something like:\r\n\r\n```py\r\nmodel.model.save_pretrained(\"directory\")\r\n```",
"I did it and I worked. But it stills giving wrong results when I call the predict. On my model I have inserted the path to the created dir instead of bert-base.\r\n\r\n self.model = AutoModelForTokenClassification.from_pretrained(\r\n \"/content/drive/MyDrive/CityZen/Explorations/BERT/weights/binaries\",\r\n num_labels=len(self.tags_infos_dict[\"tag2idx\"]),\r\n output_attentions = self.output_attentions,\r\n output_hidden_states = self.output_hidden_states\r\n )"
] | 1,614 | 1,614 | 1,614 | NONE | null | ## Description
I have fine-tuned a transformers model ('bert-base') and saved the checkpoint. Now I want to use the saved checkpoint somewhere else. But if I keep using bert-base in my new environment, it will not pick up my checkpoint, so I want to know the proper way to make my model (a PyTorch Lightning model) load the weights from my checkpoint.
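To make the goal concrete, this is essentially the round trip I am after (the directory name and the `trained_model` variable below are only illustrative):
```py
# illustrative only: save the fine-tuned transformer, then reload it elsewhere
from transformers import AutoModelForTokenClassification

trained_model.save_pretrained("./my-finetuned-bert")  # writes config.json + pytorch_model.bin
reloaded = AutoModelForTokenClassification.from_pretrained("./my-finetuned-bert")
```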
## Relevant info on my model class
```py
class NER_Model(pl.LightningModule):
    def __init__(self, hyperparams, model_parameters, dataset_infos, extra_infos):
        super(NER_Model, self).__init__()
        self.model_name = "/checkpoins_folder/epoch=2-step=167-v1.ckpt"
        self.model = AutoModelForTokenClassification.from_pretrained(
            self.model_name,
            num_labels=7,
            output_attentions=False,
            output_hidden_states=False
        )

    def predict(self, X: str):
        self.step = "Deployment"
        self.test_pred_tags = []
        batch = self.tokenizer.encode_plus(X, return_tensors="pt")
        batch["attention_masks"] = torch.ones_like(batch["input_ids"])
        batch = dict((key, input.to(self.device)) for key, input in batch.items())
        return self.test_step(batch, None)

## Call
model = NER_Model.load_from_checkpoint(
    checkpoint_path="/checkpoins_folder/epoch=2-step=167-v1.ckpt",
    map_location={"cuda": "cpu"},
    hyperparams=hyperparams,
    model_parameters=model_parameters,
    dataset_infos=dataset_infos,
    extra_infos=extra_infos,
)
```
## Error:
```out
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-26-c4ae8a2442c3> in <module>()
6 model_parameters=model_parameters,
7 dataset_infos=dataset_infos,
----> 8 extra_infos=extra_infos,
9 )
7 frames
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs)
154 checkpoint[cls.CHECKPOINT_HYPER_PARAMS_KEY].update(kwargs)
155
--> 156 model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
157 return model
158
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/saving.py in _load_model_state(cls, checkpoint, strict, **cls_kwargs_new)
196 _cls_kwargs = {k: v for k, v in _cls_kwargs.items() if k in cls_init_args_name}
197
--> 198 model = cls(**_cls_kwargs)
199
200 # give model a chance to load something
<ipython-input-10-ed914cb098e8> in __init__(self, hyperparams, model_parameters, dataset_infos, extra_infos)
61 num_labels=len(self.tags_infos_dict["tag2idx"]),
62 output_attentions = self.output_attentions,
---> 63 output_hidden_states = self.output_hidden_states
64 )
65
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1648 if not isinstance(config, PretrainedConfig):
1649 config, kwargs = AutoConfig.from_pretrained(
-> 1650 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
1651 )
1652
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
366 {'foo': False}
367 """
--> 368 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
369
370 if "model_type" in config_dict:
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
425 )
426 # Load config dict
--> 427 config_dict = cls._dict_from_json_file(resolved_config_file)
428
429 except EnvironmentError as err:
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in _dict_from_json_file(cls, json_file)
508 def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
509 with open(json_file, "r", encoding="utf-8") as reader:
--> 510 text = reader.read()
511 return json.loads(text)
512
/usr/lib/python3.7/codecs.py in decode(self, input, final)
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
```
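For context, a minimal sketch of the pattern that avoids pointing `from_pretrained` at a Lightning `.ckpt` file: `from_pretrained` expects a hub model id or a directory containing `config.json` and `pytorch_model.bin`, while the `.ckpt` produced by Lightning is meant to be restored with `load_from_checkpoint` on the LightningModule (which can keep the original hub id in its `__init__`). The directory name below is hypothetical, not taken from the issue.

```py
from transformers import AutoModelForTokenClassification

# After training, export the fine-tuned weights once in the format that
# from_pretrained understands (a directory with config.json + pytorch_model.bin):
# trained_module.model.save_pretrained("exported_hf_model")   # hypothetical path

# Later, reload from that exported directory instead of the Lightning .ckpt file:
model = AutoModelForTokenClassification.from_pretrained("exported_hf_model")
```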
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10375/comments | https://api.github.com/repos/huggingface/transformers/issues/10375/events | https://github.com/huggingface/transformers/issues/10375 | 815,490,718 | MDU6SXNzdWU4MTU0OTA3MTg= | 10,375 | DPR decode_best_spans include spans from title | {
"login": "vinicius-cleves",
"id": 25393523,
"node_id": "MDQ6VXNlcjI1MzkzNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/25393523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinicius-cleves",
"html_url": "https://github.com/vinicius-cleves",
"followers_url": "https://api.github.com/users/vinicius-cleves/followers",
"following_url": "https://api.github.com/users/vinicius-cleves/following{/other_user}",
"gists_url": "https://api.github.com/users/vinicius-cleves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinicius-cleves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinicius-cleves/subscriptions",
"organizations_url": "https://api.github.com/users/vinicius-cleves/orgs",
"repos_url": "https://api.github.com/users/vinicius-cleves/repos",
"events_url": "https://api.github.com/users/vinicius-cleves/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinicius-cleves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes that's totally true. Good catch !\r\nCould you open a PR to fix that please ?\r\n\r\nMaybe this could have affected the performance of the DPR Reader a bit, but probably not significantly though since the logits of the tokens in the title have very low values. The model was trained to return answers from the passage, not from the title.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
### Who can help
@lhoestq, @LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
I believe there is a bug on the following line
```python
passage_offset = sequence_ids.index(self.sep_token_id, 2) + 1 # second sep id
```
It is in the file `src/transformers/models/dpr/tokenization_dpr.py`. Next, some context for this line:
```python
class CustomDPRReaderTokenizerMixin:
...
def decode_best_spans(...) -> List[DPRSpanPrediction]:
...
for doc_id in sorted_docs:
...
# assuming question & title information is at the beginning of the sequence
passage_offset = sequence_ids.index(self.sep_token_id, 2) + 1 # second sep id
...
return nbest_spans_predictions[:num_spans]
```
The comments make me think that `passage_offset` is meant to be the start of the passage, after the question and the title. I believe the intent behind `sequence_ids.index(self.sep_token_id, 2)` was to select the second position where `self.sep_token_id` appears, but this doesn't happen: it selects the first occurrence of `sep_token_id` starting the search at token number 2.
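As a quick illustration (the token ids below are made up, with `sep_token_id = 102`), `list.index(x, start)` returns the first occurrence at or after `start`, not the x-th occurrence:

```python
sep_token_id = 102
# [CLS] question ... [SEP] title ... [SEP] passage ...
sequence_ids = [101, 2040, 2001, 102, 3958, 27227, 102, 3958, 27227, 2001]

# Still finds the FIRST [SEP] (index 3) even though the search starts at 2,
# so the resulting passage_offset (4) points into the title:
print(sequence_ids.index(sep_token_id, 2) + 1)  # 4

# Searching again from just after the first [SEP] finds the second one:
title_offset = sequence_ids.index(sep_token_id) + 1
print(sequence_ids.index(sep_token_id, title_offset) + 1)  # 7
```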
I believe an easy fix would be:
```python
title_offset = sequence_ids.index(self.sep_token_id) + 1 # first sep id
passage_offset = sequence_ids.index(self.sep_token_id, title_offset) + 1 # second sep id
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10375/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10374/comments | https://api.github.com/repos/huggingface/transformers/issues/10374/events | https://github.com/huggingface/transformers/pull/10374 | 815,485,446 | MDExOlB1bGxSZXF1ZXN0NTc5MzA3NDI3 | 10,374 | Fix None in add_token_positions - issue #10210 | {
"login": "andreabac3",
"id": 36055796,
"node_id": "MDQ6VXNlcjM2MDU1Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/36055796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreabac3",
"html_url": "https://github.com/andreabac3",
"followers_url": "https://api.github.com/users/andreabac3/followers",
"following_url": "https://api.github.com/users/andreabac3/following{/other_user}",
"gists_url": "https://api.github.com/users/andreabac3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreabac3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreabac3/subscriptions",
"organizations_url": "https://api.github.com/users/andreabac3/orgs",
"repos_url": "https://api.github.com/users/andreabac3/repos",
"events_url": "https://api.github.com/users/andreabac3/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreabac3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @joeddav,\r\nYes the proposed change works!\r\n\r\nI have updated the commit.\r\n\r\nThank you for the attention,\r\nAndrea"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | Related to the issue #10210
I fixed the error in this way; can you confirm it is right?
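For readers hitting the same issue, a sketch of the general pattern for guarding `char_to_token` against `None` (this shows the idea, assuming each answer dict already carries `answer_start`/`answer_end` character offsets; it is not necessarily the exact patch in this PR):

```python
def add_token_positions(encodings, answers, tokenizer):
    start_positions, end_positions = [], []
    for i, answer in enumerate(answers):
        start = encodings.char_to_token(i, answer["answer_start"])
        end = encodings.char_to_token(i, answer["answer_end"] - 1)
        # char_to_token returns None when the answer span was truncated away;
        # fall back to model_max_length so those examples are effectively ignored.
        start_positions.append(start if start is not None else tokenizer.model_max_length)
        end_positions.append(end if end is not None else tokenizer.model_max_length)
    encodings.update({"start_positions": start_positions, "end_positions": end_positions})
```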
@joeddav @sgugger
Kind regards,
Andrea | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10374",
"html_url": "https://github.com/huggingface/transformers/pull/10374",
"diff_url": "https://github.com/huggingface/transformers/pull/10374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10374.patch",
"merged_at": 1614269913000
} |
https://api.github.com/repos/huggingface/transformers/issues/10373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10373/comments | https://api.github.com/repos/huggingface/transformers/issues/10373/events | https://github.com/huggingface/transformers/issues/10373 | 815,482,878 | MDU6SXNzdWU4MTU0ODI4Nzg= | 10,373 | [Documentation issue] Sequence to sequence models | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Option 1) seems the way to go for me as well here!",
"Said the same in private but realized I didn't put it here. So option 1 it is!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Don't close this one robot!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,621 | 1,621 | MEMBER | null | Most models have their example docstrings appended using the `add_code_sample_docstrings` method. This method checks the name of the architecture, and according to its suffix, adds the corresponding sequence.
However, this isn't perfect when it comes to the difference between sequence-to-sequence and encoder-only/decoder-only models, which can have the same suffix: `BertModel` and `MarianModel` share the same suffix, but should not have the same docstrings, since the latter needs `decoder_input_ids` in order to work.
I propose to return a different code sample according to the architecture type (seq-to-seq vs enc/dec). The signature of the method that updates the code sample is the following:
https://github.com/huggingface/transformers/blob/2d458b2c7d6fb1dd5b2361938d1b5bd4c2106479/src/transformers/file_utils.py#L884-L886
There are in my opinion two ways to go about it:
- We can infer the type from the `output_type`. We've mentioned some while ago that having inheritance with model outputs and being able to identify categories of outputs from their classes would make sense for a potential pipeline-v2 implementation, this could be implemented here to detect if a model is seq-2-seq or not.
- We could add another argument to the signature to mention if it's seq-2-seq or not. However, this isn't very future-proof and can result in increased complexity if we ever need an additional separation across architectures.
@patrickvonplaten @sgugger @patil-suraj looking forward to your feedback. I tend to prefer the first option even if it requires a bit more work.
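To make the first option concrete, a minimal sketch of such a check (a heuristic based on the fields of the output dataclass, not a final design):

```python
import dataclasses

from transformers.modeling_outputs import BaseModelOutput, Seq2SeqModelOutput


def is_seq2seq_output(output_type) -> bool:
    # Seq-to-seq output classes expose encoder/decoder-specific fields
    # that encoder-only outputs do not have.
    return "encoder_last_hidden_state" in {f.name for f in dataclasses.fields(output_type)}


print(is_seq2seq_output(Seq2SeqModelOutput))  # True
print(is_seq2seq_output(BaseModelOutput))     # False
```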
Related issue: https://github.com/huggingface/transformers/issues/10368 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10373/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10372/comments | https://api.github.com/repos/huggingface/transformers/issues/10372/events | https://github.com/huggingface/transformers/issues/10372 | 815,357,615 | MDU6SXNzdWU4MTUzNTc2MTU= | 10,372 | deprecated reference `tokenizer.max_len` in glue.py (PR #10220) | {
"login": "poedator",
"id": 24738311,
"node_id": "MDQ6VXNlcjI0NzM4MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poedator",
"html_url": "https://github.com/poedator",
"followers_url": "https://api.github.com/users/poedator/followers",
"following_url": "https://api.github.com/users/poedator/following{/other_user}",
"gists_url": "https://api.github.com/users/poedator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poedator/subscriptions",
"organizations_url": "https://api.github.com/users/poedator/orgs",
"repos_url": "https://api.github.com/users/poedator/repos",
"events_url": "https://api.github.com/users/poedator/events{/privacy}",
"received_events_url": "https://api.github.com/users/poedator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for opening a PR which fixes the issue! I just merged it.\r\n\r\n"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | There is a deprecated reference to `tokenizer.max_len` that should be replaced with `tokenizer.model_max_length` - similar to [issue 8739](https://github.com/huggingface/transformers/issues/8739) and [PR 8604](https://github.com/huggingface/transformers/pull/8604).
See an error example [in Colab here](https://colab.research.google.com/gist/poedator/f8776349e5c625ce287fc6fcd312fa1e/tokenizer-max_len-error-in-transformers_glue.ipynb). It causes `AttributeError: 'BertTokenizer' object has no attribute 'max_len'`.
The error happens when `glue_convert_examples_to_features()` is called without the `max_length` parameter specified. In that case [line 119](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L119), which contains the wrong reference, gets executed.
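For illustration, the substitution looks like this in isolation (a standalone sketch, not the actual `glue.py` code):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# tokenizer.max_len was removed in newer releases and now raises AttributeError;
# model_max_length carries the same information:
max_length = tokenizer.model_max_length
print(max_length)  # 512 for bert-base-uncased
```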
I submitted a [simple PR #10220](https://github.com/huggingface/transformers/pull/10220). It should be able to fix this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10372/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10371/comments | https://api.github.com/repos/huggingface/transformers/issues/10371/events | https://github.com/huggingface/transformers/issues/10371 | 815,308,742 | MDU6SXNzdWU4MTUzMDg3NDI= | 10,371 | Load pretrained model except the head layer for a specific downstream task | {
"login": "hasansalimkanmaz",
"id": 49716619,
"node_id": "MDQ6VXNlcjQ5NzE2NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasansalimkanmaz",
"html_url": "https://github.com/hasansalimkanmaz",
"followers_url": "https://api.github.com/users/hasansalimkanmaz/followers",
"following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions",
"organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs",
"repos_url": "https://api.github.com/users/hasansalimkanmaz/repos",
"events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"for now, how can I load pretrained models that have different prediction heads? thanks!",
"For now you can discard the head by passing through a base model:\r\n\r\n```py\r\nfrom transformers import AutoModelForSequenceClassification, AutoModel\r\n\r\npretrained_with_head = AutoModelForSequenceClassification.from_pretrained(\"distilbert-base-uncased-distilled-squad\")\r\npretrained_with_head.save_pretrained(directory)\r\n# Model saved in directory has the head\r\n\r\npretrained_no_head = AutoModel.from_pretrained(directory)\r\npretrained_no_head.save_pretrained(directory)\r\n# Model saved in directory no longer has the head\r\n\r\npretrained_with_head = AutoModelForSequenceClassification.from_pretrained(directory)\r\n# Loaded model has the full transformer, but the head is randomly initialized\r\n```",
"Hi @LysandreJik, \r\nIs this issue being addressed elsewhere? \r\nIf not, would like to work on it. ",
"@vimarshc this issue has not been addressed elsewhere. Feel free to draft a proposal in an issue/PR so that we can take a look and discuss! Thank you!",
"Hi @LysandreJik is this still available for contribution? If yes, I would love to work on it. It would be helpful if you could add a reference draft proposal. Thanks!",
"This has been somewhat addressed by https://github.com/huggingface/transformers/pull/12664"
] | 1,614 | 1,630 | null | CONTRIBUTOR | null | # 🚀 Feature request
It would be nice to have a flag for `from_pretrained` method that indicates whether to load last layer or not. This feature is needed for transfer learning.
## Motivation
I have trained a model with a specific dataset for a downstream task. Now I need to train another model on a similar dataset with different labels. I know that the previous model has learned the features of the previous dataset, so the new model doesn't need to start from scratch. When I try to load the first model with the `from_pretrained` method, it returns a size mismatch error because the last layer has a different shape for a different number of labels. If there were a flag to load or skip the last layer, I could initialize the last layer randomly and continue my training via transfer learning.
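For reference, a hedged sketch of what this could look like with the `ignore_mismatched_sizes` argument that later `transformers` releases added to `from_pretrained` (the checkpoint path and label count below are hypothetical):

```python
from transformers import AutoModelForTokenClassification

# Reuse the encoder weights from a model fine-tuned on the old label set;
# the classification head, whose shape no longer matches, is dropped and
# re-initialized randomly instead of raising a size-mismatch error.
model = AutoModelForTokenClassification.from_pretrained(
    "path/to/previously-finetuned-model",  # hypothetical local checkpoint
    num_labels=12,                         # label count of the new task
    ignore_mismatched_sizes=True,          # available in later transformers releases
)
```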
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10371/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10370/comments | https://api.github.com/repos/huggingface/transformers/issues/10370/events | https://github.com/huggingface/transformers/issues/10370 | 815,246,120 | MDU6SXNzdWU4MTUyNDYxMjA= | 10,370 | ReformerForQuestionAnswering : int() argument must be a string, a bytes-like object or a number, not 'NoneType' | {
"login": "harikc456",
"id": 21287383,
"node_id": "MDQ6VXNlcjIxMjg3Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/21287383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harikc456",
"html_url": "https://github.com/harikc456",
"followers_url": "https://api.github.com/users/harikc456/followers",
"following_url": "https://api.github.com/users/harikc456/following{/other_user}",
"gists_url": "https://api.github.com/users/harikc456/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harikc456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harikc456/subscriptions",
"organizations_url": "https://api.github.com/users/harikc456/orgs",
"repos_url": "https://api.github.com/users/harikc456/repos",
"events_url": "https://api.github.com/users/harikc456/events{/privacy}",
"received_events_url": "https://api.github.com/users/harikc456/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @harikc456,\r\n\r\nThe problem is that the model is not put into training mode. If you run the following code:\r\n\r\n```python\r\nfrom transformers import ReformerTokenizer, ReformerForQuestionAnswering\r\nfrom transformers.models.reformer.modeling_reformer import PositionEmbeddings\r\nimport torch\r\n\r\ntokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')\r\nmodel = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')\r\n\r\n# change to position embeddings to prevent error\r\nmodel.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)\r\n\r\nquestion, text = \"Who was Jim Henson?\", \"Jim Henson was a nice puppet\"\r\ninputs = tokenizer(question, text, return_tensors='pt')\r\nstart_positions = torch.tensor([1])\r\nend_positions = torch.tensor([3])\r\n\r\noutputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)\r\nloss = outputs.loss\r\nloss.backward()\r\n```\r\n\r\nyou can see that the code runs without error.",
"@patrickvonplaten \r\n\r\nHello, I've just come across the same issue.\r\n\r\nI tried the code below,\r\n\r\n``` python\r\nfrom transformers import ReformerTokenizer, ReformerForQuestionAnswering\r\nfrom transformers.models.reformer.modeling_reformer import PositionEmbeddings\r\nimport torch\r\n\r\ntokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')\r\nmodel = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')\r\n\r\n# change to position embeddings to prevent error\r\nmodel.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)\r\n\r\nquestion, text = \"Who was Jim Henson?\", \"Jim Henson was a nice puppet\"\r\ninputs = tokenizer(question, text, return_tensors='pt')\r\nstart_positions = torch.tensor([1])\r\nend_positions = torch.tensor([3])\r\n\r\noutputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)\r\nloss = outputs.loss\r\nloss.backward()\r\n```\r\n\r\nand got the following error message.\r\n\r\n```\r\nSome weights of the model checkpoint at google/reformer-crime-and-punishment were not used when initializing ReformerForQuestionAnswering: ['lm_head.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']\r\n- This IS expected if you are initializing ReformerForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing ReformerForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of ReformerForQuestionAnswering were not initialized from the model checkpoint at google/reformer-crime-and-punishment and are newly initialized: ['reformer.encoder.layers.0.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.0.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.1.attention.self_attention.self_mask_value_float16', 'reformer.encoder.layers.1.attention.self_attention.self_mask_value_float32', 'reformer.encoder.layers.1.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.1.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.2.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.2.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.3.attention.self_attention.self_mask_value_float16', 'reformer.encoder.layers.3.attention.self_attention.self_mask_value_float32', 'reformer.encoder.layers.3.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.3.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.4.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.4.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.5.attention.self_attention.self_mask_value_float16', 'reformer.encoder.layers.5.attention.self_attention.self_mask_value_float32', 'reformer.encoder.layers.5.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.5.attention.self_attention.mask_value_float32', 'qa_outputs.weight', 'qa_outputs.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n/path/to/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/nn/modules/container.py:435: UserWarning: Setting attributes on 
ParameterList is not supported.\r\n warnings.warn(\"Setting attributes on ParameterList is not supported.\")\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-1-60eb084822c0> in <module>\r\n 16 outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)\r\n 17 loss = outputs.loss\r\n---> 18 loss.backward()\r\n\r\n~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)\r\n 219 retain_graph=retain_graph,\r\n 220 create_graph=create_graph)\r\n--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n 222 \r\n 223 def register_hook(self, hook):\r\n\r\n~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)\r\n 128 retain_graph = create_graph\r\n 129 \r\n--> 130 Variable._execution_engine.run_backward(\r\n 131 tensors, grad_tensors_, retain_graph, create_graph,\r\n 132 allow_unreachable=True) # allow_unreachable flag\r\n\r\n~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/autograd/function.py in apply(self, *args)\r\n 87 def apply(self, *args):\r\n 88 # _forward_cls is defined by derived class\r\n---> 89 return self._forward_cls.backward(self, *args) # type: ignore\r\n 90 \r\n 91 \r\n\r\n~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py in backward(***failed resolving arguments***)\r\n 1666 \r\n 1667 # backprop\r\n-> 1668 output = layer.backward_pass(\r\n 1669 next_attn_output=output.attn_output,\r\n 1670 hidden_states=output.hidden_states,\r\n\r\n~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py in backward_pass(self, next_attn_output, hidden_states, grad_attn_output, grad_hidden_states, attention_mask, head_mask, buckets)\r\n 1527 \r\n 1528 # set seed to have correct dropout\r\n-> 1529 torch.manual_seed(self.feed_forward_seed)\r\n 1530 # g(Y_1)\r\n 1531 res_hidden_states = self.feed_forward(next_attn_output)\r\n\r\n~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/random.py in manual_seed(seed)\r\n 30 `0xffff_ffff_ffff_ffff + seed`.\r\n 31 \"\"\"\r\n---> 32 seed = int(seed)\r\n 33 import torch.cuda\r\n 34 \r\n\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\r\n```\r\n\r\nI first tried to use:\r\n```python\r\n tokenizer = AutoTokenizer.from_pretrained(\"google/reformer-crime-and-punishment\")\r\n model = AutoModelForSequenceClassification.from_pretrained(\r\n \"google/reformer-crime-and-punishment\", return_dict=True\r\n )\r\n```\r\nIt failed, then I found this issue and added:\r\n```\r\n # change to position embeddings to prevent error\r\n model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)\r\n```\r\nHowever, the same error occurs.\r\n\r\n- `transformers` version: 4.1.1\r\n- Platform: Linux-4.15.0-135-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.3\r\n- PyTorch version (GPU?): 1.7.1 (True)\r\n- Tensorflow version (GPU?): 2.4.1 (False)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nMaybe the problem is that the version of Transformers I am using for this is old?\r\n\r\nThank you in advance.",
"It seems that the same issue occurs when I updated the transformers to the latest stable version via pip.\r\n\r\n- `transformers` version: 4.4.1\r\n- Platform: Linux-4.15.0-135-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.3\r\n- PyTorch version (GPU?): 1.7.1 (True)\r\n- Tensorflow version (GPU?): 2.4.1 (False)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nIs the problem depending on the version of some other library?",
"Excuse me for my frequent posting.\r\n\r\nInstead of overwriting `position_embeddings`,\r\ninserting `model.train()` seems to work (but with another issue).\r\n\r\n```python \r\nfrom transformers import ReformerTokenizer, ReformerForQuestionAnswering\r\nfrom transformers.models.reformer.modeling_reformer import PositionEmbeddings\r\nimport torch\r\n\r\ntokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')\r\nmodel = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')\r\n\r\n# # change to position embeddings to prevent error\r\n# model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)\r\n\r\nmodel.train()\r\n\r\nquestion, text = \"Who was Jim Henson?\", \"Jim Henson was a nice puppet\"\r\ninputs = tokenizer(question, text, return_tensors='pt')\r\nstart_positions = torch.tensor([1])\r\nend_positions = torch.tensor([3])\r\n\r\noutputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)\r\nloss = outputs.loss\r\n\r\nloss.backward()\r\n```\r\n\r\nThe different error message is shown, but it seems can be treated by just doing padding.\r\n\r\n```\r\n~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py in forward(self, position_ids)\r\n 154 \r\n 155 if self.training is True:\r\n--> 156 assert (\r\n 157 reduce(mul, self.axial_pos_shape) == sequence_length\r\n 158 ), \"If training, make sure that config.axial_pos_shape factors: {} multiply to sequence length. Got prod({}) != sequence_length: {}. You might want to consider padding your sequence length to {} or changing config.axial_pos_shape.\".format(\r\n\r\nAssertionError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 28. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape.\r\n```\r\n\r\nI'm now trying padding the input, and it seems working.\r\n\r\n```\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\ninputs = tokenizer(question, text, padding='max_length', truncation=True, max_length=524288, return_tensors='pt')\r\n```\r\n\r\nI apologize if this is not an appropriate solution.",
"We could maybe add a better error message that fires when Reformer is not in training mode, but one runs `.backward()`. @forest1988 if you want feel free to open a PR :-)",
"@patrickvonplaten \r\nThanks, I'll open a PR!\r\nI'm a little busy right now, but I'll make time to work on it soon.",
"Hi @patrickvonplaten,\r\nSorry to be late. I've just opened PR #11117 regarding this issue. All checks have passed.\r\nCould you please have a look at it when you have time?"
] | 1,614 | 1,618 | 1,618 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [ ] my own modified scripts: performing a backward() after passing the query and text to the `ReformerForQuestionAnswering` model.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: a subset of SQuAD
## To reproduce
Steps to reproduce the behavior:
Performing backward on the loss throws an error.
Minimal code to reproduce the error.
```
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
import torch
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Error Traceback
```
create_graph)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)
222
223 def register_hook(self, hook):
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag
133
134
/usr/local/lib/python3.7/dist-packages/torch/autograd/function.py in apply(self, *args)
87 def apply(self, *args):
88 # _forward_cls is defined by derived class
---> 89 return self._forward_cls.backward(self, *args) # type: ignore
90
91
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward(***failed resolving arguments***)
1673 head_mask=head_mask[len(layers) - idx - 1],
1674 attention_mask=attention_mask,
-> 1675 buckets=buckets,
1676 )
1677
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward_pass(self, next_attn_output, hidden_states, grad_attn_output, grad_hidden_states, attention_mask, head_mask, buckets)
1527
1528 # set seed to have correct dropout
-> 1529 torch.manual_seed(self.feed_forward_seed)
1530 # g(Y_1)
1531 res_hidden_states = self.feed_forward(next_attn_output)
/usr/local/lib/python3.7/dist-packages/torch/random.py in manual_seed(seed)
30 `0xffff_ffff_ffff_ffff + seed`.
31 """
---> 32 seed = int(seed)
33 import torch.cuda
34
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
From debugging, I believe that the error is caused because `self.feed_forward_seed` in the `ReformerLayer` class is `None`.
I have tried the same code with Longformer and it was working perfectly.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
`loss.backward()` running properly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10370/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10369/comments | https://api.github.com/repos/huggingface/transformers/issues/10369/events | https://github.com/huggingface/transformers/issues/10369 | 815,238,477 | MDU6SXNzdWU4MTUyMzg0Nzc= | 10,369 | Why should `attn_weights` be reshaped twice in BartAttention ? | {
"login": "shenfe",
"id": 22103866,
"node_id": "MDQ6VXNlcjIyMTAzODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/22103866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shenfe",
"html_url": "https://github.com/shenfe",
"followers_url": "https://api.github.com/users/shenfe/followers",
"following_url": "https://api.github.com/users/shenfe/following{/other_user}",
"gists_url": "https://api.github.com/users/shenfe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shenfe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shenfe/subscriptions",
"organizations_url": "https://api.github.com/users/shenfe/orgs",
"repos_url": "https://api.github.com/users/shenfe/repos",
"events_url": "https://api.github.com/users/shenfe/events{/privacy}",
"received_events_url": "https://api.github.com/users/shenfe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten excuse me, could you share any idea please",
"See this PR for more information: https://github.com/huggingface/transformers/pull/8747",
"> See this PR for more information: #8747\r\n\r\nAlmost understand, although it's still weird. As #8747 explained,\r\n```\r\nThis ensures that the returned hidden state tensors lie upstream in the graph from the model outputs (allowing their gradients to be computed)\r\n```\r\n\r\nThanks!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | Can anybody help me understand this? https://github.com/huggingface/transformers/blob/3437d12134893dd7b45737e422e105e511341297/src/transformers/models/bart/modeling_bart.py#L238-L244
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10369/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10368/comments | https://api.github.com/repos/huggingface/transformers/issues/10368/events | https://github.com/huggingface/transformers/issues/10368 | 815,227,400 | MDU6SXNzdWU4MTUyMjc0MDA= | 10,368 | TFMarianModel from_pretrained can't load weights | {
"login": "brand17",
"id": 36546021,
"node_id": "MDQ6VXNlcjM2NTQ2MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/36546021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brand17",
"html_url": "https://github.com/brand17",
"followers_url": "https://api.github.com/users/brand17/followers",
"following_url": "https://api.github.com/users/brand17/following{/other_user}",
"gists_url": "https://api.github.com/users/brand17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brand17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brand17/subscriptions",
"organizations_url": "https://api.github.com/users/brand17/orgs",
"repos_url": "https://api.github.com/users/brand17/repos",
"events_url": "https://api.github.com/users/brand17/events{/privacy}",
"received_events_url": "https://api.github.com/users/brand17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the issue! \r\n\r\nI uploaded the TF weights https://huggingface.co/Helsinki-NLP/opus-mt-en-de/commit/1a8c2263da11e68e50938f97e10cd57820bd504c. Should be fixed now - could you try again?",
"Thanks, this method working now. Please upload a model 'Helsinki-NLP/opus-mt-ru-en'. And I think other models are not working as well.\r\n\r\nBut now I can not call the model:\r\n\r\n```python\r\nfrom transformers import MarianTokenizer, TFMarianModel\r\nimport tensorflow as tf\r\ntokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')\r\nmodel = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"tf\")\r\noutputs = model(inputs)\r\n\r\n```\r\n\r\nI am getting an error:\r\n\r\n> Exception has occurred: ValueError (note: full exception trace is shown but execution is paused at: _run_module_as_main)\r\n> You have to specify either decoder_input_ids or decoder_inputs_embeds\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\site-packages\\transformers\\models\\marian\\modeling_tf_marian.py\", line 924, in call\r\n> raise ValueError(\"You have to specify either decoder_input_ids or decoder_inputs_embeds\")\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\site-packages\\tensorflow\\python\\keras\\engine\\base_layer.py\", line 985, in __call__\r\n> outputs = call_fn(inputs, *args, **kwargs)\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\site-packages\\transformers\\models\\marian\\modeling_tf_marian.py\", line 1137, in call\r\n> training=inputs[\"training\"],\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\site-packages\\tensorflow\\python\\keras\\engine\\base_layer.py\", line 985, in __call__\r\n> outputs = call_fn(inputs, *args, **kwargs)\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\site-packages\\transformers\\models\\marian\\modeling_tf_marian.py\", line 1232, in call\r\n> training=inputs[\"training\"],\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\site-packages\\tensorflow\\python\\keras\\engine\\base_layer.py\", line 985, in __call__\r\n> outputs = call_fn(inputs, *args, **kwargs)\r\n> File \"C:\\Users\\FA.PROJECTOR-MSK\\Google Диск\\Colab Notebooks\\PoetryTransformer\\Unsupervised\\translation\\paraphrases_translation.py\", line 16, in <module>\r\n> outputs = model(inputs)\r\n> File \"C:\\Users\\FA.PROJECTOR-MSK\\Google Диск\\Colab Notebooks\\PoetryTransformer\\Unsupervised\\translation\\run_locally.py\", line 1, in <module>\r\n> from paraphrases_translation import run\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\runpy.py\", line 85, in _run_code\r\n> exec(code, run_globals)\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\runpy.py\", line 96, in _run_module_code\r\n> mod_name, mod_spec, pkg_name, script_name)\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\runpy.py\", line 263, in run_path\r\n> pkg_name=pkg_name, script_name=fname)\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\runpy.py\", line 85, in _run_code\r\n> exec(code, run_globals)\r\n> File \"C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python36_64\\Lib\\runpy.py\", line 193, in _run_module_as_main (Current frame)\r\n> \"__main__\", mod_spec)",
"There are quite a lot of other models to upload, so this will take some time. \r\nI'll start writing a script to automate this process...\r\n\r\nUntil then you can make use of this easy fix:\r\n\r\n```python\r\nfrom transformers import MarianTokenizer, TFMarianModel\r\nimport tensorflow as tf\r\ntokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-ru-de')\r\nmodel = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-ru-en', from_pt=True)\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"tf\")\r\noutputs = model(inputs)\r\n```",
"Thanks, the model is loading. But I am still not able to call it. Should I report this bug separately ?",
"Hi! I think there's an error in the `TFMarianModel` docstring, it should be similar to the `MarianModel` docstring. You can't call encoder-decoder models with only input IDs (with a few exceptions like BART), you also need to provide decoder input IDs.\r\n\r\nIn your case this should work:\r\n\r\n```py\r\nfrom transformers import MarianTokenizer, TFMarianModel\r\nimport tensorflow as tf\r\n\r\ntokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')\r\nmodel = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')\r\n\r\ninput_ids = tokenizer(\"Studies have been shown that owning a dog is good for you\", return_tensors=\"tf\").input_ids # Batch size 1\r\ndecoder_input_ids = tokenizer(\"<pad> Studien haben gezeigt dass es hilfreich ist einen Hund zu besitzen\", return_tensors=\"tf\", add_special_tokens=False).input_ids # Batch size 1\r\n\r\noutputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)\r\nlast_hidden_states = outputs.last_hidden_state\r\n```",
"Thanks, it works",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,630 | 1,630 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Windows-7-6.1.7601-SP1
- Python version: 3.6.6
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFMarianMT
The problem arises when using:
* [+] the official example scripts: (give details below)
## To reproduce
```python
from transformers import MarianTokenizer, TFMarianModel
model = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
```
Steps to reproduce the behavior:
1. Run the above code
2. Get an error:
> Exception has occurred: OSError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
> Can't load weights for 'Helsinki-NLP/opus-mt-en-de'. Make sure that:
>
> - 'Helsinki-NLP/opus-mt-en-de' is a correct model identifier listed on 'https://huggingface.co/models'
>
> - or 'Helsinki-NLP/opus-mt-en-de' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
>
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\transformers\modeling_tf_utils.py", line 1219, in from_pretrained
> raise EnvironmentError(msg)
> File "C:\Users\FA.PROJECTOR-MSK\Google Диск\Colab Notebooks\PoetryTransformer\Unsupervised\translation\paraphrases_translation.py", line 14, in <module>
> model = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
> File "C:\Users\FA.PROJECTOR-MSK\Google Диск\Colab Notebooks\PoetryTransformer\Unsupervised\translation\run_locally.py", line 1, in <module>
> from paraphrases_translation import run
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 96, in _run_module_code
> mod_name, mod_spec, pkg_name, script_name)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 263, in run_path
> pkg_name=pkg_name, script_name=fname)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 193, in _run_module_as_main (Current frame)
> "__main__", mod_spec)
>
## Expected behavior
No error
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10368/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10367/comments | https://api.github.com/repos/huggingface/transformers/issues/10367/events | https://github.com/huggingface/transformers/issues/10367 | 815,177,536 | MDU6SXNzdWU4MTUxNzc1MzY= | 10,367 | device-side assert triggered Error while doing inference on Distilbert and Bert | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you have a CUDA device-side error, it is advised to run your code on CPU, because then you will receive a more informative error message.",
"Hi @NielsRogge,\r\nIn CPU its working fine, Error is coming while using GPU only\r\n",
"Hi @bhadreshpsavani, this can't work on CPU. You're sending a sequence that is too long to the model so it cannot handle it.\r\n\r\nPlease replace\r\n```py\r\ninputs = tokenizer(example['question'], example['context'], return_tensors=\"pt\")\r\n```\r\nby\r\n```py\r\ninputs = tokenizer(example['question'], example['context'], return_tensors=\"pt\", truncation=True)\r\n```\r\n\r\nThis truncates the sequences that are too long.\r\n\r\nYour colab should work then.",
"Thanks @LysandreJik,\r\nIt worked!",
"Glad we could help, closing!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Colab
- Python version: 3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using: DistilBERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
[colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/DistilbertPerformance.ipynb)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD v2
* [ ] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
1. Get Model and Tokenizer
2. Get the SQuAD v2 dataset
3. Perform inference on the validation dataset with GPU
4. Get results on the SQuAD v2 metric
Run the [colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/DistilbertPerformance.ipynb) to reproduce.
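For context, the core of the inference loop in that notebook is essentially the following (a simplified sketch; variable names such as `squad_v2_validation` are placeholders here, see the colab for the exact code):
```python
import torch
model.to("cuda")
model.eval()
predictions = []
for example in squad_v2_validation:  # SQuAD v2 validation split loaded earlier
    inputs = tokenizer(example["question"], example["context"], return_tensors="pt")
    inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.no_grad():
        start_logits, end_logits = model(**inputs)[:2]
    start = int(torch.argmax(start_logits))
    end = int(torch.argmax(end_logits)) + 1
    predictions.append(tokenizer.decode(inputs["input_ids"][0][start:end]))
```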
## Expected behavior
It should give results in the format below, without errors, on the SQuAD v2 metric:
```python
{'exact': 79.4660153288975, 'f1': 82.91266052065696, 'total': 11873, 'HasAns_exact': 77.64844804318489, 'HasAns_f1': 84.55162253066118, 'HasAns_total': 5928, 'NoAns_exact': 81.27838519764508, 'NoAns_f1': 81.27838519764508, 'NoAns_total': 5945, 'best_exact': 79.4660153288975, 'best_exact_thresh': 1.0, 'best_f1': 82.91266052065693, 'best_f1_thresh': 1.0}
```
Note: this code works fine for the Longformer model. I found this issue with the DistilBERT and BERT models while doing inference on GPU.
Tagging SMEs: @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10367/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10366/comments | https://api.github.com/repos/huggingface/transformers/issues/10366/events | https://github.com/huggingface/transformers/issues/10366 | 815,176,328 | MDU6SXNzdWU4MTUxNzYzMjg= | 10,366 | can't allocate memory error with wav2vec2 | {
"login": "kleekaai",
"id": 48985855,
"node_id": "MDQ6VXNlcjQ4OTg1ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/48985855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kleekaai",
"html_url": "https://github.com/kleekaai",
"followers_url": "https://api.github.com/users/kleekaai/followers",
"following_url": "https://api.github.com/users/kleekaai/following{/other_user}",
"gists_url": "https://api.github.com/users/kleekaai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kleekaai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kleekaai/subscriptions",
"organizations_url": "https://api.github.com/users/kleekaai/orgs",
"repos_url": "https://api.github.com/users/kleekaai/repos",
"events_url": "https://api.github.com/users/kleekaai/events{/privacy}",
"received_events_url": "https://api.github.com/users/kleekaai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You seem to be passing all of the file at once to the model. This can be extremely expensive from a memory point of view, as the number of samples (and therefore your batch size) can be very big.\r\n\r\nI would advocate for you to do a custom batching here, by only passing some of the values in yout `input_values` at a time, rather than everything at once.\r\n\r\nI can't tell exactly because I don't have your files handy, but I would guess this is the issue and how to resolve it. If it doesn't help, do you mind opening a colab where I can reproduce the issue?",
"Thanks @LysandreJik for looking into it. I couldn`t figure out how to apply custom batching for audio data. Is there a batch_size param that can be used? \r\n\r\nlink to the audio file [here](https://easyupload.io/fwhf6v) \r\nlink to the [colab notebook](https://drive.google.com/file/d/1V_u5XKOLQXXg-94KQiBcrShy_eaYHWFj/view?usp=sharing)",
"I've requested access for your notebook!",
"Sorry for the delay, I thought it required it is accessible outside. I have given you access. ",
"Okay, so the issue isn't in the number of samples as I thought previously: there seems to be a single audio stream in your recording.\r\n\r\nHowever, the issue here is that it's a 7 minutes and 30 seconds long recording, which really is very very long. I talked about it with @patrickvonplaten, and he mentions that Wav2Vec2 was trained on ~40 seconds of recording maximum. What one could do here is split the recording in 30 seconds chunks. You're using `librosa` and you can do that easily with `librosa.stream`.\r\n\r\nHere for example your method to retrieve the transcript is the following:\r\n\r\n```py\r\ndef asr_transcript(tokenizer, model, input_file):\r\n \r\n speech, fs = sf.read(input_file)\r\n\r\n if len(speech.shape) > 1: \r\n speech = speech[:,0] + speech[:,1]\r\n\r\n if fs != 16000:\r\n speech = librosa.resample(speech, fs, 16000)\r\n\r\n input_values = tokenizer(speech, return_tensors=\"pt\").input_values\r\n logits = model(input_values).logits\r\n predicted_ids = torch.argmax(logits, dim=-1)\r\n transcription = tokenizer.decode(predicted_ids[0])\r\n\r\n return correct_sentence(transcription.lower())\r\n```\r\n\r\nI've updated it to the following (please note that it's the first time I've used `librosa` myself so the parameters I put for the stream values may be wrong):\r\n\r\n```py\r\ndef asr_transcript(tokenizer, model, input_file):\r\n transcript = \"\"\r\n # Ensure that the sample rate is 16k\r\n print(librosa.get_samplerate(input_file))\r\n\r\n # Stream over 30 seconds chunks rather than load the full file\r\n stream = librosa.stream(\r\n input_file,\r\n block_length=30,\r\n frame_length=16000,\r\n hop_length=16000\r\n )\r\n\r\n for speech in stream:\r\n if len(speech.shape) > 1:\r\n speech = speech[:, 0] + speech[:, 1]\r\n\r\n input_values = tokenizer(speech, return_tensors=\"pt\").input_values\r\n logits = model(input_values).logits\r\n\r\n predicted_ids = torch.argmax(logits, dim=-1)\r\n transcription = tokenizer.decode(predicted_ids[0])\r\n transcript += correct_sentence(transcription.lower())\r\n\r\n return transcript\r\n```\r\n\r\nWith this I seem to obtain sensible results! This could probably be improved in the following ways:\r\n- Ensure that the parameters passed to `librosa.stream` are correct. Changing these seem to have a very big impact on the transcript.\r\n- Patrick mentions that an advanced solution would be to use a Voice Activity detector to see where there is no speech and chunk there, for example finding a sequence of 100 values very close to zero, and cutting there. Little performance would be lost then.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | I am trying out the wav2vec2 model for ASR from the huggingface library. I am passing a 7-minute (~15 MB) wav file containing an English conversation to the wav2vec2 model, and I am getting a "can't allocate memory" error. I found that the model uses all 64 GB of the available RAM. Can anyone help with this?
- `transformers` version: 4.3.2
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: (NA)
- Using distributed or parallel set-up in script?: (NA)
Code
```
import os
import librosa
import nltk
import soundfile as sf
import torch
from pydub import AudioSegment
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
def convert_audio_segment(fp, upload_dir_path):
"""Convert audio file"""
USER_UPLOAD_DIR = upload_dir_path
formats_to_convert = ['.m4a']
dirpath = os.path.abspath(USER_UPLOAD_DIR)
if fp.endswith(tuple(formats_to_convert)):
(path, file_extension) = os.path.splitext(fp)
file_extension_final = file_extension.replace('.', '')
file_handle = ''
try:
track = AudioSegment.from_file(fp,
file_extension_final)
print("track", track)
wav_path = fp.replace(file_extension_final, 'wav')
file_handle = track.export(wav_path, format='wav')
except Exception:
print("ERROR CONVERTING " + str(fp))
return file_handle
else:
print("No file format conversion required " + str(fp))
return fp
def load_wav2vec_100h_model():
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
return tokenizer, model
def correct_sentence(input_text):
sentences = nltk.sent_tokenize(input_text)
return (' '.join([s.replace(s[0],s[0].capitalize(),1) for s in sentences]))
def asr_transcript(tokenizer, model, input_file):
speech, fs = sf.read(input_file)
if len(speech.shape) > 1:
speech = speech[:,0] + speech[:,1]
if fs != 16000:
speech = librosa.resample(speech, fs, 16000)
input_values = tokenizer(speech, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.decode(predicted_ids[0])
return correct_sentence(transcription.lower())
if __name__ == "__main__":
tokenizer_100h, model_100h = load_wav2vec_100h_model()
    wav_input = 'Recording_biweu.wav'
    data_dir = "."  # directory containing the uploaded audio file
    fp = wav_input
    processed_file = convert_audio_segment(str(fp), str(data_dir))
text = asr_transcript(tokenizer_100h,model_100h,processed_file)
print(text)
```
I am adding more details about my wav file here
```
General
Complete name : Recording_biweu.wav
Format : Wave
File size : 13.8 MiB
Duration : 7 min 30 s
Overall bit rate mode : Constant
Overall bit rate : 256 kb/s
Track name : Recording_biweu
Recorded date : 2021
Writing application : Lavf57.83.100
Audio
Format : PCM
Format settings : Little / Signed
Codec ID : 1
Duration : 7 min 30 s
Bit rate mode : Constant
Bit rate : 256 kb/s
Channel(s) : 1 channel
Sampling rate : 16.0 kHz
Bit depth : 16 bits
Stream size : 13.8 MiB (100%)
```
Error
```
Some weights of the model checkpoint at facebook/wav2vec2-base-100h were not used when initializing Wav2Vec2ForCTC: ['wav2vec2.mask_time_emb_vector']
- This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
File "asr_wav2vec2.py", line 130, in <module>
text = asr_transcript(tokenizer_100h,model_100h,processed_file)
File "asr_wav2vec2.py", line 96, in asr_transcript
logits = model(input_values).logits
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 795, in forward
outputs = self.wav2vec2(
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 646, in forward
encoder_outputs = self.encoder(
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 457, in forward
hidden_states, attn_weights = layer(hidden_states, output_attentions=output_attentions)
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 392, in forward
hidden_states, attn_weights, _ = self.attention(hidden_states, output_attentions=output_attentions)
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 286, in forward
attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 24373495488 bytes. Error code 12 (Cannot allocate memory)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10365/comments | https://api.github.com/repos/huggingface/transformers/issues/10365/events | https://github.com/huggingface/transformers/issues/10365 | 815,122,205 | MDU6SXNzdWU4MTUxMjIyMDU= | 10,365 | Knowledge Retrieval missing from BlenderBot Implementation | {
"login": "applyinnovations",
"id": 77219112,
"node_id": "MDEyOk9yZ2FuaXphdGlvbjc3MjE5MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/77219112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/applyinnovations",
"html_url": "https://github.com/applyinnovations",
"followers_url": "https://api.github.com/users/applyinnovations/followers",
"following_url": "https://api.github.com/users/applyinnovations/following{/other_user}",
"gists_url": "https://api.github.com/users/applyinnovations/gists{/gist_id}",
"starred_url": "https://api.github.com/users/applyinnovations/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/applyinnovations/subscriptions",
"organizations_url": "https://api.github.com/users/applyinnovations/orgs",
"repos_url": "https://api.github.com/users/applyinnovations/repos",
"events_url": "https://api.github.com/users/applyinnovations/events{/privacy}",
"received_events_url": "https://api.github.com/users/applyinnovations/received_events",
"type": "Organization",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,614 | 1,614 | null | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
The original BlenderBot [paper](https://arxiv.org/pdf/2004.13637.pdf) considered three transformer-based models (Retrieval, Generator and RetNRef); however, from what I can see, only the generator model is implemented within this repository: [transformers/src/transformers/models/blenderbot/modeling_blenderbot.py](https://github.com/huggingface/transformers/blame/master/src/transformers/models/blenderbot/modeling_blenderbot.py).
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
As part of my academic work I am generating topic-bound conversations and wish to compare BlenderBot, as well as make modifications to its knowledge-retrieval component. It would be useful if someone could point me towards this (if it is already implemented) or let me know whether this feature is planned for the future.

## Contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
Prior to making a contribution, I want to confirm that this feature is not implemented and whether there is any intention of implementing it in the near future.
Thanks,
Alex
@patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10365/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10364/comments | https://api.github.com/repos/huggingface/transformers/issues/10364/events | https://github.com/huggingface/transformers/issues/10364 | 815,032,134 | MDU6SXNzdWU4MTUwMzIxMzQ= | 10,364 | Loading mBART Large 50 MMT (many-to-many) is slow | {
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Related: https://github.com/huggingface/transformers/issues/9205",
"Thanks. I'll rerun the benchmarks once patrick makes the changes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Has there been an updated to https://github.com/huggingface/transformers/issues/9205's timeline?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,621 | 1,621 | CONTRIBUTOR | null | ## Environment info
I'm installing the library directly from `master` and running it in a kaggle notebook.
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0.dev0
- Platform: Linux-5.4.89+-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- bart: @patrickvonplaten, @patil-suraj
Library:
- text generation: @patrickvonplaten
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): mBART-Large 50 MMT (many-to-many)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
After caching the weights of the model, loading it with `from_pretrained` is significantly slower than loading it with `torch.load`.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Machine Translation
## To reproduce
Here's the [kaggle notebook](https://www.kaggle.com/xhlulu/reproducing-speed-issues-with-mbart-large-50) reproducing the issue. Here's a [colab notebook](https://colab.research.google.com/drive/1fKuLG_U6uw4x8LqcIQFEFjQYjnc1nBzQ?usp=sharing) showing essentially the same thing.
Steps to reproduce the behavior:
1. Load the model with `model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")`
2. Save model with `model.save_pretrained('./my-model')`
3. Save model with `torch.save(model, 'model.pt')`
4. Reload and time with `MBartForConditionalGeneration.from_pretrained('./my-model')`
5. Load with `torch.load('model.pt')`
The step above can be reproduced inside a kaggle notebook:
```python
import torch
from transformers import MBartForConditionalGeneration
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model.save_pretrained('./my-model/')
torch.save(model, 'model.pt')
%time model = MBartForConditionalGeneration.from_pretrained("./my-model/")
%time torch_model = torch.load('model.pt')
```
We will notice that loading with `from_pretrained` (step 4) is significantly slower than `torch.load` (step 5); the former takes over 1 minute and the latter just a few seconds (or around 20s if it hasn't been previously loaded in memory; see [notebook](https://www.kaggle.com/xhlulu/use-saved-torch-model)).
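My current understanding of the gap (based on the related issue linked in the comments above) is that `from_pretrained` first builds and randomly initializes the full model and only then copies the saved weights into it, whereas `torch.load` just unpickles the already-built module. Roughly:
```python
import torch
from transformers import MBartConfig, MBartForConditionalGeneration
# roughly what step 4 (`from_pretrained`) has to do under the hood:
config = MBartConfig.from_pretrained("./my-model/")
model = MBartForConditionalGeneration(config)  # random init of every weight
state_dict = torch.load("./my-model/pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict)  # then every weight is copied again
# whereas step 5 (`torch.load('model.pt')`) just unpickles the already-built module
```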
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The model should take less than 1 minute to load if it has already been cached (see step 1)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10363/comments | https://api.github.com/repos/huggingface/transformers/issues/10363/events | https://github.com/huggingface/transformers/pull/10363 | 815,004,637 | MDExOlB1bGxSZXF1ZXN0NTc4OTAzMzUz | 10,363 | [trainer] move secondary methods into a separate file | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | We are trying to keep `trainer.py` to a manageable size, and recently it has been getting new helper methods which should remain methods but aren't really important for understanding how the Trainer works. So we propose to move them into the utils file and then import them using this nifty idea presented at https://stackoverflow.com/a/47562412/9201239: instead of subclassing and mixing in, we import the desired methods into the class.
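In concrete terms, the pattern looks roughly like this (the method name and body below are purely illustrative, not the actual helpers being moved):
```python
# utils-style module: the helper is written as a plain function taking `self`
def save_run_metrics(self, split, metrics):  # illustrative name only
    print(f"***** {split} metrics *****")
    for key in sorted(metrics):
        print(f"  {key} = {metrics[key]}")
# trainer-style module: import the function and graft it onto the class,
# instead of adding yet another mixin base class
class Trainer:
    pass
Trainer.save_run_metrics = save_run_metrics  # now behaves like a normal method
trainer = Trainer()
trainer.save_run_metrics("eval", {"loss": 0.25})
```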
See if you like it.
And if yes please let me know if there are any other candidates to move.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10363",
"html_url": "https://github.com/huggingface/transformers/pull/10363",
"diff_url": "https://github.com/huggingface/transformers/pull/10363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10363.patch",
"merged_at": 1614184372000
} |
https://api.github.com/repos/huggingface/transformers/issues/10362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10362/comments | https://api.github.com/repos/huggingface/transformers/issues/10362/events | https://github.com/huggingface/transformers/pull/10362 | 814,827,713 | MDExOlB1bGxSZXF1ZXN0NTc4NzUyMTc4 | 10,362 | [Trainer/Deepspeed] handle get_last_lr() before first step() | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, that's a good idea. I'm just not clear on whether you suggest to make a function just for the deepspeed segment of the branch or wrap up the whole getting lr function?\r\n\r\nPlus, the code needs the `trainer` object, so I'm not sure how to put it in utils. \r\n\r\n\r\nI propose to put it as a separate Trainer method instead. \r\n```\r\nlogs[\"learning_rate\"] = self._get_learning_rate()\r\n```\r\n\r\nPlease check if the proposed change in the next commit looks good to you - made it into a method and put it at the end of the file so it's out of the way.\r\n",
"> Yes, that's a good idea. I'm just not clear on whether you suggest to make a function just for the deepspeed segment of the branch or wrap up the whole getting lr function?\r\n\r\nThe whole thing, as you did.\r\n\r\n> Plus, the code needs the trainer object, so I'm not sure how to put it in utils.\r\n\r\nIt doesn't need the whole Trainer, just the `lr_scheduler` and the `args` (to detect if deepspeed is activated).",
"Can someone explain to me why I am suscribed to this\n\nOn Tue, Feb 23, 2021, 7:43 PM Stas Bekman <[email protected]> wrote:\n\n> Merged #10362 <https://github.com/huggingface/transformers/pull/10362>\n> into master.\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/10362#event-4368294070>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS5YU5U6O456ILN65VM72ELTARKSPANCNFSM4YDGTCNA>\n> .\n>\n",
"Hi @chrissyjsartt \r\n\r\nWe won't know, since only you can do it. Perhaps you hit [Subscribe] by mistake?\r\n\r\nBut if you pay close attention to the email you received it tells you how to unsubscribe at the end of it:\r\n\r\n> You are receiving this because you are subscribed to this thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/10362#event-4368294070>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS5YU5U6O456ILN65VM72ELTARKSPANCNFSM4YDGTCNA>"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | With deepspeed's fp16 and dynamic loss scale enabled, the optimizer/scheduler steps may not run for the first few dozen steps while the loss is overflowing, so `get_last_lr()` will fail if called during that warm-up stage. This PR catches that special situation and handles it by returning a fake LR=0, which is a good default: since there is no stepping, the LR is effectively 0.
I'm just not sure if I should warn, since it ends up emitting some 20-30 of those if the user picks a small `--logging_steps`, e.g.:
```
2021-02-23 12:53:06,798] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 4294967296
[WARNING|trainer.py:1142] 2021-02-23 12:53:06,799 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 11.0, 'learning_rate': 0, 'epoch': 0.0}
[2021-02-23 12:53:06,990] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0
[WARNING|trainer.py:1142] 2021-02-23 12:53:06,992 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 10.9922, 'learning_rate': 0, 'epoch': 0.0}
```
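The guard itself ends up along these lines (a simplified sketch of the new Trainer helper; `self` and `logger` are the usual Trainer context):
```python
def _get_learning_rate(self):
    if self.deepspeed:
        # under fp16 + dynamic loss scaling the scheduler may not have stepped yet,
        # in which case get_last_lr() raises; report lr=0 until real stepping begins
        try:
            last_lr = self.lr_scheduler.get_last_lr()[0]
        except AssertionError:
            logger.warning("tried to get lr value before scheduler/optimizer started stepping, returning lr=0")
            last_lr = 0
    else:
        last_lr = self.lr_scheduler.get_last_lr()[0]
    return last_lr
```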
* [x] added a test too.
I first thought it should be handled by DeepSpeed (https://github.com/microsoft/DeepSpeed/issues/782), but then realized that since pytorch optimizers won't be aware of this, we have to handle it in the trainer: we are the ones calling `get_last_lr()` somewhat prematurely, yet we have no way to know that it's premature, as we can't even call `lr_scheduler.step()` ourselves since that is handled opaquely by DeepSpeed.
@sgugger
Fixes: https://github.com/huggingface/transformers/issues/10330#issuecomment-784457460 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10362/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10362",
"html_url": "https://github.com/huggingface/transformers/pull/10362",
"diff_url": "https://github.com/huggingface/transformers/pull/10362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10362.patch",
"merged_at": 1614130945000
} |
https://api.github.com/repos/huggingface/transformers/issues/10361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10361/comments | https://api.github.com/repos/huggingface/transformers/issues/10361/events | https://github.com/huggingface/transformers/issues/10361 | 814,826,760 | MDU6SXNzdWU4MTQ4MjY3NjA= | 10,361 | denoising objective for pretraining | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj @patrickvonplaten please help. thanks ",
"Hey @dorooddorood606,\r\n\r\ncould you please make use of the forum: https://discuss.huggingface.co/ for such questions. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | Hi
The denoising objective is used in the T5 and BART models; could you please add it to the language-model pretraining examples?
For now, I would appreciate any advice on how I can implement it. Is there a piece of code in huggingface I could start from?
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10361/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10360/comments | https://api.github.com/repos/huggingface/transformers/issues/10360/events | https://github.com/huggingface/transformers/issues/10360 | 814,801,374 | MDU6SXNzdWU4MTQ4MDEzNzQ= | 10,360 | Rag Use Your Knowledge dataset | {
"login": "Blaizzy",
"id": 23445657,
"node_id": "MDQ6VXNlcjIzNDQ1NjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/23445657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Blaizzy",
"html_url": "https://github.com/Blaizzy",
"followers_url": "https://api.github.com/users/Blaizzy/followers",
"following_url": "https://api.github.com/users/Blaizzy/following{/other_user}",
"gists_url": "https://api.github.com/users/Blaizzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Blaizzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Blaizzy/subscriptions",
"organizations_url": "https://api.github.com/users/Blaizzy/orgs",
"repos_url": "https://api.github.com/users/Blaizzy/repos",
"events_url": "https://api.github.com/users/Blaizzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Blaizzy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! It looks like you `embed` functions expects a batch of documents as input.\r\nCan you try to set `batched=True` in your call to `dataset.map` ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Colab
- Python version: 3.7.10
- PyTorch version (CPU): 1.7.0
### Who can help
Models:
rag: @patrickvonplaten, @lhoestq
Library:
- tokenizers: @n1t0, @LysandreJik
## Information
The model I am using (Rag):
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a csv file with title and text
2. Load the dataset
3. Map the split_document function
4. Map the embed function (error)
```
from functools import partial
from datasets import Features, Sequence, Value
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
def embed(
documents: dict,
ctx_encoder: DPRContextEncoder,
ctx_tokenizer: DPRContextEncoderTokenizerFast
):
"""Compute the DPR embeddings of document passages"""
input_ids = ctx_tokenizer(
documents["title"], documents["text"], truncation=True,
padding="longest", return_tensors='pt'
)
embeddings = ctx_encoder(
input_ids["input_ids"],
return_dict=True).pooler_output
return {'embeddings': embeddings.detach().cpu().numpy()}
# And compute the embeddings
ctx_encoder = DPRContextEncoder.from_pretrained(
'facebook/dpr-ctx_encoder-multiset-base'
)
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
'facebook/dpr-ctx_encoder-multiset-base'
)
new_fts = Features(
{
'text': Value('string'),
'title': Value('string'),
'embeddings': Sequence(Value('float32'))
}
) # optional, save as float32 instead of float64 to save space
dataset = dataset.map(
partial(embed, ctx_encoder = ctx_encoder, ctx_tokenizer=ctx_tokenizer),
features = new_fts
)
```
### Error
```
ArrowInvalid Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1412 if update_data:
-> 1413 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
1414 except (Exception, KeyboardInterrupt):
22 frames
ArrowInvalid: Could not convert [-0.007409881334751844, 0.0715881809592247, -0.130095437169075, 0.08213236927986145, -0.06481412053108215, 0.219411239027977, 0.2758248746395111, -0.24343284964561462, -0.17551296949386597, -0.16576780378818512, -0.19957710802555084, 0.513848602771759, -0.2469034492969513, -0.27209365367889404, -0.019221562892198563, 0.3769649565219879, 0.47224175930023193, -0.5267099142074585, -0.3105331361293793, -0.3371395170688629, -0.2927161753177643, -0.7542601227760315, -0.17370374500751495, -0.024053143337368965, 0.14522959291934967, 0.2945793867111206, 0.03297216817736626, -0.0938640609383583, -0.34509730339050293, 0.3848630487918854, -0.1607687622308731, 0.08243361860513687, 0.036992475390434265, -0.5837609767913818, -0.057669747620821, 0.33589160442352295, -0.6164276003837585, 0.22745771706104279, 0.2599221467971802, 0.021962007507681847, 0.38935932517051697, 0.0007948490092530847, -0.71791011095047, 0.008848031982779503, -0.2997898459434509, -0.17859186232089996, -1.5019792318344116, 0.151197612285614, -0.5586768984794617, -0.008638408035039902, -0.49596720933914185, 0.4330417513847351, 0.16217979788780212, 0.27230459451675415, -0.20549386739730835, 0.24903732538223267, -0.18732021749019623, -0.6536538004875183, 0.09260211139917374, -0.49740439653396606, -0.007311557419598103, 0.3489222824573517, -0.14408843219280243, 0.3663439154624939, -0.09016768634319305, 0.7361327409744263, -0.013332066126167774, 0.241610586643219, -0.779755353927...
During handling of the above exception, another exception occurred:
ArrowInvalid Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Could not convert [-0.007409881334751844, 0.0715881809592247, -0.130095437169075, 0.08213236927986145, -0.06481412053108215, 0.219411239027977, 0.2758248746395111, -0.24343284964561462, -0.17551296949386597, -0.16576780378818512, -0.19957710802555084, 0.513848602771759, -0.2469034492969513, -0.27209365367889404, -0.019221562892198563, 0.3769649565219879, 0.47224175930023193, -0.5267099142074585, -0.3105331361293793, -0.3371395170688629, -0.2927161753177643, -0.7542601227760315, -0.17370374500751495, -0.024053143337368965, 0.14522959291934967, 0.2945793867111206, 0.03297216817736626, -0.0938640609383583, -0.34509730339050293, 0.3848630487918854, -0.1607687622308731, 0.08243361860513687, 0.036992475390434265, -0.5837609767913818, -0.057669747620821, 0.33589160442352295, -0.6164276003837585, 0.22745771706104279, 0.2599221467971802, 0.021962007507681847, 0.38935932517051697, 0.0007948490092530847, -0.71791011095047, 0.008848031982779503, -0.2997898459434509, -0.17859186232089996, -1.5019792318344116, 0.151197612285614, -0.5586768984794617, -0.008638408035039902, -0.49596720933914185, 0.4330417513847351, 0.16217979788780212, 0.27230459451675415, -0.20549386739730835, 0.24903732538223267, -0.18732021749019623, -0.6536538004875183, 0.09260211139917374, -0.49740439653396606, -0.007311557419598103, 0.3489222824573517, -0.14408843219280243, 0.3663439154624939, -0.09016768634319305, 0.7361327409744263, -0.013332066126167774, 0.241610586643219, -0.779755353927...
```
## Expected behavior
Return the dataset embeddings so I can index them and run inference.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10359/comments | https://api.github.com/repos/huggingface/transformers/issues/10359/events | https://github.com/huggingface/transformers/issues/10359 | 814,796,764 | MDU6SXNzdWU4MTQ3OTY3NjQ= | 10,359 | Security Bug found - looking for contact for responsible disclosure | {
"login": "ethan-carlten",
"id": 79542241,
"node_id": "MDQ6VXNlcjc5NTQyMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/79542241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethan-carlten",
"html_url": "https://github.com/ethan-carlten",
"followers_url": "https://api.github.com/users/ethan-carlten/followers",
"following_url": "https://api.github.com/users/ethan-carlten/following{/other_user}",
"gists_url": "https://api.github.com/users/ethan-carlten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethan-carlten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethan-carlten/subscriptions",
"organizations_url": "https://api.github.com/users/ethan-carlten/orgs",
"repos_url": "https://api.github.com/users/ethan-carlten/repos",
"events_url": "https://api.github.com/users/ethan-carlten/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethan-carlten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi – can you send an email to `tech at huggingface.co`? Thanks.",
"I did send it.",
"closing as will handle over email. Thanks!"
] | 1,614 | 1,614 | 1,614 | NONE | null | Hi,
I found a security bug on software related to you.
Can you please tell me how to contact you for a responsible disclosure?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10359/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10358/comments | https://api.github.com/repos/huggingface/transformers/issues/10358/events | https://github.com/huggingface/transformers/issues/10358 | 814,737,100 | MDU6SXNzdWU4MTQ3MzcxMDA= | 10,358 | BART Summarization : Torchscript Export / Inference Triton Server | {
"login": "anshoomehra",
"id": 24396120,
"node_id": "MDQ6VXNlcjI0Mzk2MTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/24396120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anshoomehra",
"html_url": "https://github.com/anshoomehra",
"followers_url": "https://api.github.com/users/anshoomehra/followers",
"following_url": "https://api.github.com/users/anshoomehra/following{/other_user}",
"gists_url": "https://api.github.com/users/anshoomehra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anshoomehra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anshoomehra/subscriptions",
"organizations_url": "https://api.github.com/users/anshoomehra/orgs",
"repos_url": "https://api.github.com/users/anshoomehra/repos",
"events_url": "https://api.github.com/users/anshoomehra/events{/privacy}",
"received_events_url": "https://api.github.com/users/anshoomehra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"It seems I have to mimic GenerationMixin.generate() -- advisable? any detailed documentation on 'beam_search' method ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@anshoomehra Were you able to run BART on Triton?",
"@anshoomehra @moise-g \r\nAre you able to run BART with Triton, if yes can you please share the details?"
] | 1,614 | 1,683 | 1,621 | NONE | null | # 📚 Migration
@sshleifer maybe you can help ?? (thanks for all your work bud!)
## Information
**Objective**: performance gain. We are clocking 1-1.5 sec per transaction at the moment; the target is under 100 ms. It seems exporting the model via TorchScript and running it on Triton Server may be a plausible solution.
I am exporting BART Large CNN (for generating summaries) using TorchScript. I have fine-tuned the model with localized data, but I am unclear on how to use **model.generate(input)**, which seems to wrap **model(input)**, whereas **model(input)** is what gets triggered by default at inference time from the exported model. For simplicity and reproducibility, I am pasting the issue/code details as if the model were the vanilla pre-trained one (not fine-tuned).
Model: **facebook/bart-large-cnn**
Language: **English**
The problem arises when using: **torch.jit.trace(<model>, <dummy_input>)**
## Details
1. model.pt gets generated without any issues
2. However, the generated trace produces the plain model(input) output (mask-filling-style), whereas what I am hoping to get is the output of model.**generate**(input).
3. I am not sure how to handle this, either at the time of export or later during inference. Can you please help?
Code Block Below:
**Step 1**: Generate model.pt file
```
import torch
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer,
AutoConfig
)
dummy_input = torch.tensor([512 * [1]])
BART_CNN_PATH = 'facebook/bart-large-cnn'
BART_CNN_MODEL = AutoModelForSeq2SeqLM.from_pretrained(BART_CNN_PATH)
BART_CNN_MODEL.eval()
traced_model = torch.jit.trace(BART_CNN_MODEL, dummy_input)
traced_model.save("exportedModelsForTritan/bart_large_cnn_fl/1/model.pt")
```
**Step2**: Inference **(+ Error Details)**
```
BART_CNN_TOKENIZER = AutoTokenizer.from_pretrained(BART_CNN_PATH)
input_tokenized = BART_CNN_TOKENIZER.encode(input_text, return_tensors="pt", max_length=512, truncation=True, padding='max_length')
## Test Inference, If I do not use .generate() code works fine ...
## but then it would attempt mask-filling instead of summaries?...
## With model.generate(input), it returns [1xn] where n is the length of summary
## where-as with mode(input), it's generating tuple with length 3 perhaps logits and possibly hidden state weights..
## which I do not know is of significance for summaries ...
model_output = traced_model.generate(input_tokenized)
```
**Error**: ModuleAttributeError: 'RecursiveScriptModule' object has no attribute 'generate'
**Working Code Prior Export as Reference: <Notice the use of model.generate()>**
```
BART_CNN_PATH = 'facebook/bart-large-cnn'
BART_CNN_MODEL = AutoModelForSeq2SeqLM.from_pretrained(BART_CNN_PATH)
BART_CNN_TOKENIZER = AutoTokenizer.from_pretrained(BART_CNN_PATH)
def bart_cnn_summarize_automl(input_text, num_beams=4, num_words=50):
input_text = str(input_text)
input_tokenized = BART_CNN_TOKENIZER.encode(input_text, return_tensors="pt", max_length=512)
summary_ids = BART_CNN_MODEL.generate(input_tokenized,
max_length=100,
min_length=40,
length_penalty=2.0,
num_beams=num_beams,
early_stopping=True)
output = [BART_CNN_TOKENIZER.decode(id, skip_special_tokens=True, clean_up_tokenization_spaces=False) for id in summary_ids]
return str(output[0])
```
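For reference, my understanding is that `model.generate()` essentially runs a decoding loop around repeated forward passes; a rough greedy sketch of that loop using the eager model (beam search, length penalty and min-length handling omitted) is below. If that is correct, the traced graph would only cover the per-step forward call, and a loop like this would have to be re-implemented around it on the client or Triton side:
```python
import torch
def greedy_summarize(model, tokenizer, text, max_steps=100):
    input_ids = tokenizer.encode(text, return_tensors="pt", max_length=512, truncation=True)
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    for _ in range(max_steps):
        outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if next_token.item() == model.config.eos_token_id:
            break
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
print(greedy_summarize(BART_CNN_MODEL, BART_CNN_TOKENIZER, "Some long article text to summarize ..."))
```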
## Environment info
- Python version:Python 3.6.9
- PyTorch version (GPU?): GPU (T4), 1.7.1
- Docker Image: huggingface/transformers-pytorch-gpu:4.2.1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10357/comments | https://api.github.com/repos/huggingface/transformers/issues/10357/events | https://github.com/huggingface/transformers/pull/10357 | 814,701,451 | MDExOlB1bGxSZXF1ZXN0NTc4NjQ2Mzg1 | 10,357 | tokenization_marian.py: use current_spm for decoding | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @patil-suraj! \r\nThanks for your review. As you suggested, I started updating the code and docs where `decode` or `batch_decode` is used.\r\nDoing so, I noticed RAG model also has the same issue: in `decode` and `batch_decode`, `generator` is used instead of `current_tokenizer`. Do you want me to also update that model and its docs accordingly in this PR? ",
"Hey, I submitted my changes and also fixed the RAG tokenizer. Please let me know if I missed something or you want me to change any of the fixes. I can rebase and force-push again. ",
"> do that inside the context manager, so we should also update all the ex\r\n\r\n@patil-suraj @sgugger, wouldn't it be nicer to just do `as_target_tokenizer` in the `batch_decode` and `decode` function itself? Because decoding usually corresponds to the \"target tokenizer\" -> I think this would be nicer for the user",
"We could do that, but the reported issue is about not being able to decode the source tokens as the `decode` always uses `spm_target`. So if we do `as_target_tokenizer` inside `decode` then source tokens will be decoded using `spm_target`, which will cause the same issue.",
"True, yeah I was a bit off there!\r\n\r\nOk, I understand the fix now. It's a very problematic fix however because it's a big backward breaking change. In 99% of the cases people use `batch_decode` for the target outputs and we don't really want people to update their code just so that the source targets work correctly I think...If I understand correctly this PR would change the default behavior of `batch_decode(...)` which is a no-go sadly...\r\n\r\nCould we maybe somehow let the `current_spm` default to `target_spm` when using `batch_decode`, `decode` so that we don't have any breaking changes & then add maybe a new context manager `as_source_tokenizer`? or just add a optional arg to `decode` for Marian?\r\n\r\ncc @LysandreJik @sgugger ",
"It's not very complicated to add the `as_source_tokenizer` context manager. Another solution is to add a flag `use_source_tokenizer` (defaults to False) to `decode` and `batch_decode`.\r\n\r\nIn any case, backward-compatibility is paramount so it needs to be fully enforced.",
"`use_source_tokenizer` seems a better option to me, since the tokenizer already behaves like a source tokenizer by default so adding `as_source_tokenizer` seems a bit confusing IMO.\r\n\r\n@Mehrad0711 , here's how we could now implement this\r\n1. `current_spm` should always default to `source_spm` except inside the `as_target_tokenizer`.\r\n2. As Sylvain suggested, add the `use_source_tokenizer` argument to `decode` and `batch_decode`, if it's `True` use `source_spm` in `convert_tokens_to_string`. \r\n3. `convert_tokens_to_string` should never use `current_spm` as it defaults to `source_spm` and this would break backward-compatibility.\r\n\r\nAnd the user should now pass `use_source_tokenizer` to decode source tokens",
"Thanks. I can proceed with the suggested implementation.\r\n However, passing `use_source_tokenizer` to `convert_tokens_to_string` requires updating `PreTrainedTokenizer`'s `_decode` method. Since `use_source_tokenizer` is passed to `convert_tokens_to_string` for all tokenizers, afaiu, the tokenizer classes for all models that implement their own `convert_tokens_to_string` should be updated individually to accept `use_source_tokenizer` (even though it's not used for some such as encoder-only models). \r\nI think a potential workaround can be adding instance checks within `_decode` to see if the tokenizer accepts that argument, or perhaps use function overloading.\r\nPlease let me know how you want me to proceed.",
"I think we can work around this by setting an internal attribute with the value of `use_source_tokenizer` passed by the user. This way we can recover it in `convert_tokens_to_string` without having to overload any other methods. What do you think?",
"Thanks for the suggestion. It makes sense. If backward-compatibility wasn't a big issue, I think it would be better to use `current_spm` (set to None) in all tokenizer methods and switch it to source or target spm using two (`as_source_tokenizer` and `as_target_tokenizer`) context managers as needed. This way encoding and decoding methods become source/ target agnositc. \r\nHowever, since it's an issue, I think your first suggestion which is using `use_source_tokenizer` is still better than setting an internal value from the user perspective because now they have to use a context_manager during encoding but then set an attribute during decoding which can persist during next decoding if not unset. What do you think?",
"> than setting an internal value from the user perspective\r\n\r\nI never meant the user would have to set it. I meant for us to set it in `decode`/`decode_batch` with the `use_source_tokenizer` argument received.",
"Gotcha. Yeah, that should work.",
"Hi, the PR is ready for review. please let me know if the changes look good. Thanks.",
"Ok I think these changes address the comments. If there are still improvements to the docstring/ code you want to make, please feel free to push directly to this branch. Thanks.",
"The docstrings don't accumulate when you subclass and overload, they get reset. So we have to copy the whole things *and* add the extra argument. Will push on your branch the change.",
"Thanks. I think the PR is ready for the final review.\r\ncc: @sgugger @patrickvonplaten @patil-suraj\r\n",
"Thanks a lot for your work on this PR!",
"Thanks a lot for your feedback and a great PR experience!"
] | 1,614 | 1,615 | 1,615 | CONTRIBUTOR | null | # What does this PR do?
Fixes #10294
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10357",
"html_url": "https://github.com/huggingface/transformers/pull/10357",
"diff_url": "https://github.com/huggingface/transformers/pull/10357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10357.patch",
"merged_at": 1615209271000
} |
https://api.github.com/repos/huggingface/transformers/issues/10356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10356/comments | https://api.github.com/repos/huggingface/transformers/issues/10356/events | https://github.com/huggingface/transformers/issues/10356 | 814,656,314 | MDU6SXNzdWU4MTQ2NTYzMTQ= | 10,356 | Fine-tuning bart-base on XSum and got 34.0 as ROUGE1 (40.61 with higher lr) | {
"login": "XinnuoXu",
"id": 5082188,
"node_id": "MDQ6VXNlcjUwODIxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5082188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XinnuoXu",
"html_url": "https://github.com/XinnuoXu",
"followers_url": "https://api.github.com/users/XinnuoXu/followers",
"following_url": "https://api.github.com/users/XinnuoXu/following{/other_user}",
"gists_url": "https://api.github.com/users/XinnuoXu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XinnuoXu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XinnuoXu/subscriptions",
"organizations_url": "https://api.github.com/users/XinnuoXu/orgs",
"repos_url": "https://api.github.com/users/XinnuoXu/repos",
"events_url": "https://api.github.com/users/XinnuoXu/events{/privacy}",
"received_events_url": "https://api.github.com/users/XinnuoXu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just an update, I increased lr and got ROUGE-F(1/2/l): 40.61/17.48/32.62",
"Maybe @patrickvonplaten or @patil-suraj have an idea",
"Hi @XinnuoXu \r\n\r\nThat model was trained a while ago, and there were some bugs in BART related to `decoder_start_token_id` at that time, see https://discuss.huggingface.co/t/bart-lm-odd-beam-search-output/618/13\r\nwhich could be the reason for this.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,620 | 1,620 | NONE | null | Hi, I'm wondering whether there are any benchmarks for fine-tuning bart-base on XSum. I found [this model](https://huggingface.co/VictorSanh/bart-base-finetuned-xsum/tree/main), which also shows an R1 of around 35. Is it supposed to be this low? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10356/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10355/comments | https://api.github.com/repos/huggingface/transformers/issues/10355/events | https://github.com/huggingface/transformers/issues/10355 | 814,646,561 | MDU6SXNzdWU4MTQ2NDY1NjE= | 10,355 | ProphetNet Positional Embeddings Index Issue | {
"login": "ManavR123",
"id": 17506262,
"node_id": "MDQ6VXNlcjE3NTA2MjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17506262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManavR123",
"html_url": "https://github.com/ManavR123",
"followers_url": "https://api.github.com/users/ManavR123/followers",
"following_url": "https://api.github.com/users/ManavR123/following{/other_user}",
"gists_url": "https://api.github.com/users/ManavR123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManavR123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManavR123/subscriptions",
"organizations_url": "https://api.github.com/users/ManavR123/orgs",
"repos_url": "https://api.github.com/users/ManavR123/repos",
"events_url": "https://api.github.com/users/ManavR123/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManavR123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I believe a \"fix\" was made to this issue in #10501, but it is still not quite correct I think. The solution was to simply clamp the `position_ids` to be bounded between 0 and `max_length - 1`, but the ids still won't be properly set this way. The current solution will result in a tensor that looks like `[1, 2, ..., max_length - 1, max_length - 1]`. Everything is receiving the wrong position embedding except the last item. I think we also need to subtract `1` from the `position_ids` before clamping it. This should fix the offset issue and then the clamp will prevent any leading 0s in the `attention_mask` from becoming -1. Is there something else I am missing for why we wouldn't want this?",
"Hey @ManavR123, \r\n\r\nProphetNet is actually a bit weird since the position_ids start at 1 and not at 0. This is because ProphetNet was for its most part derived from Bart which actually had 513 position id weights even though only 512 were allowed (the first position id was skipped -> it's the padding_id_token). ProphetNet however has exactly 512 weights, so it actually allows only 511 tokens. Now to nevertheless allow ProphetNet to handle 512 tokens we just clamp the last id which shouldn't make a huge difference in performance. This means all position ids are correct **except** the last one, which is a bug that is accepted since it provides the possibility to run 512 tokens.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,622 | 1,622 | CONTRIBUTOR | null | I am having an issue finetuning a ProphetNet model for question generation. The current error I am running into is an indexing issue when getting the `position_embeddings, position_ids`. In the below line of code, we call the `ProphetNetPositionalEmbeddings` module to get the embeddings
https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/prophetnet/modeling_prophetnet.py#L1221
As you can see, the call isn't passing in anything for the `attention_mask` (which I am not sure I fully understand, so I would appreciate clarification as to why that is happening) or `position_ids`, which means both will be None by default. Then, in the forward method of `ProphetNetPositionalEmbeddings`, we see the following logic
https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/prophetnet/modeling_prophetnet.py#L587-L593
Since `attention_mask` is None as noted above, it is set to a tensor of all ones, which makes sense. However, the `position_ids` would then be calculated, for each sample in the batch, as a vector from 1 to `max_length`. This is the cause of the indexing issue I am facing. Should this vector not be from 0 to `max_length - 1`? Is there something I am doing wrong in my setup that would be causing this? Is there some piece of logic I am missing that explains it?
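To make the off-by-one concrete, here is a small self-contained sketch (illustrative only, not the library code verbatim) of what an all-ones attention mask yields under a cumulative-sum scheme, and why a `max_length`-sized embedding table cannot index the resulting last id:
```python
import torch

max_length = 512
attention_mask = torch.ones(1, max_length, dtype=torch.long)  # default when no mask is passed

# A cumulative sum over an all-ones mask yields ids 1..max_length, not 0..max_length-1.
position_ids = torch.cumsum(attention_mask, dim=1) * attention_mask
print(position_ids[0, :3].tolist(), position_ids[0, -1].item())  # [1, 2, 3] ... 512

# An embedding table with max_length rows only accepts indices 0..max_length-1,
# so looking up id 512 fails unless the ids are shifted or clamped first.
position_embeddings = torch.nn.Embedding(max_length, 16)
# position_embeddings(position_ids)  # -> IndexError: index out of range in self
```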
I am happy to share more code if needed to give more context, but I believe this issue is isolated to just ProphetNet code. Any thoughts? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10355/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10354/comments | https://api.github.com/repos/huggingface/transformers/issues/10354/events | https://github.com/huggingface/transformers/pull/10354 | 814,629,849 | MDExOlB1bGxSZXF1ZXN0NTc4NTg3NTAz | 10,354 | Add support for ZeRO-2/3 and ZeRO-offload in fairscale | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Other values supported are: `zero2`, `zero2_offload`, `zero3` and `zero3_offload`. To fully take advantage of the `zero3`/`zero3_offload` the model passed to the `Trainer` will need to have its internal layers wrapped inside the `FullyShardedDataParallel`, but this out of scope for this particular PR.\r\n\r\nDo you feel it's better to hardcode these combinations and not have a more flexible approach of:\r\n```\r\n--sharded_ddp \"zero2;offload;future_option\"\r\n```\r\nor\r\n```\r\n--sharded_ddp \"zero2 offload future_option\"\r\n```\r\nwhich would enable adding new features in the future, without needing to create all possible combinations of options which would double every time a new option will be added.\r\n\r\nThis is the cmd API I'm tentatively using for the pipelines `--pipeline \"chunks=5 device_map=0:0-5,1:5-10 ....\"`\r\n\r\n> One thing to think further is that this integration breaks the usual convention that `self.model` is the original model (`FullyShardedDataParallel` consumes the model to use less memory).\r\n\r\nYes, we will need to rethink this - the trainer is getting more and more complex.",
"> Do you feel it's better to hardcode these combinations and not have a more flexible approach of:\r\n>\r\n> --sharded_ddp \"zero2;offload;future_option\"\r\n\r\nHappy to explore that design as it seems more flexible and less prone to future breaking changes. Will adapt the PR accordingly once we get the wrapper to work.",
"Probably whitespace separation is more readable: `--sharded_ddp \"zero2 offload future_option\"`\r\n\r\nAlso we need to make sure that we distinguish between `FullyShardedDataParallel` and `ShardedDataParallel` since as the [commentary was made](https://github.com/facebookresearch/fairscale/pull/413#issuecomment-784168151), they aren't quite the same. Perhaps `not_full` for `ShardedDataParallel`? both should be corresponding to stage2 but they don't work in the same way.\r\n\r\nDeepspeed has a `stage` param which goes from 0 to 3. where stage=0 doesn't enable ZeRO, and then each number matches the stage.\r\n\r\nFor the user's sake perhaps we could make things as similar as possible so it'd be more intuitive for them to switch between fairscale (and eventually pytorch) and deepspeed.\r\n\r\nAlso note that DeepSpeed exposes other params like the size of buckets, which actually are very important and need to be user-configurable. I won't be surprised that FSDP will also have those configurable down the road - i.e. more params.",
"Reworked the API to take your suggestion of list of options into account @stas00. I don't think we have to worry about uniformizing with deepspeed or cleaning more at this stage as:\r\n- this API will evolve in the future (ShardedDataParallel might very well disappear if FullyShardedDataParallel is better, and this might change again on the road to be merged in PyTorch)\r\n- we don't know yet all the options we will have between deepspeed/fairscale/PyTorch\r\n- this is an experimental API and while we won't break it just for fun, we can make slight changes down the road.",
"Moving out the cl arg naming discussion from https://github.com/huggingface/transformers/pull/10354#pullrequestreview-596676591 to the open\r\n\r\nSo if's not DDP but DP, then we should probably change the cl arg to `_dp` as I suggested above so that it's consistently either DP or DDP all the way through.\r\n\r\nOr perhaps we should just call it `--sharded`? the dp part is already inside the value anyway as in: `--sharded zero_dp_3`"
] | 1,614 | 1,614 | 1,614 | COLLABORATOR | null | # What does this PR do?
This PR adds support for the new `FullyShardedDataParallel` introduced in fairscale. See [this PR](https://github.com/facebookresearch/fairscale/pull/413) for more details.
The PR changes the behavior of the `--sharded_ddp` flag/training argument slightly to support a list of options. You can still use the TrainingArguments class with `sharded_ddp=True`, but if launching a script, `--sharded_ddp` has to be replaced with `--sharded_ddp simple`. The `--sharded_ddp` flag was marked as an experimental API, so I think this breaking change is fine if properly documented.
Other values supported are: `zero_dp_2`, `zero_dp_2 offload`, `zero_dp_3` and `zero_dp_3 offload`. To fully take advantage of `zero_dp_3`/`zero_dp_3 offload`, the model passed to the `Trainer` will need to have its internal layers wrapped inside `FullyShardedDataParallel`, but this is out of scope for this particular PR.
For all those new modes, the model simply needs to be wrapped inside `FullyShardedDataParallel` but the optimizer needs to be created after the model wrapping (to get the parameters shards).
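As a hedged illustration of the option values described above (names as in this PR; the output directory is a placeholder), selecting one of the new modes from Python would look roughly like this:
```python
from transformers import TrainingArguments

# Sketch only: the experimental option is passed as a space-separated list of values.
# Other accepted values include "simple", "zero_dp_2", "zero_dp_2 offload" and "zero_dp_3".
args = TrainingArguments(
    output_dir="output",
    do_train=True,
    sharded_ddp="zero_dp_3 offload",
)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # model/dataset are placeholders
# trainer.train()
```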
Note that:
- `predict_with_generate` does not work with this integration
- `cpu_offload` does not work for now due to the bug mentioned in [this issue](https://github.com/facebookresearch/fairscale/issues/421). Once the issue is fixed, the option should work with the existing code.
One thing to think further is that this integration breaks the usual convention that `self.model` is the original model (`FullyShardedDataParallel` consumes the model to use less memory). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10354/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10354",
"html_url": "https://github.com/huggingface/transformers/pull/10354",
"diff_url": "https://github.com/huggingface/transformers/pull/10354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10354.patch",
"merged_at": 1614269274000
} |
https://api.github.com/repos/huggingface/transformers/issues/10353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10353/comments | https://api.github.com/repos/huggingface/transformers/issues/10353/events | https://github.com/huggingface/transformers/pull/10353 | 814,591,506 | MDExOlB1bGxSZXF1ZXN0NTc4NTU1Njg0 | 10,353 | [bert-base-german-cased] use model repo, not external bucket | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | MEMBER | null | References: #10306 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10353",
"html_url": "https://github.com/huggingface/transformers/pull/10353",
"diff_url": "https://github.com/huggingface/transformers/pull/10353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10353.patch",
"merged_at": 1614101447000
} |
https://api.github.com/repos/huggingface/transformers/issues/10351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10351/comments | https://api.github.com/repos/huggingface/transformers/issues/10351/events | https://github.com/huggingface/transformers/issues/10351 | 814,511,016 | MDU6SXNzdWU4MTQ1MTEwMTY= | 10,351 | Can every line in the input CSV file contain more than one sentence when pertraining BERT for MLM Loss? | {
"login": "abhisheksgumadi",
"id": 1021734,
"node_id": "MDQ6VXNlcjEwMjE3MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheksgumadi",
"html_url": "https://github.com/abhisheksgumadi",
"followers_url": "https://api.github.com/users/abhisheksgumadi/followers",
"following_url": "https://api.github.com/users/abhisheksgumadi/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheksgumadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheksgumadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheksgumadi/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheksgumadi/orgs",
"repos_url": "https://api.github.com/users/abhisheksgumadi/repos",
"events_url": "https://api.github.com/users/abhisheksgumadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheksgumadi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll get more answers over there, as questions like these are the point of the forum :)\r\n\r\nThanks!"
] | 1,614 | 1,614 | 1,614 | NONE | null | Hello HF Team,
I am familiar with how to pretrain BERT and I have a Dataloader that reads an input CSV file line by line; every time it reads a line, it tokenizes it and sends the tokens back to the training code. My question is whether it is OK for this input CSV file to contain more than one sentence on every line when pretraining BERT for masked language modelling.
Or is it important for each line to contain only one meaningful sentence? I am wondering whether self-attention will still work and the model will train properly even if every single line in the input CSV file (a single training sample) actually consists of more than one sentence, each separated with a '.' delimiter.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10351/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10350/comments | https://api.github.com/repos/huggingface/transformers/issues/10350/events | https://github.com/huggingface/transformers/issues/10350 | 814,475,652 | MDU6SXNzdWU4MTQ0NzU2NTI= | 10,350 | Got "RuntimeError: CUDA error: device-side assert triggered" with Seq2SeqTrainer | {
"login": "ithieund",
"id": 4217195,
"node_id": "MDQ6VXNlcjQyMTcxOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4217195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ithieund",
"html_url": "https://github.com/ithieund",
"followers_url": "https://api.github.com/users/ithieund/followers",
"following_url": "https://api.github.com/users/ithieund/following{/other_user}",
"gists_url": "https://api.github.com/users/ithieund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ithieund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ithieund/subscriptions",
"organizations_url": "https://api.github.com/users/ithieund/orgs",
"repos_url": "https://api.github.com/users/ithieund/repos",
"events_url": "https://api.github.com/users/ithieund/events{/privacy}",
"received_events_url": "https://api.github.com/users/ithieund/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there. Could you please post the code you are using? The steps you are defining are too vague for us to efficiently reproduce the issue and help.",
"Hi @sgugger \r\nIt was my fault when I update the max_encoder_length and re-run on the updated blocks.\r\nThe issue will not happen when I restart the kernel on Google Colab.\r\n\r\nI think you can close the issue.\r\nThank you.",
"Glad you could resolve your issue!"
] | 1,614 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@patrickvonplaten @sgugger @stas00
Models:
- encoderdecoder: @patrickvonplaten, @patil-suraj
Library:
- trainer: @sgugger
## Information
Model I am using (PhoBERT):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create an EncoderDecoderModel with phobert-base as both encoder and decoder (a sketch of this step is shown after the list)
2. Prepare train_data and val_data
3. Create Seq2SeqTrainer with that model and data
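For reference, a minimal sketch of step 1 (assuming the public PhoBERT checkpoint; adjust the identifier to the checkpoint actually used):
```python
from transformers import EncoderDecoderModel

# Tie two PhoBERT checkpoints together as encoder and decoder (step 1 above).
sum_model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "vinai/phobert-base", "vinai/phobert-base"
)
```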
## Code
```python
trainer = Seq2SeqTrainer(
    model=sum_model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_data,
    eval_dataset=val_data,
)
```
## Error
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-49-67f458786328> in <module>()
25 compute_metrics=compute_metrics,
26 train_dataset=train_data,
---> 27 eval_dataset=val_data,
28 )
6 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers)
269 # 2. fp16-enabled DeepSpeed loads the model in half the size and it doesn't need .to() anyway
270 if not (self.is_model_parallel or args.deepspeed):
--> 271 model = model.to(args.device)
272
273 # Force n_gpu to 1 to avoid DataParallel as MP will manage the GPUs
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in to(self, *args, **kwargs)
610 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
611
--> 612 return self._apply(convert)
613
614 def register_backward_hook(
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
357 def _apply(self, fn):
358 for module in self.children():
--> 359 module._apply(fn)
360
361 def compute_should_use_set_data(tensor, tensor_applied):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
357 def _apply(self, fn):
358 for module in self.children():
--> 359 module._apply(fn)
360
361 def compute_should_use_set_data(tensor, tensor_applied):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
357 def _apply(self, fn):
358 for module in self.children():
--> 359 module._apply(fn)
360
361 def compute_should_use_set_data(tensor, tensor_applied):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
379 # `with torch.no_grad():`
380 with torch.no_grad():
--> 381 param_applied = fn(param)
382 should_use_set_data = compute_should_use_set_data(param, param_applied)
383 if should_use_set_data:
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in convert(t)
608 if convert_to_format is not None and t.dim() == 4:
609 return t.to(device, dtype if t.is_floating_point() else None, non_blocking, memory_format=convert_to_format)
--> 610 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
611
612 return self._apply(convert)
RuntimeError: CUDA error: device-side assert triggered
```
## Actual behavior
When I reduce max_encoder_length to 80, it works OK.
But when I increase max_encoder_length to >= 100, the error occurs.
## Expected behavior
The code should run properly with max_encoder_length = 512
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10350/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10349/comments | https://api.github.com/repos/huggingface/transformers/issues/10349/events | https://github.com/huggingface/transformers/issues/10349 | 814,434,106 | MDU6SXNzdWU4MTQ0MzQxMDY= | 10,349 | Padding of bbox input in LayoutLM | {
"login": "valentinkoe",
"id": 8581199,
"node_id": "MDQ6VXNlcjg1ODExOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8581199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valentinkoe",
"html_url": "https://github.com/valentinkoe",
"followers_url": "https://api.github.com/users/valentinkoe/followers",
"following_url": "https://api.github.com/users/valentinkoe/following{/other_user}",
"gists_url": "https://api.github.com/users/valentinkoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valentinkoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valentinkoe/subscriptions",
"organizations_url": "https://api.github.com/users/valentinkoe/orgs",
"repos_url": "https://api.github.com/users/valentinkoe/repos",
"events_url": "https://api.github.com/users/valentinkoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/valentinkoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This is a fair request, indeed. The `bbox` values should definitely be padded/truncated by the tokenizer.\r\n\r\nI think here we would welcome a PR adding this functionality for the LayoutLM tokenizer, and then think of a way to upstream it to be handled by the tokenizer directly, for LayoutLM but also for any other model that requires special inputs.\r\n\r\nWould you be open to contributing a PR which adds this functionality to LayoutLM?",
"LayoutLM would really benefit from its own tokenizer indeed. Currently you have to use `BertTokenizer`, but this let's you only tokenize text, not really prepare data for the model.\r\n\r\nA nice API (in my opinion) would look something like:\r\n\r\n`LayoutLMTokenizer(image: PIL.Image, words: List[str], bounding_boxes: List[List[int]], labels: List[str])`\r\n\r\nThe tokenizer then automatically takes care of normalizing the bounding boxes (users can still choose which OCR engine to use to get words and bounding boxes), transform the words and labels into token-level `input_ids`, `bbox`, padding (as you mention), etc.\r\n\r\nThe functionality implemented in the function you refer to ([`convert_examples_to_features`](https://github.com/microsoft/unilm/blob/23a7ea35b55279a171a118ac767e863aa92e692c/layoutlm/layoutlm/data/funsd.py#L206)) could be added by overwriting the `prepare_for_model` method, and the padding functionality by overwriting `_pad`. \r\n\r\n",
"I'm definitely up for working on this.\r\n\r\nThanks a lot for the suggestions @NielsRogge , I see you already did some great work in improving the layoutLM implementation :+1: .\r\nWhat I do not fully understand is what we would need the `image` for at this stage. Can you clarify?\r\n\r\nI'd also like to understand the better the normalization of bounding boxes you mention. If I understand correctly, the bounding boxes generated by the OCR engine may be split further according to whether the tokenizer splits the text inside a box (the official layoutLM code seems to [repeat the same bounding box](https://github.com/microsoft/unilm/blob/master/layoutlm/layoutlm/data/funsd.py#L252) in those cases).\r\nAfaik, most OCR engines do some kind of tokenization already so the additional splitting may not be optimal for all use cases (it is not for mine because of some downstream tasks). There should either be a way to revert that splitting or disable it. What do you think?",
"> Thanks a lot for the suggestions @NielsRogge , I see you already did some great work in improving the layoutLM implementation \n\nThanks! \n\n> What I do not fully understand is what we would need the `image` for at this stage. Can you clarify?\n\nThe image can be used to normalize the bounding boxes for the tokens, based on the width and height of the image. If we decide to let LayoutLMTokenizer to handle the normalization, then it should receive the image. \n\n> I'd also like to understand the better the normalization of bounding boxes you mention. If I understand correctly, the bounding boxes generated by the OCR engine may be split further according to whether the tokenizer splits the text inside a box (the official layoutLM code seems to [repeat the same bounding box](https://github.com/microsoft/unilm/blob/master/layoutlm/layoutlm/data/funsd.py#L252) in those cases).\n\nAn OCR engine (like Google's Tesseract) recognizes words and corresponding bounding boxes in an image. However, LayoutLM (like BERT) uses wordpieces, so if a word like San Francisco is tokenized into ['San', 'Fran', '##Cisco'], then we need to repeat the bounding box for every subword token indeed. \n\n> Afaik, most OCR engines do some kind of tokenization already so the additional splitting may not be optimal for all use cases (it is not for mine because of some downstream tasks). There should either be a way to revert that splitting or disable it. What do you think?\n\nDo they? I used Google's Tesseract and it just recognizes words. \n\n",
"> The image can be used to normalize the bounding boxes for the tokens, based on the width and height of the image. If we decide to let LayoutLMTokenizer to handle the normalization, then it should receive the image.\r\n\r\nSo you mean normalization in terms of bringing the coordinates to the same scale? That totally makes sense. Is that really something the tokenizer should do? Or should we expect the user to supply boxes with correctly scaled values?\r\nIf we want to do the scaling here, do we really need the full image for that and also restrict it to e.g. PIL/Pillow images? Some may use for example opencv where image objects are numpy arrays and don't have `height` and `width` attributes. The height and width values could also be provided as (optional) paramters for the tokenizer.\r\n\r\n> An OCR engine (like Google's Tesseract) recognizes words and corresponding bounding boxes in an image. However, LayoutLM (like BERT) uses wordpieces, so if a word like San Francisco is tokenized into ['San', 'Fran', '##Cisco'], then we need to repeat the bounding box for every subword token indeed.\r\n\r\nActually, I referred to this recognition of words by OCR as tokenization as well - after all the text on a document/image could also be delivered as just one large string. Maybe that wasn't the right choice of words, sorry for the confusion.\r\nI totally get that wordpiece \"takes it a step further\" and that this makes sense. What I wanted to clarify is how that should be dealt with. It might be confusing to a user to get a larger amount of bounding boxes after tokenization. I guess this is in line with the other tokenizers but it should at least be documented very clearly.",
"> So you mean normalization in terms of bringing the coordinates to the same scale? That totally makes sense. Is that really something the tokenizer should do? Or should we expect the user to supply boxes with correctly scaled values?\r\n\r\nThat's a design decision. We could choose to let the tokenizer handle normalization or not. And yeah maybe PIL images is too strict. \r\n\r\n> I guess this is in line with the other tokenizers but it should at least be documented very clearly.\r\n\r\nIf we add bounding boxes for every token, we should add it to the documentation indeed!",
"I created a PR https://github.com/huggingface/transformers/pull/10719 in which I added the functionality to repeat bounding boxes for text that is split, also solving the padding problem that lead me here.\r\n\r\nHowever, I ended up basically repeating a lot of code (also for the tests) and I'm not sure this is the nicest way to tackle the problem. Maybe it could make more sense to add an optional `additional_input` parameter for the base tokenizers to avoid all this repetition. There may be more models that need additional inputs than the `input_ids`.\r\n\r\nI also removed the fast version of the tokenizer for now as I first would like to clarify with the maintainers if this is the right approach.\r\n\r\n(I also added optional coordinate normalization)",
"@LysandreJik could you maybe give this a look and suggest how to proceed with this?",
"ping @LysandreJik . Would be great to get some feedback on my PR :slightly_smiling_face: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,620 | 1,620 | CONTRIBUTOR | null | I've been working with LayoutLM and had some issues with different lengths of samples in a batch.
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
It turns out that `transformers.tokenization_utils_base.PreTrainedTokenizerBase._pad` does not pad the `bbox` items to the maximum length in the batch, so trying to join the differently sized lists into a tensor eventually crashes.
One way to solve this is to pad all required items when generating samples, as the official implementation does for the [FUNSD data set](https://github.com/microsoft/unilm/blob/master/layoutlm/layoutlm/data/funsd.py#L317-L331). I also implemented it this way for my use case and it seems to work well.
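For reference, here is a minimal sketch of that workaround (function and argument names are illustrative; the `[0, 0, 0, 0]` pad box follows the official FUNSD preprocessing linked above):
```python
def pad_layoutlm_features(input_ids, attention_mask, token_type_ids, bboxes,
                          max_length, pad_token_id=0, pad_token_box=(0, 0, 0, 0)):
    """Pad token ids, masks and bounding boxes to a fixed length."""
    pad_len = max_length - len(input_ids)
    input_ids = input_ids + [pad_token_id] * pad_len
    attention_mask = attention_mask + [0] * pad_len
    token_type_ids = token_type_ids + [0] * pad_len
    bboxes = bboxes + [list(pad_token_box)] * pad_len  # one dummy box per padding token
    return input_ids, attention_mask, token_type_ids, bboxes
```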
But this is basically repeating the pad functionality and I was wondering if the `_pad` method should allow for additional required input like the `bbox`es are for LayoutLM. I'm happy to work on a PR for that but also wanted to check if there's anything more to consider. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10349/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10348/comments | https://api.github.com/repos/huggingface/transformers/issues/10348/events | https://github.com/huggingface/transformers/issues/10348 | 814,424,293 | MDU6SXNzdWU4MTQ0MjQyOTM= | 10,348 | BertForMaskedLM cannot be initialized from BERT checkpoints | {
"login": "DavidNemeskey",
"id": 690386,
"node_id": "MDQ6VXNlcjY5MDM4Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/690386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidNemeskey",
"html_url": "https://github.com/DavidNemeskey",
"followers_url": "https://api.github.com/users/DavidNemeskey/followers",
"following_url": "https://api.github.com/users/DavidNemeskey/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidNemeskey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidNemeskey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidNemeskey/subscriptions",
"organizations_url": "https://api.github.com/users/DavidNemeskey/orgs",
"repos_url": "https://api.github.com/users/DavidNemeskey/repos",
"events_url": "https://api.github.com/users/DavidNemeskey/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidNemeskey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Note that the way to load TF weights in a PyTorhc model while using the hub is just to do:\r\n```\r\nmodel = BertForMaskedLM.from_pretrained(\"SZTAKI-HLT/hubert-base-cc\", from_tf=True)\r\n```\r\nIt does look like this model has some weights missing (I still get a warning but with a few less weights than you) but if that's blocking you, there is nothing we can do about it: you should contact the author of the model on the hub (the same weights are missing in the TF version).",
"@sgugger **I** am the author of the model. :) And yes, the weights are missing, hence this issue; as described above, I used `transformers-cli convert` to convert the original BERT TF (1.5) checkpoint to Pytorch, and the script apparently did not convert all the weights.\r\n\r\nI might be wrong about this, but I was under the impression that the `from_tf=True` is to be used when the model has both PT and TF versions uploaded to the hub, not for importing an original BERT checkpoint.",
"Ah sorry I misunderstood your problem, sorry! I though you were trying to use the `transformers-cli` on the PyTorch model file of the hub. Not sure what the problem is with the conversion script. Maybe @LysandreJik will have an idea?",
"Hello! Thank your for opening an issue. As you both have said, there seems to be an error with the conversion.\r\n\r\nDo you mind letting me know how you obtained your checkpoint? For example, is it one of the checkpoints available on google-research/bert, or is a custom one?\r\n\r\nAll the checkpoints available on google-research/bert should convert without any issue.",
"OK, I have experimented a bit and it seems that actually conversion works -- it is possible that I created a `BertModel` and saved my model as that instead of `BertForPretraining`. In any case, I have updated my model(s) and the issue is moot.\r\n\r\nBefore closing it, however, one final question. I created the Pytorch model first. When I converted that to TF2 via\r\n```\r\nTFBertForPreTraining.from_pretrained('SZTAKI-HLT/hubert-base-cc', from_pt=True)\r\n```\r\n, I got the following warning:\r\n```\r\nSome weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForPreTraining: ['cls.predictions.decoder.bias', 'bert.embeddings.position_ids']\r\n```\r\nThe model, when used for masked LM, behaves identically to the Pytorch model, so I am wondering why they store different tensors (given that these tensors come from the original TF checkpoint) and if the model will be alright without them.",
"Glad you could convert it!!\r\n\r\nThese warnings aren't important, the bias is included in another weight and the position IDs are a buffer that does not need to be created in TensorFlow.\r\n\r\nI'm addressing these warnings in #10397."
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | When I try to load a BERT model from a TF checkpoint (via `transformers-cli convert`) into a `BertForMaskedLM`, I get the following warning:
```
Some weights of BertForMaskedLM were not initialized from the model checkpoint at `SZTAKI-HLT/hubert-base-cc` and are newly initialized:
['cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight',
'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
The model also performs poorly (as in: completely randomly) in masked LM. If I load a "named" model, such as `bert-base-cased`, I do not get the warning and masked LM works OK.
This is all to be expected if the tensors mentioned in the warning are indeed not part of the converted model. The question then is twofold:
1. Why aren't they? MaskedLM is one of the training tasks for a BERT model, and users rightly expect that it works (I have already received two reports for my model to that effect); i.e. that they can initialize a `BertForMaskedLM` model from a BERT checkpoint / HF model without any problems.
2. How can I convert the model so that it includes said tensors? To my knowledge, there are no options in `transformers-cli convert` that would enable me to do so (one possible approach is sketched after this list).
3. The [documentation](https://huggingface.co/transformers/converting_tensorflow_models.html) should warn people of this (and better yet, describe how to convert all tensors).
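For what it's worth, here is a hedged sketch of a conversion that keeps the pre-training heads (paths are placeholders, and TensorFlow must be installed to read the checkpoint): loading the TF 1.x checkpoint into `BertForPreTraining` rather than `BertModel` should also convert the `cls.predictions.*` tensors.
```python
from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert

config = BertConfig.from_json_file("bert_config.json")
model = BertForPreTraining(config)

# Copies every TF variable it can match, including the MLM/NSP head weights.
load_tf_weights_in_bert(model, config, "bert_model.ckpt")
model.save_pretrained("converted-model")
```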
## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-5.4.0-60-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): SZTAKI-HLT/hubert-base-cc (BERT)
The problem arises when using:
* [X] the official example scripts: `transformer-cli convert`
* [ ] my own modified scripts:
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: masked LM
* [ ] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
1. `BertForMaskedLM.from_pretrained('SZTAKI-HLT/hubert-base-cc')`
2. Observe the warning messages
3. Try to use it for masked LM
## Expected behavior
Conversion: the ability to convert tensors for the training tasks
Usage: no warning messages and same MLM / NSP performance as with the official TF BERT code | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10348/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10347/comments | https://api.github.com/repos/huggingface/transformers/issues/10347/events | https://github.com/huggingface/transformers/issues/10347 | 814,405,254 | MDU6SXNzdWU4MTQ0MDUyNTQ= | 10,347 | [Benchmark] | {
"login": "IsaacZachary",
"id": 66381599,
"node_id": "MDQ6VXNlcjY2MzgxNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/66381599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IsaacZachary",
"html_url": "https://github.com/IsaacZachary",
"followers_url": "https://api.github.com/users/IsaacZachary/followers",
"following_url": "https://api.github.com/users/IsaacZachary/following{/other_user}",
"gists_url": "https://api.github.com/users/IsaacZachary/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IsaacZachary/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IsaacZachary/subscriptions",
"organizations_url": "https://api.github.com/users/IsaacZachary/orgs",
"repos_url": "https://api.github.com/users/IsaacZachary/repos",
"events_url": "https://api.github.com/users/IsaacZachary/events{/privacy}",
"received_events_url": "https://api.github.com/users/IsaacZachary/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10347/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10346/comments | https://api.github.com/repos/huggingface/transformers/issues/10346/events | https://github.com/huggingface/transformers/issues/10346 | 814,403,227 | MDU6SXNzdWU4MTQ0MDMyMjc= | 10,346 | Custom tokenizer with run_mlm script | {
"login": "shampp",
"id": 55344772,
"node_id": "MDQ6VXNlcjU1MzQ0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/55344772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shampp",
"html_url": "https://github.com/shampp",
"followers_url": "https://api.github.com/users/shampp/followers",
"following_url": "https://api.github.com/users/shampp/following{/other_user}",
"gists_url": "https://api.github.com/users/shampp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shampp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shampp/subscriptions",
"organizations_url": "https://api.github.com/users/shampp/orgs",
"repos_url": "https://api.github.com/users/shampp/repos",
"events_url": "https://api.github.com/users/shampp/events{/privacy}",
"received_events_url": "https://api.github.com/users/shampp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The issue here is that the `AutoTokenizer` has no idea what is the type of your tokenizer: it's looking for the `model_type` specified in the `config.json`, but it seems it cannot find it.\r\n\r\nCould you show us the results of `ls ../Data/tokenizer/`, and if the file `config.json` is in it, could you show us the exact content of the JSON file?\r\n\r\nThanks a lot!",
"I am expecting the config.json and vocabulary files to be saved by running `bert_tokenizer.save(vocab_file)` (Please check the attached code). But unfortunately it saves a json file containing only the vocabulary. I tried the function `bert_tokenizer.save_model`, but got an error saying Tokenizer don't have such a function. So there is no configuration files. But only a vocabulary json file. If I give a directory path as input to `bert_tokenizer.save`, it gives me error `Exception: Is a directory (os error 21)`.",
"The `bert_tokenizer.save(vocab_file)` method does not save the configuration as the configuration is linked to the model. It is unfortunately currently impossible to use the `AutoTokenizer` without having the model `config.json` in the same folder, which is a hard limitation of the `AutoTokenizer`.\r\n\r\nWe are aware of this limitation and it is part of the immediate roadmap. Expect a change in the coming weeks related to that issue.\r\n\r\nThank you for your understanding.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @LysandreJik! Has there been any change on this subject?",
"There hasn't been any change - but we've been freeing some time to work on this subject. I would expect this to be resolved in 2 or 3 weeks.",
"Awesome, thanks a lot for your reply :) ",
"I also encountered this problem, how to solve it",
"Using a recent version of the library should now work for these use-cases.\r\n\r\nCould you try using the `master` branch to see if it fixes your issue? You should use it to both save your tokenizer, as well as to load it in the script. If it doesn't work, please provide the code you're using as well as the full stack trace. Thank you!"
] | 1,614 | 1,625 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Linux-5.9.16-1-MANJARO-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: NO
### Who can help
@LysandreJik, @n1t0
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I follow the [official example](https://huggingface.co/docs/tokenizers/python/latest/pipeline.html#example) to train and save a Bert WordPieceLevel tokenizer on a custom corpus.
2. I use this tokenizer to train a bert model from scratch using the run_mlm script
3. Run the MLM script with that tokenizer:
`python run_mlm.py --output_dir=../Data/model/ --model_type=bert --mlm_probability 0.1 --tokenizer_name=../Data/tokenizer --learning_rate 1e-4 --do_train --train_file ../Data/corpus.txt --gradient_accumulation_steps=4 --num_train_epochs 100 --per_gpu_train_batch_size 2 --save_steps 50000 --seed 42 --config_name=../Data/config/ --line_by_line --do_eval --max_seq_length=8 --logging_steps 5000 --validation_split_percentage 20 --save_steps 50000 --save_total_limit 10`
The tokenizer from step 1 was trained and saved with the following code:
```python
import pandas as pd

from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from tokenizers import normalizers
from tokenizers.normalizers import NFD, StripAccents
from tokenizers.processors import TemplateProcessing
from tokenizers.trainers import WordPieceTrainer

# Note: this path is where the tokenizers-library JSON is written; it is not a transformers model config.
vocab_file = '../Data/tokenizer/config.json'
corpus_file = '../Data/corpus.txt'
df = pd.read_csv(corpus_file)

bert_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
bert_tokenizer.normalizer = normalizers.Sequence([NFD(), StripAccents()])
bert_tokenizer.pre_tokenizer = Whitespace()
bert_tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)
trainer = WordPieceTrainer(vocab_size=25000, min_frequency=3, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
bert_tokenizer.train_from_iterator(df.query_text.to_list(), trainer)
bert_tokenizer.save(vocab_file)
```
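For completeness, a workaround I am considering (just a sketch on my side, not something I have verified, and it may require a newer library version) is to wrap the saved `tokenizers` JSON in `PreTrainedTokenizerFast` and re-save it with `save_pretrained`, so the folder passed to `--tokenizer_name` contains the usual tokenizer files:
```python
# Unverified sketch: wrap the tokenizers-library JSON in a transformers fast tokenizer.
from transformers import PreTrainedTokenizerFast

wrapped = PreTrainedTokenizerFast(
    tokenizer_file=vocab_file,  # the JSON written by bert_tokenizer.save(...) above
    unk_token="[UNK]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    pad_token="[PAD]",
    mask_token="[MASK]",
)
# Writes tokenizer_config.json / special_tokens_map.json next to the tokenizer JSON.
wrapped.save_pretrained("../Data/tokenizer/")
```
Whether `AutoTokenizer` then picks this folder up presumably still depends on the version question discussed in the comments above.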
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
My training configuration is as follows:
```json
{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 128,
  "initializer_range": 0.02,
  "intermediate_size": 256,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 1536,
  "model_type": "bert",
  "num_attention_heads": 4,
  "num_hidden_layers": 4,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.3.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 25000
}
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I am getting the error
```
loading configuration file ../Data/tokenizer/config.json
Traceback (most recent call last):
  File "run_mlm.py", line 457, in <module>
    main()
  File "run_mlm.py", line 276, in main
    tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
  File ".../lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 362, in from_pretrained
    config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File ".../lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 379, in from_pretrained
    raise ValueError(
ValueError: Unrecognized model in ../Data/tokenizer. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: wav2vec2, convbert, led, blenderbot-small, retribert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10346/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10345/comments | https://api.github.com/repos/huggingface/transformers/issues/10345/events | https://github.com/huggingface/transformers/issues/10345 | 814,371,963 | MDU6SXNzdWU4MTQzNzE5NjM= | 10,345 | MarianMT - ONNX only accepts fixed input despite setting dynamic axes | {
"login": "10-zin",
"id": 33179372,
"node_id": "MDQ6VXNlcjMzMTc5Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/10-zin",
"html_url": "https://github.com/10-zin",
"followers_url": "https://api.github.com/users/10-zin/followers",
"following_url": "https://api.github.com/users/10-zin/following{/other_user}",
"gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/10-zin/subscriptions",
"organizations_url": "https://api.github.com/users/10-zin/orgs",
"repos_url": "https://api.github.com/users/10-zin/repos",
"events_url": "https://api.github.com/users/10-zin/events{/privacy}",
"received_events_url": "https://api.github.com/users/10-zin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"pinging @mfuntowicz, our `onnx` expert.",
"Any lead on the solution? @mfuntowicz, @patrickvonplaten, @patil-suraj ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,614 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-5.8.0-43-generic-x86_64-with-glibc2.29 (Ubuntu 20.04.2)
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes (No too, error persists in both cases)
- Using distributed or parallel set-up in script?: No
### Who can help
marian: @patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Helsinki-NLP/opus-mt-en-hi (bug persists for other languages too)
The problem arises when using:
* [ ] my own modified scripts: (give details below)
I slightly modified the `convert_graph_to_onnx.py` script with the code snippet given below (the call to `export` is exactly the same). Apparently `torch.triu()` is not supported for ONNX conversion, so following prior issues in PyTorch ([#32968](https://github.com/pytorch/pytorch/issues/32968)) I modified the script, resulting in successful ONNX conversion of the models.
```python
torch_triu = torch.triu

def triu_onnx(x, diagonal=0):
    l = x.shape[0]
    arange = torch.arange(l, device=x.device)
    mask = arange.expand(l, l)
    arange = arange.unsqueeze(-1)
    if diagonal:
        arange = arange + diagonal
    mask = mask >= arange
    return x.masked_fill(mask == 0, 0)

torch.triu = triu_onnx
export(
    nlp.model,
    model_args,
    f=output.as_posix(),
    input_names=ordered_input_names,
    output_names=output_names,
    dynamic_axes=dynamic_axes,
    do_constant_folding=True,
    use_external_data_format=use_external_format,
    enable_onnx_checker=True,
    opset_version=opset,
)
torch.triu = torch_triu
```
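As a sanity check (a diagnostic sketch only, assuming the `onnx` package is available), the exported graph's inputs can be inspected to see whether the dynamic axes actually survived the export:
```python
# Diagnostic sketch: dynamic axes should show up as symbolic names ('batch', 'sequence'),
# fixed axes as plain integers.
import onnx

exported = onnx.load("onnx-models/opus-mt-en-hi.onnx")
for graph_input in exported.graph.input:
    dims = [d.dim_param or d.dim_value for d in graph_input.type.tensor_type.shape.dim]
    print(graph_input.name, dims)
```
If the dims come back as fixed integers instead of `batch`/`sequence`, that would point at the export step itself rather than at onnxruntime.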
The tasks I am working on is:
* Simple Machine Translation
## To reproduce
Steps to reproduce the behavior:
1. `python convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-hi onnx-models/opus-mt-en-hi.onnx`
2.
```
from transformers import AutoTokenizer
from onnxruntime import ExecutionMode, InferenceSession, SessionOptions
import numpy as np
tok_name = 'Helsinki-NLP/opus-mt-en-hi'
model_name = 'onnx-models/opus-mt-en-hi.onnx'
tokenizer = AutoTokenizer.from_pretrained(tok_name)
options = SessionOptions()
options.intra_op_num_threads = 1
options.execution_mode = ExecutionMode.ORT_SEQUENTIAL
session = InferenceSession(model_name, options)
tokens = tokenizer.encode_plus('Testing onnx conversion through a sample input for machine translation.')
tokens = {name: np.atleast_2d(value) for name, value in tokens.items()}
op = session.run(None, tokens)
```
3. Stack Trace
```
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
<ipython-input-96-d76889a45083> in <module>
14 tokens = tokenizer.encode_plus('Testing onnx conversion through a sample input for machine translation.')
15 tokens = {name: np.atleast_2d(value) for name, value in tokens.items()}
---> 16 op = session.run(None, tokens)
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
122 output_names = [output.name for output in self._outputs_meta]
123 try:
--> 124 return self._sess.run(output_names, input_feed, run_options)
125 except C.EPFail as err:
126 if self._enable_fallback:
```
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_62' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,13}, requested shape:{5}
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Ideally the input should have smoothly passed through the onnx converted model during inference, but it doesn't.
Possible useful information
```
Dynamic Axes:
{'input_ids': {0: 'batch', 1: 'sequence'},
'attention_mask': {0: 'batch', 1: 'sequence'},
'output_0': {0: 'batch', 1: 'sequence'},
'output_1': {0: 'batch', 1: 'sequence'}}
```
```
Generated inputs order: ['input_ids', 'attention_mask']
```
**My lead is that** -
Exporting does not seem to take the dynamic axes into account when any MarianMT model is used.
The error also mentions a requested shape of size 5, which is the sequence length of the dummy input (line 196 in `convert_graph_to_onnx.py`) used while converting to ONNX.
Notably, passing an input with sequence length 5 works perfectly fine.
Moreover, this script works perfectly for standard models like DistilBERT, for both model conversion and inference. So it's surely some model-specific problem.
It would be really helpful to get a fix for this, especially since there are numerous MarianMT models, so it could have a large impact! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10345/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10344/comments | https://api.github.com/repos/huggingface/transformers/issues/10344/events | https://github.com/huggingface/transformers/pull/10344 | 814,351,512 | MDExOlB1bGxSZXF1ZXN0NTc4MzUzNDc5 | 10,344 | Fix broken examples/seq2seq/README.md markdown | {
"login": "Wikidepia",
"id": 72781956,
"node_id": "MDQ6VXNlcjcyNzgxOTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/72781956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wikidepia",
"html_url": "https://github.com/Wikidepia",
"followers_url": "https://api.github.com/users/Wikidepia/followers",
"following_url": "https://api.github.com/users/Wikidepia/following{/other_user}",
"gists_url": "https://api.github.com/users/Wikidepia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wikidepia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wikidepia/subscriptions",
"organizations_url": "https://api.github.com/users/Wikidepia/orgs",
"repos_url": "https://api.github.com/users/Wikidepia/repos",
"events_url": "https://api.github.com/users/Wikidepia/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wikidepia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes broken markdown in `examples/seq2seq/README.md`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10344/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10344",
"html_url": "https://github.com/huggingface/transformers/pull/10344",
"diff_url": "https://github.com/huggingface/transformers/pull/10344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10344.patch",
"merged_at": 1614095365000
} |
https://api.github.com/repos/huggingface/transformers/issues/10343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10343/comments | https://api.github.com/repos/huggingface/transformers/issues/10343/events | https://github.com/huggingface/transformers/issues/10343 | 814,164,977 | MDU6SXNzdWU4MTQxNjQ5Nzc= | 10,343 | Where can we find the `RAG` implementation? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The implementation can be found [here](https://github.com/huggingface/transformers/tree/master/src/transformers/models/rag).",
"@NielsRogge Thanks! That's it."
] | 1,614 | 1,614 | 1,614 | NONE | null | I noticed that `transformers` included the implementation for `DPR`. But for `RAG`, I only find a [demo](https://huggingface.co/rag/). Is there source code for `RAG`? Or do you know where Facebook's source code for `RAG` is? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10343/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10342/comments | https://api.github.com/repos/huggingface/transformers/issues/10342/events | https://github.com/huggingface/transformers/issues/10342 | 814,162,350 | MDU6SXNzdWU4MTQxNjIzNTA= | 10,342 | DialoGPT tokenizer config issue | {
"login": "ayubSubhaniya",
"id": 20911334,
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayubSubhaniya",
"html_url": "https://github.com/ayubSubhaniya",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This change may have originated from the move to the git-based repos. @patrickvonplaten and I have just modified the DialoGPT tokenizer configuration to have 1024 as model max length, you shouldn't have to do anything but re-run your script to see it updated.",
"@LysandreJik Don't hesitate to reference the url for the commit on huggingface.co for future reference\r\n\r\nHere I believe it's https://huggingface.co/microsoft/DialoGPT-small/commit/9fb5c2d6a01395898bfd90acce2dbec1537730f1",
"Indeed! Here are the commits:\r\n\r\n`DialoGPT-small`: [huggingface@364722e](https://huggingface.co/microsoft/DialoGPT-small/commit/364722ef15f5c04dcb9a57d3b77815bbc1d51efc) and [huggingface@9fb5c2d](https://huggingface.co/microsoft/DialoGPT-small/commit/364722ef15f5c04dcb9a57d3b77815bbc1d51efc)\r\n`DialoGPT-medium`: [huggingface@e84a3e](https://huggingface.co/microsoft/DialoGPT-medium/commit/e84a3e0adc90aabc6b57e59318e15bf4b733eedc)\r\n`DialoGPT-large`: [huggingface@acc7ea](https://huggingface.co/microsoft/DialoGPT-large/commit/acc7eaf98122bc6922976182b4d365d650f179b3)",
"Thanks @LysandreJik for quick fix."
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.6.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- tokenizers: @n1t0, @LysandreJik
## Information
Model I am using: DialoGPT-small
When I load the tokenizer, its `model_max_length` comes back as infinite.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
max_len = tokenizer.model_max_length
```
## Expected behavior
Previously it came back as 1024.
Is this some recent change?
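As a stop-gap on my side (just an assumption, not a proper fix), the limit can be pinned manually after loading:
```python
# Assumed workaround until the hosted tokenizer config is corrected: set the limit explicitly.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
tokenizer.model_max_length = 1024
```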
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10342/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10341/comments | https://api.github.com/repos/huggingface/transformers/issues/10341/events | https://github.com/huggingface/transformers/issues/10341 | 814,113,801 | MDU6SXNzdWU4MTQxMTM4MDE= | 10,341 | Translate English into Japanese using mbart | {
"login": "DUT-Tjy",
"id": 57056813,
"node_id": "MDQ6VXNlcjU3MDU2ODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/57056813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DUT-Tjy",
"html_url": "https://github.com/DUT-Tjy",
"followers_url": "https://api.github.com/users/DUT-Tjy/followers",
"following_url": "https://api.github.com/users/DUT-Tjy/following{/other_user}",
"gists_url": "https://api.github.com/users/DUT-Tjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DUT-Tjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DUT-Tjy/subscriptions",
"organizations_url": "https://api.github.com/users/DUT-Tjy/orgs",
"repos_url": "https://api.github.com/users/DUT-Tjy/repos",
"events_url": "https://api.github.com/users/DUT-Tjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DUT-Tjy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @DUT-Tjy \r\n\r\n`facebook/mbart-large-cc25` is not fine-tuned for translation, it's a pretrained model, which should be fine-tuned if you want to use it for translation. You could use the `mbart-large-50-one-to-many-mmt` or `mbart-large-50-many-to-many-mmt` model for `en-ja` translation, these are fine-tuned multilingual translation models.\r\nhttps://huggingface.co/models?filter=mbart-50\r\n\r\nWe have stopped supporting the `task_specific_params` params. You should directly set the `decoder_start_token_id` in `config`, instead of `config.task_specific_params`. ",
"Thank you for your reply!\r\nI execute run_seq2seq.py again through the following command:\r\npython transformers/examples/seq2seq/run_seq2seq.py\r\n--model_name_or_path facebook/mbart-large-50-one-to-many-mmt\r\n--do_predict\r\n--task translation_en_to_ja\r\n--source_lang en_XX\r\n--target_lang ja_XX\r\n--train_file train.json\r\n--validation_file val.json\r\n--test_file test.json\r\n--output_dir predict\r\n--per_device_train_batch_size=4\r\n--per_device_eval_batch_size=4\r\n--overwrite_output_dir\r\n--predict_with_generate\r\n\r\nHowever, the generated prediction content is not Japanese, but a mixture of languages, as follows:\r\nIk zal jullie vandaag leren hoe je onderzoek doet.\r\n[وکٹر] بس انلاین تلاش کرنے کی طرح؟\r\nEn nee.\r\n\r\nThe contents of test.json are as follows\r\n{\"translation\": {\"ja\": \"オンライン調査みたいなものですか?\", \"en\": \"Like just searching online?\"}}\r\n{\"translation\": {\"ja\": \"それも含みます。\", \"en\": \"Yes and no.\"}}\r\n\r\nIs there something wrong with the command I set?\r\nLooking forward to your reply.\r\n@patil-suraj",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@DUT-Tjy Hi, I also face this problem. I expect my generated sentences are only in the Vietnamese language, but they are mixed in both English and Vietnamese. Have you resolved it? If yes, could you please share the solution with me?"
] | 1,614 | 1,622 | 1,619 | NONE | null | Transformers version: 4.4.0.dev0
Hello, I am trying to translate English into Japanese (en-ja). I have confirmed that there are no errors in my target and source content. However, when I try to translate, the predicted output is always in English. What should I do?
I only changed config.json as shown below:
```json
"task_specific_params": {
  "translation_en_to_ja": { "decoder_start_token_id": 250020 }
}
```
I execute run_seq2seq.py through the following command:
```bash
python run_seq2seq.py \
  --model_name_or_path facebook/mbart-large-cc25 \
  --do_train \
  --do_eval \
  --do_predict \
  --task translation_en_to_ja \
  --source_lang en_XX \
  --target_lang ja_XX \
  --train_file train.json \
  --validation_file val.json \
  --test_file test.json \
  --output_dir result \
  --per_device_train_batch_size=4 \
  --per_device_eval_batch_size=4 \
  --overwrite_output_dir \
  --predict_with_generate
```
The results of the model's predict run are as follows:
```
test_bleu = 1.665
test_gen_len = 41.0
test_loss = 4.2494
test_mem_cpu_alloc_delta = 0MB
test_mem_cpu_peaked_delta = 8MB
test_runtime = 77.6885
test_samples = 4
test_samples_per_second = 0.051
```
Am I not setting the config.json correctly? Or are there other things that need to be set up?
Looking forward to your reply.
@patil-suraj
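In case it helps to frame the question, this is the kind of usage I am aiming for (a sketch only; the checkpoint name comes from the comments above and the `forced_bos_token_id` handling is my assumption from the mBART-50 docs, not something I have verified):
```python
# Sketch (unverified): en -> ja with an mBART-50 checkpoint, forcing the Japanese BOS token.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-one-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("Like just searching online?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ja_XX"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```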
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10341/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10340/comments | https://api.github.com/repos/huggingface/transformers/issues/10340/events | https://github.com/huggingface/transformers/issues/10340 | 814,015,617 | MDU6SXNzdWU4MTQwMTU2MTc= | 10,340 | tokenizer.Tokenizer compatibility with Inference API or Auto* classes | {
"login": "gstranger",
"id": 36181416,
"node_id": "MDQ6VXNlcjM2MTgxNDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/36181416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gstranger",
"html_url": "https://github.com/gstranger",
"followers_url": "https://api.github.com/users/gstranger/followers",
"following_url": "https://api.github.com/users/gstranger/following{/other_user}",
"gists_url": "https://api.github.com/users/gstranger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gstranger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gstranger/subscriptions",
"organizations_url": "https://api.github.com/users/gstranger/orgs",
"repos_url": "https://api.github.com/users/gstranger/repos",
"events_url": "https://api.github.com/users/gstranger/events{/privacy}",
"received_events_url": "https://api.github.com/users/gstranger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,614 | 1,614 | null | NONE | null | # 🚀 Feature request
Make tokenizers created with the tokenizers library compatible with the Inference API or Auto classes
## Motivation
I have trained a model on a specific domain by modeling a sequence generation problem as a language modeling problem to predict the next token in the set. The tokenizer associated with the model I used (TransformerXL) was not compatible with my domain since my tokens contained whitespace so I created my own using the `WordLevelTrainer` class in the `tokenizers` library. **Now that I have a complete working solution I would like to use this tokenizer and model in the huggingface Inference API, however it does not work because it requires the tokenizer associated with the model**. Making the `transformers` models compatible with `tokenizers` library could make all kinds of use cases outside of NLP possible with these libraries.
## Your contribution
Is it possible to hack the saved config for a tokenizer created through the `tokenizers` library to work directly with the `Auto` classes? If so I can document this approach for other users.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10340/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10339/comments | https://api.github.com/repos/huggingface/transformers/issues/10339/events | https://github.com/huggingface/transformers/issues/10339 | 813,922,912 | MDU6SXNzdWU4MTM5MjI5MTI= | 10,339 | Problem with GPT2/DistilGPT2 prediction - dimension mismatch | {
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @sgugger has seen that error previously?",
"Yes, and it has been fixed... but only in the more recent versions of Transformers.",
"Excellent, thank you guys, I'll work on upgrading tomorrow.",
"If it can help you, we have a [migration guide from versions v3 to v4](https://huggingface.co/transformers/migration.html#migrating-from-transformers-v3-x-to-v4-x).\r\n\r\nPlease let us know if you run into any issues not described here!",
"will take a look, thank you! My tasks are similar to text classification in Glue, so I usually start from the sample code and modify it accordingly.",
"These are some different bits that I found by glancing at the new sample code to run GLUE. @LysandreJik \r\n\r\n- Columns in the train/dev/test datasets: Fill in the `task_to_keys` dictionary with appropriate column names \r\n- The scripts assume the datasets have a column called `label`; label_to_id needs attention\r\n- Data processing/tokenization changed the api\r\n- The dataset ingestion changed (the code is in there, though, on an else branch if the task is not a glue task)\r\n- Metric computation also changed",
"It also looks like I have to install `pip install datasets` separately. I don't think I had to do that before.",
"Thank you for mentioning all of this. I believe the changes you're seeing are related to the example scripts, which are not static, and not related to the core of the library.\r\n\r\nThe changes related to the core of the library here would be those that were applied to the `Trainer`; did you manage to run your previous example script with the latest version, after updating it w.r.t the migration guide?",
"Correct. Training works, I have to fix a few things related to prediction & metrics computation. I have some home-brewed code that computes metrics for test (which is missing in the original sample code). I'll figure it out soon. So far, easier than expected. The data ingestion simplified a lot and I'm actually surprised it worked :D ",
"I figured out the (one of the?) problem: some name collapse on my side. Fixed. Fingers crossed it works now. In any case, not too painful to upgrade. But now I expect backward compatibility 🥇 since most of the big pieces have gone through lots of refactoring. ",
"I fixed all the problems and I managed to reproduce some old results with BERT-finetuned model. Thanks for your help!",
"Glad you could get it to work!",
"@LysandreJik It turns out I still have a problem with fine-tuning GPT with padding and batching. I have the following lines in my code:\r\n\r\n```\r\n if tokenizer.pad_token is None:\r\n tokenizer.pad_token = tokenizer.eos_token\r\n config.pad_token_id = config.eos_token_id\r\n```\r\n\r\nwhich work fine for GPT2*. However, the padding fails in GPT. After some digging, I realized there is no eos_token so the statements above have no effect. \r\n\r\nThis is the tokenizer configuration:\r\n```\r\n{\"unk_token\": \"<unk>\", \"model_max_length\": 512, \"name_or_path\": \"openai-gpt\"}\r\n```\r\n\r\nAny advice on how to fix this? I'd like to run GPT with padding & batching if at all possible. Thanks!",
"The open-ai GPT model has neither a pad, nor bos, nor eos token, which means that you will have to set them yourself. I'd advise to either set the `<unk_token>` to the EOS token:\r\n\r\n```python\r\ntokenizer.eos_token = tokenizer.unk_token\r\n```\r\n\r\nThe other solutions is to add a special token before fine-tuning as explained here: https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_special_tokens#transformers.tokenization_utils_base.SpecialTokensMixin.add_special_tokens",
"Thank you for this suggestion, I'll give it a try. "
] | 1,614 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: single gpu
### Who can help
Perhaps @patrickvonplaten, @LysandreJik could help?
## Information
Model I am using: GPT2/DistilGPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I'm able to train GPT2/DistilGPT2 successfully. However, during prediction, I consistently get the following error:
```
***** Running Prediction *****
Num examples = 1922
Batch size = 1
0%| | 0/1922 [00:00<?, ?it/s]Traceback (most recent call last):
File "../../models/jigsaw/tr-3.4//run_puppets.py", line 283, in <module>
main()
File "../../models/jigsaw/tr-3.4//run_puppets.py", line 212, in main
pred_results = trainer.predict(test_dataset = eval_dataset) # call predict to get access to both metrics and predictions
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer.py", line 1287, in predict
return self.prediction_loop(test_dataloader, description="Prediction")
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer.py", line 1353, in prediction_loop
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, dim=0)
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in nested_concat
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in <genexpr>
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in nested_concat
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in <genexpr>
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 49, in nested_concat
return torch.cat((tensors, new_tensors), dim=dim)
RuntimeError: Sizes of tensors must match except in dimension 3. Got 53 and 23
```
It doesn't seem to be a function of training (e.g., I've trained for 1-2-3 epochs, same results for prediction; the good news is that training seems to work perfectly, or at least, it seems to train and I get a model checkpoint).
Any hunch on where I should start looking for a problem? I have no experience with these models. I checked other issues that were closed and some indicated that it may be an attention problem. Thanks in advance!
I've used the same scripts successfully with 10+ different models, including GPT.
PS: I may upgrade to the newer version of the library, but that requires some work on my side to update my code as sometimes the upgrades are not backward compatible... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10339/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10338/comments | https://api.github.com/repos/huggingface/transformers/issues/10338/events | https://github.com/huggingface/transformers/pull/10338 | 813,868,908 | MDExOlB1bGxSZXF1ZXN0NTc3OTQzOTEx | 10,338 | Fix evaluation with label smoothing in Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,614 | 1,614 | 1,614 | COLLABORATOR | null | # What does this PR do?
There was a bug in Trainer when using label smoothing: the `compute_loss` function pops the labels out of the inputs so they couldn't be gathered. This PR fixes that.
Fixes #10309 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10338/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10338/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10338",
"html_url": "https://github.com/huggingface/transformers/pull/10338",
"diff_url": "https://github.com/huggingface/transformers/pull/10338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10338.patch",
"merged_at": 1614029943000
} |
https://api.github.com/repos/huggingface/transformers/issues/10337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10337/comments | https://api.github.com/repos/huggingface/transformers/issues/10337/events | https://github.com/huggingface/transformers/issues/10337 | 813,868,378 | MDU6SXNzdWU4MTM4NjgzNzg= | 10,337 | [trainer] port metrics logging and saving methods to all example scripts | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Can I work on this issue?",
"Yes, please and thank you!",
"Hi @stas00,\r\nSometimes it is saving it as txt file instead of a JSON file like the below code\r\n```python\r\noutput_train_file = os.path.join(training_args.output_dir, \"train_results.txt\")\r\nif trainer.is_world_process_zero():\r\n with open(output_train_file, \"w\") as writer:\r\n logger.info(\"***** Train results *****\")\r\n for key, value in sorted(train_result.metrics.items()):\r\n logger.info(f\" {key} = {value}\")\r\n writer.write(f\"{key} = {value}\\n\")\r\n```\r\nShould we keep it in JSON format only or write code for saving it as a txt file?\r\n\r\nI have seen such behavior in [run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L483), [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L416), run_clm.py, run_plm.py, and many other run_**.py files",
"`.json` format everywhere please, as this method writes out is a json data that it writes.\r\n\r\nYou don't need to keep the code that did .txt files writing.\r\n\r\nThese are examples and by definition they have no API as such to maintain, other than ensuring we don't drop functionality if someone uses these examples for something. And this effort will make things consistent on the metrics logging/saving front.\r\n\r\nThank you.",
"Hi @stas00 \r\nIn [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py#L432) we are saving two things `test_result` and `test_predictions` shall we add another function in a trainer for saving results apart from metrics? or just add an extra Argument for saving it as `test_predictions.json` file\r\n\r\n```python\r\noutput_test_results_file = os.path.join(training_args.output_dir, \"test_results.txt\")\r\nif trainer.is_world_process_zero():\r\n with open(output_test_results_file, \"w\") as writer:\r\n for key, value in sorted(metrics.items()):\r\n logger.info(f\" {key} = {value}\")\r\n writer.write(f\"{key} = {value}\\n\")\r\n\r\n# Save predictions\r\noutput_test_predictions_file = os.path.join(training_args.output_dir, \"test_predictions.txt\")\r\nif trainer.is_world_process_zero():\r\n with open(output_test_predictions_file, \"w\") as writer:\r\n for prediction in true_predictions:\r\n writer.write(\" \".join(prediction) + \"\\n\")\r\n```",
"Yes, `test_predictions.txt` is a different feature that was unintentionally dropped in some of the scripts and ideally should be restored as well. \r\n\r\nHere is a request to restore it: https://github.com/huggingface/transformers/issues/10381\r\n\r\nSo if you'd like to tackle it as well together inside this one PR or as a separate PR that would be fantastic!\r\n\r\nI'd say, it probably could be `trainer.save_predictions(\"test\", predictions)`. And in which case please put it in all example scripts where it's relevant.\r\n\r\nIt probably should remain `test_predictions.txt`, as there is no data structure to it.\r\n\r\nNote, that the secondary helper Trainer methods have been just moved in master (so rebase your branch), e.g. `save_metrics`:\r\n\r\n```\r\nsrc/transformers/trainer.py: from .trainer_pt_utils import _get_learning_rate, log_metrics, metrics_format, save_metrics\r\nsrc/transformers/trainer_pt_utils.py:def save_metrics(self, split, metrics):\r\n```\r\n",
"Sure @stas00,\r\nI would love to work on that as well if possible!\r\nI would create separate PR because it will help me to make fewer mistakes.\r\n\r\nIs there any way to test these four files faster: `run_tf_multiple_choice.py`, `run_xnli.py`, `run_tf_glue.py`, `run_tf_text_classification.py`. For Other files testing script, I figure out from [test_examples.py](https://github.com/huggingface/transformers/blob/master/examples/test_examples.py) \r\n ",
"Great! thank you, @bhadreshpsavani!\r\n\r\n> Is there any way to test these four files faster: `run_tf_multiple_choice.py`, `run_xnli.py`, `run_tf_glue.py`, `run_tf_text_classification.py`. For Other files testing script, I figure out from [test_examples.py](https://github.com/huggingface/transformers/blob/master/examples/test_examples.py)\r\n\r\nIf you'd like this could be your next challenge after this task. Ideally all examples should be tested, so missing tests, even one or two would be very welcome. We can create a separate issue and discuss the specifics if that appeals to you. If not, then please do not worry about it.\r\n\r\nBut otherwise, if you're testing manually, just use very short `--max_train_samples 5 --max_val_samples 5 --max_test_samples 5`",
"Ya @stas00, we can add the missing tests after this task.\r\nI have noticed that `--max_train_samples 5 --max_val_samples 5 --max_test_samples 5` is mostly not working for scripts other than `run_seq2seq.py`. \r\nIt mostly giving this error\r\n`ValueError: Some specified arguments are not used by the HfArgumentParser: ['--max_train_samples', '5', '--max_val_samples', '5', '--max_test_samples', '5']`",
"As you can tell I only ever use seq2seq for testing. You're absolutely correct that other examples don't have those.\r\n\r\nI think it'd be greatly appreciated and very useful if other examples had a way to do the same. Let me check if others agree with that.",
"Oh and as you are doing an amazingly useful work syncing all examples to look and feel similar, there is one very crucial thing to sync and it's `templates/adding_a_new_example_script/` on which all new examples will be based, so we better have a good template to start with. I forgot to mention that earlier. Thank you!\r\n",
"> As you can tell I only ever use seq2seq for testing. You're absolutely correct that other examples don't have those.\r\n> \r\n> I think it'd be greatly appreciated and very useful if other examples had a way to do the same. Let me check if others agree with that.\r\n\r\nCreated a dedicated issue for that now, should you be interested, @bhadreshpsavani \r\nhttps://github.com/huggingface/transformers/issues/10423\r\nThank you!",
"Sure @stas00,\r\nI will be happy to work on it!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | In an effort to make the examples easier to read, in https://github.com/huggingface/transformers/pull/10266 we added new trainer methods:
* `trainer.log_metrics` - to perform consistent formatting for logged metrics
* `trainer.save_metrics` - to save the metrics into a corresponding json file.
and deployed them in `run_seq2seq.py`.
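For orientation, the resulting pattern in a converted script typically looks roughly like this (a sketch based on the `run_seq2seq.py` diff, not an exact excerpt):
```python
# Rough sketch of the target pattern (modeled on run_seq2seq.py; not an exact excerpt).
train_result = trainer.train()
metrics = train_result.metrics
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)   # writes train_results.json in output_dir

metrics = trainer.evaluate()
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)    # writes eval_results.json
```
The exact metric keys differ per script; the point is to replace the hand-rolled printing/writing loops with these two calls.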
The next task is to do the same for all the other `examples/*/run_*.py` scripts.
Steps:
1. Study the diff for `run_seq2seq.py`. https://github.com/huggingface/transformers/pull/10266/files#diff-82bfb61a8b91894c2c2101734a6ab7b415be4ace5cd1e01b4c37663020d924ae
2. pick a script, e.g. `examples/multiple-choice/run_swag.py`
3. apply the same changes as in step 1 removing the explicit metrics printing lines and replacing them with the 2 new methods
4. test the modified script (usually `README.md` for that folder should have the instructions to do so) and see that your change works - train/eval/test metrics are printed using the new way and that `(train|eval|test|all)_results.json` are generated.
You can use a very short data sample (5 records is enough) by just adding: `--max_train_samples 5 --max_val_samples 5 --max_test_samples 5`
Repeat for the other scripts.
Thank you very much!
The metrics log should be similar to this, with the exception of using different scoring metrics:
```
02/16/2021 17:06:39 - INFO - __main__ - ***** train metrics *****
02/16/2021 17:06:39 - INFO - __main__ - epoch = 1.0
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_alloc_delta = 2MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_alloc_delta = 230MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - total_flos = 2128GF
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_alloc_delta = 55MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_alloc_delta = 692MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_peaked_delta = 661MB
02/16/2021 17:06:39 - INFO - __main__ - train_runtime = 2.3114
02/16/2021 17:06:39 - INFO - __main__ - train_samples = 100
02/16/2021 17:06:39 - INFO - __main__ - train_samples_per_second = 3.028
02/16/2021 17:06:43 - INFO - __main__ - ***** val metrics *****
02/16/2021 17:13:05 - INFO - __main__ - epoch = 1.0
02/16/2021 17:13:05 - INFO - __main__ - eval_bleu = 24.6502
02/16/2021 17:13:05 - INFO - __main__ - eval_gen_len = 32.9
02/16/2021 17:13:05 - INFO - __main__ - eval_loss = 3.7533
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_peaked_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_peaked_delta = 510MB
02/16/2021 17:13:05 - INFO - __main__ - eval_runtime = 3.9266
02/16/2021 17:13:05 - INFO - __main__ - eval_samples = 100
02/16/2021 17:13:05 - INFO - __main__ - eval_samples_per_second = 25.467
02/16/2021 17:06:48 - INFO - __main__ - ***** test metrics *****
02/16/2021 17:06:48 - INFO - __main__ - test_bleu = 27.146
02/16/2021 17:06:48 - INFO - __main__ - test_gen_len = 41.37
02/16/2021 17:06:48 - INFO - __main__ - test_loss = 3.6682
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_peaked_delta = 645MB
02/16/2021 17:06:48 - INFO - __main__ - test_runtime = 5.1136
02/16/2021 17:06:48 - INFO - __main__ - test_samples = 100
02/16/2021 17:06:48 - INFO - __main__ - test_samples_per_second = 19.556
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10337/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10336/comments | https://api.github.com/repos/huggingface/transformers/issues/10336/events | https://github.com/huggingface/transformers/issues/10336 | 813,781,072 | MDU6SXNzdWU4MTM3ODEwNzI= | 10,336 | [Benchmark] Converting a QA distilbert model to onnx - the f1 score plummet | {
"login": "pievalentin",
"id": 14977219,
"node_id": "MDQ6VXNlcjE0OTc3MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/14977219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pievalentin",
"html_url": "https://github.com/pievalentin",
"followers_url": "https://api.github.com/users/pievalentin/followers",
"following_url": "https://api.github.com/users/pievalentin/following{/other_user}",
"gists_url": "https://api.github.com/users/pievalentin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pievalentin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pievalentin/subscriptions",
"organizations_url": "https://api.github.com/users/pievalentin/orgs",
"repos_url": "https://api.github.com/users/pievalentin/repos",
"events_url": "https://api.github.com/users/pievalentin/events{/privacy}",
"received_events_url": "https://api.github.com/users/pievalentin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After further investigating, I found this useful repo: https://github.com/airKlizz/benchmark-for-transformers\r\nWorking on adding squad to this"
] | 1,614 | 1,614 | 1,614 | NONE | null | # 🖥 Benchmarking onnx QA `transformers`
## Issue
Poor benchmark results (SQuAD v2) for a converted ONNX QA model when using the run_squad_onnx benchmark.
## Context
As part of my day job, my goal is to convert our QA models to ONNX to make them available in production in Java.
My teammate generated a DistilBERT model trained on SQuAD v2. He reproduced SOTA results on the SQuAD benchmark.
I want to evaluate the quality of my converted model.
## Set-up
### Hardware and OS
Training of pytorch model:
- OS: Debian
- pytorch version: 1.7.0 on GPU
Conversion and benchmarking:
- OS: MacOS
- onnxruntime: 1.6.0 (CPUExecutioner)
- onnx: 1.8.1
### To reproduce
To convert my model I used the convert function as follows:
`convert('pt', <path-to-distilbert-model>, '/tmp/onnx', 12, 'distilbert-base-uncased-distilled-squad')`
This generates an ONNX model successfully. To benchmark the model I modified the legacy run_squad script to use ONNX inference. You can find the source here: https://gist.github.com/pievalentin/c0007be4c2483bb113326fed0b1bddb2
I modified the inputs to make them ORT-compatible and updated the start_logits and end_logits handling to match the output generated by the inference session.
I ran the script with the following config:
python run_squad_ort.py --framework=ort --ort_model_path=<path-to-onnx-model> --model_name_or_path=<path-to-original-pt-model> --model_type=question-answering --output_dir=/tmp/qa --max_seq_length=384 --doc_stride=128 --n_best_size=20 --max_answer_length=30 --data_dir=/eai/datasets/squad2 --tokenizer_name=distilbert-base-uncased-distilled-squad
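For reference, here is a minimal parity check between the original PyTorch checkpoint and the exported graph that I would expect to pass if the export is correct (the exported file name, the input/output names, and the `pipeline_name` remark in the comments are assumptions and may need adjusting):
```python
# Minimal sanity check -- assumes the exported graph takes input_ids/attention_mask
# and returns the QA logits; adjust names via sess.get_inputs()/sess.get_outputs().
import numpy as np
import onnxruntime as ort
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_path = "<path-to-distilbert-model>"  # same checkpoint used for the export
onnx_path = "/tmp/onnx"                    # adjust to the actual exported .onnx file

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
pt_model = AutoModelForQuestionAnswering.from_pretrained(model_path)
pt_model.eval()

inputs = tokenizer("Who created SQuAD?", "SQuAD was created at Stanford.", return_tensors="pt")

with torch.no_grad():
    pt_out = pt_model(**inputs, return_dict=True)
pt_start = pt_out.start_logits.numpy()

sess = ort.InferenceSession(onnx_path)
onnx_input_names = {i.name for i in sess.get_inputs()}
ort_inputs = {k: v.numpy() for k, v in inputs.items() if k in onnx_input_names}
ort_outputs = sess.run(None, ort_inputs)

# If a single hidden-state tensor comes back instead of two logit tensors, the export
# probably did not include the QA head (check the pipeline_name passed to convert()).
print("onnx output shapes:", [o.shape for o in ort_outputs])
print("max |start_logits| diff:", np.abs(pt_start - ort_outputs[0]).max())
```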
Here is the config.json of the PyTorch model:
```
{
"_name_or_path": "distilbert-base-uncased-distilled-squad",
"activation": "gelu",
"architectures": [
"DistilBertForQuestionAnswering"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"vocab_size": 30522,
"return_dict": false
}
```
## Results
With the previous config, I get these poor results:
```
{
"exact": 8.641455403015245,
"f1": 11.271879679607238,
"total": 11873,
"HasAns_exact": 0.1349527665317139,
"HasAns_f1": 5.403344709172896,
"HasAns_total": 5928,
"NoAns_exact": 17.12363330529857,
"NoAns_f1": 17.12363330529857,
"NoAns_total": 5945,
"best_exact": 50.07159100480081,
"best_exact_thresh": 0,
"best_f1": 50.07310704960835,
"best_f1_thresh": 0
}
```
It is as if the weights of the model were discarded, so I am wondering what I am missing. Is it a poor configuration, or is the benchmarking not done the right way? I saw this script, which is pretty similar to mine: https://github.com/onnx/models/blob/f6779d235046f28c0d3bf4ec25e4456c4689d2ce/text/machine_comprehension/bert-squad/dependencies/run_onnx_squad.py
So I would guess I must be missing something either in the conversion or the benchmarking.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10335/comments | https://api.github.com/repos/huggingface/transformers/issues/10335/events | https://github.com/huggingface/transformers/issues/10335 | 813,773,781 | MDU6SXNzdWU4MTM3NzM3ODE= | 10,335 | Return cross-attention weights in generation function | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @patrickvonplaten!\r\nPlease let me know what you think about the feature. I can send a PR for it once you confirm.",
"Yes, this would be a nice addition indeed :-)",
"Happy to help you in your PR!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
With the v4.2.0 release, generation can now return encoder and decoder self-attention weights, but it still doesn't return cross-attention weights. These weights are already computed and returned by the model's `forward` method, and just need to be surfaced by the `generate` method.
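For reference, a minimal sketch of how these weights are already exposed by a single `forward` pass today (the model name is arbitrary, and the shape in the comment is the expected convention):
```python
# Cross-attention weights come back from forward() when output_attentions=True;
# generate() currently drops them.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

enc = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
dec = tokenizer("Das Haus ist wunderbar.", return_tensors="pt")

out = model(
    input_ids=enc.input_ids,
    attention_mask=enc.attention_mask,
    decoder_input_ids=dec.input_ids,
    output_attentions=True,
    return_dict=True,
)

# One tensor per decoder layer, each of shape (batch, num_heads, target_len, source_len).
print(len(out.cross_attentions), out.cross_attentions[0].shape)
```
Exposing the same tuple from `generate` (e.g. as a `cross_attentions` field on the returned output when `return_dict_in_generate=True`) would make it usable for visualization without re-running the model.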
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
Visualizing cross-attention weights is useful for many applications, such as token alignment.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I can submit a PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10335/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10334/comments | https://api.github.com/repos/huggingface/transformers/issues/10334/events | https://github.com/huggingface/transformers/pull/10334 | 813,618,259 | MDExOlB1bGxSZXF1ZXN0NTc3NzM2OTg4 | 10,334 | Loading from last checkpoint functionality in Trainer.train | {
"login": "tanmay17061",
"id": 32801726,
"node_id": "MDQ6VXNlcjMyODAxNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32801726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanmay17061",
"html_url": "https://github.com/tanmay17061",
"followers_url": "https://api.github.com/users/tanmay17061/followers",
"following_url": "https://api.github.com/users/tanmay17061/following{/other_user}",
"gists_url": "https://api.github.com/users/tanmay17061/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanmay17061/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanmay17061/subscriptions",
"organizations_url": "https://api.github.com/users/tanmay17061/orgs",
"repos_url": "https://api.github.com/users/tanmay17061/repos",
"events_url": "https://api.github.com/users/tanmay17061/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanmay17061/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Raised changes. 1 reply to your review comment. \r\nDo let me know if any other change required. \r\n\r\nThanks.",
"Thanks a lot for your contribution!"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | Enhance resume_from_checkpoint argument of Trainer.train to accept
bool type. If True is given, the last saved checkpoint in self.args.output_dir
will be loaded. (#10280)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Please look at [the feature request](https://github.com/huggingface/transformers/issues/10280) for full description of the changes. Thanks.
Fixes #10280
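For illustration, a minimal usage sketch of the proposed behaviour (paths are placeholders):
```python
# Before: the caller had to locate the checkpoint folder themselves.
trainer.train(resume_from_checkpoint="output_dir/checkpoint-500")

# With this PR: True means "resume from the last checkpoint saved in args.output_dir".
trainer.train(resume_from_checkpoint=True)
```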
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10334/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10334",
"html_url": "https://github.com/huggingface/transformers/pull/10334",
"diff_url": "https://github.com/huggingface/transformers/pull/10334.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10334.patch",
"merged_at": 1614025980000
} |
https://api.github.com/repos/huggingface/transformers/issues/10333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10333/comments | https://api.github.com/repos/huggingface/transformers/issues/10333/events | https://github.com/huggingface/transformers/pull/10333 | 813,582,263 | MDExOlB1bGxSZXF1ZXN0NTc3NzA2ODAz | 10,333 | Clean TF ConvBert | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't understand this PR, I disagree with your proposal of adding keyword names when they're not required and don't help readability.\r\n\r\nMentioned here https://github.com/huggingface/transformers/pull/9788#discussion_r564365003 and here https://github.com/huggingface/transformers/pull/9788#discussion_r564373664.\r\n\r\nSame goes for the BART refactor.",
"I have applied the same changes than in the #9788 PR. Should I remove all the keyword names and keep only the typing parts?"
] | 1,614 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
This PR aims to clean up TF ConvBert by adding explicit keyword arguments and typing, and by updating the documentation in the model implementation to make it easier to understand and read.
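As an illustration of the kind of change meant here (a made-up example, not the actual ConvBert diff):
```python
import tensorflow as tf

# Illustrative layer only: a typed signature plus explicit keyword arguments
# make the data flow easier to follow.
class ExampleLayer(tf.keras.layers.Layer):
    def __init__(self, hidden_size: int = 768, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(units=hidden_size, name="dense")

    def call(self, hidden_states: tf.Tensor, training: bool = False) -> tf.Tensor:
        hidden_states = self.dense(inputs=hidden_states)
        return hidden_states
```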
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10333/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10333",
"html_url": "https://github.com/huggingface/transformers/pull/10333",
"diff_url": "https://github.com/huggingface/transformers/pull/10333.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10333.patch",
"merged_at": null
} |