url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/11944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11944/comments | https://api.github.com/repos/huggingface/transformers/issues/11944/events | https://github.com/huggingface/transformers/issues/11944 | 906,670,948 | MDU6SXNzdWU5MDY2NzA5NDg= | 11,944 | wandb integration gags during hyperparameter search | {
"login": "Mindful",
"id": 2897172,
"node_id": "MDQ6VXNlcjI4OTcxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2897172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mindful",
"html_url": "https://github.com/Mindful",
"followers_url": "https://api.github.com/users/Mindful/followers",
"following_url": "https://api.github.com/users/Mindful/following{/other_user}",
"gists_url": "https://api.github.com/users/Mindful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mindful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mindful/subscriptions",
"organizations_url": "https://api.github.com/users/Mindful/orgs",
"repos_url": "https://api.github.com/users/Mindful/repos",
"events_url": "https://api.github.com/users/Mindful/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mindful/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe of interest to @borisdayma ",
"Just ran into the same problem.\r\nThanks for opening this issue."
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
- transformers version: 4.6.1
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
wandb version is 0.10.26, but I don't think it matters.
### Who can help
Maybe @sgugger since this is Trainer-related; I don't know who did the wandb integration specifically.
## Information
Model I am using: custom PyTorch model.
The problem arises when using:
* [ ] the official example scripts: (probably, haven't tried)
* [x] my own modified scripts: custom training script using the Trainer
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: custom MLM training
## To reproduce
Steps to reproduce the behavior:
1. Train a model using the Trainer with the wandb logging integration and run a hyperparameter search using Optuna (also maybe Ray, but I haven't tried with Ray)
2. After the first run, you'll get an exception like below when wandb tries to log. The issue is that the previous run has finished but a new one hasn't been started.
```
..... (first trial runs fine; logs to wandb and finishes)
wandb: Synced /home/josh/runs/hps_test: https://wandb.ai/mindful/projectname/runs/2vojg06h
5%|▌ | 1/19 [00:03<01:02, 3.47s/it][W 2021-05-30 07:41:43,979] Trial 1 failed because of the following error: Error('You must call wandb.init() before wandb.log()')
Traceback (most recent call last):
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/optuna/_optimize.py", line 217, in _run_trial
value_or_values = func(trial)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/integrations.py", line 138, in _objective
trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer.py", line 1332, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer.py", line 1405, in _maybe_log_save_evaluate
self.log(logs)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer.py", line 1692, in log
self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer_callback.py", line 371, in on_log
return self.call_event("on_log", args, state, control, logs=logs)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer_callback.py", line 378, in call_event
result = getattr(callback, event)(
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/integrations.py", line 754, in on_log
self._wandb.log({**logs, "train/global_step": state.global_step})
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/wandb/sdk/lib/preinit.py", line 38, in preinit_wrapper
raise wandb.Error("You must call wandb.init() before {}()".format(name))
wandb.errors.Error: You must call wandb.init() before wandb.log()
wandb: ERROR You must call wandb.init() before wandb.log()
```
## Expected behavior
wandb should just reinitialize per training run so that each run is logged separately.
Note that as far as I can tell this is a one-line fix (set `_initialized` to `False` in `WandbCallback.on_train_begin` when running a hyperparameter search) so I'll open a PR with that. I just figured there should be an issue as well for clarity.
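For concreteness, a minimal sketch of what that one-line fix could look like — this is an illustration only, assuming the `WandbCallback` internals of `integrations.py` in transformers 4.6.1 (`_initialized`, `setup`, and the Trainer's `state.is_hyper_param_search` flag):
```python
from transformers import TrainerCallback

class WandbCallback(TrainerCallback):
    # ... rest of the real callback omitted ...
    def on_train_begin(self, args, state, control, model=None, **kwargs):
        if self._wandb is None:
            return
        # Sketch of the proposed fix: each hyperparameter-search trial calls
        # train() again after the previous wandb run has finished, so force
        # setup() to call wandb.init() and start a fresh run.
        if state.is_hyper_param_search:
            self._initialized = False
        if not self._initialized:
            self.setup(args, state, model, **kwargs)
```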
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11944/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11943/comments | https://api.github.com/repos/huggingface/transformers/issues/11943/events | https://github.com/huggingface/transformers/issues/11943 | 906,669,544 | MDU6SXNzdWU5MDY2Njk1NDQ= | 11,943 | RuntimeError: CUDA error: device-side assert triggered | {
"login": "gandharvsuri",
"id": 31670690,
"node_id": "MDQ6VXNlcjMxNjcwNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/31670690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gandharvsuri",
"html_url": "https://github.com/gandharvsuri",
"followers_url": "https://api.github.com/users/gandharvsuri/followers",
"following_url": "https://api.github.com/users/gandharvsuri/following{/other_user}",
"gists_url": "https://api.github.com/users/gandharvsuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gandharvsuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gandharvsuri/subscriptions",
"organizations_url": "https://api.github.com/users/gandharvsuri/orgs",
"repos_url": "https://api.github.com/users/gandharvsuri/repos",
"events_url": "https://api.github.com/users/gandharvsuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/gandharvsuri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj ",
"Hey @gandharvsuri,\r\n\r\nRunning your (slighly adapted) code example does not give me any errors. \r\n\r\nAlso, please don't ping more people if you don't receive an answer within a day (no need to also ping @patil-suraj). We try to answer all issues and to do so efficiently, it is not helpful to be pinged unnecessarly. Thanks!",
"Hey @patrickvonplaten, my apologies, I just saw him replying on similar issues so thought of pinging him as well, I agree I shouldn't have done that. \r\n\r\nCan you try with this custom input?\r\n```\r\ntext = \"\"\"Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857), often referred to as the Dred Scott decision, was a landmark decision of the US Supreme Court in which the Court held that the US Constitution was not meant to include American citizenship for black people, regardless of whether they were enslaved or free, and so the rights and privileges that the Constitution confers upon American citizens could not apply to them.The decision was made in the case of Dred Scott, an enslaved black man whose owners had taken him from Missouri, which was a slave-holding state, into Illinois and the Wisconsin Territory, which were free areas where slavery was illegal. When his owners later brought him back to Missouri, Scott sued in court for his freedom and claimed that because he had been taken into \"free\" U.S. territory, he had automatically been freed and was legally no longer a slave. Scott sued first in Missouri state court, which ruled that he was still a slave under its law. He then sued in US federal court, which ruled against him by deciding that it had to apply Missouri law to the case. He then appealed to the US Supreme Court.\r\nIn March 1857, the Supreme Court issued a 7β2 decision against Dred Scott. In an opinion written by Chief Justice Roger Taney, the Court ruled that black people \"are not included, and were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States.\" Taney supported his ruling with an extended survey of American state and local laws from the time of the Constitution's drafting in 1787 that purported to show that a \"perpetual and impassable barrier was intended to be erected between the white race and the one which they had reduced to slavery.\" Because the Court ruled that Scott was not an American citizen, he was also not a citizen of any state and, accordingly, could never establish the \"diversity of citizenship\" that Article III of the US Constitution requires for a US federal court to be able to exercise jurisdiction over a case. After ruling on those issues surrounding Scott, Taney continued further and struck down the entire Missouri Compromise as a limitation on slavery that exceeded the US Congress's constitutional powers. \r\nAlthough Taney and several of the other justices hoped that the decision would permanently settle the slavery controversy, which was increasingly dividing the American public, the decision's effect was the complete opposite. Taney's majority opinion suited the slaveholding states, but was intensely decried in all the other states. The decision inflamed the national debate over slavery and deepened the divide that led ultimately to the Civil War. 
In 1865, after the Union won the Civil War, the Dred Scott ruling was voided by the Thirteenth Amendment to the US Constitution, which abolished slavery except as punishment for a crime, and the Fourteenth Amendment, which guaranteed citizenship for \"all persons born or naturalized in the United States, and subject to the jurisdiction thereof.\"\r\nThe Supreme Court's decision has been widely denounced ever since, both for how overtly racist the decision was and its crucial role in the near destruction of the United States four years later. Bernard Schwartz said that it \"stands first in any list of the worst Supreme Court decisionsβChief Justice Hughes called it the Court's greatest self-inflicted wound.\" Junius P. Rodriguez said that it is \"universally condemned as the U.S. Supreme Court's worst decision\". Historian David Thomas Konig said that it was \"unquestionably, our court's worst decision ever.\"\r\nAbraham Lincoln (; February 12, 1809 β April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, cultural, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.\r\nLincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the KansasβNebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.\r\nAs the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called \"Copperheads\") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, and democracy. Lincoln scrutinized the strategy and tactics in the war effort, including the selection of generals and the naval blockade of the South's trade. He suspended habeas corpus, and he averted British intervention by defusing the Trent Affair. He engineered the end to slavery with his Emancipation Proclamation and his order that the Army protect and recruit former slaves. 
He also encouraged border states to outlaw slavery, and promoted the Thirteenth Amendment to the United States Constitution, which outlawed slavery across the country.\r\nLincoln managed his own successful re-election campaign. He sought to heal the war-torn nation through reconciliation. On April 14, 1865, just days after the war's end at Appomattox, Lincoln was attending a play at Ford's Theatre with his wife Mary when he was assassinated by Confederate sympathizer John Wilkes Booth. His marriage had produced four sons, two of whom preceded him in death, with severe emotional impact upon them and Mary. Lincoln is remembered as the martyr hero of the United States and he is consistently ranked as one of the greatest presidents in American history.\"\"\"\r\n```\r\n\r\nIt gives me the above mentioned error and later, all inputs even for the ones for which the model earlier was working, start giving the same error.",
"Hey @gandharvsuri,\r\n\r\nI sadly still cannot reproduce the error, I adapted the code snippet with the `text` you provided. Running the adapted code example does not throw an error for me -> could you maybe create a colab showing the error instead? ",
"Ah, one probably has to call `summarise` to reproduce the error. When I call `gpt2.summarise(...)` I'm getting some import errors (numpy is not imported). Could you please provide a complete code snippet that includes all imports to reproduce the error? :-)",
"Sure, You can use the following notebook, I've added you as an editor. \r\nhttps://colab.research.google.com/drive/17-STvWmqNROY8tlD8mfdm1grcRCFD4hy?usp=sharing",
"I misunderstood complete code snippet meaning, here it is.\r\n```\r\n!pip install torch\r\n!pip install transformers\r\n\r\nimport numpy as np\r\nimport os\r\n# tried this to resolve the error as well :)\r\nos.environ['CUDA_LAUNCH_BLOCKING'] = \"1\" \r\nimport torch\r\nimport numpy as np\r\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n\r\ntext = \"\"\"Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857), often referred to as the Dred Scott decision, was a landmark decision of the US Supreme Court in which the Court held that the US Constitution was not meant to include American citizenship for black people, regardless of whether they were enslaved or free, and so the rights and privileges that the Constitution confers upon American citizens could not apply to them.The decision was made in the case of Dred Scott, an enslaved black man whose owners had taken him from Missouri, which was a slave-holding state, into Illinois and the Wisconsin Territory, which were free areas where slavery was illegal. When his owners later brought him back to Missouri, Scott sued in court for his freedom and claimed that because he had been taken into \"free\" U.S. territory, he had automatically been freed and was legally no longer a slave. Scott sued first in Missouri state court, which ruled that he was still a slave under its law. He then sued in US federal court, which ruled against him by deciding that it had to apply Missouri law to the case. He then appealed to the US Supreme Court.\r\nIn March 1857, the Supreme Court issued a 7β2 decision against Dred Scott. In an opinion written by Chief Justice Roger Taney, the Court ruled that black people \"are not included, and were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States.\" Taney supported his ruling with an extended survey of American state and local laws from the time of the Constitution's drafting in 1787 that purported to show that a \"perpetual and impassable barrier was intended to be erected between the white race and the one which they had reduced to slavery.\" Because the Court ruled that Scott was not an American citizen, he was also not a citizen of any state and, accordingly, could never establish the \"diversity of citizenship\" that Article III of the US Constitution requires for a US federal court to be able to exercise jurisdiction over a case. After ruling on those issues surrounding Scott, Taney continued further and struck down the entire Missouri Compromise as a limitation on slavery that exceeded the US Congress's constitutional powers. \r\nAlthough Taney and several of the other justices hoped that the decision would permanently settle the slavery controversy, which was increasingly dividing the American public, the decision's effect was the complete opposite. Taney's majority opinion suited the slaveholding states, but was intensely decried in all the other states. The decision inflamed the national debate over slavery and deepened the divide that led ultimately to the Civil War. 
In 1865, after the Union won the Civil War, the Dred Scott ruling was voided by the Thirteenth Amendment to the US Constitution, which abolished slavery except as punishment for a crime, and the Fourteenth Amendment, which guaranteed citizenship for \"all persons born or naturalized in the United States, and subject to the jurisdiction thereof.\"\r\nThe Supreme Court's decision has been widely denounced ever since, both for how overtly racist the decision was and its crucial role in the near destruction of the United States four years later. Bernard Schwartz said that it \"stands first in any list of the worst Supreme Court decisionsβChief Justice Hughes called it the Court's greatest self-inflicted wound.\" Junius P. Rodriguez said that it is \"universally condemned as the U.S. Supreme Court's worst decision\". Historian David Thomas Konig said that it was \"unquestionably, our court's worst decision ever.\"\r\nAbraham Lincoln (; February 12, 1809 β April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, cultural, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.\r\nLincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the KansasβNebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.\r\nAs the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called \"Copperheads\") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, and democracy. Lincoln scrutinized the strategy and tactics in the war effort, including the selection of generals and the naval blockade of the South's trade. He suspended habeas corpus, and he averted British intervention by defusing the Trent Affair. He engineered the end to slavery with his Emancipation Proclamation and his order that the Army protect and recruit former slaves. 
He also encouraged border states to outlaw slavery, and promoted the Thirteenth Amendment to the United States Constitution, which outlawed slavery across the country.\r\nLincoln managed his own successful re-election campaign. He sought to heal the war-torn nation through reconciliation. On April 14, 1865, just days after the war's end at Appomattox, Lincoln was attending a play at Ford's Theatre with his wife Mary when he was assassinated by Confederate sympathizer John Wilkes Booth. His marriage had produced four sons, two of whom preceded him in death, with severe emotional impact upon them and Mary. Lincoln is remembered as the martyr hero of the United States and he is consistently ranked as one of the greatest presidents in American history.\"\"\"\r\n\r\n# Another text with lesser number of tokens\r\ntext_2 = \"\"\"At least nine people were killed and 25 others injured after a powerful blast outside Pakistanβs famous Sufi shrine Data Darbar in Lahore. According to initial police reports, the explosion took place close to two police vehicles near Gate 2 of Data Darbar. The nature and exact target of the explosion is yet to be ascertained. Rescue operations are underway. The blast comes as the country marks the fasting month of Ramzan.\r\nData Darbar, located in Pakistanβs Lahore city, is one of the oldest Sufi shrines in South Asia. Considered to be one of the most sacred places in Lahore, the shrine houses the remains of Sufi saint Abul Hassan Ali Hujwiri, commonly known as Data Ganj Baksh. He is said to have lived on the site in the 11th century and was reputed to have miraculous powers.\r\nData Darbar attracts a lot of visitors to its annual Urs festival. The Urs marks the death anniversary of the Sufi saint.\r\nAccording to the BBC, the shrine was originally established as a simple grave next to the mosque which Hujwiri had built on the outskirts of Lahore in the 11th century. It was later expanded in the 13th century to commemorate the burial site of Hujwiri after his spiritual powers became popular.\r\nFor centuries, the shrine has seen visitors from all religions. Pakistanβs former prime minister Nawaz Sharif is also a frequent visitor to the shrine.\r\nIn 2010, two suicide bombers detonated their explosive vests outside the shrine, killing close to 50 people. 
More than 200 people were injured in the blasts\"\"\"\r\n\r\n\r\nclass GPT2():\r\n\r\n def __init__(self,device,model,tokenizer):\r\n self.name = \"GPT2\"\r\n self.device = device\r\n self.model = model.to(device)\r\n self.tokenizer = tokenizer\r\n # self.tokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n self.model.resize_token_embeddings(len(self.tokenizer))\r\n\r\n def summarise(self,text):\r\n if text == np.nan or len(str(text)) < 10:\r\n return np.nan\r\n \r\n text = str(text)\r\n text = text + \" TL;DR:\"\r\n generated = self.tokenizer(text,return_tensors = 'pt', truncation=True, max_length = 1024)\r\n context = generated['input_ids'].to(self.device)\r\n past = None\r\n generated = []\r\n \r\n for i in range(50):\r\n output = self.model(context, past_key_values=past)\r\n past = output[\"past_key_values\"]\r\n logits = output[\"logits\"]\r\n token = torch.argmax(logits[..., -1, :])\r\n\r\n generated += [token.tolist()]\r\n context = token.unsqueeze(0)\r\n summary = self.tokenizer.decode(generated)\r\n return summary\r\n\r\n def __str__(self):\r\n return self.name\r\n\r\ngpt2 = GPT2(\"cuda\",model,tokenizer)\r\n\r\n# This cell works fine\r\nprint(gpt2.summarise(text_2))\r\n\r\n# Throws error\r\nprint(gpt2.summarise(text))\r\n\r\n# The same example which was working earlier also stopped working now.\r\nprint(gpt2.summarise(text_2))\r\n```",
"@patrickvonplaten I guess I found the error. I am running a for loop to predict the next 50 tokens. After each iteration, a new token would be added to the sequence thus increasing its size by 1. Now, this new sequence would be fed to the model (the one with the new predicted token) to predict the next token. So to predict the 50 tokens, I need to set ```max_length = (1024-50)``` so that in the last iteration, it does not exceed the sequence length limit.\r\n",
"Hey @gandharvsuri,\r\n\r\nThanks for the very nice reproducible code snippet & sorry to answer so late. The problem is indeed an out-of-index error of the position ids matrix. \r\n\r\nThose errors are often hard to catch on GPU because the error message is quite obscure",
"Hi @gandharvsuri ,\r\n\r\nThanks for raising this issue. I get the same error but only for specific passages. But when I get this error for one passage, the BART model throws the same error for all the following passages. \r\n\r\nI wanted to ask if you could please share the code change that you had done for - \"So to predict the 50 tokens, I need to set max_length = (1024-50) so that in the last iteration, it does not exceed the sequence length limit.\"\r\n\r\nAnd there are several posts suggesting - model.resize_token_embeddings(len(tokenizer))\r\nI added it after loading the model and also after each generation call. It still throws the above error. I wanted to ask if you found the above useful anywhere in fixing this error? \r\n\r\nThanks and looking forward for your response :D\r\n",
"I get this issue when finetuning Llama on Abirate/English_Quotes",
"I get this issue when using model.resize_token_embeddings(len(tokenizer)).\r\n\r\nIt seems that the change of model's embeddings size is behind the config?\r\n"
] | 1,622 | 1,702 | 1,622 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1+cu101
- Using GPU in script?: Yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Add ```TL;DR:``` tag at the end of the sequence
2. Preferably use a sequence longer than 1024.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## My Script
The sequences are long (>1024) and I expect the ```truncation = True``` to take care of that.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
text = """Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857), often referred to as the Dred Scott decision, was a landmark decision of the US Supreme Court in which the Court held that the US Constitution was not meant to include American citizenship for black people, regardless of whether they were enslaved or free, and so the rights and privileges that the Constitution confers upon American citizens could not apply to them.The decision was made in the case of Dred Scott, an enslaved black man whose owners had taken him from Missouri, which was a slave-holding state, into Illinois and the Wisconsin Territory, which were free areas where slavery was illegal. When his owners later brought him back to Missouri, Scott sued in court for his freedom and claimed that because he had been taken into "free" U.S. territory, he had automatically been freed and was legally no longer a slave. Scott sued first in Missouri state court, which ruled that he was still a slave under its law. He then sued in US federal court, which ruled against him by deciding that it had to apply Missouri law to the case. He then appealed to the US Supreme Court.
In March 1857, the Supreme Court issued a 7–2 decision against Dred Scott. In an opinion written by Chief Justice Roger Taney, the Court ruled that black people "are not included, and were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States." Taney supported his ruling with an extended survey of American state and local laws from the time of the Constitution's drafting in 1787 that purported to show that a "perpetual and impassable barrier was intended to be erected between the white race and the one which they had reduced to slavery." Because the Court ruled that Scott was not an American citizen, he was also not a citizen of any state and, accordingly, could never establish the "diversity of citizenship" that Article III of the US Constitution requires for a US federal court to be able to exercise jurisdiction over a case. After ruling on those issues surrounding Scott, Taney continued further and struck down the entire Missouri Compromise as a limitation on slavery that exceeded the US Congress's constitutional powers.
Although Taney and several of the other justices hoped that the decision would permanently settle the slavery controversy, which was increasingly dividing the American public, the decision's effect was the complete opposite. Taney's majority opinion suited the slaveholding states, but was intensely decried in all the other states. The decision inflamed the national debate over slavery and deepened the divide that led ultimately to the Civil War. In 1865, after the Union won the Civil War, the Dred Scott ruling was voided by the Thirteenth Amendment to the US Constitution, which abolished slavery except as punishment for a crime, and the Fourteenth Amendment, which guaranteed citizenship for "all persons born or naturalized in the United States, and subject to the jurisdiction thereof."
The Supreme Court's decision has been widely denounced ever since, both for how overtly racist the decision was and its crucial role in the near destruction of the United States four years later. Bernard Schwartz said that it "stands first in any list of the worst Supreme Court decisions—Chief Justice Hughes called it the Court's greatest self-inflicted wound." Junius P. Rodriguez said that it is "universally condemned as the U.S. Supreme Court's worst decision". Historian David Thomas Konig said that it was "unquestionably, our court's worst decision ever."
Abraham Lincoln (; February 12, 1809 – April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, cultural, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.
Lincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the Kansas–Nebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.
As the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called "Copperheads") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, and democracy. Lincoln scrutinized the strategy and tactics in the war effort, including the selection of generals and the naval blockade of the South's trade. He suspended habeas corpus, and he averted British intervention by defusing the Trent Affair. He engineered the end to slavery with his Emancipation Proclamation and his order that the Army protect and recruit former slaves. He also encouraged border states to outlaw slavery, and promoted the Thirteenth Amendment to the United States Constitution, which outlawed slavery across the country.
Lincoln managed his own successful re-election campaign. He sought to heal the war-torn nation through reconciliation. On April 14, 1865, just days after the war's end at Appomattox, Lincoln was attending a play at Ford's Theatre with his wife Mary when he was assassinated by Confederate sympathizer John Wilkes Booth. His marriage had produced four sons, two of whom preceded him in death, with severe emotional impact upon them and Mary. Lincoln is remembered as the martyr hero of the United States and he is consistently ranked as one of the greatest presidents in American history."""
class GPT2():
def __init__(self,device,model,tokenizer):
self.name = "GPT2"
self.device = device
self.model = model.to(device)
self.tokenizer = tokenizer
self.tokenizer.add_special_tokens({'pad_token': '[PAD]'})
self.model.resize_token_embeddings(len(self.tokenizer))
def summarise(self,text):
if text == np.nan or len(str(text)) < 10:
return np.nan
text = str(text)
text = text + " TL;DR:"
generated = self.tokenizer(text,return_tensors = 'pt', truncation=True, padding = True)
context = generated['input_ids'].to(device)
past = None
generated = []
for i in range(50):
output = self.model(context, past_key_values=past)
past = output["past_key_values"]
logits = output["logits"]
token = torch.argmax(logits[..., -1, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
summary = self.tokenizer.decode(generated)
return summary
def __str__(self):
return self.name
gpt2 = GPT2("cuda",model,tokenizer)
```
## Error Produced
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-1460361bbf99> in <module>()
----> 1 gpt2.summarise(data.loc[5,"clubbed_def"])
7 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
RuntimeError: CUDA error: device-side assert triggered
```
Earlier I tried to truncate manually by checking the length of the input text, but that gave `IndexError: index out of range in self`.
Have tried #1805 but didn't work.
@patrickvonplaten, @LysandreJik
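Update — a minimal sketch of the workaround that resolved this (see the comments above): reserve headroom in the prompt for the tokens generated in the loop, so the position ids never exceed GPT-2's 1024 limit. The `50` matches the generation loop in the script; everything else is unchanged.
```python
# Sketch of the workaround: truncate the prompt to 1024 minus the number of
# tokens the loop will generate, so position ids stay within GPT-2's limit.
n_new_tokens = 50
generated = tokenizer(text + " TL;DR:", return_tensors="pt", truncation=True,
                      max_length=1024 - n_new_tokens)
context = generated["input_ids"].to(device)
```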
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11943/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11942/comments | https://api.github.com/repos/huggingface/transformers/issues/11942/events | https://github.com/huggingface/transformers/issues/11942 | 906,665,301 | MDU6SXNzdWU5MDY2NjUzMDE= | 11,942 | Typo in Pegasus model usage example | {
"login": "albertovilla",
"id": 1217687,
"node_id": "MDQ6VXNlcjEyMTc2ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertovilla",
"html_url": "https://github.com/albertovilla",
"followers_url": "https://api.github.com/users/albertovilla/followers",
"following_url": "https://api.github.com/users/albertovilla/following{/other_user}",
"gists_url": "https://api.github.com/users/albertovilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertovilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertovilla/subscriptions",
"organizations_url": "https://api.github.com/users/albertovilla/orgs",
"repos_url": "https://api.github.com/users/albertovilla/repos",
"events_url": "https://api.github.com/users/albertovilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertovilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging this! Would you mind sending a PR with the fix since you found it?",
"Hi @sgugger, I have just created a PR. \r\nKind regards",
"Thanks!\r\nClosed by #11979 "
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...): Pegasus
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below): just trying the Pegasus model
## To reproduce
Steps to reproduce the behavior:
1. Create a new Colab notebook and install the required libraries using:
```python
!pip install transformers
!pip install sentencepiece
```
2. Copy / paste the "Usage example" from the pegasus [documentation](https://huggingface.co/transformers/model_doc/pegasus.html) page in a cell:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = 'google/pegasus-xsum'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert tgt_text[0] == "California's largest electricity provider has turned off power to hundreds of thousands of customers."
```
3. Execute the cell
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The final `assert` statement should be True; however, there is a typo in the usage example and we get an error:
```python
12 model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
13
---> 14 batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(torch_device)
15 translated = model.generate(**batch)
16 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
NameError: name 'torch_device' is not defined
```
The reason for the above error is that one line of the code refers to `torch_device` when it should refer to `device`. Change the line
- `batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(torch_device)`
to the following:
- `batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device)`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11942/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11941/comments | https://api.github.com/repos/huggingface/transformers/issues/11941/events | https://github.com/huggingface/transformers/issues/11941 | 906,608,505 | MDU6SXNzdWU5MDY2MDg1MDU= | 11,941 | position_ids version changed during training | {
"login": "Gforky",
"id": 4157614,
"node_id": "MDQ6VXNlcjQxNTc2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gforky",
"html_url": "https://github.com/Gforky",
"followers_url": "https://api.github.com/users/Gforky/followers",
"following_url": "https://api.github.com/users/Gforky/following{/other_user}",
"gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gforky/subscriptions",
"organizations_url": "https://api.github.com/users/Gforky/orgs",
"repos_url": "https://api.github.com/users/Gforky/repos",
"events_url": "https://api.github.com/users/Gforky/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gforky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Do you have a reproducible code example? Did you use the text-classification examples given in this repo?",
"> Hello! Do you have a reproducible code example? Did you use the text-classification examples given in this repo?\r\n\r\nHi, thanks for your reply. Since it's a private project, I cannot provide you the entire code. But I can share with you the code that triggers the issue:\r\n```\r\ndef get_loss(args, model, train_batch, unsup_batch, global_step, total_steps):\r\n # batch\r\n input_ids, attention_mask, token_type_ids, input_len, labels = train_batch\r\n if unsup_batch:\r\n ori_input_ids, ori_attention_mask, ori_token_type_ids, ori_input_len, ori_labels, \\\r\n aug_input_ids, aug_attention_mask, aug_token_type_ids, aug_input_len = unsup_batch\r\n\r\n input_ids = torch.cat((input_ids, aug_input_ids), dim=0)\r\n attention_mask = torch.cat((attention_mask, aug_attention_mask), dim=0)\r\n token_type_ids = torch.cat((token_type_ids, aug_token_type_ids), dim=0)\r\n # torch_device_one used by loss computation\r\n torch_device_one = torch.tensor(1., device=args.device)\r\n # logits\r\n outputs = model(input_ids, attention_mask, token_type_ids)\r\n logits = outputs[0]\r\n # loss fct\r\n sup_loss_fct = CrossEntropyLoss(reduction='none')\r\n unsup_loss_fct = KLDivLoss(reduction='none')\r\n # sup loss\r\n sup_size = labels.shape[0]\r\n sup_loss = sup_loss_fct(logits[:sup_size], labels) # shape : train_batch_size\r\n if unsup_batch and args.do_tsa:\r\n tsa_thresh = get_tsa_thresh(args.tsa_type,\r\n global_step,\r\n total_steps,\r\n start=args.tsa_start,\r\n end=1,\r\n scale=args.tsa_scale)\r\n larger_than_threshold = torch.exp(-sup_loss) > tsa_thresh\r\n loss_mask = torch.ones_like(labels, dtype=torch.float32) * (1 - larger_than_threshold.type(torch.float32))\r\n sup_loss = torch.sum(sup_loss * loss_mask, dim=-1) / torch.max(torch.sum(loss_mask, dim=-1), torch_device_one)\r\n else:\r\n sup_loss = torch.mean(sup_loss)\r\n # unsup loss\r\n if unsup_batch:\r\n uda_softmax_temp = args.uda_softmax_temp if args.uda_softmax_temp > 0 else 1.\r\n with torch.no_grad():\r\n ori_outputs = model(ori_input_ids, ori_attention_mask, ori_token_type_ids)\r\n ori_logits = ori_outputs[0]\r\n ori_prob = F.softmax(ori_logits, dim=-1) # KLdiv target\r\n # confidence-based masking\r\n if args.uda_confidence_thresh != -1:\r\n unsup_loss_mask = torch.max(ori_prob, dim=-1)[0] > args.uda_confidence_thresh\r\n unsup_loss_mask = unsup_loss_mask.type(torch.float32)\r\n else:\r\n unsup_loss_mask = torch.ones(len(logits) - sup_size, dtype=torch.float32, device=args.device)\r\n # softmax temperature controlling\r\n ori_logits = ori_logits / uda_softmax_temp\r\n ori_prob = F.softmax(ori_logits, dim=-1)\r\n aug_log_prob = F.log_softmax(logits[sup_size:], dim=-1)\r\n unsup_loss = torch.sum(unsup_loss_fct(aug_log_prob, ori_prob), dim=-1)\r\n unsup_loss = torch.sum(unsup_loss * unsup_loss_mask, dim=-1) / torch.max(torch.sum(unsup_loss_mask, dim=-1),\r\n torch_device_one)\r\n final_loss = sup_loss + args.uda_unsup_coeff * unsup_loss\r\n return final_loss, sup_loss, unsup_loss\r\n return sup_loss, None, None\r\n```\r\n\r\nFinally we've noticed the issue is we call the forward computation twice, and we change the inner parameters within a no_grad operation. So we move the no_grad code block to the position before the first forward computation, it also resolves the issue.\r\n\r\nBut back to my question, is that possible to just add a clone() to the position_ids initialization should avoid this issue under other scenarios? Thanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: linux
- Python version: 3.6.9
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): no
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. deployed BertModel in modeling_bert.py to train text classification model.
2. submitted task using `torch.distributed.launch` with 2 P40 GPUs.
3. left `position_ids` empty, and initialized it in `BertEmbeddings` while doing `forward` computation.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expected training to finish successfully, but encountered the issue below:
```
RuntimeError: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
Maybe the initialization of `position_ids` should be suffixed with `clone()` in line 195 of `transformers.models.bert.modeling_bert`? I resolved the issue by adding `clone()` to `self.position_ids`.
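For illustration, a sketch of that change — the slice below is taken from `BertEmbeddings.forward` as of transformers 4.5.1 and may differ in other versions; only the trailing `.clone()` is new:
```python
# Hypothetical patch in BertEmbeddings.forward: clone the slice of the
# registered position_ids buffer so the graph keeps its own copy instead of
# a view whose version counter gets bumped by later in-place updates.
if position_ids is None:
    position_ids = self.position_ids[
        :, past_key_values_length : seq_length + past_key_values_length
    ].clone()
```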
"url": "https://api.github.com/repos/huggingface/transformers/issues/11941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11941/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11940/comments | https://api.github.com/repos/huggingface/transformers/issues/11940/events | https://github.com/huggingface/transformers/pull/11940 | 906,607,224 | MDExOlB1bGxSZXF1ZXN0NjU3NTc0NTEx | 11,940 | [deepspeed] docs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | This PR:
* documents `train_batch_size` and `train_micro_batch_size_per_gpu` DS config entries (a minimal config sketch appears just below this record)
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11940",
"html_url": "https://github.com/huggingface/transformers/pull/11940",
"diff_url": "https://github.com/huggingface/transformers/pull/11940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11940.patch",
"merged_at": 1622564481000
} |
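A minimal sketch of where these two entries sit in a DeepSpeed config. It is shown here as a Python dict (in practice the config is a JSON file), and every value below is a placeholder:

```python
# Sketch of a ds_config fragment. DeepSpeed enforces the invariant:
#   train_batch_size == train_micro_batch_size_per_gpu
#                       * gradient_accumulation_steps * number_of_gpus
ds_config = {
    "train_micro_batch_size_per_gpu": 8,  # batch each GPU processes per step
    "gradient_accumulation_steps": 4,
    "train_batch_size": 8 * 4 * 2,        # assumes a world size of 2 GPUs
}
```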
https://api.github.com/repos/huggingface/transformers/issues/11939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11939/comments | https://api.github.com/repos/huggingface/transformers/issues/11939/events | https://github.com/huggingface/transformers/issues/11939 | 906,528,561 | MDU6SXNzdWU5MDY1Mjg1NjE= | 11,939 | XLM tokenizer lang2id attribute is None | {
"login": "Quang-Vinh",
"id": 22286515,
"node_id": "MDQ6VXNlcjIyMjg2NTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/22286515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Quang-Vinh",
"html_url": "https://github.com/Quang-Vinh",
"followers_url": "https://api.github.com/users/Quang-Vinh/followers",
"following_url": "https://api.github.com/users/Quang-Vinh/following{/other_user}",
"gists_url": "https://api.github.com/users/Quang-Vinh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Quang-Vinh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Quang-Vinh/subscriptions",
"organizations_url": "https://api.github.com/users/Quang-Vinh/orgs",
"repos_url": "https://api.github.com/users/Quang-Vinh/repos",
"events_url": "https://api.github.com/users/Quang-Vinh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Quang-Vinh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"FYI, I tried downgrading and I found that the most recent version that doesn't have this bug is `transformers==4.3.3`. So you could try downgrading to that version for now, until someone fixes it.\r\n\r\n```\r\npip install transformers==4.3.3\r\n```",
"Thanks for the advice! So far I have just been manually specifying the language id for the two languages, hopefully, that is sufficient as well.",
"Hello! Sorry for taking so long to get back to this issue - the issue should normally be fixed now, for all versions. We updated the configurations of the XLM models on the hub.\r\n\r\nThanks for flagging!",
"Hi @LysandreJik is the update going to solve this XLM issue as well? https://github.com/huggingface/transformers/issues/12174",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,628 | 1,628 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- albert, bert, xlm: @LysandreJik
Library:
- tokenizers: @LysandreJik
## Information
Model I am using XLM with Causal language modelling:
The problem arises when using:
* [x] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run example code from https://huggingface.co/transformers/multilingual.html
``` python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
language_id = tokenizer.lang2id['en']
```
The attribute `lang2id` is `None`, so indexing it raises a "'NoneType' object is not subscriptable" error. Following the example, I expected `language_id` to be 0 for English.
As a side note, the docs say that these checkpoints require language embeddings, which I assume come from the `langs` argument. What is the default behavior when `langs` is not provided? I tried looking at https://huggingface.co/transformers/glossary.html but could not find any reference to it.
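A sketch of the manual workaround mentioned in the comments above, reusing `tokenizer` and `model` from the snippet. The mapping `en -> 0` for this checkpoint is an assumption taken from the docs' example, not verified here:

```python
import torch

# Build the language-embedding input by hand instead of using lang2id.
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch of 1
language_id = 0  # assumed id of "en" for xlm-clm-enfr-1024
langs = torch.full_like(input_ids, language_id)  # one language id per token
outputs = model(input_ids, langs=langs)
```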
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11939/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11939/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11938/comments | https://api.github.com/repos/huggingface/transformers/issues/11938/events | https://github.com/huggingface/transformers/issues/11938 | 906,507,527 | MDU6SXNzdWU5MDY1MDc1Mjc= | 11,938 | [docs] XLNETModel forward returns last_hidden_state 3rd dim should be d_model instead of hidden_size | {
"login": "Muktan",
"id": 31338369,
"node_id": "MDQ6VXNlcjMxMzM4MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/31338369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muktan",
"html_url": "https://github.com/Muktan",
"followers_url": "https://api.github.com/users/Muktan/followers",
"following_url": "https://api.github.com/users/Muktan/following{/other_user}",
"gists_url": "https://api.github.com/users/Muktan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muktan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muktan/subscriptions",
"organizations_url": "https://api.github.com/users/Muktan/orgs",
"repos_url": "https://api.github.com/users/Muktan/repos",
"events_url": "https://api.github.com/users/Muktan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muktan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nActually, `hidden_size` is an alias for `d_model`, as can be seen [here](https://github.com/huggingface/transformers/blob/7e73601f3240b99d952c34b63bf4f8b78ca1462d/src/transformers/models/xlnet/configuration_xlnet.py#L233).\r\n\r\nThey mean the same thing. But I get your confusion, as some models use `hidden_size`, others `d_model`, or other names.\r\n\r\nFeel free to open a PR to fix this in the docs of XLNet.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | CONTRIBUTOR | null | **Link to doc** - https://huggingface.co/transformers/model_doc/xlnet.html#xlnetmodel
**Problem description** - the `forward` method of `XLNetModel` returns `last_hidden_state` with dimension `(batch_size, num_predict, hidden_size)`. However, `hidden_size` is not an XLNet config attribute (unlike for BERT); instead, that dimension should be documented as `d_model`, which is a config attribute of `XLNetModel`.
**current shape** - (batch_size, num_predict, hidden_size)
**expected shape** - (batch_size, num_predict, d_model)
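(As the first comment above explains, `hidden_size` is defined as an alias of `d_model` on `XLNetConfig`, so both names refer to the same dimension. A quick check, assuming that alias:)

```python
from transformers import XLNetConfig

config = XLNetConfig()
assert config.hidden_size == config.d_model  # alias per the linked source
```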
## Environment info
Not required
### Who can help
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11937/comments | https://api.github.com/repos/huggingface/transformers/issues/11937/events | https://github.com/huggingface/transformers/pull/11937 | 906,425,893 | MDExOlB1bGxSZXF1ZXN0NjU3NDMzMzcz | 11,937 | Neptune.ai integration | {
"login": "vbyno",
"id": 2923624,
"node_id": "MDQ6VXNlcjI5MjM2MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2923624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbyno",
"html_url": "https://github.com/vbyno",
"followers_url": "https://api.github.com/users/vbyno/followers",
"following_url": "https://api.github.com/users/vbyno/following{/other_user}",
"gists_url": "https://api.github.com/users/vbyno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbyno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbyno/subscriptions",
"organizations_url": "https://api.github.com/users/vbyno/orgs",
"repos_url": "https://api.github.com/users/vbyno/repos",
"events_url": "https://api.github.com/users/vbyno/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbyno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
The PR integrates the trainer with [neptune.ai](https://neptune.ai/)
To start with neptune.ai logging:
1) Set env variables:
NEPTUNE_PROJECT
NEPTUNE_API_TOKEN
2) Add an option that turns on Neptune logging
```
--report_to 'neptune'
```
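The same setup expressed from Python, as a sketch. The project name and token are placeholders, and `report_to=["neptune"]` assumes this PR's integration is available:

```python
import os

os.environ["NEPTUNE_PROJECT"] = "my-workspace/my-project"  # placeholder
os.environ["NEPTUNE_API_TOKEN"] = "<your-api-token>"       # placeholder

from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", report_to=["neptune"])
```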
# Who can review?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11937/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11937",
"html_url": "https://github.com/huggingface/transformers/pull/11937",
"diff_url": "https://github.com/huggingface/transformers/pull/11937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11937.patch",
"merged_at": 1622554853000
} |
https://api.github.com/repos/huggingface/transformers/issues/11936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11936/comments | https://api.github.com/repos/huggingface/transformers/issues/11936/events | https://github.com/huggingface/transformers/issues/11936 | 906,424,786 | MDU6SXNzdWU5MDY0MjQ3ODY= | 11,936 | Flax text-classification multi-optimizer incorrect | {
"login": "n2cholas",
"id": 12474257,
"node_id": "MDQ6VXNlcjEyNDc0MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/12474257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n2cholas",
"html_url": "https://github.com/n2cholas",
"followers_url": "https://api.github.com/users/n2cholas/followers",
"following_url": "https://api.github.com/users/n2cholas/following{/other_user}",
"gists_url": "https://api.github.com/users/n2cholas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n2cholas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n2cholas/subscriptions",
"organizations_url": "https://api.github.com/users/n2cholas/orgs",
"repos_url": "https://api.github.com/users/n2cholas/repos",
"events_url": "https://api.github.com/users/n2cholas/events{/privacy}",
"received_events_url": "https://api.github.com/users/n2cholas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually `optax.adamw` can take a `mask` directly to mask weight decay (since that's pretty common):\r\n\r\n```py\r\ntx = optax.adamw(learning_rate=learning_rate_fn, b1=0.9, b2=0.999, eps=1e-6, weight_decay=weight_decay, \r\n mask=partial(jax.tree_map, lambda p: p.ndim != 1))\r\n```",
"Hey @n2cholas,\r\n\r\nThanks for diving into this! \r\n\r\nI think a nice solution would be:\r\n\r\n```python\r\ntx = optax.adamw(learning_rate=learning_rate_fn, b1=0.9, b2=0.999, eps=1e-6, weight_decay=weight_decay, \r\n mask=partial(jax.tree_map, lambda p: p[-1] != \"bias\" and p[-2:] != (\"LayerNorm\", \"scale\")))\r\n```\r\n\r\nThis way, it's consistent, but we also show the user clearly that `bias` and `LayerNorm` are not included in the weight decay. It would be great if you could open a PR for this :-) I can re-run the experiments afterward\r\n",
"Great, will do!"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | The below code is incorrect:
https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/examples/flax/text-classification/run_flax_glue.py#L173-L188
- The `dict` from `traverse_util.flatten_dict` has keys which are tuples of strings, not one long string with the path separated by periods (see the short demonstration after the code block below).
- `optax.masked` applies the transformation wherever the mask is True, so the masks are flipped.
- Flax's LayerNorm calls the scale parameter `scale` not `weight`
I believe the following code would be correct:
```py
# We use Optax's "masking" functionality to create a multi-optimizer, one
# with weight decay and the other without. Each sub-optimizer will be applied
# wherever the mask is True
decay_path = lambda p: p[-1] != "bias" and p[-2:] != ("LayerNorm", "scale") # noqa: E731
tx = optax.chain(
optax.masked(adamw(0.0), mask=traverse(lambda path, _: not decay_path(path))),
optax.masked(adamw(weight_decay), mask=traverse(lambda path, _: decay_path(path))),
)
```
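As a quick demonstration of the first bullet above (a sketch with made-up parameter names):

```python
# flatten_dict keys are tuples of path components, not "dot.joined" strings.
from flax import traverse_util

params = {"encoder": {"LayerNorm": {"scale": 1.0}, "bias": 0.0}}
flat = traverse_util.flatten_dict(params)
assert ("encoder", "LayerNorm", "scale") in flat
assert "encoder.LayerNorm.scale" not in flat
```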
But for the networks in that example, the parameters that shouldn't be decayed all have one dimension (biases and LayerNorm scales). L173-L188 can therefore be simplified to:
```py
# We use Optax's "masking" functionality to create a multi-optimizer, one
# with weight decay and the other without. Each sub-optimizer will be applied
# wherever the mask is True. The bias parameters and LayerNorm scale
# parameters should not be decayed, and these are the only parameters
# with 1 dimension.
tx = optax.chain(
optax.masked(adamw(0.0), mask=partial(jax.tree_map, lambda p: p.ndim == 1)),
optax.masked(adamw(weight_decay), mask=partial(jax.tree_map, lambda p: p.ndim != 1)),
)
```
Though the latter solution is simpler, perhaps the first one is better since it illustrates a more general way for users to construct a multi-optimizer. I'd be happy to open a PR with either.
cc: @marcvanzee @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11936/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11935/comments | https://api.github.com/repos/huggingface/transformers/issues/11935/events | https://github.com/huggingface/transformers/pull/11935 | 906,408,144 | MDExOlB1bGxSZXF1ZXN0NjU3NDIwMDg0 | 11,935 | Use `self.assertEqual` instead of `assert` in deberta v2 test. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | PR to fix #11929 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11935/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11935",
"html_url": "https://github.com/huggingface/transformers/pull/11935",
"diff_url": "https://github.com/huggingface/transformers/pull/11935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11935.patch",
"merged_at": 1622448130000
} |
https://api.github.com/repos/huggingface/transformers/issues/11934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11934/comments | https://api.github.com/repos/huggingface/transformers/issues/11934/events | https://github.com/huggingface/transformers/issues/11934 | 906,402,013 | MDU6SXNzdWU5MDY0MDIwMTM= | 11,934 | Predict masked word at the beginning of the sentence | {
"login": "lalchand-pandia",
"id": 20551584,
"node_id": "MDQ6VXNlcjIwNTUxNTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20551584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalchand-pandia",
"html_url": "https://github.com/lalchand-pandia",
"followers_url": "https://api.github.com/users/lalchand-pandia/followers",
"following_url": "https://api.github.com/users/lalchand-pandia/following{/other_user}",
"gists_url": "https://api.github.com/users/lalchand-pandia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalchand-pandia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalchand-pandia/subscriptions",
"organizations_url": "https://api.github.com/users/lalchand-pandia/orgs",
"repos_url": "https://api.github.com/users/lalchand-pandia/repos",
"events_url": "https://api.github.com/users/lalchand-pandia/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalchand-pandia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The special characters mean that these words are preceded by a space. This is not the case when the words are the start of sentences!",
"So, if I insert a space at the beginning of the sentence before [MASK], I should words beginning with special character or the tokenizer removes those extra spaces at the beginning?",
"Yes, the encoding of the first word will change if you add a space. The tokeniser does not normalise whitespace. Try adding 2 spaces, a tab or a newline character. Please read about `add_prefix_space=True` and `is_split_into_words=True` on https://huggingface.co/transformers/model_doc/roberta.html#robertatokenizer",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | Case 1: I have a sentence "After the campaign finance laws changed, Albert ran for mayor of his city." which I have modified as
\<s\> \<mask\> the campaign finance laws changed, he ran for mayor of his city. \</s\>.
This input is passed through pre-trained RobertaBase and RobertaLarge (RobertaTokenizer, RobertaForMaskedLM).
The predicted top-10 words are:
['When', 'After', 'Before', 'Once', 'As', 'Until', 'While', 'With', 'Since', 'Then']
Case 2: But when the masked word is in the middle of the sentence:
\<s\> He ran for mayor of his city <mask> the campaign finance laws changed. \</s\>
The predicted top-10 words are:
['Ġbefore', 'Ġafter', 'Ġwhen', 'Ġuntil', 'Ġas', 'Ġand', 'Ġbut', 'Ġonce', 'Ġbecause', ',']
Why is the special character added in the 2nd case but not in case 1?
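A quick way to see the marker in action (a sketch using the slow tokenizer):

```python
# "Ġ" is RoBERTa's byte-level BPE marker for a preceding space.
from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")
print(tok.tokenize("After"))   # ['After']  (sentence-initial, no leading space)
print(tok.tokenize(" After"))  # ['ĠAfter'] (same word preceded by a space)
```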
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11934/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11933/comments | https://api.github.com/repos/huggingface/transformers/issues/11933/events | https://github.com/huggingface/transformers/pull/11933 | 906,375,786 | MDExOlB1bGxSZXF1ZXN0NjU3MzkxNTYw | 11,933 | Porting Layoutlmv2 to Huggingface | {
"login": "monuminu",
"id": 33065876,
"node_id": "MDQ6VXNlcjMzMDY1ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/33065876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monuminu",
"html_url": "https://github.com/monuminu",
"followers_url": "https://api.github.com/users/monuminu/followers",
"following_url": "https://api.github.com/users/monuminu/following{/other_user}",
"gists_url": "https://api.github.com/users/monuminu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monuminu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monuminu/subscriptions",
"organizations_url": "https://api.github.com/users/monuminu/orgs",
"repos_url": "https://api.github.com/users/monuminu/repos",
"events_url": "https://api.github.com/users/monuminu/events{/privacy}",
"received_events_url": "https://api.github.com/users/monuminu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11932: https://github.com/huggingface/transformers/issues/11932
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? Yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/microsoft/unilm/issues/325
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). No
- [ ] Did you write any new necessary tests? No
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@inproceedings{Xu2020LayoutLMv2MP,
title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding},
author = {Yang Xu and Yiheng Xu and Tengchao Lv and Lei Cui and Furu Wei and Guoxin Wang and Yijuan Lu and Dinei Florencio and Cha Zhang and Wanxiang Che and Min Zhang and Lidong Zhou},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021},
year = {2021},
month = {August},
}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11933",
"html_url": "https://github.com/huggingface/transformers/pull/11933",
"diff_url": "https://github.com/huggingface/transformers/pull/11933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11933.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11932/comments | https://api.github.com/repos/huggingface/transformers/issues/11932/events | https://github.com/huggingface/transformers/issues/11932 | 906,371,108 | MDU6SXNzdWU5MDYzNzExMDg= | 11,932 | LayoutLMv2 Model | {
"login": "monuminu",
"id": 33065876,
"node_id": "MDQ6VXNlcjMzMDY1ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/33065876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monuminu",
"html_url": "https://github.com/monuminu",
"followers_url": "https://api.github.com/users/monuminu/followers",
"following_url": "https://api.github.com/users/monuminu/following{/other_user}",
"gists_url": "https://api.github.com/users/monuminu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monuminu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monuminu/subscriptions",
"organizations_url": "https://api.github.com/users/monuminu/orgs",
"repos_url": "https://api.github.com/users/monuminu/repos",
"events_url": "https://api.github.com/users/monuminu/events{/privacy}",
"received_events_url": "https://api.github.com/users/monuminu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"What's the current status of this?\r\nI see Microsoft has already pushed the code as mentioned above for this but i can't import it from huggingface.",
"I have completed the work but there is a problem with installing detectron2 using pip . Anyone can help on this ?",
"Can you tell me the issue with installing detectron2? I can prolly help",
"Its saying pip install detectron2 is giving issue . ",
"You have to install detectron from `git+https://github.com/facebookresearch/detectron2.git`, not pypi\r\n\r\nSee https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md for instructions. \r\n\r\nUse\r\n`python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'`"
] | 1,622 | 1,630 | 1,630 | NONE | null | # 🌟 New model addition
## Model description
LayoutLMv2 pre-trains text, layout, and image in a multi-modal framework, leveraging new model architectures and pre-training tasks. Specifically, LayoutLMv2 uses not only the existing masked visual-language modeling task but also new text-image alignment and text-image matching tasks in the pre-training stage, so that cross-modality interaction is better learned.
## Open source status
* [x] the model implementation is available: https://github.com/microsoft/unilm/tree/master/layoutlmv2
* [x] the model weights are available: https://huggingface.co/microsoft/layoutlmv2-base-uncased/
* [x] who are the authors: listed in the citation below
@inproceedings{Xu2020LayoutLMv2MP,
title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding},
author = {Yang Xu and Yiheng Xu and Tengchao Lv and Lei Cui and Furu Wei and Guoxin Wang and Yijuan Lu and Dinei Florencio and Cha Zhang and Wanxiang Che and Min Zhang and Lidong Zhou},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021},
year = {2021},
month = {August},
}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11932/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11931/comments | https://api.github.com/repos/huggingface/transformers/issues/11931/events | https://github.com/huggingface/transformers/issues/11931 | 906,296,874 | MDU6SXNzdWU5MDYyOTY4NzQ= | 11,931 | How to load the best performance checkpoint after training? | {
"login": "Gpwner",
"id": 19349207,
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gpwner",
"html_url": "https://github.com/Gpwner",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you give us the whole command you executed? We can't reproduce the problem without it.",
"@sgugger Here is my whole command:\r\n`python run_mlm.py --model_name_or_path model/roberta-base --do_train --do_eval --output_dir mlm_new --max_seq_length 128 --load_best_model_at_end True --save_steps 2000 --overwrite_output_dir True --per_device_train_batch_size 32 --train_file data/wikitext-train.txt --validation_file data/wikitext-validation.txt`\r\n\r\n\r\n\r\nWhere the wikitext-train.txt and wikitext-validation.txt are txt files ,something like οΌ\r\n\r\n` Three other applications make up nearly all the rest of the consumption . One of these uses is as a stabilizer and a catalyst for the production of polyethyleneterephthalate . Another application is to serve as a fining agent to remove microscopic bubbles in glass , mostly for TV screens ; this is achieved by the interaction of antimony ions with oxygen , interfering the latter from forming bubbles . The third major application is the use as pigment . \r\n Antimony is being increasingly used in the semiconductor industry as a dopant for heavily doped n @-@ type silicon wafers in the production of diodes , infrared detectors , and Hall @-@ effect devices . In the 1950s , tiny beads of a lead @-@ antimony alloy were used to dope the emitters and collectors of n @-@ p @-@ n alloy junction transistors with antimony . Indium antimonide is used as a material for mid @-@ infrared detectors . `\r\n\r\n\r\nIN MY case ,I just change these two files to my domain corpus txt files.",
"another QuestionοΌwhat is the difference between `load_best_model_at_end ` and Early StopοΌ\r\n\r\n",
"Is there at least one evaluation during the training? I don't know the size of your dataset. Passing `--evaluation_strategy epoch` would ensure there is one per epoch at least.\r\n\r\nAs for your second question, `loas_best_model_at_end` loads the best model seen during the training. Early stopping, stops when the loss/metrics does not improve.",
"@sgugger \r\n1.YESοΌthere is one evaluation at the endοΌBut `best_model_checkpoint` is still null:\r\n```\r\n[INFO|trainer.py:2107] 2021-06-03 02:40:32,693 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2109] 2021-06-03 02:40:32,693 >> Num examples = 17979\r\n[INFO|trainer.py:2112] 2021-06-03 02:40:32,693 >> Batch size = 128\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 141/141 [00:48<00:00, 2.92it/s]\r\n[INFO|trainer_pt_utils.py:907] 2021-06-03 02:41:21,274 >> ***** eval metrics *****\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> epoch = 3.0\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_loss = 1.4715\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_cpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_cpu_peaked_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_gpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_gpu_peaked_delta = 3928MB\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_runtime = 0:00:48.49\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_samples = 17979\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_samples_per_second = 370.704\r\n[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> perplexity = 4.3558\r\n\r\n```\r\n\r\nAfter passing `--evaluation_strategy epoch` ,`best_model_checkpoint` become normal and it is not null now.\r\n```\r\n \"best_metric\": 1.4732710123062134,\r\n \"best_model_checkpoint\": \"mlm_new1/checkpoint-426\",\r\n \"epoch\": 3.0,\r\n```\r\n\r\n2.Does `loads the best model seen during the training` mean the code will load the best model in memory or save it in disk? In my origin case (without passing `--evaluation_strategy epoch`) οΌI Have only one checkpoint.Is it the best checkpoint or the last checkpointοΌ\r\n",
"No the evaluation at the end does not count, it's not part of the `train` method. If there is no evaluation during the training phase, there can't be a best model to load, it's as simple as that.\r\n\r\nThe `load_best_model_at_end` just keeps track of the best model as you evaluate it and will reload at the end the checkpoint that had the best evaluation score.",
"@sgugger I think if this process can automatically save the best performance and the last checkpoints ,that will be great.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | When I was training MLM with `https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py`
, I had already set `--load_best_model_at_end True`. But when the training finished, I checked the trainer_state.json file and found this message:
```
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 100.0,
"global_step": 559300,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
```
As shown above, "best_model_checkpoint" is null.
Here is my question: how do I load the best-performance checkpoint? If I have ONLY one checkpoint, is it the best-performance checkpoint? Thanks in advance!
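A sketch of the arguments that make `load_best_model_at_end` effective. The names follow the 4.x `TrainingArguments`; whether `save_strategy` must match the evaluation cadence depends on the version, so treat the details as assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mlm_new",
    evaluation_strategy="epoch",  # evaluate during training so a "best" exists
    save_strategy="epoch",        # checkpoint at the same cadence
    load_best_model_at_end=True,
)
```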
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11931/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/11931/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11930/comments | https://api.github.com/repos/huggingface/transformers/issues/11930/events | https://github.com/huggingface/transformers/issues/11930 | 905,876,078 | MDU6SXNzdWU5MDU4NzYwNzg= | 11,930 | Ray pickling issue when running hp search | {
"login": "mvacaporale",
"id": 20452655,
"node_id": "MDQ6VXNlcjIwNDUyNjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/20452655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mvacaporale",
"html_url": "https://github.com/mvacaporale",
"followers_url": "https://api.github.com/users/mvacaporale/followers",
"following_url": "https://api.github.com/users/mvacaporale/following{/other_user}",
"gists_url": "https://api.github.com/users/mvacaporale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mvacaporale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mvacaporale/subscriptions",
"organizations_url": "https://api.github.com/users/mvacaporale/orgs",
"repos_url": "https://api.github.com/users/mvacaporale/repos",
"events_url": "https://api.github.com/users/mvacaporale/events{/privacy}",
"received_events_url": "https://api.github.com/users/mvacaporale/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Same error as this: https://github.com/huggingface/transformers/issues/11249. We are currently looking into this. In the meantime, can you use an earlier version as you suggested? Thanks.",
"Will do. Didn't see that other issue there. Thanks for the help. "
] | 1,622 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Amazon Linux 2
- Python version: 3.7.7
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer / Ray
@sgugger, @richardliaw, @amogkam
## Information
I'm using BERT with my own modified training scripts for hyperparameter searches on pre-training; however, the issue is reproduced by the simple example shown below.
## To reproduce
The following code snippet is a slight modification from the blog post [here](https://huggingface.co/blog/ray-tune)
```
from datasets import load_dataset, load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
dataset = load_dataset('glue', 'mrpc')
metric = load_metric('glue', 'mrpc')
def encode(examples):
outputs = tokenizer(
examples['sentence1'], examples['sentence2'], truncation=True)
return outputs
encoded_dataset = dataset.map(encode, batched=True)
def model_init():
return AutoModelForSequenceClassification.from_pretrained(
'distilbert-base-uncased', return_dict=True)
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = predictions.argmax(axis=-1)
return metric.compute(predictions=predictions, references=labels)
# Evaluate during training and a bit more often
# than the default to be able to prune bad trials early.
# Disabling tqdm is a matter of preference.
training_args = TrainingArguments(
"test", eval_steps=500, disable_tqdm=True)
trainer = Trainer(
args=training_args,
tokenizer=tokenizer,
train_dataset=encoded_dataset["train"],
eval_dataset=encoded_dataset["validation"],
model_init=model_init,
compute_metrics=compute_metrics,
)
# Default objective is the sum of all metrics
# when metrics are provided, so we have to maximize it.
trainer.hyperparameter_search(
direction="maximize",
backend="ray",
n_trials=10, # number of trials
)
```
The call to `trainer.hyperparameter_search` creates the following error:
```
TypeError: can't pickle _thread.RLock objects
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-af3e9d5e18dd> in <module>
42 direction="maximize",
43 backend="ray",
---> 44 n_trials=10, # number of trials
45 )
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/transformers/trainer.py in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)
1457
1458 run_hp_search = run_hp_search_optuna if backend == HPSearchBackend.OPTUNA else run_hp_search_ray
-> 1459 best_run = run_hp_search(self, n_trials, direction, **kwargs)
1460
1461 self.hp_search_backend = None
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/transformers/integrations.py in run_hp_search_ray(trainer, n_trials, direction, **kwargs)
233 config=trainer.hp_space(None),
234 num_samples=n_trials,
--> 235 **kwargs,
236 )
237 best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3])
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/tune/tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, loggers, ray_auto_init, run_errored_only, global_checkpoint_period, with_server, upload_dir, sync_to_cloud, sync_to_driver, sync_on_checkpoint)
296
297 trial_executor = trial_executor or RayTrialExecutor(
--> 298 reuse_actors=reuse_actors, queue_trials=queue_trials)
299 if isinstance(run_or_experiment, list):
300 experiments = run_or_experiment
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py in __init__(self, queue_trials, reuse_actors, ray_auto_init, refresh_period)
198 "For cluster usage or custom Ray initialization, "
199 "call `ray.init(...)` before `tune.run`.")
--> 200 ray.init()
201
202 if ray.is_initialized():
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
45 if client_mode_enabled and _client_hook_enabled:
46 return getattr(ray, func.__name__)(*args, **kwargs)
---> 47 return func(*args, **kwargs)
48
49 return wrapper
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/worker.py in init(address, num_cpus, num_gpus, resources, object_store_memory, local_mode, ignore_reinit_error, include_dashboard, dashboard_host, dashboard_port, job_config, configure_logging, logging_level, logging_format, log_to_driver, _enable_object_reconstruction, _redis_max_memory, _plasma_directory, _node_ip_address, _driver_object_store_memory, _memory, _redis_password, _java_worker_options, _temp_dir, _lru_evict, _metrics_export_port, _system_config)
771
772 for hook in _post_init_hooks:
--> 773 hook()
774
775 node_id = global_worker.core_worker.get_current_node_id()
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/tune/registry.py in flush(self)
169 def flush(self):
170 for k, v in self.to_flush.items():
--> 171 self.references[k] = ray.put(v)
172 self.to_flush.clear()
173
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
45 if client_mode_enabled and _client_hook_enabled:
46 return getattr(ray, func.__name__)(*args, **kwargs)
---> 47 return func(*args, **kwargs)
48
49 return wrapper
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/worker.py in put(value)
1487 with profiling.profile("ray.put"):
1488 try:
-> 1489 object_ref = worker.put_object(value)
1490 except ObjectStoreFullError:
1491 logger.info(
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/worker.py in put_object(self, value, object_ref)
267 "inserting with an ObjectRef")
268
--> 269 serialized_value = self.get_serialization_context().serialize(value)
270 # This *must* be the first place that we construct this python
271 # ObjectRef because an entry with 0 local references is created when
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in serialize(self, value)
317 return RawSerializedObject(value)
318 else:
--> 319 return self._serialize_to_msgpack(value)
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in _serialize_to_msgpack(self, value)
297 metadata = ray_constants.OBJECT_METADATA_TYPE_PYTHON
298 pickle5_serialized_object = \
--> 299 self._serialize_to_pickle5(metadata, python_objects)
300 else:
301 pickle5_serialized_object = None
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
257 except Exception as e:
258 self.get_and_clear_contained_object_refs()
--> 259 raise e
260 finally:
261 self.set_out_of_band_serialization()
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
254 self.set_in_band_serialization()
255 inband = pickle.dumps(
--> 256 value, protocol=5, buffer_callback=writer.buffer_callback)
257 except Exception as e:
258 self.get_and_clear_contained_object_refs()
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py in dumps(obj, protocol, buffer_callback)
71 file, protocol=protocol, buffer_callback=buffer_callback
72 )
---> 73 cp.dump(obj)
74 return file.getvalue()
75
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py in dump(self, obj)
572 def dump(self, obj):
573 try:
--> 574 return Pickler.dump(self, obj)
575 except RuntimeError as e:
576 if "recursion" in e.args[0]:
TypeError: can't pickle _thread.RLock objects
```
## Expected behavior
Ray attempts to pickle the Trainer even though it cannot be pickled. On the earlier version `transformers==4.4.2` this was not an issue. I believe a change introduced in 4.5 is causing this; the above script should be able to complete without any pickling error.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11930/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11929/comments | https://api.github.com/repos/huggingface/transformers/issues/11929/events | https://github.com/huggingface/transformers/issues/11929 | 905,689,215 | MDU6SXNzdWU5MDU2ODkyMTU= | 11,929 | Use `self.assertEqual` instead of `assert` in tests. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I can provide a PR...\r\n\r\nsee #11935"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | IMO the test should use `self.assertEqual` instead of `assert` here:
https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/tests/test_tokenization_deberta_v2.py#L105-L106
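A hypothetical before/after to illustrate the suggestion (the names below are placeholders, not the actual statements on those lines):

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_example(self):
        value, expected = 1, 1  # placeholders
        # before: assert value == expected
        # after (reports both operands on failure):
        self.assertEqual(value, expected)
```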
I can provide a PR... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11929/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11928/comments | https://api.github.com/repos/huggingface/transformers/issues/11928/events | https://github.com/huggingface/transformers/pull/11928 | 905,500,423 | MDExOlB1bGxSZXF1ZXN0NjU2NTc0NzM1 | 11,928 | [Flax][WIP] Speed up pretraining | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The whole random masking can be done before each training loop to speed things up -> would be an interesting experiment",
"Ok this gives no speed-up actually -> closing"
] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11928/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11928",
"html_url": "https://github.com/huggingface/transformers/pull/11928",
"diff_url": "https://github.com/huggingface/transformers/pull/11928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11928.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11927/comments | https://api.github.com/repos/huggingface/transformers/issues/11927/events | https://github.com/huggingface/transformers/pull/11927 | 905,475,585 | MDExOlB1bGxSZXF1ZXN0NjU2NTUxNzM5 | 11,927 | add relevant description to tqdm in examples | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I was also thinking we should have `check_min_version` for `datasets` as well as this won't work in dataset versions prior to 1.7.0.\r\n\r\nThat and also we need to make sure that the CI runs 1.7.0. So probably just bumping the required min version project-wide will resolve this at once.",
"@stas00, Datasets 1.8.0 was released yesterday and now all tests are passing in this PR. Please let me know if it looks good or if any changes are required so that I can update the rest of them as well.\r\n\r\n@lhoestq can I work on `check_min_version` feature in `datasets`?",
"Thank you for the heads up, @bhavitvyamalik!\r\n\r\nThe code now works:\r\n```\r\n$ python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir output_dir\r\n[...]\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nRunning tokenizer on dataset: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4/4 [00:00<00:00, 22.54ba/s]\r\nRunning tokenizer on dataset: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 47.85ba/s]\r\nRunning tokenizer on dataset: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 14.15ba/s]\r\n06/09\r\n```\r\n\r\nSo what is missing is the version checking. normally we have all the versions setup in `setup.py` and its autogenerated table, and then we just need to do `dep_version_check(\"datasets\")` but here this higher version is only needed for these few scripts. so one way is to code it explicitly:\r\n```\r\nfrom packaging import version\r\nif version.parse(datasets.__version__) < version.parse(\"1.8.0\")\r\n raise ValueError(\"....\")\r\n```\r\n\r\n\r\nBut at the same time is there any reason why we can't just bump transformers's library-wide dependency to `datasets>=1.8.0`?\r\n\r\n",
"> @lhoestq can I work on check_min_version feature in datasets?\r\n\r\nThe problem with this one is that it will require a certain version of `datasets` to even work,\r\n",
"OK, so 2 things to do:\r\n\r\n1. update: `examples/pytorch/text-classification/requirements.txt` to bump up `datasets`:\r\n2. and then in the scripts:\r\n```\r\nfrom transformers.utils.versions import require_version\r\nrequire_version(\"datasets>=1.8.0\", \"To fix: pip install -r examples/pytorch/text-classification/requirements.txt\")\r\n```\r\n\r\n",
"@sgugger, I was yet to replicate this for other examples. I'll raise another PR for that! ",
"Oops! Sorry, missed that part.",
"I like the approach of making the first PR with just 1 or a few examples with a commitment to extend it to other examples in a separate PR (by creating an Issue that will track the replication).\r\n\r\nThe problem with tackling more than one example is that it puts a huge onus on the developer and reviewers to do much more work if things aren't right right away. But with one example, ideas are bounced around, tried, applied, verified, merged - then if all is good it's now easy to replicate to many more examples.\r\n\r\nThis is IMHO, of course."
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11797. Added a description to the `dataset.map` calls to improve the tqdm bars; this tells the user what is being processed. As of now I've only targeted `text-classification`; please let me know if it looks good or if any changes are required, so that I can update the rest of them as well.
I was also thinking we should have `check_min_version` for `datasets` as well as this won't work in dataset versions prior to 1.7.0.
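For illustration, this is the kind of change involved; the mapping below is a toy stand-in for the actual tokenization, and, as noted above, `desc` requires `datasets >= 1.7.0`:
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
ds = ds.map(
    lambda ex: {"n_chars": len(ex["sentence1"])},  # placeholder for the real preprocessing
    desc="Running tokenizer on dataset",  # this string is shown next to the tqdm bar
)
```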
## Who can review?
@stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11927/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11927",
"html_url": "https://github.com/huggingface/transformers/pull/11927",
"diff_url": "https://github.com/huggingface/transformers/pull/11927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11927.patch",
"merged_at": 1623355195000
} |
https://api.github.com/repos/huggingface/transformers/issues/11926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11926/comments | https://api.github.com/repos/huggingface/transformers/issues/11926/events | https://github.com/huggingface/transformers/issues/11926 | 905,394,534 | MDU6SXNzdWU5MDUzOTQ1MzQ= | 11,926 | Modifying the distill bert architecture | {
"login": "IamSparky",
"id": 42636586,
"node_id": "MDQ6VXNlcjQyNjM2NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42636586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IamSparky",
"html_url": "https://github.com/IamSparky",
"followers_url": "https://api.github.com/users/IamSparky/followers",
"following_url": "https://api.github.com/users/IamSparky/following{/other_user}",
"gists_url": "https://api.github.com/users/IamSparky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IamSparky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IamSparky/subscriptions",
"organizations_url": "https://api.github.com/users/IamSparky/orgs",
"repos_url": "https://api.github.com/users/IamSparky/repos",
"events_url": "https://api.github.com/users/IamSparky/events{/privacy}",
"received_events_url": "https://api.github.com/users/IamSparky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you look at the [documentation of `DistilBertModel`](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertmodel), you can see it doesn't use token_type_ids.",
"> If you look at the [documentation of `DistilBertModel`](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertmodel), you can see it doesn't use token_type_ids.\r\n\r\nI removed the token_type_ids but still getting this error !!!!\r\n\r\n```\r\nclass DISTILLBERTBaseUncased(nn.Module):\r\n def __init__(self):\r\n super(DISTILLBERTBaseUncased, self).__init__()\r\n self.bert = transformers.DistilBertModel.from_pretrained(DISTILL_BERT_PATH, return_dict=False)\r\n self.bert_drop = nn.Dropout(0.5)\r\n self.out = nn.Linear(768, 1)\r\n\r\n def forward(self, ids, mask):\r\n _, x = self.bert(ids, attention_mask=mask)\r\n \r\n bo = self.bert_drop(x)\r\n output = self.out(bo)\r\n return output\r\n \r\nmodel = DISTILLBERTBaseUncased()\r\nmodel = model.to(device)\r\n```\r\n\r\n\r\n",
"Yes it gives you an error since the model does only return a single thing, namely a tuple containing a single element (as you set `return_dict=False`). This is also explained in the documentation by the way. You should replace\r\n\r\n`_, x = self.bert(input_ids=ids, attention_mask=mask) `\r\n\r\nby \r\n\r\n`outputs = self.bert(input_ids=ids, attention_mask=mask)`\r\n\r\nYou can then access the last hidden states using `outputs[0]`. ",
"Still facing the error here\r\n\r\n\r\n",
"Yes, `outputs` is a tuple. You should use `outputs[0]`, which is a PyTorch tensor.\r\n\r\nSorry for interrupting here, but Github issues are not the place to ask such questions. They are related to understanding PyTorch, working with Transformers. For such questions, you can use the [forum](https://discuss.huggingface.co/).\r\n\r\nWe would like to keep Github issues for bugs/feature requests.\r\n\r\nThanks!",
"Thanks , I really appreciate you and the time you put time here in the thread for helping me out. Finally everything seems like working.\r\n\r\nCoolbeas"
] | 1,622 | 1,622 | 1,622 | NONE | null | Currently getting the following error while running the DistilBERT model:
**TypeError: forward() got an unexpected keyword argument 'token_type_ids'**
I constructed the model as follows:
```
class Distill_BERTBaseUncased(nn.Module):
def __init__(self):
super(Distill_BERTBaseUncased, self).__init__()
self.bert = transformers.DistilBertModel.from_pretrained(DISTILL_BERT_PATH, return_dict=False)
self.bert_drop = nn.Dropout(0.5)
self.out = nn.Linear(768 * 2, 1)
def forward(self, ids, mask, token_type_ids):
o1, _ = self.bert(ids, attention_mask=mask, token_type_ids = token_type_ids)
mean_pooling = torch.mean(o1, 1)
max_pooling, _ = torch.max(o1, 1)
cat = torch.cat((mean_pooling, max_pooling), 1)
bo = self.bert_drop(cat)
output = self.out(bo)
return output
model = Distill_BERTBaseUncased()
model = model.to(device)
```
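For reference, a corrected sketch consistent with the resolution in the comments above: DistilBERT takes no `token_type_ids`, and with `return_dict=False` the output is a tuple whose first element is the last hidden states. The checkpoint path is an assumed stand-in for the original `DISTILL_BERT_PATH`.
```python
import torch
import torch.nn as nn
import transformers

DISTILL_BERT_PATH = "distilbert-base-uncased"  # assumption: any DistilBERT checkpoint

class Distill_BERTBaseUncased(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = transformers.DistilBertModel.from_pretrained(DISTILL_BERT_PATH, return_dict=False)
        self.bert_drop = nn.Dropout(0.5)
        self.out = nn.Linear(768 * 2, 1)

    def forward(self, ids, mask):
        hidden_states = self.bert(ids, attention_mask=mask)[0]  # (batch, seq_len, 768)
        mean_pooling = torch.mean(hidden_states, 1)
        max_pooling, _ = torch.max(hidden_states, 1)
        cat = torch.cat((mean_pooling, max_pooling), 1)  # (batch, 768 * 2)
        return self.out(self.bert_drop(cat))
```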
Please help me resolve this; I pass three parameters as input to the model, matching the forward function of the model class. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11926/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11925/comments | https://api.github.com/repos/huggingface/transformers/issues/11925/events | https://github.com/huggingface/transformers/issues/11925 | 905,286,464 | MDU6SXNzdWU5MDUyODY0NjQ= | 11,925 | BERT pretraining: [SEP] vs. Segment Embeddings? | {
"login": "velixo",
"id": 7550072,
"node_id": "MDQ6VXNlcjc1NTAwNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7550072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/velixo",
"html_url": "https://github.com/velixo",
"followers_url": "https://api.github.com/users/velixo/followers",
"following_url": "https://api.github.com/users/velixo/following{/other_user}",
"gists_url": "https://api.github.com/users/velixo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/velixo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/velixo/subscriptions",
"organizations_url": "https://api.github.com/users/velixo/orgs",
"repos_url": "https://api.github.com/users/velixo/repos",
"events_url": "https://api.github.com/users/velixo/events{/privacy}",
"received_events_url": "https://api.github.com/users/velixo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"From the BERT paper: \"We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B.\"\r\n\r\nDeep learning, as you may now, is a lot of experimenting, and in this case, it was a design choice. I guess you could try to omit the [SEP] token, perhaps it doesn't add much information to the model. Or omit the token type embeddings, and check whether the results are significantly different. \r\n\r\nTo give another example, people are experimenting with all kinds of position encodings (include absolute ones, as in BERT, relatives ones, as in T5, sinusoidal ones, as in the original Transformer, now there are rotary embeddings, as in the new RoFormer paper)... \r\n\r\nSo the question you're asking is a genuine research question :) ",
"Thank you for the quick answer, good to know! I was suspecting it might be something along these lines :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | Iβm confused about the differences between the intent of the [SEP] tokens and Segment Embeddings applied to the input of BERT during pretraining.
As far as Iβve understood, the [SEP] tokens are inserted between sentence A and B to enable the modelβs ability to distinguish between the two sentences for BERTs Next-Sentence Prediction pretraining-task. Similarly, the Segment Embeddings are added to the input embeddings to alter the input, creating another opportunity for the model to learn that sentence A and B are distinct things.
However, these seem to be facilitating the same purpose. Why canβt BERT be trained on only Segment Embeddings, omitting [SEP] tokens? What additional information do [SEP] tokens conceptually provide, that the Segment Embeddings donβt?
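For reference, both mechanisms are visible directly in the tokenizer output (the checkpoint name here is an assumption; any standard BERT checkpoint behaves the same way):
```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tok("sentence one", "sentence two")
print(tok.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', 'sentence', 'one', '[SEP]', 'sentence', 'two', '[SEP]']
print(enc["token_type_ids"])
# [0, 0, 0, 0, 1, 1, 1]
```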
Furthermore, [SEP] tokens aren't used directly anyway. NSP is trained on the [CLS] embedding, which I understand to sort of represent an embedding of sentence continuity. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11925/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11925/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11924/comments | https://api.github.com/repos/huggingface/transformers/issues/11924/events | https://github.com/huggingface/transformers/pull/11924 | 905,258,694 | MDExOlB1bGxSZXF1ZXN0NjU2MzUwMjI5 | 11,924 | Test optuna and ray | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | MEMBER | null | Run the slow tests for optuna and ray
cc @richardliaw @amogkam | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11924",
"html_url": "https://github.com/huggingface/transformers/pull/11924",
"diff_url": "https://github.com/huggingface/transformers/pull/11924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11924.patch",
"merged_at": 1622202721000
} |
https://api.github.com/repos/huggingface/transformers/issues/11923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11923/comments | https://api.github.com/repos/huggingface/transformers/issues/11923/events | https://github.com/huggingface/transformers/issues/11923 | 905,213,762 | MDU6SXNzdWU5MDUyMTM3NjI= | 11,923 | Trainer.predict using customized model.predict function? | {
"login": "iamlockelightning",
"id": 12706469,
"node_id": "MDQ6VXNlcjEyNzA2NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/12706469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamlockelightning",
"html_url": "https://github.com/iamlockelightning",
"followers_url": "https://api.github.com/users/iamlockelightning/followers",
"following_url": "https://api.github.com/users/iamlockelightning/following{/other_user}",
"gists_url": "https://api.github.com/users/iamlockelightning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamlockelightning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamlockelightning/subscriptions",
"organizations_url": "https://api.github.com/users/iamlockelightning/orgs",
"repos_url": "https://api.github.com/users/iamlockelightning/repos",
"events_url": "https://api.github.com/users/iamlockelightning/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamlockelightning/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Cant you just subclass the Trainer class and write your own `predict`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | CONTRIBUTOR | null | I am using the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) to train a sentence-BERT model with triplet loss. Then I want to do some inference. How can I call `Trainer.predict` using a custom `model.predict` function?
I use `model.forward()` to calculate the loss in the training stage, but I want a customized `model.predict()` to compute prediction results based on `model.forward()` (e.g., model.forward() -> embedding -> another method that computes the prediction instead of the loss function).
I saw that the `prediction_step()` function just calls `outputs = model(**inputs)` to get `(loss, logits, labels)`.
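What I'd like is roughly the following; this is only a sketch, `my_predict` is a hypothetical placeholder for my custom logic, and the `prediction_step` signature is taken from the Trainer source:
```python
import torch
from transformers import Trainer

class TripletTrainer(Trainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        inputs = self._prepare_inputs(inputs)
        with torch.no_grad():
            embeddings = model(**inputs)           # reuse forward to get embeddings
            preds = model.my_predict(embeddings)   # hypothetical custom post-processing
        return (None, preds, inputs.get("labels"))
```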
Is there any good method to do that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11922/comments | https://api.github.com/repos/huggingface/transformers/issues/11922/events | https://github.com/huggingface/transformers/pull/11922 | 905,074,062 | MDExOlB1bGxSZXF1ZXN0NjU2MTc3Njc0 | 11,922 | get_ordinal(local=True) replaced with get_local_ordinal() in training_args.py | {
"login": "BassaniRiccardo",
"id": 48254418,
"node_id": "MDQ6VXNlcjQ4MjU0NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/48254418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BassaniRiccardo",
"html_url": "https://github.com/BassaniRiccardo",
"followers_url": "https://api.github.com/users/BassaniRiccardo/followers",
"following_url": "https://api.github.com/users/BassaniRiccardo/following{/other_user}",
"gists_url": "https://api.github.com/users/BassaniRiccardo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BassaniRiccardo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BassaniRiccardo/subscriptions",
"organizations_url": "https://api.github.com/users/BassaniRiccardo/orgs",
"repos_url": "https://api.github.com/users/BassaniRiccardo/repos",
"events_url": "https://api.github.com/users/BassaniRiccardo/events{/privacy}",
"received_events_url": "https://api.github.com/users/BassaniRiccardo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot!"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Fixed
Wrong method call fixed: `get_ordinal(local=True)` replaced with `get_local_ordinal()`. Modified according to:
https://pytorch.org/xla/release/1.8.1/_modules/torch_xla/core/xla_model.html
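A minimal sketch of the change (assuming the standard `xm` import alias from the torch_xla docs linked above):
```python
import torch_xla.core.xla_model as xm

# previously: local_rank = xm.get_ordinal(local=True)
local_rank = xm.get_local_ordinal()
```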
TPU training as called by the following or similar scripts now works:
```ruby
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir True \
--do_train True \
```
## Discussed/approved
https://github.com/huggingface/transformers/issues/11910
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11922",
"html_url": "https://github.com/huggingface/transformers/pull/11922",
"diff_url": "https://github.com/huggingface/transformers/pull/11922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11922.patch",
"merged_at": 1622552691000
} |
https://api.github.com/repos/huggingface/transformers/issues/11921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11921/comments | https://api.github.com/repos/huggingface/transformers/issues/11921/events | https://github.com/huggingface/transformers/issues/11921 | 904,950,309 | MDU6SXNzdWU5MDQ5NTAzMDk= | 11,921 | ProphetNetForConditionalGeneration model isn't returning all objects properly | {
"login": "pranonrahman",
"id": 37942208,
"node_id": "MDQ6VXNlcjM3OTQyMjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/37942208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranonrahman",
"html_url": "https://github.com/pranonrahman",
"followers_url": "https://api.github.com/users/pranonrahman/followers",
"following_url": "https://api.github.com/users/pranonrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/pranonrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranonrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranonrahman/subscriptions",
"organizations_url": "https://api.github.com/users/pranonrahman/orgs",
"repos_url": "https://api.github.com/users/pranonrahman/repos",
"events_url": "https://api.github.com/users/pranonrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranonrahman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What are all the objects you expect to get? The `loss` is only returned if you pass the labels to the model - otherwise it cannot compute any loss. Please check out the [return statement of ProphetNetForConditionalGeneration's forward method for more information](https://huggingface.co/transformers/model_doc/prophetnet.html#transformers.ProphetNetForConditionalGeneration.forward). ",
"Thank you, it worked @LysandreJik "
] | 1,622 | 1,622 | 1,622 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Google Colab
- Python version:
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- text generation: @patrickvonplaten
## Information
The model I am using Prophetnet:
The problem arises when using:
* my own modified scripts: [My notebook](https://colab.research.google.com/drive/1rmZyTXsdEDDpx8tbX-Gt6Uj9NuQi92VK?usp=sharing)
The tasks I am working on is:
* an official SQUaD task: Question Generation
## To reproduce
Steps to reproduce the behavior:
1. Just run the notebook
2. After running a single inference, I only get four objects, while I expected the loss and other outputs as well.
## Expected behavior
After running a single inference, I only get four objects, while I expected the loss and other outputs as well. @patrickvonplaten
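(As the comments above note, the loss is only populated when labels are supplied; a minimal sketch follows, with the checkpoint name and example strings assumed:)
```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

tok = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

inputs = tok("some source passage", return_tensors="pt")
labels = tok("a target question", return_tensors="pt").input_ids
outputs = model(**inputs, labels=labels)
print(outputs.loss)  # present now that labels were passed
```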
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11921/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11920/comments | https://api.github.com/repos/huggingface/transformers/issues/11920/events | https://github.com/huggingface/transformers/pull/11920 | 904,857,318 | MDExOlB1bGxSZXF1ZXN0NjU1OTc4Nzc5 | 11,920 | Remove redundant `nn.log_softmax` in `run_flax_glue.py` | {
"login": "n2cholas",
"id": 12474257,
"node_id": "MDQ6VXNlcjEyNDc0MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/12474257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n2cholas",
"html_url": "https://github.com/n2cholas",
"followers_url": "https://api.github.com/users/n2cholas/followers",
"following_url": "https://api.github.com/users/n2cholas/following{/other_user}",
"gists_url": "https://api.github.com/users/n2cholas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n2cholas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n2cholas/subscriptions",
"organizations_url": "https://api.github.com/users/n2cholas/orgs",
"repos_url": "https://api.github.com/users/n2cholas/repos",
"events_url": "https://api.github.com/users/n2cholas/events{/privacy}",
"received_events_url": "https://api.github.com/users/n2cholas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great catch @n2cholas! Could you also remove the line:\r\n\r\n```python\r\nimport flax.linen as nn\r\n```\r\n\r\nto make our code quality checks happy? Happy to merge right after :-)",
"Done @patrickvonplaten!"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
`optax.softmax_cross_entropy` expects unscaled logits, so it already calls `nn.log_softmax` ([here](https://github.com/deepmind/optax/blob/master/optax/_src/loss.py#L166)). `nn.log_softmax` is idempotent, so mathematically this shouldn't have made a difference.
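A quick sanity check of the idempotence claim (illustrative only):
```python
import jax.numpy as jnp
from jax.nn import log_softmax

logits = jnp.array([0.5, 1.5, -2.0])
once = log_softmax(logits)
twice = log_softmax(once)
print(jnp.allclose(once, twice))  # True: applying it twice changes nothing
```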
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@marcvanzee @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11920/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11920",
"html_url": "https://github.com/huggingface/transformers/pull/11920",
"diff_url": "https://github.com/huggingface/transformers/pull/11920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11920.patch",
"merged_at": 1622471344000
} |
https://api.github.com/repos/huggingface/transformers/issues/11919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11919/comments | https://api.github.com/repos/huggingface/transformers/issues/11919/events | https://github.com/huggingface/transformers/issues/11919 | 904,760,647 | MDU6SXNzdWU5MDQ3NjA2NDc= | 11,919 | Trainer reported loss is wrong when using DeepSpeed and gradient_accumulation_steps > 1 | {
"login": "rfernand2",
"id": 4296158,
"node_id": "MDQ6VXNlcjQyOTYxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4296158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rfernand2",
"html_url": "https://github.com/rfernand2",
"followers_url": "https://api.github.com/users/rfernand2/followers",
"following_url": "https://api.github.com/users/rfernand2/following{/other_user}",
"gists_url": "https://api.github.com/users/rfernand2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rfernand2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rfernand2/subscriptions",
"organizations_url": "https://api.github.com/users/rfernand2/orgs",
"repos_url": "https://api.github.com/users/rfernand2/repos",
"events_url": "https://api.github.com/users/rfernand2/events{/privacy}",
"received_events_url": "https://api.github.com/users/rfernand2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Please note that the fix should involve ignoring the return value of `deepspeed.backward()` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1754). Or at least not updating loss with this return value since it is the scaled loss value, similar to `scaled_loss` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1750)",
"Aaaah! We had two different definitions of scaled here, I know fully understand the issue. I was thinking scaled as scaled by the gradient accumulation steps factor, not scaled as scaled by the loss scaling factor. This is an easy fix to add, will do that in a bit.",
"> Please note that the fix should involve ignoring the return value of `deepspeed.backward()` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1754). Or at least not updating loss with this return value since it is the scaled loss value, similar to `scaled_loss` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1750)\r\n\r\n@tjruwase, could you please review your suggestion, since I see the deepspeed code doing scaling by GAS only. Please see:\r\n\r\nhttps://github.com/microsoft/DeepSpeed/blob/c697d7ae1cf5a479a8a85afa3bf9443e7d54ac2b/deepspeed/runtime/engine.py#L1142-L1143\r\n\r\nAm I missing something?\r\n\r\nAnd running tests I don't see any problem with the current code.",
"@stas00, you are right my suggestion here is not correct. I initially thought that deepspeed code scaling by GAS and exposing the scaled value to the client (HF) was the problem. But based yours and @sgugger findings, it seems there is nothing to do if HF is fine with `deepspeed.backward()` returning the GAS-scaled loss. \r\n\r\nSounds like this issue can be closed, once @rfernand2 agrees. ",
"Yes, sounds good to me.",
"Closing as the same report on Deepspeed side has been closed https://github.com/microsoft/DeepSpeed/issues/1107\r\n"
] | 1,622 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.0
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: no but using DeepSpeed on a single node
### Who can help
@stas00, @sgugger (trainer.py)
### See Also
https://github.com/microsoft/DeepSpeed/issues/1107
## Information
Model I am using (Roberta)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
* [ ] pretraining a Language Model (wikipedia and bookcorpus datasets)
## To reproduce
Steps to reproduce the behavior:
1. run scripts to pretrain a model with DeepSpeed on a single node with 1 GPU for N steps (gradient_accum_steps=1)
2. run scripts to pretrain a model with DeepSpeed on a single node with 1 GPU for N steps (gradient_accum_steps=8)
3. note the vast difference in the **loss** reported on the console by trainer.py
## Expected behavior
The reported loss for any number of gradient_accum_steps, nodes, or GPUs should be the mean of all losses, i.e. the same order of magnitude as when training with gradient_accum_steps=1 on a single node with a single GPU.
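For context, the two meanings of "scaled" that got conflated in the discussion above, with toy numbers (both values are invented for illustration):
```python
loss = 2.5
gradient_accumulation_steps = 8
fp16_loss_scale = 2 ** 16

gas_scaled = loss / gradient_accumulation_steps  # 0.3125: per-micro-step value DeepSpeed returns
fp16_scaled = loss * fp16_loss_scale             # 163840.0: internal fp16 scaling, never for logging
print(gas_scaled, fp16_scaled)
```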
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11918/comments | https://api.github.com/repos/huggingface/transformers/issues/11918/events | https://github.com/huggingface/transformers/pull/11918 | 904,732,403 | MDExOlB1bGxSZXF1ZXN0NjU1ODY3NjE0 | 11,918 | [Flax] Return Attention from BERT, ELECTRA, RoBERTa and GPT2 | {
"login": "jayendra13",
"id": 651057,
"node_id": "MDQ6VXNlcjY1MTA1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayendra13",
"html_url": "https://github.com/jayendra13",
"followers_url": "https://api.github.com/users/jayendra13/followers",
"following_url": "https://api.github.com/users/jayendra13/following{/other_user}",
"gists_url": "https://api.github.com/users/jayendra13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayendra13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayendra13/subscriptions",
"organizations_url": "https://api.github.com/users/jayendra13/orgs",
"repos_url": "https://api.github.com/users/jayendra13/repos",
"events_url": "https://api.github.com/users/jayendra13/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayendra13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"π "
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # https://github.com/huggingface/transformers/issues/11901
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11918/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11918",
"html_url": "https://github.com/huggingface/transformers/pull/11918",
"diff_url": "https://github.com/huggingface/transformers/pull/11918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11918.patch",
"merged_at": 1622198816000
} |
https://api.github.com/repos/huggingface/transformers/issues/11917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11917/comments | https://api.github.com/repos/huggingface/transformers/issues/11917/events | https://github.com/huggingface/transformers/pull/11917 | 904,715,132 | MDExOlB1bGxSZXF1ZXN0NjU1ODUxMjY5 | 11,917 | [Flax][WIP] Addition of Flax-Wav2Vec Model | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Cool PR! For next steps, we should write the missing classes and remove everything which is related to:\r\n\r\n```\r\nfeat_extract_norm=\"group\"\r\ndo_stable_layer_norm=True\r\n```\r\n\r\n(This config parameters are only used for https://huggingface.co/facebook/wav2vec2-base-960h which is the oldest of the wav2vec2 models)\r\n\r\nAlso, it would be very important to add tests in `modeling_flax_wav2vec2.py` ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Closing this in favor of https://github.com/huggingface/transformers/pull/12271"
] | 1,622 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
This PR is for the addition of Wav2Vec Model
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11917/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11917/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11917",
"html_url": "https://github.com/huggingface/transformers/pull/11917",
"diff_url": "https://github.com/huggingface/transformers/pull/11917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11917.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11916/comments | https://api.github.com/repos/huggingface/transformers/issues/11916/events | https://github.com/huggingface/transformers/issues/11916 | 904,680,663 | MDU6SXNzdWU5MDQ2ODA2NjM= | 11,916 | Wrong perplexity when evaluate the megatron-gpt2. | {
"login": "codecaution",
"id": 34735292,
"node_id": "MDQ6VXNlcjM0NzM1Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/34735292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codecaution",
"html_url": "https://github.com/codecaution",
"followers_url": "https://api.github.com/users/codecaution/followers",
"following_url": "https://api.github.com/users/codecaution/following{/other_user}",
"gists_url": "https://api.github.com/users/codecaution/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codecaution/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codecaution/subscriptions",
"organizations_url": "https://api.github.com/users/codecaution/orgs",
"repos_url": "https://api.github.com/users/codecaution/repos",
"events_url": "https://api.github.com/users/codecaution/events{/privacy}",
"received_events_url": "https://api.github.com/users/codecaution/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Weβll try to reproduce the issue on our side. Weβll keep you posted. Thanks!",
"> Weβll try to reproduce the issue on our side. Weβll keep you posted. Thanks!\r\n\r\nThanks for your help!\r\n",
"We (NVIDIA engineers) were able to reproduce strange perplexity results and we are trying to identify the root cause. We will update you as we know more. Thanks for reporting the issue and for the reproducer.",
"Hi,\r\nI think #12004 is an related issue"
] | 1,622 | 1,623 | 1,623 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0.dev0
- Platform: Linux-5.4.0-1046-azure-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@jdemouth @LysandreJik @sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using: gpt2 (megatron-gpt2-345m).
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official language-modeling task: (transformers/examples/pytorch/language-modeling/run_clm.py )
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Follow the steps given by [huggingface](https://huggingface.co/nvidia/megatron-gpt2-345m) to convert the Megatron-LM model to a Hugging Face model.
+ export MYDIR=/mnt/reproduce
+ git clone https://github.com/huggingface/transformers.git $MYDIR/transformers
+ mkdir -p $MYDIR/nvidia/megatron-gpt2-345m
+ wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip
+ python3 $MYDIR/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip
(Here I hit the error: *"io.UnsupportedOperation: seek. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead."* I solved it by:
  - unzipping $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip,
  - changing the code in transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py at lines **209-211** to
```python
with open(args.path_to_checkpoint, "rb") as pytorch_dict:
input_state_dict = torch.load(pytorch_dict, map_location="cpu")
```
- python3 $MYDIR/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py $MYDIR/nvidia/megatron-gpt2-345m/release/mp_rank_00/model_optim_rng.pt
+ git clone https://huggingface.co/nvidia/megatron-gpt2-345m/
+ mv $MYDIR/nvidia/megatron-gpt2-345m/release/mp_rank_00/pytorch_model.bin $MYDIR/nvidia/megatron-gpt2-345m/release/mp_rank_00/config.json $MYDIR/megatron-gpt2-345m/
2. Run run_clm.py on wikitext-2; the script invocation is given by the [readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/README.md).
```python
CUDA_VISIBLE_DEVICES=0 python $MYDIR/transformers/examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path $MYDIR/megatron-gpt2-345m \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_eval \
--output_dir /mnt/logs/evaluation/megatron/wikitext-2
```
3. The results are shown below, with an implausibly large perplexity (I also tested on other datasets, and the perplexity values are similarly large):
``` txt
[INFO|trainer_pt_utils.py:907] 2021-05-28 04:17:49,817 >> ***** eval metrics *****
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_loss = 11.63
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_runtime = 0:00:22.85
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_samples = 240
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_samples_per_second = 10.501
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_steps_per_second = 1.313
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> perplexity = 112422.0502
```
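For context, `run_clm.py` derives perplexity as the exponential of the eval loss, so the huge number follows directly from the loss of 11.63. The perplexity computation itself is consistent; the anomaly is the high eval loss, which points at the checkpoint conversion:
```python
import math
print(math.exp(11.63))  # ~112420, essentially matching the reported perplexity of 112422.05
```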
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I want to convert my Megatron-LM model checkpoints into Hugging Face format and obtain a sensible perplexity. Please help me.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11916/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11916/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11915/comments | https://api.github.com/repos/huggingface/transformers/issues/11915/events | https://github.com/huggingface/transformers/issues/11915 | 904,644,362 | MDU6SXNzdWU5MDQ2NDQzNjI= | 11,915 | RuntimeError: The size of tensor a (716) must match the size of tensor b (512) at non-singleton dimension 1 | {
"login": "lisa563",
"id": 78732119,
"node_id": "MDQ6VXNlcjc4NzMyMTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/78732119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lisa563",
"html_url": "https://github.com/lisa563",
"followers_url": "https://api.github.com/users/lisa563/followers",
"following_url": "https://api.github.com/users/lisa563/following{/other_user}",
"gists_url": "https://api.github.com/users/lisa563/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lisa563/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lisa563/subscriptions",
"organizations_url": "https://api.github.com/users/lisa563/orgs",
"repos_url": "https://api.github.com/users/lisa563/repos",
"events_url": "https://api.github.com/users/lisa563/events{/privacy}",
"received_events_url": "https://api.github.com/users/lisa563/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: windows 7
- Python version: 3.8.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
`python run_ner.py --model_name_or_path nlpaueb/legal-bert-base-uncased --train_file ***.json --validation_file ***.json --output_dir /tmp/*** --do_train --do_eval`
* [ ] my own modified scripts: (give details below)
The task I am working on is: NER
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## Error
```
File "run_ner.py", line 504, in <module>
main()
File "run_ner.py", line 446, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "D:\Tool\Install\Python\lib\site-packages\transformers\trainer.py", line
1240, in train
tr_loss += self.training_step(model, inputs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\trainer.py", line
1635, in training_step
loss = self.compute_loss(model, inputs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\trainer.py", line
1667, in compute_loss
outputs = model(**inputs)
File "D:\Tool\Install\Python\lib\site-packages\torch\nn\modules\module.py", li
ne 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\models\bert\modeli
ng_bert.py", line 1679, in forward
outputs = self.bert(
File "D:\Tool\Install\Python\lib\site-packages\torch\nn\modules\module.py", li
ne 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\models\bert\modeli
ng_bert.py", line 964, in forward
embedding_output = self.embeddings(
File "D:\Tool\Install\Python\lib\site-packages\torch\nn\modules\module.py", li
ne 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\models\bert\modeli
ng_bert.py", line 207, in forward
embeddings += position_embeddings
RuntimeError: The size of tensor a (716) must match the size of tensor b (512) at non-singleton dimension 1
5%|███ | 12/231 [03:07<56:54, 15.59s/it]
```
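A likely cause, assuming the tokenizer config for this checkpoint does not set `model_max_length`: BERT's position embeddings stop at 512 tokens, so any example tokenized past that length triggers exactly this size mismatch. A minimal sketch of the usual workaround (the input string is a placeholder):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")

# If model_max_length is missing from the tokenizer config, truncation=True
# alone will not enforce BERT's 512-token position-embedding limit.
encoding = tokenizer(
    "a very long legal document ...",  # placeholder input
    truncation=True,
    max_length=512,
)
print(len(encoding["input_ids"]))  # <= 512
```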
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11915/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11914/comments | https://api.github.com/repos/huggingface/transformers/issues/11914/events | https://github.com/huggingface/transformers/issues/11914 | 904,614,159 | MDU6SXNzdWU5MDQ2MTQxNTk= | 11,914 | How to get back the identified words from LayoutLMForTokenClassification? | {
"login": "karandua2016",
"id": 18511921,
"node_id": "MDQ6VXNlcjE4NTExOTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/18511921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karandua2016",
"html_url": "https://github.com/karandua2016",
"followers_url": "https://api.github.com/users/karandua2016/followers",
"following_url": "https://api.github.com/users/karandua2016/following{/other_user}",
"gists_url": "https://api.github.com/users/karandua2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karandua2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karandua2016/subscriptions",
"organizations_url": "https://api.github.com/users/karandua2016/orgs",
"repos_url": "https://api.github.com/users/karandua2016/repos",
"events_url": "https://api.github.com/users/karandua2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/karandua2016/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nA solution for this can be the following (taken from my [Fine-tuning BERT for NER notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb)):\r\n\r\n\r\n\r\nIn my notebook for `LayoutLMForTokenClassification`, only the label for the first word piece of each word matters. In HuggingFace Transformers, a tokenizer takes an additional parameter called `return_offsets_mapping` which can be set to `True` to return the (char_start, char_end) for each token.\r\n\r\nYou can use this to determine whether a token is the first wordpiece of a word, or not. As we are only interested in the label of the first wordpiece, you can assign its label to be the label for the entire word.\r\n\r\nDo you understand?\r\n\r\n",
"I do. Thanks. I tried doing this by referring your BERT code. But I am getting this error, unfortunately. \r\n\r\n\r\n\r\nApologies if I messed up. This is the first time that I am working with transformers.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | I am using LayoutLMForTokenClassification as described [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb). In the end, the tutorial shows an annotated image with identified classes for various tokens.
How can I get back the original words as well to be annotated along with the labels?
I tried to read the words with tokenizer.decode(input_ids).split(" ") but the tokenizer broke words into multiple tokens which it wasn't supposed to. So, I have more words/outputs/boxes than I am supposed to have.
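Building on the `return_offsets_mapping` suggestion in the comments above, a fast tokenizer's `word_ids()` exposes the same token-to-word alignment even more directly. A minimal sketch of recovering one label per original word (the words and label ids are placeholders, and the `bbox` inputs LayoutLM needs are omitted for brevity):
```python
from transformers import LayoutLMTokenizerFast

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Invoice", "number:", "12345"]  # placeholder OCR words
encoding = tokenizer(words, is_split_into_words=True)

# Placeholder label ids: in practice take model(**inputs).logits.argmax(-1)
predictions = [0] * len(encoding.word_ids())

word_level = {}
for token_idx, word_idx in enumerate(encoding.word_ids()):
    # word_ids() maps each token back to its source word (None for [CLS]/[SEP]);
    # keep only the first word piece, mirroring how the labels were assigned
    if word_idx is not None and word_idx not in word_level:
        word_level[word_idx] = predictions[token_idx]

for word_idx, label_id in word_level.items():
    print(words[word_idx], label_id)
```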
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11914/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11913/comments | https://api.github.com/repos/huggingface/transformers/issues/11913/events | https://github.com/huggingface/transformers/issues/11913 | 904,492,548 | MDU6SXNzdWU5MDQ0OTI1NDg= | 11,913 | Inference for pinned model keeps loading | {
"login": "Kvit",
"id": 1123272,
"node_id": "MDQ6VXNlcjExMjMyNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1123272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kvit",
"html_url": "https://github.com/Kvit",
"followers_url": "https://api.github.com/users/Kvit/followers",
"following_url": "https://api.github.com/users/Kvit/following{/other_user}",
"gists_url": "https://api.github.com/users/Kvit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kvit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kvit/subscriptions",
"organizations_url": "https://api.github.com/users/Kvit/orgs",
"repos_url": "https://api.github.com/users/Kvit/repos",
"events_url": "https://api.github.com/users/Kvit/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kvit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"after I have unpinned the model embeddings pipeline started working again (unless you did something on the back end)",
"I have repeated the experiment: pinned model - embedding pipeline API returns \"loading\" status; unpinned model - embedding pipeline returns valid results (after model 'warm up'). Look like inference API to embedding pipeline stops working if the model is pinned for instant inference access. ",
"Tagging @Narsil for visibility (API support issues are best handled over email if possible!)",
"Thanks for tagging me, the `ligolab/DxRoberta` is defined a `fill-mask` by default, so that was what was being pinned down leading the issues you were encountering. You can override that by changing the `pipeline_tag` in the model card (if you want).\r\n\r\nThere is currently no way to specify the `task` when pinning, so I did it manually for now ! You should be good to go !",
"If we change `pipeline_tag `do we still need to use API endpoint `/pipeline/feature-extraction/ ` ?",
"No if you change the default tag, then the regular route /models/{MODEL} will work ! ",
"@Narsil , could you chare snippet of using `pipeline_tag` in the card? I don't recall seeing this option it in the documentation https://github.com/huggingface/model_card.",
"Just `pipeline_tag: xxx`, see https://huggingface.co/docs#how-is-a-models-type-of-inference-api-and-widget-determined"
] | 1,622 | 1,622 | 1,622 | NONE | null | I have pinned the enterprise model: `ligolab/DxRoberta`
This model is pinned for instant start.
When I try the Inference API `fill-mask` task, it responds instantly.
However, when I call the embedding pipeline endpoint https://api-inference.huggingface.co/pipeline/feature-extraction/ligolab/DxRoberta , I keep getting the message `{'error': 'Model ligolab/DxRoberta is currently loading', 'estimated_time': 20}`, and the status does not change over time.
The embedding pipeline call was working yesterday when I tested it.
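For reference, the fix discussed in the comments above amounts to overriding the default task in the model card's YAML front matter, so that the regular `/models/{MODEL}` route serves feature extraction:
```yaml
---
pipeline_tag: feature-extraction
---
```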
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11913/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11913/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11912/comments | https://api.github.com/repos/huggingface/transformers/issues/11912/events | https://github.com/huggingface/transformers/issues/11912 | 904,350,610 | MDU6SXNzdWU5MDQzNTA2MTA= | 11,912 | Distillation of Pegasus using Pseudo labeling | {
"login": "ibrahim-elsawy",
"id": 53919684,
"node_id": "MDQ6VXNlcjUzOTE5Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53919684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibrahim-elsawy",
"html_url": "https://github.com/ibrahim-elsawy",
"followers_url": "https://api.github.com/users/ibrahim-elsawy/followers",
"following_url": "https://api.github.com/users/ibrahim-elsawy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibrahim-elsawy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibrahim-elsawy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibrahim-elsawy/subscriptions",
"organizations_url": "https://api.github.com/users/ibrahim-elsawy/orgs",
"repos_url": "https://api.github.com/users/ibrahim-elsawy/repos",
"events_url": "https://api.github.com/users/ibrahim-elsawy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibrahim-elsawy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | NONE | null |
### Who can help
@sgugger
@patrickvonplaten
Models:
- Distillation of Pegasus
## Information
The model I am using: google/pegasus-xsum
The problem:
- Trying to implement Pegasus distillation using [Pseudo Labeling](https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/precomputed_pseudo_labels.md) as described in [PRE-TRAINED SUMMARIZATION DISTILLATION](https://arxiv.org/pdf/2010.13002v2.pdf)
- Layers are copied from the teacher model; the positional embeddings, the token embeddings, and all encoder layers are frozen
- The model was trained for two epochs on the XSum dataset with a cross-entropy loss between the student's logits and the outputs generated by the teacher model
- Generating from the student model gives repeated words and poor output, although the loss decreases from 8 to 0.7647 in training and 0.5424 in validation
```python
[have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have wing wing wing wing wing wing wing wing wing wing wing wing wing']
```
How can I improve the generation of the model?
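A hedged aside, not a training fix: degenerate repetition like the sample above is sometimes an artifact of plain greedy decoding, so it is worth checking how the student is being evaluated. A sketch of beam search with an n-gram repetition block (the student checkpoint path is a placeholder):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
student = PegasusForConditionalGeneration.from_pretrained("path/to/distilled-student")  # placeholder

inputs = tokenizer("Document text to summarize ...", return_tensors="pt", truncation=True)
summary_ids = student.generate(
    **inputs,
    num_beams=4,
    no_repeat_ngram_size=3,  # blocks the "have have have ..." failure mode
    max_length=64,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
If the output stays degenerate even with these constraints, the problem more likely sits in the training signal itself (for example, how the teacher's pseudo-labels are aligned with the student's decoder inputs).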
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11912/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11911/comments | https://api.github.com/repos/huggingface/transformers/issues/11911/events | https://github.com/huggingface/transformers/pull/11911 | 904,175,486 | MDExOlB1bGxSZXF1ZXN0NjU1MzQ1MDc2 | 11,911 | Fix a condition in test_generate_with_head_masking | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | Fix a glitch in a condition in `test_generate_with_head_masking`, i.e.
```diff
- if set(head_masking.keys()) < set([*signature.parameters.keys()]):
+ if set(head_masking.keys()) > set([*signature.parameters.keys()]):
continue
```
This PR also fixes the usage of `head_mask` for the BigBird-Pegasus and Speech2Text models.
**Reviewer:** @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11911",
"html_url": "https://github.com/huggingface/transformers/pull/11911",
"diff_url": "https://github.com/huggingface/transformers/pull/11911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11911.patch",
"merged_at": 1623335287000
} |
https://api.github.com/repos/huggingface/transformers/issues/11910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11910/comments | https://api.github.com/repos/huggingface/transformers/issues/11910/events | https://github.com/huggingface/transformers/issues/11910 | 904,141,831 | MDU6SXNzdWU5MDQxNDE4MzE= | 11,910 | xla_spawn.py: xm.get_ordinal() got an unexpected keyword argument 'local' | {
"login": "BassaniRiccardo",
"id": 48254418,
"node_id": "MDQ6VXNlcjQ4MjU0NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/48254418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BassaniRiccardo",
"html_url": "https://github.com/BassaniRiccardo",
"followers_url": "https://api.github.com/users/BassaniRiccardo/followers",
"following_url": "https://api.github.com/users/BassaniRiccardo/following{/other_user}",
"gists_url": "https://api.github.com/users/BassaniRiccardo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BassaniRiccardo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BassaniRiccardo/subscriptions",
"organizations_url": "https://api.github.com/users/BassaniRiccardo/orgs",
"repos_url": "https://api.github.com/users/BassaniRiccardo/repos",
"events_url": "https://api.github.com/users/BassaniRiccardo/events{/privacy}",
"received_events_url": "https://api.github.com/users/BassaniRiccardo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the catch! Since you have the proper fix indeed, would like to make a PR with it?",
"Done, thanks!",
"Closed by #11922"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic (working on Colab with TPU)
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No, using TPU
- Using distributed or parallel set-up in script?: number_of_cores = 8
### Who can help
@sgugger, @patil-suraj
## Information
Model I am using: Bert
The problem arises when using:
* [ ] the official example scripts:
```bash
python /transformers/examples/pytorch/xla_spawn.py --num_cores=8 \
/transformers/examples/pytorch/language-modeling/run_mlm.py (--run_mlm.py args)
```
The task I am working on is:
* Pretraining BERT with TPU
## To reproduce
Steps to reproduce the behavior:
1. install necessary packages:
```bash
pip install git+https://github.com/huggingface/transformers
cd /content/transformers/examples/pytorch/language-modeling
pip install -r requirements.txt
pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8.1-cp37-cp37m-linux_x86_64.whl
```
2. run xla_spawn with minimal args passed to run_mlm: specify a small .txt TRAIN_FILE and an OUTPUT_DIR:
```bash
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir True \
--do_train True \
```
I get this error (for different TPU cores):
```txt
Exception in device=TPU:0: get_ordinal() got an unexpected keyword argument 'local'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/drive/My Drive/Thesis/transformers/examples/pytorch/language-modeling/run_mlm.py", line 493, in _mp_fn
main()
File "/content/drive/My Drive/Thesis/transformers/examples/pytorch/language-modeling/run_mlm.py", line 451, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1193, in train
self.state.is_local_process_zero = self.is_local_process_zero()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1784, in is_local_process_zero
return self.args.local_process_index == 0
File "/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py", line 1605, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 864, in local_process_index
return xm.get_ordinal(local=True)
TypeError: get_ordinal() got an unexpected keyword argument 'local'
```
## Expected behavior
The training should run without errors. I achieved this by simply replacing line 864 of /transformers/training_args.py:
```python
return xm.get_ordinal(local=True)
```
with:
```python
return xm.get_local_ordinal()
```
Following torch docs at:
https://pytorch.org/xla/release/1.5/_modules/torch_xla/core/xla_model.html
If this is the correct syntax (and this behaviour is not due to something wrong in my environment), this easy fix should be enough. My model trained correctly.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11910/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11909/comments | https://api.github.com/repos/huggingface/transformers/issues/11909/events | https://github.com/huggingface/transformers/pull/11909 | 904,137,095 | MDExOlB1bGxSZXF1ZXN0NjU1MzExNjQx | 11,909 | FlaxGPTNeo Draft PR | {
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hey @zanussbaum, \r\n\r\nRegarding on how to proceed with the implementation, could you maybe post your questions here and tag @patil-suraj and @patrickvonplaten so that we can move forward? :-)",
"Hey @patrickvonplaten, I actually chatted with Suraj this morning and\ncleared my questions up about the Self Attention Module. I am working on\nimplementing it and hope to have something out this weekend!\n\nOn Thu, Jun 3, 2021 at 11:27 AM Patrick von Platen ***@***.***>\nwrote:\n\n> Hey @zanussbaum <https://github.com/zanussbaum>,\n>\n> Regarding on how to proceed with the implementation, could you maybe post\n> your questions here and tag @patil-suraj <https://github.com/patil-suraj>\n> and @patrickvonplaten <https://github.com/patrickvonplaten> so that we\n> can move forward? :-)\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/11909#issuecomment-853957259>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIBFIPPVDY3FYPLTVEYQCTTTQ6NPXANCNFSM45U7XKKQ>\n> .\n>\n"
] | 1,622 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Add FlaxGPTNeo to HuggingFace Models!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11909/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11909/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11909",
"html_url": "https://github.com/huggingface/transformers/pull/11909",
"diff_url": "https://github.com/huggingface/transformers/pull/11909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11909.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11908/comments | https://api.github.com/repos/huggingface/transformers/issues/11908/events | https://github.com/huggingface/transformers/issues/11908 | 904,119,499 | MDU6SXNzdWU5MDQxMTk0OTk= | 11,908 | Fine tuning with transformer models for Regression tasks | {
"login": "zyberg2091",
"id": 42847318,
"node_id": "MDQ6VXNlcjQyODQ3MzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/42847318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyberg2091",
"html_url": "https://github.com/zyberg2091",
"followers_url": "https://api.github.com/users/zyberg2091/followers",
"following_url": "https://api.github.com/users/zyberg2091/following{/other_user}",
"gists_url": "https://api.github.com/users/zyberg2091/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyberg2091/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyberg2091/subscriptions",
"organizations_url": "https://api.github.com/users/zyberg2091/orgs",
"repos_url": "https://api.github.com/users/zyberg2091/repos",
"events_url": "https://api.github.com/users/zyberg2091/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyberg2091/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think it's better to ask this question on the [forum](https://discuss.huggingface.co/) rather than here. For example, all questions related to training BERT for regression can be found [here](https://discuss.huggingface.co/search?q=bert%20regression)."
] | 1,622 | 1,623 | 1,623 | NONE | null | - `transformers` version: Bert, Albert, openai-gpt2
- Tensorflow version (GPU?): 2.5.0
## Information
Model I am using: Bert, Albert, openai-gpt2
The problem arises when using:
* [x] my own modified scripts: (give details below)
- performed fine-tuning
The task I am working on is:
* [x] my own task or dataset: (give details below)
- I have been trying to use the BERT, ALBERT and GPT-2 models for fine-tuning on my regression task, and I got unwanted results, which I describe below:
- I tried it two ways:
1. I used the CLS token embeddings and fine-tuned my entire custom model, but it produced some random number repeated over and over across my output matrix space.
2. I simply passed the CLS token embeddings to a feed-forward NN. In this case it also produced some random number, and no learning is seen.

**What can be the solution to this problem? Are there any issues with transformers with respect to regression?**
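For anyone landing here: with `num_labels=1`, the sequence-classification heads in Transformers fall back to a mean-squared-error loss, which is the standard way to fine-tune these models for regression. A minimal sketch (the text and target value are placeholders):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

inputs = tokenizer(["an example sentence"], return_tensors="pt", padding=True)
targets = torch.tensor([[0.7]])  # placeholder continuous target

outputs = model(**inputs, labels=targets)
print(outputs.loss)    # MSE loss, because num_labels == 1
print(outputs.logits)  # shape (1, 1): the predicted score
```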
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11908/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11907/comments | https://api.github.com/repos/huggingface/transformers/issues/11907/events | https://github.com/huggingface/transformers/pull/11907 | 904,020,157 | MDExOlB1bGxSZXF1ZXN0NjU1MjA5NTUw | 11,907 | Add conversion from TF to PT for Tapas retrieval models | {
"login": "bogdankostic",
"id": 48713846,
"node_id": "MDQ6VXNlcjQ4NzEzODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/48713846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bogdankostic",
"html_url": "https://github.com/bogdankostic",
"followers_url": "https://api.github.com/users/bogdankostic/followers",
"following_url": "https://api.github.com/users/bogdankostic/following{/other_user}",
"gists_url": "https://api.github.com/users/bogdankostic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bogdankostic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bogdankostic/subscriptions",
"organizations_url": "https://api.github.com/users/bogdankostic/orgs",
"repos_url": "https://api.github.com/users/bogdankostic/repos",
"events_url": "https://api.github.com/users/bogdankostic/events{/privacy}",
"received_events_url": "https://api.github.com/users/bogdankostic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for this, it's certainly something I'd like to add in the future. The TAPAS team seems quite active, they released [yet another paper involving TAPAS](https://arxiv.org/abs/2106.00479) (to work on larger tables).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@NielsRogge I just quickly reviewed this script and it looks fine. Is it possible to know why this PR became stall and was automatically closed? Is there anything wrong with the script I should be aware of? \r\n\r\nI'm currently working on finetuning the TAPAS retrieval model for a research project, just wanted to have your thoughts on this before running the script and uploading the model to the Huggingface hub.",
"@jonathanherzig Just wanted to confirm with you, in this case `bert` is the table encoder and `bert_1` is the question encoder, right? ",
"Hi @xhlulu ,\r\nSorry, but I not familiar with the implementation details in this version of TAPAS... probably @NielsRogge can help.\r\n\r\nBest,\r\nJonathan",
"No worries, thanks Jonathan!"
] | 1,622 | 1,644 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Table Retrieval models based on Tapas, as described [here](https://arxiv.org/pdf/2103.12011.pdf), were just published in the [Tapas repository](https://github.com/google-research/tapas). The existing conversion function does not work with the retrieval models, so I added support for converting them to PyTorch.
Unfortunately, this only converts the language model without the down projection layer. However, I think this might still be useful to people who, for instance, want to fine-tune the pre-trained models.
I also do not have the time at the moment to add the down projection layer myself.
## Who can review?
@NielsRogge
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11907/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11907",
"html_url": "https://github.com/huggingface/transformers/pull/11907",
"diff_url": "https://github.com/huggingface/transformers/pull/11907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11907.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11906/comments | https://api.github.com/repos/huggingface/transformers/issues/11906/events | https://github.com/huggingface/transformers/pull/11906 | 903,894,197 | MDExOlB1bGxSZXF1ZXN0NjU1MTA0Njgz | 11,906 | Added Sequence Classification class in GPTNeo | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # Added Sequence Classification Class in GPT Neo Model
Fixes #11811
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #11811
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11906/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11906",
"html_url": "https://github.com/huggingface/transformers/pull/11906",
"diff_url": "https://github.com/huggingface/transformers/pull/11906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11906.patch",
"merged_at": 1622197622000
} |
https://api.github.com/repos/huggingface/transformers/issues/11905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11905/comments | https://api.github.com/repos/huggingface/transformers/issues/11905/events | https://github.com/huggingface/transformers/issues/11905 | 903,809,141 | MDU6SXNzdWU5MDM4MDkxNDE= | 11,905 | Customize pretrained model for model hub | {
"login": "Matthieu-Tinycoaching",
"id": 77435960,
"node_id": "MDQ6VXNlcjc3NDM1OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/77435960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Matthieu-Tinycoaching",
"html_url": "https://github.com/Matthieu-Tinycoaching",
"followers_url": "https://api.github.com/users/Matthieu-Tinycoaching/followers",
"following_url": "https://api.github.com/users/Matthieu-Tinycoaching/following{/other_user}",
"gists_url": "https://api.github.com/users/Matthieu-Tinycoaching/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Matthieu-Tinycoaching/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Matthieu-Tinycoaching/subscriptions",
"organizations_url": "https://api.github.com/users/Matthieu-Tinycoaching/orgs",
"repos_url": "https://api.github.com/users/Matthieu-Tinycoaching/repos",
"events_url": "https://api.github.com/users/Matthieu-Tinycoaching/events{/privacy}",
"received_events_url": "https://api.github.com/users/Matthieu-Tinycoaching/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe of interest to @nreimers ",
"Hi @Matthieu-Tinycoaching \r\n\r\nI was sadly not able to re-produce your error. Have you uploaded such a model to the hub? Could you post the link here?\r\n\r\nAnd how does your code look like to load the model?",
"Hi @nreimers \r\n\r\nI retried with including the custom class definition when loading the model and it worked.\r\n\r\n\r\n\r\n"
] | 1,622 | 1,622 | 1,622 | NONE | null | Hi community,
I would like to add a mean pooling step inside a custom SentenceTransformer class derived from the model sentence-transformers/stsb-xlm-r-multilingual, in order to avoid doing this extra step after getting the token embeddings.
My aim is to push this custom model onto the model hub. Without this custom step, it is trivial, as below:
```python
from transformers import AutoTokenizer, AutoModel

#### Simple export ####
## Instantiate the model
model_name = "sentence-transformers/stsb-xlm-r-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
## Save the model and tokenizer files into cloned repository
model.save_pretrained("path/to/repo/clone/your-model-name")
tokenizer.save_pretrained("path/to/repo/clone/your-model-name")
```
However, after defining my custom class SentenceTransformerCustom, I can't manage to push the definition of this class to the model hub:
```python
import transformers
import torch
#### Custom export ####
## 1. Load feature-extraction pipeline with specific sts model
model_name = "sentence-transformers/stsb-xlm-r-multilingual"
pipeline_name = "feature-extraction"
nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
tokenizer = nlp.tokenizer
## 2. Setting up a simple torch model, which inherits from the XLMRobertaModel model. The only thing we add is a weighted summation over the token embeddings and a clamp to prevent zero-division errors.
class SentenceTransformerCustom(transformers.XLMRobertaModel):
    def __init__(self, config):
        super().__init__(config)
        # Naming alias for ONNX output specification
        # Makes it easier to identify the layer
        self.sentence_embedding = torch.nn.Identity()

    def forward(self, input_ids, attention_mask):
        # Get the token embeddings from the base model
        token_embeddings = super().forward(
            input_ids,
            attention_mask=attention_mask
        )[0]
        # Stack the pooling layer on top of it
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
        sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
        return self.sentence_embedding(sum_embeddings / sum_mask)
## 3. Create the custom model based on the config of the original pipeline
model = SentenceTransformerCustom.from_pretrained(model_name)  # from_pretrained is a classmethod; no need to instantiate first
## 4. Save the model and tokenizer files into cloned repository
model.save_pretrained("/home/matthieu/Deployment/HF/stsb-xlm-r-multilingual")
tokenizer.save_pretrained("/home/matthieu/Deployment/HF/stsb-xlm-r-multilingual")
```
Do I need to place this custom class definition inside a specific .py file? Or is there anything to do in order to correctly import this custom class from the model hub?
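As the comment above later confirmed, it works once the class definition is importable at load time; a minimal sketch (the module name `custom_model` is hypothetical, and any importable `.py` file containing the class works):
```python
# custom_model.py must contain the SentenceTransformerCustom definition above
from custom_model import SentenceTransformerCustom  # hypothetical module name

model = SentenceTransformerCustom.from_pretrained("path/to/repo/clone/your-model-name")
```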
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11905/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11904/comments | https://api.github.com/repos/huggingface/transformers/issues/11904/events | https://github.com/huggingface/transformers/issues/11904 | 903,802,844 | MDU6SXNzdWU5MDM4MDI4NDQ= | 11,904 | 'error': 'Model Matthieu/stsb-xlm-r-multilingual is currently loading' | {
"login": "Matthieu-Tinycoaching",
"id": 77435960,
"node_id": "MDQ6VXNlcjc3NDM1OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/77435960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Matthieu-Tinycoaching",
"html_url": "https://github.com/Matthieu-Tinycoaching",
"followers_url": "https://api.github.com/users/Matthieu-Tinycoaching/followers",
"following_url": "https://api.github.com/users/Matthieu-Tinycoaching/following{/other_user}",
"gists_url": "https://api.github.com/users/Matthieu-Tinycoaching/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Matthieu-Tinycoaching/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Matthieu-Tinycoaching/subscriptions",
"organizations_url": "https://api.github.com/users/Matthieu-Tinycoaching/orgs",
"repos_url": "https://api.github.com/users/Matthieu-Tinycoaching/repos",
"events_url": "https://api.github.com/users/Matthieu-Tinycoaching/events{/privacy}",
"received_events_url": "https://api.github.com/users/Matthieu-Tinycoaching/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | Hello,
I have pushed to the model hub (https://huggingface.co/Matthieu/stsb-xlm-r-multilingual) a pretrained sentence transformer model (https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual).
However, when trying to get a prediction via the API_URL I still get the following error:
`{'error': 'Model Matthieu/stsb-xlm-r-multilingual is currently loading', 'estimated_time': 44.49033436}`
How could I deal with this problem?
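One common way to handle the transient loading state is to retry until the model is warm; a rough sketch (the bearer token is a placeholder, and `estimated_time` comes straight from the error payload above):
```python
import time
import requests

API_URL = "https://api-inference.huggingface.co/models/Matthieu/stsb-xlm-r-multilingual"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

def query(payload):
    while True:
        response = requests.post(API_URL, headers=headers, json=payload).json()
        if isinstance(response, dict) and "error" in response:
            # sleep for the API's own estimate, then retry while the model loads
            time.sleep(response.get("estimated_time", 20))
            continue
        return response

print(query({"inputs": "Hello world"}))
```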
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11904/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11903/comments | https://api.github.com/repos/huggingface/transformers/issues/11903/events | https://github.com/huggingface/transformers/issues/11903 | 903,741,601 | MDU6SXNzdWU5MDM3NDE2MDE= | 11,903 | Problem when freezing all GPT2 model except the LM head | {
"login": "yana-xuyan",
"id": 38536635,
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yana-xuyan",
"html_url": "https://github.com/yana-xuyan",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! In GPT-2, as with most models, the LM head is tied to the embeddings: it has the same weights.\r\n\r\nYou can play around with the `tie_word_embeddings` configuration option, but your LM head will be randomly initialized.",
"Thank you very much!"
] | 1,622 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
[*] the official example scripts: (give details below)
When I try to print all the named parameters of the GPT-2 model with LM head, `model.lm_head` does not appear in the list.
In my experiment, I tried to freeze all the parameters except the LM head; however, the LM head is frozen along with `model.transformer.wte` when the latter is frozen.
## To reproduce
Steps to reproduce the behavior:
1. Load model
```python
from transformers import AutoModelForCausalLM
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
```
2. freeze the transformer part
```python
for p in gpt2.transformer.parameters():
    p.requires_grad = False
```
or just:
```python
for p in gpt2.transformer.wte.parameters():
    p.requires_grad = False
```
3. check lm_head
```python
for p in gpt2.lm_head.parameters():
    print(p.requires_grad)
```
and the output of the third step is False.
4. When I try printing all the named parameters:
```python
components = [k for k,v in gpt2.named_parameters()]
print(components)
```
The output is as follows:
['transformer.wte.weight', 'transformer.wpe.weight', 'transformer.h.0.ln_1.weight', 'transformer.h.0.ln_1.bias', 'transformer.h.0.attn.c_attn.weight', 'transformer.h.0.attn.c_attn.bias', 'transformer.h.0.attn.c_proj.weight', 'transformer.h.0.attn.c_proj.bias', 'transformer.h.0.ln_2.weight', 'transformer.h.0.ln_2.bias', 'transformer.h.0.mlp.c_fc.weight', 'transformer.h.0.mlp.c_fc.bias', 'transformer.h.0.mlp.c_proj.weight', 'transformer.h.0.mlp.c_proj.bias', 'transformer.h.1.ln_1.weight', 'transformer.h.1.ln_1.bias', 'transformer.h.1.attn.c_attn.weight', 'transformer.h.1.attn.c_attn.bias', 'transformer.h.1.attn.c_proj.weight', 'transformer.h.1.attn.c_proj.bias', 'transformer.h.1.ln_2.weight', 'transformer.h.1.ln_2.bias', 'transformer.h.1.mlp.c_fc.weight', 'transformer.h.1.mlp.c_fc.bias', 'transformer.h.1.mlp.c_proj.weight', 'transformer.h.1.mlp.c_proj.bias', 'transformer.h.2.ln_1.weight', 'transformer.h.2.ln_1.bias', 'transformer.h.2.attn.c_attn.weight', 'transformer.h.2.attn.c_attn.bias', 'transformer.h.2.attn.c_proj.weight', 'transformer.h.2.attn.c_proj.bias', 'transformer.h.2.ln_2.weight', 'transformer.h.2.ln_2.bias', 'transformer.h.2.mlp.c_fc.weight', 'transformer.h.2.mlp.c_fc.bias', 'transformer.h.2.mlp.c_proj.weight', 'transformer.h.2.mlp.c_proj.bias', 'transformer.h.3.ln_1.weight', 'transformer.h.3.ln_1.bias', 'transformer.h.3.attn.c_attn.weight', 'transformer.h.3.attn.c_attn.bias', 'transformer.h.3.attn.c_proj.weight', 'transformer.h.3.attn.c_proj.bias', 'transformer.h.3.ln_2.weight', 'transformer.h.3.ln_2.bias', 'transformer.h.3.mlp.c_fc.weight', 'transformer.h.3.mlp.c_fc.bias', 'transformer.h.3.mlp.c_proj.weight', 'transformer.h.3.mlp.c_proj.bias', 'transformer.h.4.ln_1.weight', 'transformer.h.4.ln_1.bias', 'transformer.h.4.attn.c_attn.weight', 'transformer.h.4.attn.c_attn.bias', 'transformer.h.4.attn.c_proj.weight', 'transformer.h.4.attn.c_proj.bias', 'transformer.h.4.ln_2.weight', 'transformer.h.4.ln_2.bias', 'transformer.h.4.mlp.c_fc.weight', 'transformer.h.4.mlp.c_fc.bias', 'transformer.h.4.mlp.c_proj.weight', 'transformer.h.4.mlp.c_proj.bias', 'transformer.h.5.ln_1.weight', 'transformer.h.5.ln_1.bias', 'transformer.h.5.attn.c_attn.weight', 'transformer.h.5.attn.c_attn.bias', 'transformer.h.5.attn.c_proj.weight', 'transformer.h.5.attn.c_proj.bias', 'transformer.h.5.ln_2.weight', 'transformer.h.5.ln_2.bias', 'transformer.h.5.mlp.c_fc.weight', 'transformer.h.5.mlp.c_fc.bias', 'transformer.h.5.mlp.c_proj.weight', 'transformer.h.5.mlp.c_proj.bias', 'transformer.h.6.ln_1.weight', 'transformer.h.6.ln_1.bias', 'transformer.h.6.attn.c_attn.weight', 'transformer.h.6.attn.c_attn.bias', 'transformer.h.6.attn.c_proj.weight', 'transformer.h.6.attn.c_proj.bias', 'transformer.h.6.ln_2.weight', 'transformer.h.6.ln_2.bias', 'transformer.h.6.mlp.c_fc.weight', 'transformer.h.6.mlp.c_fc.bias', 'transformer.h.6.mlp.c_proj.weight', 'transformer.h.6.mlp.c_proj.bias', 'transformer.h.7.ln_1.weight', 'transformer.h.7.ln_1.bias', 'transformer.h.7.attn.c_attn.weight', 'transformer.h.7.attn.c_attn.bias', 'transformer.h.7.attn.c_proj.weight', 'transformer.h.7.attn.c_proj.bias', 'transformer.h.7.ln_2.weight', 'transformer.h.7.ln_2.bias', 'transformer.h.7.mlp.c_fc.weight', 'transformer.h.7.mlp.c_fc.bias', 'transformer.h.7.mlp.c_proj.weight', 'transformer.h.7.mlp.c_proj.bias', 'transformer.h.8.ln_1.weight', 'transformer.h.8.ln_1.bias', 'transformer.h.8.attn.c_attn.weight', 'transformer.h.8.attn.c_attn.bias', 'transformer.h.8.attn.c_proj.weight', 'transformer.h.8.attn.c_proj.bias', 
'transformer.h.8.ln_2.weight', 'transformer.h.8.ln_2.bias', 'transformer.h.8.mlp.c_fc.weight', 'transformer.h.8.mlp.c_fc.bias', 'transformer.h.8.mlp.c_proj.weight', 'transformer.h.8.mlp.c_proj.bias', 'transformer.h.9.ln_1.weight', 'transformer.h.9.ln_1.bias', 'transformer.h.9.attn.c_attn.weight', 'transformer.h.9.attn.c_attn.bias', 'transformer.h.9.attn.c_proj.weight', 'transformer.h.9.attn.c_proj.bias', 'transformer.h.9.ln_2.weight', 'transformer.h.9.ln_2.bias', 'transformer.h.9.mlp.c_fc.weight', 'transformer.h.9.mlp.c_fc.bias', 'transformer.h.9.mlp.c_proj.weight', 'transformer.h.9.mlp.c_proj.bias', 'transformer.h.10.ln_1.weight', 'transformer.h.10.ln_1.bias', 'transformer.h.10.attn.c_attn.weight', 'transformer.h.10.attn.c_attn.bias', 'transformer.h.10.attn.c_proj.weight', 'transformer.h.10.attn.c_proj.bias', 'transformer.h.10.ln_2.weight', 'transformer.h.10.ln_2.bias', 'transformer.h.10.mlp.c_fc.weight', 'transformer.h.10.mlp.c_fc.bias', 'transformer.h.10.mlp.c_proj.weight', 'transformer.h.10.mlp.c_proj.bias', 'transformer.h.11.ln_1.weight', 'transformer.h.11.ln_1.bias', 'transformer.h.11.attn.c_attn.weight', 'transformer.h.11.attn.c_attn.bias', 'transformer.h.11.attn.c_proj.weight', 'transformer.h.11.attn.c_proj.bias', 'transformer.h.11.ln_2.weight', 'transformer.h.11.ln_2.bias', 'transformer.h.11.mlp.c_fc.weight', 'transformer.h.11.mlp.c_fc.bias', 'transformer.h.11.mlp.c_proj.weight', 'transformer.h.11.mlp.c_proj.bias', 'transformer.ln_f.weight', 'transformer.ln_f.bias']
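Following up on the tied-weights explanation in the comments, here is a hedged sketch of breaking the tie so that only the LM head stays trainable. It assumes the default GPT-2 setup, where `lm_head.weight` shares storage with `transformer.wte.weight`:
```python
import torch
from transformers import AutoModelForCausalLM

gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
gpt2.config.tie_word_embeddings = False  # keep later tie_weights() calls from re-tying

# Give the head its own copy of the weights before freezing the body;
# otherwise freezing wte freezes the (shared) head as well.
gpt2.lm_head.weight = torch.nn.Parameter(gpt2.transformer.wte.weight.detach().clone())

for p in gpt2.transformer.parameters():
    p.requires_grad = False
for p in gpt2.lm_head.parameters():
    p.requires_grad = True

print(any(p.requires_grad for p in gpt2.transformer.parameters()))  # False
print(all(p.requires_grad for p in gpt2.lm_head.parameters()))      # True
```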
## Expected behavior
May I ask about the connection between the LM head and the `wte` layer, and is it possible to freeze the GPT-2 model except for the LM head? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11903/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11902/comments | https://api.github.com/repos/huggingface/transformers/issues/11902/events | https://github.com/huggingface/transformers/pull/11902 | 903,631,277 | MDExOlB1bGxSZXF1ZXN0NjU0ODY2MjYz | 11,902 | [Flax] return attentions | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11902/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11902",
"html_url": "https://github.com/huggingface/transformers/pull/11902",
"diff_url": "https://github.com/huggingface/transformers/pull/11902.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11902.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11901/comments | https://api.github.com/repos/huggingface/transformers/issues/11901/events | https://github.com/huggingface/transformers/issues/11901 | 903,568,491 | MDU6SXNzdWU5MDM1Njg0OTE= | 11,901 | [Flax] Add attention weights outputs to all models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Also, adding this feature will require us to bump up the Flax dependency to `>=0.3.4` for `flax` in https://github.com/huggingface/transformers/blob/master/setup.py",
"I am starting with Bert.",
"Hi @patrickvonplaten @patil-suraj https://github.com/huggingface/transformers/pull/11918"
] | 1,622 | 1,623 | 1,623 | MEMBER | null | # 🚀 Feature request
At the moment we cannot return the attention weights from Flax models the way we can in PyTorch.
In PyTorch, there is an `output_attentions` boolean in every model's forward call (see [here](https://github.com/huggingface/transformers/blob/42fe0dc23e4a7495ebd08185f5850315a1a12dc0/src/transformers/models/bert/modeling_bert.py#L528)) which, when set to `True`, collects all attention weights and returns them as a tuple.
The attention weights are returned (if `output_attentions=True`) from the self-attention layer, *e.g.* here: https://github.com/huggingface/transformers/blob/42fe0dc23e4a7495ebd08185f5850315a1a12dc0/src/transformers/models/bert/modeling_bert.py#L331, and then passed along with the outputs.
Currently, none of this is implemented in Flax. At the moment the function [`dot_product_attention`](https://github.com/google/flax/blob/6fb839c640de80f887580a533b222c6dddf04c0d/flax/linen/attention.py#L109) is used in every Flax model, which makes it impossible to retrieve the attention weights; see [here](https://github.com/huggingface/transformers/blob/42fe0dc23e4a7495ebd08185f5850315a1a12dc0/src/transformers/models/bert/modeling_flax_bert.py#L244). However, the Flax authors recently refactored this function around a smaller one called [`dot_product_attention_weights`](https://github.com/google/flax/blob/6fb839c640de80f887580a533b222c6dddf04c0d/flax/linen/attention.py#L37), which lets the attention weights be retrieved when needed. To do so, every `dot_product_attention` call should be replaced by `dot_product_attention_weights` followed by a `jnp.einsum` (see [here](https://github.com/google/flax/blob/6fb839c640de80f887580a533b222c6dddf04c0d/flax/linen/attention.py#L162)).
Next, the whole `output_attentions` logic should be implemented for all Flax models, analogous to `output_hidden_states`.
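
A minimal sketch of the proposed replacement (assuming Flax `>=0.3.4`; the variable names and the exact wiring into each model are illustrative):

```python
# Sketch: compute the attention weights explicitly so they can be returned
# when output_attentions=True, then reproduce dot_product_attention's contraction.
import jax
import jax.numpy as jnp
from flax.linen.attention import dot_product_attention_weights

batch, q_len, kv_len, heads, head_dim = 2, 5, 5, 4, 8
rng = jax.random.PRNGKey(0)
query = jax.random.normal(rng, (batch, q_len, heads, head_dim))
key = jax.random.normal(rng, (batch, kv_len, heads, head_dim))
value = jax.random.normal(rng, (batch, kv_len, heads, head_dim))

attn_weights = dot_product_attention_weights(query, key, deterministic=True)
attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value)

print(attn_weights.shape)  # (2, 4, 5, 5) - what output_attentions would expose
print(attn_output.shape)   # (2, 5, 4, 8)
```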
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11901/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11900/comments | https://api.github.com/repos/huggingface/transformers/issues/11900/events | https://github.com/huggingface/transformers/pull/11900 | 903,480,138 | MDExOlB1bGxSZXF1ZXN0NjU0NzMxNDQ4 | 11,900 | [Community Notebooks] Add Emotion Speech Notebook | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Amazing notebook @m3hrdadfi !"
] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
Adds a notebook for Emotion Classification
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11900/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11900",
"html_url": "https://github.com/huggingface/transformers/pull/11900",
"diff_url": "https://github.com/huggingface/transformers/pull/11900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11900.patch",
"merged_at": 1622108770000
} |
https://api.github.com/repos/huggingface/transformers/issues/11899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11899/comments | https://api.github.com/repos/huggingface/transformers/issues/11899/events | https://github.com/huggingface/transformers/issues/11899 | 903,436,089 | MDU6SXNzdWU5MDM0MzYwODk= | 11,899 | Provide an option to select the parallel mode of the Trainer. | {
"login": "hijkzzz",
"id": 19810594,
"node_id": "MDQ6VXNlcjE5ODEwNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19810594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hijkzzz",
"html_url": "https://github.com/hijkzzz",
"followers_url": "https://api.github.com/users/hijkzzz/followers",
"following_url": "https://api.github.com/users/hijkzzz/following{/other_user}",
"gists_url": "https://api.github.com/users/hijkzzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hijkzzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hijkzzz/subscriptions",
"organizations_url": "https://api.github.com/users/hijkzzz/orgs",
"repos_url": "https://api.github.com/users/hijkzzz/repos",
"events_url": "https://api.github.com/users/hijkzzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hijkzzz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is already implemented, it just depends on how you launch your training script. To use distributed data parallel, you have to launch it with `torch.distributed.launch`.",
"> This is already implemented, it just depends on how you launch your training script. To use distributed data parallel, you have to launch it with `torch.distributed.launch`.\r\n\r\nHi~\r\nHow to do for jupyter?\r\nAlso what should I do if I use DataParallel for train and only want to use one GPU for predict?\r\nAs the `DataParallel` requires droplast, which is not allowed in predict phase.\r\nThanks\r\n",
"You can't do this directly in jupyter, you have to launch a script using the pytorch utilities (it's not a Trainer limitation, it's a PyTorch one). You can completely predict in parallel with the `Trainer`, it will complete the last batch to make it the same size as the others and then truncate the predictions.",
"> You can't do this directly in jupyter, you have to launch a script using the pytorch utilities (it's not a Trainer limitation, it's a PyTorch one). You can completely predict in parallel with the `Trainer`, it will complete the last batch to make it the same size as the others and then truncate the predictions.\r\n\r\nBut we can't truncate predictions because it's a contest or a client's demand and we need all the test results we can get.\r\n\r\nAs a demo, #11833\r\nhttps://github.com/huggingface/transformers/issues/11833",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | # 🚀 Feature request
Provide an option to select the parallel mode of the Trainer.
## Motivation
For multiple GPUs, the Trainer uses `nn.DataParallel` for parallel computing by default; however, this approach concentrates a disproportionately large share of the memory load on the first GPU. Please provide an API to switch to `nn.parallel.DistributedDataParallel`. Also, is there an option to turn off parallel computing for the `Trainer.predict()` function?
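
A hedged sketch of the two modes as they stand (the `local_rank` mechanics reflect how the Trainer of this era picks its backend; treat the snippet as an illustration):

```python
# Sketch: with local_rank == -1 and several visible GPUs, the Trainer wraps the
# model in nn.DataParallel; launching the script via
#   python -m torch.distributed.launch --nproc_per_node=N train.py
# sets local_rank != -1, which switches it to DistributedDataParallel.
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", local_rank=-1)  # -1 -> DataParallel on multi-GPU
print(args.parallel_mode)  # reports the mode the Trainer will use
```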
## Your contribution
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11899/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11898/comments | https://api.github.com/repos/huggingface/transformers/issues/11898/events | https://github.com/huggingface/transformers/issues/11898 | 903,373,151 | MDU6SXNzdWU5MDMzNzMxNTE= | 11,898 | multi-GPU errors | {
"login": "Soulscb",
"id": 39151881,
"node_id": "MDQ6VXNlcjM5MTUxODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/39151881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Soulscb",
"html_url": "https://github.com/Soulscb",
"followers_url": "https://api.github.com/users/Soulscb/followers",
"following_url": "https://api.github.com/users/Soulscb/following{/other_user}",
"gists_url": "https://api.github.com/users/Soulscb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Soulscb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Soulscb/subscriptions",
"organizations_url": "https://api.github.com/users/Soulscb/orgs",
"repos_url": "https://api.github.com/users/Soulscb/repos",
"events_url": "https://api.github.com/users/Soulscb/events{/privacy}",
"received_events_url": "https://api.github.com/users/Soulscb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is this related to `transformers`? ",
"yes this is simpletransformers and i find transformers multi gpus is hard to run ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | I want to use multiple GPUs for training, but it errors:
```python
model = nn.DataParallel(model)
model = model.cuda()
model.train_model(train_df, eval_data=eval_df)
```
which raises:
```
torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'train_model'
```
So how can I use multiple GPUs?
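
The root cause, for what it's worth, is that `nn.DataParallel` returns a wrapper module: only the standard `nn.Module` API is proxied, while custom methods and attributes stay on the wrapped object under `.module`. A self-contained sketch of that behavior (not specific to simpletransformers):

```python
# Sketch: DataParallel proxies forward(), but custom attributes live on .module
import torch.nn as nn

net = nn.DataParallel(nn.Linear(4, 2))
print(hasattr(net, "train_model"))  # False - the wrapper has no custom methods
print(net.module.in_features)       # 4 - reach the wrapped module via .module
```
| {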
"url": "https://api.github.com/repos/huggingface/transformers/issues/11898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11898/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11897/comments | https://api.github.com/repos/huggingface/transformers/issues/11897/events | https://github.com/huggingface/transformers/pull/11897 | 903,208,638 | MDExOlB1bGxSZXF1ZXN0NjU0NDkxNTE2 | 11,897 | Fix Tensorflow Bart-like positional encoding | {
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,626 | 1,626 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11724
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11897/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11897",
"html_url": "https://github.com/huggingface/transformers/pull/11897",
"diff_url": "https://github.com/huggingface/transformers/pull/11897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11897.patch",
"merged_at": 1626211134000
} |
https://api.github.com/repos/huggingface/transformers/issues/11896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11896/comments | https://api.github.com/repos/huggingface/transformers/issues/11896/events | https://github.com/huggingface/transformers/pull/11896 | 903,207,658 | MDExOlB1bGxSZXF1ZXN0NjU0NDkwNjQx | 11,896 | Update deepspeed config to reflect hyperparameter search parameters | {
"login": "Mindful",
"id": 2897172,
"node_id": "MDQ6VXNlcjI4OTcxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2897172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mindful",
"html_url": "https://github.com/Mindful",
"followers_url": "https://api.github.com/users/Mindful/followers",
"following_url": "https://api.github.com/users/Mindful/following{/other_user}",
"gists_url": "https://api.github.com/users/Mindful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mindful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mindful/subscriptions",
"organizations_url": "https://api.github.com/users/Mindful/orgs",
"repos_url": "https://api.github.com/users/Mindful/repos",
"events_url": "https://api.github.com/users/Mindful/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mindful/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Code quality check passes on my local machine π€ \r\nHappy to change formatting if necessary, just not sure what to change.",
"You probably have a different version of `black`.\r\n\r\nPlease try:\r\n\r\n```\r\ncd transformers\r\npip install -e .[dev]\r\nmake fixup\r\n```\r\n\r\nthis should re-align the versions.",
"Turns out I just ran the style check on the wrong branch the first time; my bad. \r\nShould be fixed now. ",
"the doc job failure is unrelated - we will re-run it when other jobs finish - the CI has been quite flakey...",
"Thanks for your PR!"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a few lines of code to the Trainer so that it rebuilds the Deepspeed config when running hyperparameter_search. As is, if you run hyperparameter_search while using Deepspeed, the TrainingArguments are updated but the Deepspeed config is not; the two fall out of sync, and Deepspeed effectively ignores any hyperparameter-search trial parameters that are controlled through the Deepspeed config.
This fixes #11894
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. --> https://github.com/huggingface/transformers/issues/11894
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I ran the Deepspeed tests and Trainer tests locally; everything passed except for `test_stage3_nvme_offload` but I think that was a hardware compatibility issue on my local machine.
## Who can review?
@stas00 (and maybe whoever implemented hyperparameter_search() in the Trainer)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11896/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11896",
"html_url": "https://github.com/huggingface/transformers/pull/11896",
"diff_url": "https://github.com/huggingface/transformers/pull/11896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11896.patch",
"merged_at": 1622116413000
} |
https://api.github.com/repos/huggingface/transformers/issues/11895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11895/comments | https://api.github.com/repos/huggingface/transformers/issues/11895/events | https://github.com/huggingface/transformers/issues/11895 | 903,176,322 | MDU6SXNzdWU5MDMxNzYzMjI= | 11,895 | Small error in documentation / Typo | {
"login": "ashkamath",
"id": 14784179,
"node_id": "MDQ6VXNlcjE0Nzg0MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/14784179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashkamath",
"html_url": "https://github.com/ashkamath",
"followers_url": "https://api.github.com/users/ashkamath/followers",
"following_url": "https://api.github.com/users/ashkamath/following{/other_user}",
"gists_url": "https://api.github.com/users/ashkamath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashkamath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashkamath/subscriptions",
"organizations_url": "https://api.github.com/users/ashkamath/orgs",
"repos_url": "https://api.github.com/users/ashkamath/repos",
"events_url": "https://api.github.com/users/ashkamath/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashkamath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for spotting. Feel free to open a PR to fix this :) \r\n\r\nBy the way, I see you're the main author of MDETR (amazing work!). I'm currently adding DETR to the repo (see #11653), so if you are up to help me add MDETR to the repo, feel free to reach out :) \r\n",
"Ooh thanks :D That sounds great, I'd be happy to help :) Will send you an email. "
] | 1,622 | 1,623 | 1,623 | NONE | null | The documentation for the BART decoder layer says it expects the hidden states as well as the encoder hidden states in "(seq_len, batch, embed_dim)", whereas "(batch, seq_len, embed_dim)" is what is actually expected. This led to a bit of confusion, so it would be great if it were corrected! :)
Relevant lines:
https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L373
https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L376
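
A tiny sketch of the convention the code actually follows (shapes only; the `BartDecoderLayer` wiring itself is omitted):

```python
# Sketch: the tensors handed to BartDecoderLayer are batch-first
import torch

batch, seq_len, embed_dim = 2, 7, 16
hidden_states = torch.randn(batch, seq_len, embed_dim)          # (batch, seq_len, embed_dim)
encoder_hidden_states = torch.randn(batch, seq_len, embed_dim)  # same batch-first layout
```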
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11895/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11894/comments | https://api.github.com/repos/huggingface/transformers/issues/11894/events | https://github.com/huggingface/transformers/issues/11894 | 903,139,331 | MDU6SXNzdWU5MDMxMzkzMzE= | 11,894 | Deepspeed integration ignores Optuna trial parameters in hyperparameter_search | {
"login": "Mindful",
"id": 2897172,
"node_id": "MDQ6VXNlcjI4OTcxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2897172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mindful",
"html_url": "https://github.com/Mindful",
"followers_url": "https://api.github.com/users/Mindful/followers",
"following_url": "https://api.github.com/users/Mindful/following{/other_user}",
"gists_url": "https://api.github.com/users/Mindful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mindful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mindful/subscriptions",
"organizations_url": "https://api.github.com/users/Mindful/orgs",
"repos_url": "https://api.github.com/users/Mindful/repos",
"events_url": "https://api.github.com/users/Mindful/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mindful/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for the report, @Mindful!\r\n\r\nIt's very possible that I mistakenly bypassed some logic of optuna, as I have never used it. \r\n\r\nWould you like to have a look and see if you can fix it - since you have already been researching this - basically go into `src/transformers/trainer.py` look for `optuna` and see where `if self.deepspeed` skips it. Shouldn't be too difficult if you already have a test for it.\r\n\r\nPlease let me know if I can help.\r\n\r\nThank you!",
"@stas00 \r\n\r\nI would need to look a little closer to be sure but it really just looks like a timing issue - the Deepspeed config is built off the Trainer config, which then has its state changed *afterwards* by the Optuna integration so the two get out of sync. \r\n\r\nI am definitely open to trying to fix this myself (I've been looking for a reason to contribute), I just have two concerns:\r\n1. I'm pretty swamped right now, at least for the next week or two. Things should hopefully calm down after that, but it's hard for me to promise I can get to it by a certain date. \r\n2. It seems like the only options for fixing this are either somehow making the Deepspeed config automatically update to reflect updates to the training config (which would be complicated and probably overkill) or changing the hyperparameter_search method so that it also updates the Deepspeed config if necessary. I think the latter is the better option, but going attribute-by-attribute would basically mean duplicating the logic for copying training parameters from the TrainingArguments to the deepspeed config. I think the _best_ option is to just construct a new DeepSpeedConfigHF based on the updated training parameters, but there's a lot of logic there and I'm not sure if this is safe to do. \r\n\r\nActually, if the fix is as easy as rebuilding DeepSpeedConfigHF with the updated TrainingArguments, this might be relatively quick. I'm not sure. \r\nEdit: I wrote this out and then went back and looked at the code and I think doing the above might fix it (in which case this is an easy fix). Let me try this and get back to you.",
"Yeah, I just changed files locally so that the hyperparameter search rebuilds the DeepSpeedConfigHF object and that seems to have fixed it. I still need to double check tests pass/etc, but it looks like this was much easier than I thought.\r\n\r\nI'll open a PR shortly. ",
"Awesome. Looking forward to reading your PR, @Mindful - please tag me there.\r\n\r\nFor tests just run:\r\n\r\n```\r\nRUN_SLOW=1 pytest tests/deepspeed\r\n```\r\n\r\nand if you have an idea for a new test that would be a bonus."
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Also in case it matters, my deepspeed version is 0.3.16.
### Who can help
Maybe @stas00 ? I'm not sure.
## Information
Model I am using: custom Pytorch transformer model, although I don't think it matters here.
The problem arises when using:
* [ ] the official example scripts: (probably, I haven't tried)
* [x] my own modified scripts: I'm running a simple custom MLM training script using the transformers trainer.
## To reproduce
Steps to reproduce the behavior:
1. Run a hyperparameter search using the transformers Trainer with the [default zero-2 config from the documentation](https://huggingface.co/transformers/main_classes/trainer.html#zero-2-example)
2. Observe that parameters taken from the Deepspeed config like gradient accumulation steps and optimizer/scheduler params are not updated to reflect the Optuna trial parameters.
Here is some output from the script I'm running, with the middle omitted for brevity. I'm printing trial params myself but I do it from inside the Trainer, so these are definitely the same trial params the Trainer is getting.
```
[I 2021-05-27 02:20:41,133] A new study created in memory with name: no-name-26248906-22c0-4666-a7d4-159173902bc5
current trial params {'learning_rate': 2.0670100636747183e-05, 'adam_beta2': 0.98, 'gradient_accumulation_steps': 6, 'dropout': 0.0, 'local_attention_window': 192, 'weight_decay': 0.01, 'warmup_ratio': 0.12, 'deep_transformer_stack_layers': 10}
.....
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] initial_dynamic_scale ........ 65536
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] loss_scale ................... 0
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] memory_breakdown ............. False
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] optimizer_legacy_fusion ...... False
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] optimizer_name ............... adamw
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] optimizer_params ............. {'lr': 5e-05, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.0}
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-05-27 02:21:01,788] [INFO] [config.py:751:print] pld_enabled .................. False
....
```
I looked around briefly and I _think_ the issue comes from the fact that the Deepspeed config [is built as part of TrainingArgs](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/training_args.py#L677) and then presumably never updated after that, even if the training args change. Consequently, when the [training args are updated](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/trainer.py#L861) as part of setup for the hyperparameter search, the change is not reflected in the Deepspeed config.
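
For reference, a hedged sketch of the kind of resync this implies (the `DeepSpeedConfigHF` name exists in the 4.6-era integration, but the exact hook point and attribute are assumptions):

```python
# Sketch: rebuild the DeepSpeed config after a trial has mutated the training args,
# e.g. right at the end of the Trainer's hyperparameter-search setup.
from transformers.integrations import DeepSpeedConfigHF  # 4.6-era name

def resync_deepspeed_config(trainer):
    if trainer.args.deepspeed:
        # re-parse the JSON config against the updated TrainingArguments
        trainer.args.hf_deepspeed_config = DeepSpeedConfigHF(trainer.args)
```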
Note that this might also be an issue with Ray; I just haven't tried it there.
## Expected behavior
Ideally Deepspeed would run with config/parameters that respected the content of the Optuna trials, although I know that getting two external integrations to play well together is easier said than done. In the meantime I'm going to see if I can work around this by using an HF scheduler and HF optimizer in the hopes that those will take their parameters from the training arguments directly.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11894/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11893/comments | https://api.github.com/repos/huggingface/transformers/issues/11893/events | https://github.com/huggingface/transformers/pull/11893 | 903,046,203 | MDExOlB1bGxSZXF1ZXN0NjU0MzQ4MTg5 | 11,893 | RAG-2nd2end-revamp | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I added all the minor changes and I would like to thank @patrickvonplaten and @lhoestq for the enormous amounts of support and advice. :)",
" Hey \r\nThanks for giving this end to end version. Am trying to run it and see its performance on my own dataset (much smaller and domain specific to see if get performance gains with end to end) but at the moment it is throwing a Pickling error with dummy dataset in the code. Am still stuck trying to understand how to fix this. Any idea how this can be dealt with\r\n\r\n \r\n File \"/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py\", line 580, in dump\r\n return Pickler.dump(self, obj)\r\n File \"pyarrow/io.pxi\", line 1021, in pyarrow.lib.Buffer.__reduce_ex__\r\n AttributeError: module 'pickle' has no attribute 'PickleBuffer\r\n\r\nCheers\r\n",
"Could you please add this as a new issue. Also I would like to see the\nentire error log. Seems like something is wrong with your RAY installation.\n\nOn Fri, Jun 4, 2021, 00:49 Shraey Bhatia ***@***.***> wrote:\n\n> Hey\n> Thanks for giving this end to end version. Am trying to run it and see its\n> performance on my own dataset (much smaller and domain specific to see if\n> get performance gains with end to end) but at the moment it is throwing a\n> Pickling error with dummy dataset in the code. Am still stuck trying to\n> understand how to fix this. Any idea how this can be dealt with\n>\n> File \"/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py\", line 580, in dump\n> return Pickler.dump(self, obj)\n> File \"pyarrow/io.pxi\", line 1021, in pyarrow.lib.Buffer.__reduce_ex__\n> AttributeError: module 'pickle' has no attribute 'PickleBuffer\n>\n> Cheers\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/11893#issuecomment-853842534>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGWLRNB4P234HGI4JOLTQ5255ANCNFSM45TDZJNA>\n> .\n>\n",
"Sure, just created a new issue with full log"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | Same as the [shamanez:rag-retriever-end2end](https://github.com/huggingface/transformers/pull/11655) PR. In the previous version, I ran into some version-control problems.
Really sorry to come up with duplicate PRs :(.
@lhoestq @patrickvonplaten
I conducted an experiment to check the difference and added simple test-run code.
Finally I added two test functions to test_modeling_rag.py and test_retrieval_rag.py.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11893/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11893/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11893",
"html_url": "https://github.com/huggingface/transformers/pull/11893",
"diff_url": "https://github.com/huggingface/transformers/pull/11893.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11893.patch",
"merged_at": 1622529147000
} |
https://api.github.com/repos/huggingface/transformers/issues/11892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11892/comments | https://api.github.com/repos/huggingface/transformers/issues/11892/events | https://github.com/huggingface/transformers/pull/11892 | 902,826,943 | MDExOlB1bGxSZXF1ZXN0NjU0MTUzOTEy | 11,892 | Link official Cloud TPU JAX docs | {
"login": "avital",
"id": 37586,
"node_id": "MDQ6VXNlcjM3NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/37586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avital",
"html_url": "https://github.com/avital",
"followers_url": "https://api.github.com/users/avital/followers",
"following_url": "https://api.github.com/users/avital/following{/other_user}",
"gists_url": "https://api.github.com/users/avital/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avital/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avital/subscriptions",
"organizations_url": "https://api.github.com/users/avital/orgs",
"repos_url": "https://api.github.com/users/avital/repos",
"events_url": "https://api.github.com/users/avital/events{/privacy}",
"received_events_url": "https://api.github.com/users/avital/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
Adds a link to the new official Cloud TPU VM JAX docs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11892/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11892",
"html_url": "https://github.com/huggingface/transformers/pull/11892",
"diff_url": "https://github.com/huggingface/transformers/pull/11892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11892.patch",
"merged_at": 1622058280000
} |
https://api.github.com/repos/huggingface/transformers/issues/11891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11891/comments | https://api.github.com/repos/huggingface/transformers/issues/11891/events | https://github.com/huggingface/transformers/issues/11891 | 902,807,995 | MDU6SXNzdWU5MDI4MDc5OTU= | 11,891 | GPT2 saved pb file cannot handle dynamic sequence length | {
"login": "sufengniu",
"id": 2965176,
"node_id": "MDQ6VXNlcjI5NjUxNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2965176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sufengniu",
"html_url": "https://github.com/sufengniu",
"followers_url": "https://api.github.com/users/sufengniu/followers",
"following_url": "https://api.github.com/users/sufengniu/following{/other_user}",
"gists_url": "https://api.github.com/users/sufengniu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sufengniu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sufengniu/subscriptions",
"organizations_url": "https://api.github.com/users/sufengniu/orgs",
"repos_url": "https://api.github.com/users/sufengniu/repos",
"events_url": "https://api.github.com/users/sufengniu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sufengniu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi! As far as I'm aware, we don't support TF Hub integrations right now. If you want to save or load a TF2 model from Transformers, you can use the `save_pretrained` method like so:\r\n\r\n```\r\nfrom transformers import GPT2Tokenizer, TFGPT2Model\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = TFGPT2Model.from_pretrained('gpt2')\r\nmodel.save_pretrained(\"saved_gpt2\")\r\nnew_model = TFGPT2Model.from_pretrained('saved_gpt2')\r\ntext = \"Replace me by any text you'd like.\"\r\nencoded_input = tokenizer(text, return_tensors='tf')\r\noutput = new_model(encoded_input)\r\n```\r\n\r\nI'm the TF maintainer here, and one of the things I'm actively working on is making our Tensorflow support cleaner and more idiomatic. If there's a reason you really want to export to the TF Hub or normal Keras formats, please let us know, and we'll take that into account when planning development!",
"Hello @Rocketknight1 \r\n\r\nThank you, I think export to tf hub is an important feature if you want to deploy to real production. as far as I know, many companies they would not use saved model weights, but rather do serving with packed computational graph. Therefore, it would be great if you could take tf hub export into account in the future. will close this ticket, by the way, do you want me open another one regarding to feature request?\r\n\r\nThank you!"
] | 1,622 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Models: gpt2 @patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python3
import tensorflow as tf
import tensorflow_hub as hub
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
tf.saved_model.save(model, "gpt2")
model_hub = hub.KerasLayer("gpt2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model_hub(encoded_input)
```
It complains that the input shape is mismatched: the input is [None, 5], and I think 5 comes from the dummy input defined inside `file_utils.py`. In other words, must a GPT-2 model saved for TF Hub use a fixed sequence length (the dummy input length)?
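A hedged workaround sketch is below. It traces an explicit serving signature with a dynamic sequence length before saving, so the SavedModel is not pinned to the dummy-input shape; the function name, output key, and save path are my own assumptions, and I have not verified the result against `hub.KerasLayer`.
```python
import tensorflow as tf
from transformers import TFGPT2Model

model = TFGPT2Model.from_pretrained("gpt2")

@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
def serve(input_ids):
    # Index [0] works whether the model returns a tuple or a ModelOutput.
    return {"last_hidden_state": model(input_ids)[0]}

tf.saved_model.save(model, "gpt2_dynamic", signatures=serve)
```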
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Should run without issue after loading tfhub
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11891/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11890/comments | https://api.github.com/repos/huggingface/transformers/issues/11890/events | https://github.com/huggingface/transformers/pull/11890 | 902,574,938 | MDExOlB1bGxSZXF1ZXN0NjUzOTI5MjE0 | 11,890 | changing find_batch_size to work with tokenizer outputs | {
"login": "joerenner",
"id": 11395913,
"node_id": "MDQ6VXNlcjExMzk1OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11395913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joerenner",
"html_url": "https://github.com/joerenner",
"followers_url": "https://api.github.com/users/joerenner/followers",
"following_url": "https://api.github.com/users/joerenner/following{/other_user}",
"gists_url": "https://api.github.com/users/joerenner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joerenner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joerenner/subscriptions",
"organizations_url": "https://api.github.com/users/joerenner/orgs",
"repos_url": "https://api.github.com/users/joerenner/repos",
"events_url": "https://api.github.com/users/joerenner/events{/privacy}",
"received_events_url": "https://api.github.com/users/joerenner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Mmm, I don't think I am allowed to push commits on your branch and the CI decided to not run on your PR. Could you push an empty commit to trigger it?\r\n```\r\ngit commit --allow-empty -m \"Trigger CI\"\r\n```\r\nShould do this (and then push).\r\n",
"ah looks like it needs approval:\r\n\r\n```First-time contributors need a maintainer to approve running workflows```",
"No that's for the Git actions (and I clicked yes). Your empty commit did trigger circle CI so all is good, just have to wait for the green tick :-)"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | trainer_pt_utils.find_batch_size currently does not recognize the batch size of BatchEncoding objects. This can cause an error when a trainer relies on find_batch_size to report the number of observed examples in the evaluation loop, which is the case when the eval dataset is Iterable.
# What does this PR do?
Very simple change that lets find_batch_size find the batch size of BatchEncoding objects.
Fixes #11882
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/11882
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11890/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11890",
"html_url": "https://github.com/huggingface/transformers/pull/11890",
"diff_url": "https://github.com/huggingface/transformers/pull/11890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11890.patch",
"merged_at": 1622044746000
} |
https://api.github.com/repos/huggingface/transformers/issues/11889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11889/comments | https://api.github.com/repos/huggingface/transformers/issues/11889/events | https://github.com/huggingface/transformers/pull/11889 | 902,519,679 | MDExOlB1bGxSZXF1ZXN0NjUzODgwMTA0 | 11,889 | Hubert | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"> batching\r\n\r\nIt uses `Wav2Vec2Processor` for feature extraction etc"
] | 1,622 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
This PR adds Hubert:
- https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression
- https://arxiv.org/pdf/2106.07447.pdf?fbclid=IwAR3hI4uGqc4mV5j-ob8R5yLu-BaamVoe9ncxUoVmgFLjJXsE1IevP0rdNYY
Checkpoints are available here:
https://huggingface.co/models?filter=hubert
Hubert is essentially the same as Wav2Vec2 with some minor differences. The pretraining is completely different though, which is why we need to put it in a new modeling class. Pretraining functionality will be added in a second PR.
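For context, a hedged usage sketch of what inference might look like once this PR lands. Since Hubert reuses Wav2Vec2's feature extraction (see the comment above about `Wav2Vec2Processor`), the checkpoint name and the `HubertForCTC` head below are assumptions based on the checkpoints linked above:
```python
import torch
from transformers import Wav2Vec2Processor, HubertForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

raw_speech = [0.0] * 16_000  # placeholder: one second of silence at 16 kHz
inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```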
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11889/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11889",
"html_url": "https://github.com/huggingface/transformers/pull/11889",
"diff_url": "https://github.com/huggingface/transformers/pull/11889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11889.patch",
"merged_at": 1623842053000
} |
https://api.github.com/repos/huggingface/transformers/issues/11888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11888/comments | https://api.github.com/repos/huggingface/transformers/issues/11888/events | https://github.com/huggingface/transformers/issues/11888 | 902,504,417 | MDU6SXNzdWU5MDI1MDQ0MTc= | 11,888 | Add a new pipeline for the Relation Extraction task. | {
"login": "xegulon",
"id": 74178038,
"node_id": "MDQ6VXNlcjc0MTc4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/74178038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xegulon",
"html_url": "https://github.com/xegulon",
"followers_url": "https://api.github.com/users/xegulon/followers",
"following_url": "https://api.github.com/users/xegulon/following{/other_user}",
"gists_url": "https://api.github.com/users/xegulon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xegulon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xegulon/subscriptions",
"organizations_url": "https://api.github.com/users/xegulon/orgs",
"repos_url": "https://api.github.com/users/xegulon/repos",
"events_url": "https://api.github.com/users/xegulon/events{/privacy}",
"received_events_url": "https://api.github.com/users/xegulon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We have a voluntarily generic `token-classification` pipeline that should be suited for this, no?",
"> We have a voluntarily generic `token-classification` pipeline that should be suited for this, no?\r\n\r\nAs far as I understood, `token-classification` is just an alias for `ner` (in the source code, we can observe: `NerPipeline = TokenClassificationPipeline`). \r\n\r\nThe relation extraction part would be to classify pairs of entities (given by the `ner`/`token-classification` part of the pipeline) to a set of relation classes, such as `IS_CEO_OF_ORG`. \r\n\r\nI don't think it is possible to do this for now. Thanks for the reply!",
"This seems to be a popular repo for RE: https://github.com/thunlp/OpenNRE",
"For now, there's only 1 model that is capable of performing relation extraction out-of-the-box, and that's [LUKE](https://huggingface.co/transformers/model_doc/luke.html#overview). You can use `LukeForEntityPairClassification` to classify the relationship between two entities in a sentence:\r\n\r\n```\r\nfrom transformers import LukeTokenizer, LukeForEntityPairClassification\r\n\r\ntokenizer = LukeTokenizer.from_pretrained(\"studio-ousia/luke-large-finetuned-tacred\")\r\nmodel = LukeForEntityPairClassification.from_pretrained(\"studio-ousia/luke-large-finetuned-tacred\")\r\n\r\ntext = \"BeyoncΓ© lives in Los Angeles.\"\r\nentity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to \"BeyoncΓ©\" and \"Los Angeles\"\r\ninputs = tokenizer(text, entity_spans=entity_spans, return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\nlogits = outputs.logits\r\npredicted_class_idx = logits.argmax(-1).item()\r\nprint(\"Predicted class:\", model.config.id2label[predicted_class_idx])\r\n```\r\n\r\nHowever, relation extraction is a task that is solved in many different ways. So it's not straightforward to define a generic pipeline for it, in which you can plug different models.",
"Thanks @NielsRogge for your answer. I have 3 questions then:\r\n\r\n1. What do you mean exactly by:\r\n> relation extraction is a task that is solved in many different ways.\r\n\r\nbecause the task of RE itself is quite standardized, isn't it?\r\n\r\n2. Is the LUKE model you showed me usable with *any* dataset? If yes, which format of the dataset is needed?\r\n\r\n3. Wouldn't be good to choose *one* approach (maybe SpanBERT?, cf [this](https://kr2ml.github.io/2020/papers/KR2ML_12_paper.pdf)) and implement it in the HF `pipeline`?",
"> 1. What do you mean exactly by:\r\n> \r\n> > relation extraction is a task that is solved in many different ways.\r\n\r\nNER is always solved in the same way (in the Transformers library, at least), namely by placing a token classification head on top of the final hidden states of the tokens. However, relation extraction can be solved in many ways. LUKE for example has a very specific way, namely it considers a word sequence (tokens) and an entity sequence (entities), and it places a linear layer on top of the concatenation of the entity tokens. Another model, like [R-BERT](https://arxiv.org/abs/1905.08284) for example, does it differently. From the paper: \"(...) We apply the average operation to get a vector representation for each of the two target entities. Then after an activation operation (i.e. tanh), we add a fully connected layer to each of the two vectors (...)\":\r\n\r\n\r\n\r\nIn other words, as every relation extraction model does it in a different way, it's not straightforward to define a general pipeline for it. \r\n\r\n> 2. Is the LUKE model you showed me usable with _any_ dataset? If yes, which format of the dataset is needed?\r\n\r\nYes, you can use it with any dataset. I fine-tuned it myself on a custom dataset. You just need to prepare a csv file with 4 columns: sentence, entity 1, entity 2, relationship. I will prepare a notebook that illustrates how you can do it easily. \r\n\r\n> 3\\. Wouldn't be good to choose _one_ approach (maybe SpanBERT?, cf [this](https://kr2ml.github.io/2020/papers/KR2ML_12_paper.pdf)) and implement it in the HF `pipeline`?\r\n\r\nA pipeline is meant to be used for several models, I don't think it's nice to have a pipeline that only works for a single model.",
"Thanks for all your answers @NielsRogge!\r\n\r\nI understand it better now. In fact, the way you do it makes me think of the QA task, but here the context is replaced by the entity spans, and the output is the one of a `SequenceClassification` task.\r\n\r\nLooks pretty good for becoming the standard :wink:!",
"@xegulon here's a notebook that illustrates how to fine-tune `LukeForEntityPairClassification` on a custom dataset for relation extraction: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LUKE/Supervised_relation_extraction_with_LukeForEntityPairClassification.ipynb",
"Thanks a lot @NielsRogge ! \r\nHoping to see `pipeline('relation-classification')` and `pipeline('joint-ner-and-re')` someday ;) !",
"> @xegulon here's a notebook that illustrates how to fine-tune `LukeForEntityPairClassification` on a custom dataset for relation extraction: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LUKE/Supervised_relation_extraction_with_LukeForEntityPairClassification.ipynb\r\n\r\nThanks a lot @NielsRogge for this notebook. You saved me a lot of time! \r\nI have a doubt, a statement we're trying to annotate is:\r\nMukesh Ambani married Nita Ambani in 1985 and they have two sons, Akash and Anant, and a daughter, Isha.\r\n\r\nThere are multiple entities in one sentence and different relations between them.\r\nHow should i go about incorporating this in my dataset?\r\n\r\n1. The sentence column will have the above statement multiple times until all relations and entities are captured. The entity and label columns will change as per entities.\r\n2. Making this a multi label problem -- (which is more tricky)\r\n\r\nWould love to know your approach on this. Thanks!",
"> How should i go about incorporating this in my dataset?\r\n\r\nI think you need to create several training examples for this single sentence. Each training example should be <sentence, entity 1, entity 2, relationship>. So indeed, option 1 is what I would do. \r\n\r\nThere are other approaches to relation extraction, in which one applies a binary classifier to each possible pair of entities (an example is [this paper](https://www.sciencedirect.com/science/article/abs/pii/S095741741830455X?via%3Dihub)). However, LUKE doesn't work that way.",
"> I think you need to create several training examples for this single sentence. Each training example should be <sentence, entity 1, entity 2, relationship>. So indeed, option 1 is what I would do.\r\n\r\nI have gone ahead with the LUKE approach.\r\n\r\nThe [TACRED dataset](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf) has 79.5% relation labels as 'no_relation'.\r\nThis seems logical because not every sentence consists of relations and also reduces false positives. (my model will be tested against newspaper articles, blogs, wiki text, etc)\r\n\r\nI have two doubts:\r\n1. Whilst making a custom dataset (like the one in your notebook) should we also include sentences that have no relations between entities? What percentage of no_relation labels would you suggest for our custom dataset ?\r\n\r\n2. How should we go about labelling this sentence: (such sentences are common in news articles or excerpts from interviews)\r\n\r\n\"Pep Guardiola was unhappy with the passing during the game.\"\r\n\r\nThis has only one entity (entity1 = PERSON). Do we consider this sentence since entity2 would be empty?\r\nWe have been discarding these as of now.\r\n",
"> Whilst making a custom dataset (like the one in your notebook) should we also include sentences that have no relations between entities? What percentage of no_relation labels would you suggest for our custom dataset ?\r\n\r\nYes, for sure. In that way, you can let the model learn that there are also a lot of sentences where there's no relationship between 2 entities. Probably, the percentage of no_relation labels depends on your domain, but it will probably be the most occuring class.\r\n\r\n> How should we go about labelling this sentence: (such sentences are common in news articles or excerpts from interviews)\r\n\r\nI don't think you need to add sentences that only have a single entity, you can simply discard these."
] | 1,622 | 1,630 | 1,622 | NONE | null | # π Feature request
Add a new pipeline option for the Relation Extraction task: `nlp = pipeline('relation-extraction')`
## Motivation
Relation Extraction between named entities is a well-known NLP task. For example, when you extract entities related to medications (let's say our entity types are DRUG and FORM (tablet, capsule, etc.)), you want to know which FORM entity goes with which DRUG entity, etc.
Reference: https://portal.dbmi.hms.harvard.edu/projects/n2c2-2018-t2/
This task is not limited to the biomedical domain.
## Your contribution
I still need to play more with the HF API to contribute!
But, as I see it, the pipeline would return a list of dictionaries, each dictionary representing an identified relation in the text.
The relation extraction model would probably sit on top of the NER model.
There are implementations of such models [here](https://nlpprogress.com/english/relationship_extraction.html).
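To make the request concrete, here is a hypothetical sketch of the output format described above; the task name, keys, and values are all illustrative, not an existing API:
```python
# Illustrative only: a hypothetical result for the text "Aspirin 100 mg tablet".
proposed_output = [
    {
        "head": {"word": "Aspirin", "entity": "DRUG", "start": 0, "end": 7},
        "tail": {"word": "tablet", "entity": "FORM", "start": 15, "end": 21},
        "relation": "HAS_FORM",  # hypothetical relation label
        "score": 0.97,
    }
]
```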
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11888/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11887/comments | https://api.github.com/repos/huggingface/transformers/issues/11887/events | https://github.com/huggingface/transformers/issues/11887 | 902,370,782 | MDU6SXNzdWU5MDIzNzA3ODI= | 11,887 | Wrong subword aggregation when using aggregation_strategy | {
"login": "Guidofaassen",
"id": 6339558,
"node_id": "MDQ6VXNlcjYzMzk1NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6339558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guidofaassen",
"html_url": "https://github.com/Guidofaassen",
"followers_url": "https://api.github.com/users/Guidofaassen/followers",
"following_url": "https://api.github.com/users/Guidofaassen/following{/other_user}",
"gists_url": "https://api.github.com/users/Guidofaassen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guidofaassen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guidofaassen/subscriptions",
"organizations_url": "https://api.github.com/users/Guidofaassen/orgs",
"repos_url": "https://api.github.com/users/Guidofaassen/repos",
"events_url": "https://api.github.com/users/Guidofaassen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guidofaassen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I suspect we might have an issue recognizing subwords when using the BPE tokenizer.\r\nUsing a BERT based model works as expected:\r\n```\r\n>> nlp = transformers.pipeline('ner', model='wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner', aggregation_strategy='first')\r\n>> nlp(\"Groenlinks praat over Schiphol.\")\r\n[{'entity_group': 'org',\r\n 'score': 0.99999315,\r\n 'word': 'Groenlinks',\r\n 'start': 0,\r\n 'end': 10},\r\n {'entity_group': 'loc',\r\n 'score': 0.9999975,\r\n 'word': 'Schiphol',\r\n 'start': 22,\r\n 'end': 30}]\r\n```\r\n\r\nI'll take a closer look at the logic when using BPE sub-tokens and report back.",
"@LysandreJik @sgugger The problem is how we decide whether or not a token is a subword [here](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py#L273),\r\nwhere we compare the token length with the corresponding span in the original text.\r\nFor WordPiece this works because `Groenlinks` is tokenized as `['Groen', '##link', '##s']`, so the last two tokens are tagged as subwords. However BPE tokenizes as `['_Groen', 'link', 's']`, so we incorrectly tag `_Groen` as subword and the other two tokens as words.",
"Similar to https://github.com/huggingface/transformers/issues/11794",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Any update on this issue?",
"@francescorubbo thanks for investigating this. So the `gather_pre_entities` function of the NER pipeline needs an update to also work with BPE tokenizers. \r\n\r\ncc @Narsil \r\n\r\nDo you mind opening a PR to support BPE tokenizers?",
"I think the problem is tricky, I don't think it is properly linked to BPE, but more for tokenizers that are word aware vs not.\r\n\r\nRight now, we use tokenizers that use `continuing_subword_prefix` to determine if a token is a subword.\r\nI don't think there is a \"correct\" way to do that with byte level BPE like gpt2 (roberta) as they don't posess the notion of \"word\".\r\n\r\nAs mentioned in a previous issue, if we can find a good heuristic that would be great, but byte BPE can :\r\n- have space as a prefix or suffix \r\n- use a different char than ' ' for space (_ for spm, `G for gpt2)\r\n- Potentially contain spaces (hence different words) within a single token (although I don't think I've seen it done for major tokenizers)\r\n\r\nSo classifying subwords for these tokenizers is always going to be tricky. We could however disable \"word\"-based strategies for tokenizers that do no provide \"continuing_subword_prefix\". It would be more correct (but less usable for sure)",
"I result is still incorrect after the new changes @Narsil.\r\n\r\nGiven the nightly build: '4.10.0.dev0'\r\n\r\nWith given code\r\n\r\n```\r\nnlp = pipeline('ner', model='xlm-roberta-large-finetuned-conll02-dutch', grouped_entities=True)\r\nsentence = \"Groenlinks praat over Schiphol.\"\r\nnlp(sentence)\r\n```\r\n\r\nyields\r\n\r\n```\r\n[{'entity_group': 'ORG',\r\n 'score': 0.98522276,\r\n 'word': 'Groenlink',\r\n 'start': 0,\r\n 'end': 9},\r\n {'entity_group': 'LOC',\r\n 'score': 0.9999288,\r\n 'word': 'Schi',\r\n 'start': 22,\r\n 'end': 26},\r\n {'entity_group': 'LOC',\r\n 'score': 0.99987257,\r\n 'word': 'hol',\r\n 'start': 27,\r\n 'end': 30}]\r\n```\r\n\r\nThe subwords are still not merged correctly as the found entities do not exist in the original text. I also tried setting `aggregation_strategy=AggregationStrategy.SIMPLE`, but that did not help either. Am I doing something wrong?\r\n\r\n"
] | 1,622 | 1,627 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Windows
- Python version: 3.9.4
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Narsil
@francescorubbo
@elk-cloner
## Information
xlm-roberta-large-finetuned-conll02-dutch
The problem arises when using aggregation_strategy.
## To reproduce
Steps to reproduce the behavior:
Given this code:
```python
from transformers import pipeline

sentence = "Groenlinks praat over Schiphol."
nlp = pipeline('ner', model='xlm-roberta-large-finetuned-conll02-dutch')
nlp(sentence)
```
I get the following result:
```
[{'entity': 'B-ORG',
'score': 0.9769433,
'index': 1,
'word': 'βGroen',
'start': 0,
'end': 5},
{'entity': 'I-ORG',
'score': 0.9935022,
'index': 2,
'word': 'link',
'start': 5,
'end': 9},
{'entity': 'B-LOC',
'score': 0.9999288,
'index': 6,
'word': 'βSchi',
'start': 22,
'end': 26},
{'entity': 'I-LOC',
'score': 0.99987257,
'index': 8,
'word': 'hol',
'start': 27,
'end': 30}]
```
We received subwords, where I would prefer to have real words. I found that `aggregation_strategy` was added in the latest release (master branch, 4.7.0.dev0). In an attempt to fix this, I tried this:
```python
sentence = "Groenlinks praat over Schiphol."
nlp = pipeline('ner', model='xlm-roberta-large-finetuned-conll02-dutch', aggregation_strategy="max")
nlp(sentence)
```
Which yields:
```
[{'entity_group': 'ORG',
'score': 0.98522276,
'word': 'Groenlink',
'start': 0,
'end': 9},
{'entity_group': 'LOC',
'score': 0.99987257,
'word': 'hol',
'start': 27,
'end': 30}]
```
## Expected behavior
This differs from what I expected, as the subwords are merged incorrectly: neither `Groenlink` nor `hol` appears in the original sentence. I would expect this:
```
[{'entity_group': 'ORG',
'score': 0.98522276,
'word': 'Groenlinks',
'start': 0,
'end': 9},
{'entity_group': 'LOC',
'score': 0.99987257,
'word': 'Schiphol',
'start': 27,
'end': 30}]
```
Do you have any idea how to fix this?
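For anyone debugging this, a small diagnostic sketch; the exact token strings in the comment are my guess at the BPE output:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll02-dutch")
print(tok.tokenize("Groenlinks praat over Schiphol."))
# Expected something like: ['▁Groen', 'link', 's', '▁praat', '▁over', '▁Schi', 'p', 'hol', '.']
# Word-initial pieces carry the '▁' marker and continuations do not, so a
# length-based subword heuristic tags the wrong pieces as subwords.
```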
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11887/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11886/comments | https://api.github.com/repos/huggingface/transformers/issues/11886/events | https://github.com/huggingface/transformers/pull/11886 | 902,270,929 | MDExOlB1bGxSZXF1ZXN0NjUzNjU2NzE5 | 11,886 | [Flax] Allow dataclasses to be jitted | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Previously it was not possible to jit HF's `ModelOutput`. Changing `dataclass` to `flax.struct.dataclass` makes this possible.
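For illustration, a minimal sketch of why this works, with an assumed toy class name: a plain `dataclass` is not registered as a JAX pytree, so it cannot be returned from a jitted function, while a `flax.struct.dataclass` can.
```python
import jax
import jax.numpy as jnp
from flax import struct

@struct.dataclass
class Output:  # toy stand-in for an HF ModelOutput
    logits: jnp.ndarray

@jax.jit
def forward(x):
    return Output(logits=x * 2)

print(forward(jnp.ones((2, 3))).logits)  # a plain dataclass here would fail
```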
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11886/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11886",
"html_url": "https://github.com/huggingface/transformers/pull/11886",
"diff_url": "https://github.com/huggingface/transformers/pull/11886.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11886.patch",
"merged_at": 1622037673000
} |
https://api.github.com/repos/huggingface/transformers/issues/11885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11885/comments | https://api.github.com/repos/huggingface/transformers/issues/11885/events | https://github.com/huggingface/transformers/issues/11885 | 902,258,244 | MDU6SXNzdWU5MDIyNTgyNDQ= | 11,885 | Find the requested files in the cached path without the internet | {
"login": "Bachstelze",
"id": 19904888,
"node_id": "MDQ6VXNlcjE5OTA0ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19904888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bachstelze",
"html_url": "https://github.com/Bachstelze",
"followers_url": "https://api.github.com/users/Bachstelze/followers",
"following_url": "https://api.github.com/users/Bachstelze/following{/other_user}",
"gists_url": "https://api.github.com/users/Bachstelze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bachstelze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bachstelze/subscriptions",
"organizations_url": "https://api.github.com/users/Bachstelze/orgs",
"repos_url": "https://api.github.com/users/Bachstelze/repos",
"events_url": "https://api.github.com/users/Bachstelze/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bachstelze/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can save your model and tokenizer to a directory using `save_pretrained` and load them from there! You only need to specify the directory path to the model/tokenizer arguments you pass to the `pipeline` method.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | # π Feature request
The pipeline makes a network request to resolve the cached path even when the files are already cached locally, and each request adds latency.
## Motivation
Could we search only localy for a better performance?
## Your contribution
My test code:
```
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-de-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-de-en")
de_en_translator = pipeline("translation_de_to_en", model=model, tokenizer=tokenizer)
translation = de_en_translator("Ein kleiner Test.")
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11885/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11884/comments | https://api.github.com/repos/huggingface/transformers/issues/11884/events | https://github.com/huggingface/transformers/issues/11884 | 902,170,031 | MDU6SXNzdWU5MDIxNzAwMzE= | 11,884 | Mask token mismatch with the model on hosted inference API of Model Hub | {
"login": "Ethan-yt",
"id": 9592150,
"node_id": "MDQ6VXNlcjk1OTIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9592150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ethan-yt",
"html_url": "https://github.com/Ethan-yt",
"followers_url": "https://api.github.com/users/Ethan-yt/followers",
"following_url": "https://api.github.com/users/Ethan-yt/following{/other_user}",
"gists_url": "https://api.github.com/users/Ethan-yt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ethan-yt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ethan-yt/subscriptions",
"organizations_url": "https://api.github.com/users/Ethan-yt/orgs",
"repos_url": "https://api.github.com/users/Ethan-yt/repos",
"events_url": "https://api.github.com/users/Ethan-yt/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ethan-yt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In the past, it was no error.\r\n<img width=\"510\" alt=\"lm-demo\" src=\"https://user-images.githubusercontent.com/9592150/120172538-bd4e1b80-c235-11eb-8576-6446a2dd0ed8.png\">\r\nI don't know when it starts to emit an error.\r\n<img width=\"544\" alt=\"image\" src=\"https://user-images.githubusercontent.com/9592150/120172600-ce972800-c235-11eb-91b0-a231cfaf4f5f.png\">\r\n",
"I've fixed it by explicitly specifying the `mask_token` in your model card metadata: https://huggingface.co/ethanyt/guwenbert-base/commit/30aaff24928389096312600511a9ca2fad1b3974",
"thanks for reporting!",
"> thanks for reporting!\r\n\r\nThanks!\r\n\r\n"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ### Who can help
@LysandreJik
@julien-c
@mfuntowicz
## Information
In my model card: https://huggingface.co/ethanyt/guwenbert-base, I used to be able to run the hosted inference successfully, but recently it started showing an error: `"<mask>" must be present in your input.`
My model uses a RoBERTa MLM head with a BERT tokenizer, so the mask token is actually "[MASK]". I have already set it in `tokenizer_config.json`, but the inference API still does not match it.
In the past it worked, but recently it started to raise an error. It seems the front end now double-checks the mask token. How can I set the mask token appropriately? Is there documentation on setting the mask token for the inference API?
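For reference, the maintainer fix linked in the comments above declares the mask token in the model card's YAML metadata; a minimal sketch (the key name follows that commit, the rest of the card is omitted):
```yaml
---
# other model card metadata ...
mask_token: "[MASK]"
---
```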
Thanks!
## To reproduce
Steps to reproduce the behavior:
1. Go to https://huggingface.co/ethanyt/guwenbert-base
2. Run an example with "[MASK]"
## Expected behavior
In the past it worked; see the snapshot in https://github.com/ethan-yt/guwenbert/blob/main/README_EN.md
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11884/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11883/comments | https://api.github.com/repos/huggingface/transformers/issues/11883/events | https://github.com/huggingface/transformers/pull/11883 | 902,139,753 | MDExOlB1bGxSZXF1ZXN0NjUzNTM3MjAz | 11,883 | Add FlaxCLIP | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"- Added jitted tests for `get_image_features` and `get_text_features`\r\n- `__init__(...)` now takes `input_shape` as an arg, when `None` it's set using default values in `config`.\r\n\r\nMerging!"
] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
This PR adds the CLIP model in JAX/Flax. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11883/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11883",
"html_url": "https://github.com/huggingface/transformers/pull/11883",
"diff_url": "https://github.com/huggingface/transformers/pull/11883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11883.patch",
"merged_at": 1622520871000
} |
https://api.github.com/repos/huggingface/transformers/issues/11882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11882/comments | https://api.github.com/repos/huggingface/transformers/issues/11882/events | https://github.com/huggingface/transformers/issues/11882 | 902,129,696 | MDU6SXNzdWU5MDIxMjk2OTY= | 11,882 | BertForMaskedLM training fails when using iterable eval_dataset and DataCollatorForLanguageModeling collator. | {
"login": "joerenner",
"id": 11395913,
"node_id": "MDQ6VXNlcjExMzk1OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11395913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joerenner",
"html_url": "https://github.com/joerenner",
"followers_url": "https://api.github.com/users/joerenner/followers",
"following_url": "https://api.github.com/users/joerenner/following{/other_user}",
"gists_url": "https://api.github.com/users/joerenner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joerenner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joerenner/subscriptions",
"organizations_url": "https://api.github.com/users/joerenner/orgs",
"repos_url": "https://api.github.com/users/joerenner/repos",
"events_url": "https://api.github.com/users/joerenner/events{/privacy}",
"received_events_url": "https://api.github.com/users/joerenner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Might be of interest to @sgugger ",
"Fixed by #11890"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.8.0-48-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Library:
- tokenizers: @LysandreJik
- trainer: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Instantiate a Trainer with a BertForMaskedLM model, an iterable dataset passed in as the "eval_dataset", and DataCollatorForLanguageModeling as the collator.
2. Call train()
../../.pyenv/versions/3.7.9/lib/python3.7/site-packages/transformers/trainer.py:1334: in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
../../.pyenv/versions/3.7.9/lib/python3.7/site-packages/transformers/trainer.py:1405: in _maybe_log_save_evaluate
metrics = self.evaluate()
../../.pyenv/versions/3.7.9/lib/python3.7/site-packages/transformers/trainer.py:2011: in evaluate
output.metrics.update(speed_metrics(metric_key_prefix, start_time, output.num_samples))
```python
def speed_metrics(split, start_time, num_samples=None):
runtime = time.time() - start_time
result = {f"{split}_runtime": round(runtime, 4)}
if num_samples is not None:
samples_per_second = 1 / (runtime / num_samples) # ZeroDivisionError: float division by zero here
```
When evaluation_loop() gets called with an iterable eval dataset, it uses the "observed_num_examples" value to return the number of samples:
https://github.com/huggingface/transformers/blob/a9c797f93de97984771b7b902ce1e6b0aed98f96/src/transformers/trainer.py#L2155
```python
observed_num_examples = 0
# Main evaluation loop
for step, inputs in enumerate(dataloader):
# Update the observed num examples
observed_batch_size = find_batch_size(inputs)
if observed_batch_size is not None:
observed_num_examples += observed_batch_size
```
The problem is, transformers.trainer_pt_utils.find_batch_size fails to find the correct batch size if the input is a BatchEncoding object (which is what DataCollatorForLanguageModeling returns if it is passed a dict or BatchEncoding):
https://github.com/huggingface/transformers/blob/0b0a598452b02278075a75f84b5ca7bb457224ad/src/transformers/trainer_pt_utils.py#L106
```python
def find_batch_size(tensors):
"""
Find the first dimension of a tensor in a nested list/tuple/dict of tensors.
"""
if isinstance(tensors, (list, tuple)):
for t in tensors:
result = find_batch_size(t)
if result is not None:
return result
elif isinstance(tensors, dict): # <--- returns false if "tensors" is BatchEncoding, should maybe return True?
for key, value in tensors.items():
result = find_batch_size(value)
if result is not None:
return result
elif isinstance(tensors, torch.Tensor):
return tensors.shape[0] if len(tensors.shape) >= 1 else None
elif isinstance(tensors, np.ndarray):
return tensors.shape[0] if len(tensors.shape) >= 1 else None
```
This causes the observed_num_examples variable never to be updated, and since the input dataset is iterable, the output of evaluation_loop() has the "num_samples" variable set to 0:
https://github.com/huggingface/transformers/blob/a9c797f93de97984771b7b902ce1e6b0aed98f96/src/transformers/trainer.py#L2212
```python
if not isinstance(eval_dataset, IterableDataset):
num_samples = len(eval_dataset)
elif isinstance(eval_dataset, IterableDatasetShard):
num_samples = eval_dataset.num_examples
else:
num_samples = observed_num_examples # observed_num_examples is falsely set to 0
```
which leads to the ZeroDivisionError shown above.
This should be a quick fix in the find_batch_size function unless I am mistaken.
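A minimal, self-contained sketch of one possible fix; this is my assumption of the intended change, not necessarily the patch that ends up merged (the numpy branch is omitted for brevity):
```python
from collections.abc import Mapping

import torch

def find_batch_size_patched(tensors):
    """Like find_batch_size, but treats any Mapping (dict, BatchEncoding) alike."""
    if isinstance(tensors, (list, tuple)):
        for t in tensors:
            result = find_batch_size_patched(t)
            if result is not None:
                return result
    elif isinstance(tensors, Mapping):  # BatchEncoding subclasses UserDict -> Mapping
        for value in tensors.values():
            result = find_batch_size_patched(value)
            if result is not None:
                return result
    elif isinstance(tensors, torch.Tensor):
        return tensors.shape[0] if len(tensors.shape) >= 1 else None
    return None
```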
## Expected behavior
The training finishes with no error.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11882/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11881/comments | https://api.github.com/repos/huggingface/transformers/issues/11881/events | https://github.com/huggingface/transformers/issues/11881 | 901,658,668 | MDU6SXNzdWU5MDE2NTg2Njg= | 11,881 | Adding new Jax Models. | {
"login": "jayendra13",
"id": 651057,
"node_id": "MDQ6VXNlcjY1MTA1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayendra13",
"html_url": "https://github.com/jayendra13",
"followers_url": "https://api.github.com/users/jayendra13/followers",
"following_url": "https://api.github.com/users/jayendra13/following{/other_user}",
"gists_url": "https://api.github.com/users/jayendra13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayendra13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayendra13/subscriptions",
"organizations_url": "https://api.github.com/users/jayendra13/orgs",
"repos_url": "https://api.github.com/users/jayendra13/repos",
"events_url": "https://api.github.com/users/jayendra13/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayendra13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi there, thank you for your interest in JAX.\r\n\r\nWe plan to add as many models as possible in JAX/Flax. Right now we are working on improving the JAX support in the lib, better JAX/Flax tests, generation, cookie-cutter templates etc so that it'll become easier to add more models faster.\r\n\r\nPlease stay tuned, we'll soon share more details :)\r\n"
] | 1,621 | 1,623 | 1,623 | CONTRIBUTOR | null | Is there any board to track how many of the current models have a JAX implementation?
I would like to contribute by adding JAX implementations for the remaining ones; which model can I take to start? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11881/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11881/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11880/comments | https://api.github.com/repos/huggingface/transformers/issues/11880/events | https://github.com/huggingface/transformers/issues/11880 | 901,469,644 | MDU6SXNzdWU5MDE0Njk2NDQ= | 11,880 | KeyError: 'labels' in distill_classifier.py | {
"login": "adamFinastra",
"id": 46481258,
"node_id": "MDQ6VXNlcjQ2NDgxMjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/46481258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adamFinastra",
"html_url": "https://github.com/adamFinastra",
"followers_url": "https://api.github.com/users/adamFinastra/followers",
"following_url": "https://api.github.com/users/adamFinastra/following{/other_user}",
"gists_url": "https://api.github.com/users/adamFinastra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adamFinastra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamFinastra/subscriptions",
"organizations_url": "https://api.github.com/users/adamFinastra/orgs",
"repos_url": "https://api.github.com/users/adamFinastra/repos",
"events_url": "https://api.github.com/users/adamFinastra/events{/privacy}",
"received_events_url": "https://api.github.com/users/adamFinastra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am facing the same issue and cannot run the google colab examples either. Any help is appreciated! ",
"@joeddav was there a specific point in time to clone the repo from to get the scripts to run or anything recent that has changed which might have broken the code?",
"Experienced the same issue with labels trained on a custom dataset.\r\n\r\n## Environment info\r\ntorch 1.8.1+cu111\r\ntqdm 4.49.0\r\ntransformers 4.5.1\r\nPython 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]\r\nWindows-10-10.0.17763-SP0\r\n\r\n## Issue\r\n\r\nExecuting this cell:\r\n```\r\n!python transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py \\\r\n--data_file email.txt \\\r\n--class_names_file class_names.txt \\\r\n--hypothesis_template \"This text is about {}.\" \\\r\n--student_name_or_path distilbert-base-uncased \\\r\n--output_dir ./distilbert-base-uncased-notino-student\r\n```\r\n\r\n\r\nI'm getting this output:\r\n```\r\n06/09/2021 15:12:04 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 3distributed training: False, 16-bits training: False\r\n06/09/2021 15:12:04 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments(output_dir='./distilbert-base-uncased-notino-student', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=128, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs\\\\Jun09_15-12-04_dcvmdwhanl03', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=0, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./distilbert-base-uncased-notino-student', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name='length', report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, mp_parameters='')\r\n\r\n...\r\n\r\n100%|##########| 128069/128069 [00:47<00:00, 2710.05ex/s]\r\n[INFO|trainer.py:490] 2021-06-09 18:27:26,719 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text.\r\n[INFO|trainer.py:1013] 2021-06-09 18:27:27,005 >> ***** Running training *****\r\n[INFO|trainer.py:1014] 2021-06-09 18:27:27,011 >> Num examples = 128069\r\n[INFO|trainer.py:1015] 2021-06-09 18:27:27,016 >> Num Epochs = 1\r\n[INFO|trainer.py:1016] 2021-06-09 18:27:27,022 >> Instantaneous batch size per device = 32\r\n[INFO|trainer.py:1017] 2021-06-09 18:27:27,028 >> Total train batch size (w. 
parallel, distributed & accumulation) = 96\r\n[INFO|trainer.py:1018] 2021-06-09 18:27:27,034 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1019] 2021-06-09 18:27:27,040 >> Total optimization steps = 1335\r\n[INFO|integrations.py:586] 2021-06-09 18:27:27,791 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Currently logged in as: dbs700 (use `wandb login --relogin` to force relogin)\r\nwandb: wandb version 0.10.31 is available! To upgrade, please run:\r\nwandb: $ pip install wandb --upgrade\r\nwandb: Tracking run with wandb version 0.10.30\r\nwandb: Syncing run ./distilbert-base-uncased-notino-student\r\nwandb: View project at https://wandb.ai/dbs700/huggingface\r\nwandb: View run at https://wandb.ai/dbs700/huggingface/runs/14c4hinu\r\nwandb: Run data is saved locally in C:\\Users\\dmitrii.storozhenko\\wandb\\run-20210609_182747-14c4hinu\r\nwandb: Run `wandb offline` to turn off syncing.\r\n\r\n 0%| | 0/1335 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py\", line 338, in <module>\r\n main()\r\n File \"transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py\", line 328, in main\r\n trainer.train()\r\n File \"C:\\Users\\dmitrii.storozhenko\\Anaconda3\\lib\\site-packages\\transformers\\trainer.py\", line 1120, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"C:\\Users\\dmitrii.storozhenko\\Anaconda3\\lib\\site-packages\\transformers\\trainer.py\", line 1524, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py\", line 119, in compute_loss\r\n target_p = inputs[\"labels\"]\r\nKeyError: 'labels'\r\nwandb: Waiting for W&B process to finish, PID 5992\r\nwandb: Program failed with code 1. Press ctrl-c to abort syncing.\r\n```",
"Hi, sorry for the slow response. This is due to [a breaking change in the Datasets API](https://github.com/huggingface/datasets/releases/tag/1.6.2). I'll need to update the script accordingly. In the meantime, use datasets <= 1.6.1 and that should solve the problem.",
"That did the trick! "
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.6
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Issue
I am trying to run the distill_classifier.py script from transformers/examples/research_projects/zero-shot-distillation/ with my own text data set and labels on the roberta-large-mnli model. There are a few hundred rows of text and 13 class labels. I am running the following in a cell of my notebook:
```
!python transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py \
--data_file ./distill_data/train_unlabeled.txt \
--class_names_file ./distill_data/class_names.txt \
--teacher_name_or_path roberta-large-mnli \
--hypothesis_template "This text is about {}." \
--output_dir ./my_student/distilled
```
The script starts to run but after a short while I receive the following error:
```
Trainer is attempting to log a value of "{'Science': 0, 'Math': 1, 'Social Studies': 2, 'Language Arts': 3, 'Statistics': 4, 'Calculus': 5, 'Linear Algebra': 6, 'Probability': 7, 'Chemistry': 8, 'Biology': 9, 'Supply chain management': 10, 'Economics': 11, 'Pottery': 12}"
for key "label2id" as a parameter.
MLflow's log_param() only accepts values no longer than 250 characters so we dropped this attribute.
0%| | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 338, in <module>
main()
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 328, in main
trainer.train()
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1272, in train
tr_loss += self.training_step(model, inputs)
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1734, in training_step
loss = self.compute_loss(model, inputs)
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 119, in compute_loss
target_p = inputs["labels"]
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 231, in __getitem__
return self.data[item]
KeyError: 'labels'
0%| | 0/7 [00:01<?, ?it/s]
```
I have re-examined my label files and am exactly following this guide for distill_classifier.py:
https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing&utm_campaign=Hugging%2BFace&utm_medium=web&utm_source=Hugging_Face_8#scrollTo=ECt06ndcnpyb
Any help would be appreciated to distill!
**Edit:** Updated torch to the latest version and am receiving the same error. I reduced the number of classes from 24 to 13 and still have this issue. When I print `inputs` in the compute-loss function, it looks like there is no key for labels:
```
{'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]]), 'input_ids': tensor([[ 101, 1999, 2262, ..., 0, 0, 0],
[ 101, 4117, 2007, ..., 0, 0, 0],
[ 101, 2130, 2295, ..., 0, 0, 0],
...,
[ 101, 1999, 2760, ..., 0, 0, 0],
[ 101, 2057, 6614, ..., 0, 0, 0],
[ 101, 2057, 1521, ..., 0, 0, 0]])}
```
Is there an additional parameter that is needed to assign the labels?
**Edit 2:** I just let the Colab notebook "Distilling Zero Shot Classification.ipynb" run for a few hours and am receiving the same error with the agnews dataset. It looks like the code in the Colab notebook may be incompatible with some of its dependencies.
**Edit 3:** I have changed datasets and reduced to 3 classes and tried to add the label_names argument
`--label_names ["Carbon emissions", "Energy efficiency", "Water scarcity"]
`
my ./distill_data/class_names.txt file looks like:
```
Carbon Emissions
Energy Efficiency
Water Scarcity
```
and am still facing the same error.
### Who can help
@LysandreJik
@sgugger
@joeddav
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11880/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11879/comments | https://api.github.com/repos/huggingface/transformers/issues/11879/events | https://github.com/huggingface/transformers/issues/11879 | 901,460,002 | MDU6SXNzdWU5MDE0NjAwMDI= | 11,879 | Trainer : AttributeError: 'str' object has no attribute '_memory_tracker' | {
"login": "KhalidAlt",
"id": 49135754,
"node_id": "MDQ6VXNlcjQ5MTM1NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/49135754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KhalidAlt",
"html_url": "https://github.com/KhalidAlt",
"followers_url": "https://api.github.com/users/KhalidAlt/followers",
"following_url": "https://api.github.com/users/KhalidAlt/following{/other_user}",
"gists_url": "https://api.github.com/users/KhalidAlt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KhalidAlt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KhalidAlt/subscriptions",
"organizations_url": "https://api.github.com/users/KhalidAlt/orgs",
"repos_url": "https://api.github.com/users/KhalidAlt/repos",
"events_url": "https://api.github.com/users/KhalidAlt/events{/privacy}",
"received_events_url": "https://api.github.com/users/KhalidAlt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `Trainer` will resume from the last epoch and continue the learning... if you create it. You are using the class directly, not a `Trainer` object."
] | 1,621 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...):
T5ForConditionalGeneration
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Training a T5 from scratch over 20GB of data for one epoch and saving the model checkpoints using the Trainer library
2. Trying to resume with `resume_from_checkpoint="./checkpoint-1320000"`
The code :
```
model=T5ForConditionalGeneration.from_pretrained('./checkpoint-1320000/')
%%time
Trainer.train("./T5_model_result/checkpoint-1320000/")
```
The Error message :
```
AttributeError Traceback (most recent call last)
<timed eval> in <module>
~/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
998
999 # memory metrics - must set up as early as possible
-> 1000 self._memory_tracker.start()
1001
1002 args = self.args
AttributeError: 'str' object has no attribute '_memory_tracker'
```
## Expected behavior
The Trainer library should resume from the last epoch and continue training.
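**Edit:** as pointed out in the comments, `train` has to be called on a `Trainer` *instance*, not on the class. A minimal sketch of the intended usage (the training arguments and `train_dataset` below are placeholders, not my exact setup):
```python
from transformers import Trainer, TrainingArguments

# Placeholder arguments; train_dataset is assumed to exist.
training_args = TrainingArguments(output_dir="./T5_model_result")
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train(resume_from_checkpoint="./T5_model_result/checkpoint-1320000/")
```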
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11879/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11878/comments | https://api.github.com/repos/huggingface/transformers/issues/11878/events | https://github.com/huggingface/transformers/pull/11878 | 901,354,613 | MDExOlB1bGxSZXF1ZXN0NjUyODIwODQ3 | 11,878 | [Wav2Vec2ForCTC] example typo fixed | {
"login": "madprogramer",
"id": 3719664,
"node_id": "MDQ6VXNlcjM3MTk2NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3719664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madprogramer",
"html_url": "https://github.com/madprogramer",
"followers_url": "https://api.github.com/users/madprogramer/followers",
"following_url": "https://api.github.com/users/madprogramer/following{/other_user}",
"gists_url": "https://api.github.com/users/madprogramer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madprogramer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madprogramer/subscriptions",
"organizations_url": "https://api.github.com/users/madprogramer/orgs",
"repos_url": "https://api.github.com/users/madprogramer/repos",
"events_url": "https://api.github.com/users/madprogramer/events{/privacy}",
"received_events_url": "https://api.github.com/users/madprogramer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue) n/a
Fixed Typo:
In the example code for `transformers.Wav2Vec2ForCTC`, the loss was being computed on `transcription` instead of the `target_transcription` variable. An acquaintance of mine noticed the error, and also that it had already been corrected elsewhere, namely in a [code snippet for a fairseq example](https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md).
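For reference, the corrected lines in context look roughly like this (reconstructed from the docstring, so treat it as a sketch rather than the exact diff; `processor`, `model` and `input_values` are defined earlier in that docstring):
```python
# Compute the loss against the intended target, not the model's own transcription.
target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
labels = processor(target_transcription, return_tensors="pt").input_ids
loss = model(input_values, labels=labels).loss
```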
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11878/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11878",
"html_url": "https://github.com/huggingface/transformers/pull/11878",
"diff_url": "https://github.com/huggingface/transformers/pull/11878.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11878.patch",
"merged_at": 1621976774000
} |
https://api.github.com/repos/huggingface/transformers/issues/11877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11877/comments | https://api.github.com/repos/huggingface/transformers/issues/11877/events | https://github.com/huggingface/transformers/issues/11877 | 901,351,725 | MDU6SXNzdWU5MDEzNTE3MjU= | 11,877 | basic_tokenizer don't preserve token encoding/format | {
"login": "valtheval",
"id": 24987833,
"node_id": "MDQ6VXNlcjI0OTg3ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/24987833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valtheval",
"html_url": "https://github.com/valtheval",
"followers_url": "https://api.github.com/users/valtheval/followers",
"following_url": "https://api.github.com/users/valtheval/following{/other_user}",
"gists_url": "https://api.github.com/users/valtheval/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valtheval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valtheval/subscriptions",
"organizations_url": "https://api.github.com/users/valtheval/orgs",
"repos_url": "https://api.github.com/users/valtheval/repos",
"events_url": "https://api.github.com/users/valtheval/events{/privacy}",
"received_events_url": "https://api.github.com/users/valtheval/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! The basic tokenizer is only a building block of the `BertTokenizer` - and it was not intended to be used independently.\r\n\r\nWhat are you trying to achieve especially, that the `BertTokenizer` cannot?\r\n\r\nUsually, it is best to assume that the tokenizer should be used as it is configured on the hub - as it is with that tokenizer that the model was trained, and staying consistent is important to obtain good results.",
"Hi thanks for the reply.\r\n\r\nMy original problem is that I want to decode bert token ids (int --> word)\r\nIf I use BertTokenizer it sometimes generates [UNK] token which can't be decode back to the original token (the real word that produced the [UNK] token). I then use basic_tokenizer to have the list of raw token and replace [UNK] by the right token using its index in the sentence. But I am facing inconsistancies\r\n\r\nHere is an example:\r\n```\r\nraw_sent = \"patches to aid safetyΓ’\\x80\\x95emphasizing\"\r\nids = bert_tokenizer.encode(raw_sent)\r\ntokens = bert_tokenizer.decode(ids)\r\nprint(ids)\r\nprint(tokens)\r\n```\r\ngives:\r\n```\r\n[2, 13453, 1701, 6974, 1, 3]\r\n'[CLS] patches to aid [UNK] [SEP]'\r\n```\r\nIn my pipeline I receive the raw sentence and the list of ids (int), I want to figure out wich word in the sentence produced the [UNK] token.\r\n\r\nI do:\r\n```\r\nbasic_token = bert_tokenizer.basic_tokenizer.tokenize(raw_sent)\r\nprint(basic_token)\r\n['patches', 'to', 'aid', 'safetyΓ’emphasizing']\r\n```\r\nSo I know that id `1` in the list `[2, 13453, 1701, 6974, 1, 3]` corresponds to `safetyΓ’emphasizing`. So here is the problem: `safetyΓ’emphasizing` is different from `safetyΓ’\\x80\\x95emphasizing` in the original sentence which leads to further errors in the following of the pipeline (especially finding de spans of the token)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | Hello all!
## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
- the code was run on jupyter notebook
### Who can help
@LysandreJik
## Issue
I have the following code
```
model = 'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext'
tokenizer = BertTokenizer.from_pretrained(model, do_lower_case=False)
s = 'View and Print-FDA Drug Safety Communication: FDA requiring color changes to Duragesic (fentanyl) pain patches to aid safetyΓ’\x80\x95emphasizing that accidental exposure to used patches can cause death'
```
When using `basic_tokenizer`, it changes the token (the output does not preserve the form/encoding of the original sentence):
```
tokenizer.basic_tokenizer.tokenize(s)
>>> ['View', 'and', 'Print', '-', 'FDA', 'Drug', 'Safety', 'Communication', ':', 'FDA', 'requiring', 'color', 'changes', 'to', 'Duragesic', '(', 'fentanyl', ')', 'pain', 'patches', 'to', 'aid', 'safetyΓ’emphasizing', 'that', 'accidental', 'exposure', 'to', 'used', 'patches', 'can', 'cause', 'death']
```
The original token `safetyΓ’\x80\x95emphasizing` is tokenized as `safetyΓ’emphasizing`.
**Two issues then:**
- Is this the normal behavior? It seems not, or I am using it wrongly.
- There seems to be no documentation about the `basic_tokenizer` object in the Hugging Face documentation.
Any help/explanation would be welcomed :)
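In case it helps frame the question, the direction I am exploring is to map each [UNK] back to the raw text via a fast tokenizer's offset mapping. A minimal sketch, assuming the fast variant of this checkpoint loads (the loop below is illustrative, not my production code):
```python
from transformers import AutoTokenizer

# Assumes the fast (Rust-backed) tokenizer can be built for this checkpoint.
fast_tok = AutoTokenizer.from_pretrained(
    'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext', use_fast=True
)
enc = fast_tok(s, return_offsets_mapping=True)
for tok_id, (start, end) in zip(enc["input_ids"], enc["offset_mapping"]):
    if tok_id == fast_tok.unk_token_id:
        # Recover the exact raw substring that produced the [UNK] token.
        print(repr(s[start:end]))
```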
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11877/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11876/comments | https://api.github.com/repos/huggingface/transformers/issues/11876/events | https://github.com/huggingface/transformers/issues/11876 | 901,215,515 | MDU6SXNzdWU5MDEyMTU1MTU= | 11,876 | Cannot add tokenizer to model repo | {
"login": "Kvit",
"id": 1123272,
"node_id": "MDQ6VXNlcjExMjMyNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1123272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kvit",
"html_url": "https://github.com/Kvit",
"followers_url": "https://api.github.com/users/Kvit/followers",
"following_url": "https://api.github.com/users/Kvit/following{/other_user}",
"gists_url": "https://api.github.com/users/Kvit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kvit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kvit/subscriptions",
"organizations_url": "https://api.github.com/users/Kvit/orgs",
"repos_url": "https://api.github.com/users/Kvit/repos",
"events_url": "https://api.github.com/users/Kvit/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kvit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a known issue on our side. Can you try once more? cc @n1t0 @sterchelen ",
"Same result:\r\n```\r\n---------------------------------------------------------------------------\r\nCalledProcessError Traceback (most recent call last)\r\n/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)\r\n 407 encoding=\"utf-8\",\r\n--> 408 cwd=self.local_dir,\r\n 409 )\r\n\r\n5 frames\r\n/usr/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)\r\n 511 raise CalledProcessError(retcode, process.args,\r\n--> 512 output=stdout, stderr=stderr)\r\n 513 return CompletedProcess(process.args, retcode, stdout, stderr)\r\n\r\nCalledProcessError: Command '['git', 'push']' returned non-zero exit status 1.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-64-82c7360864ec> in <module>()\r\n 1 # add tokenizer to repo\r\n----> 2 tokenizer.push_to_hub(repo_url='https://huggingface.co/vitali/Roberta' , use_auth_token='api_********')\r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in push_to_hub(self, repo_name, repo_url, commit_message, organization, private, use_auth_token)\r\n 1891 organization=organization,\r\n 1892 private=private,\r\n-> 1893 use_auth_token=use_auth_token,\r\n 1894 )\r\n 1895 \r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in _push_to_hub(cls, save_directory, save_files, repo_name, repo_url, commit_message, organization, private, use_auth_token)\r\n 1959 copy_tree(save_directory, tmp_dir)\r\n 1960 \r\n-> 1961 return repo.push_to_hub(commit_message=commit_message)\r\n\r\n/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in push_to_hub(self, commit_message)\r\n 422 self.git_add()\r\n 423 self.git_commit(commit_message)\r\n--> 424 return self.git_push()\r\n\r\n/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)\r\n 410 logger.info(result.stdout)\r\n 411 except subprocess.CalledProcessError as exc:\r\n--> 412 raise EnvironmentError(exc.stderr)\r\n 413 \r\n 414 return self.git_head_commit_url()\r\n\r\nOSError: error: RPC failed; HTTP 403 curl 22 The requested URL returned error: 403 Forbidden\r\nfatal: The remote end hung up unexpectedly\r\nfatal: The remote end hung up unexpectedly\r\nEverything up-to-date\r\n\r\n```",
"Hi @Kvit, I noticed many of your requests being blocked on our side, and should have fixed the problem. Can you try again?",
"It worked this time, thank you."
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- Google Colab Pro Notebook
## To reproduce
Steps to reproduce the behavior:
**From Google Colab Notebook,**
1. Push the model to a new repo.
2. Try to add the tokenizer to the repo using `use_auth_token`:
`tokenizer.push_to_hub(repo_url="https://huggingface.co/vitali/Roberta", use_auth_token="api_****************")`
```
CalledProcessError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)
407 encoding="utf-8",
--> 408 cwd=self.local_dir,
409 )
5 frames
/usr/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
511 raise CalledProcessError(retcode, process.args,
--> 512 output=stdout, stderr=stderr)
513 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['git', 'push']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-51-82c7360864ec> in <module>()
1 # add tokenizer to repo
----> 2 tokenizer.push_to_hub(repo_url='https://huggingface.co/vitali/Roberta' , use_auth_token='api_*******')
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in push_to_hub(self, repo_name, repo_url, commit_message, organization, private, use_auth_token)
1891 organization=organization,
1892 private=private,
-> 1893 use_auth_token=use_auth_token,
1894 )
1895
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in _push_to_hub(cls, save_directory, save_files, repo_name, repo_url, commit_message, organization, private, use_auth_token)
1959 copy_tree(save_directory, tmp_dir)
1960
-> 1961 return repo.push_to_hub(commit_message=commit_message)
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in push_to_hub(self, commit_message)
422 self.git_add()
423 self.git_commit(commit_message)
--> 424 return self.git_push()
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)
410 logger.info(result.stdout)
411 except subprocess.CalledProcessError as exc:
--> 412 raise EnvironmentError(exc.stderr)
413
414 return self.git_head_commit_url()
OSError: error: RPC failed; HTTP 403 curl 22 The requested URL returned error: 403 Forbidden
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
## Expected behavior
Tokenizer should be added to the model repo
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11876/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11875/comments | https://api.github.com/repos/huggingface/transformers/issues/11875/events | https://github.com/huggingface/transformers/issues/11875 | 901,178,456 | MDU6SXNzdWU5MDExNzg0NTY= | 11,875 | [lm examples] replicate --config_overrides addition to other LM examples | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I am getting started with this task. ",
"Thank you, @kumar-abhishek!"
] | 1,621 | 1,623 | 1,623 | CONTRIBUTOR | null | This PR https://github.com/huggingface/transformers/pull/11798, which adds the new `--config_overrides` feature to `run_clm.py`, needs to be replicated for the other scripts under `examples/pytorch/language-modeling/`.
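For orientation, the shape of the change in `run_clm.py` is roughly the following (a sketch from memory; check the linked PR for the exact code):
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelArguments:
    # Accepts comma-separated overrides, e.g. "n_embd=10,resid_pdrop=0.2".
    config_overrides: Optional[str] = field(
        default=None,
        metadata={"help": "Override some default config settings."},
    )

def apply_config_overrides(config, model_args: ModelArguments):
    # PretrainedConfig.update_from_string parses and applies the overrides.
    if model_args.config_overrides is not None:
        config.update_from_string(model_args.config_overrides)
```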
If you choose to work on this small project, please comment that you're working on it.
And thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11875/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11874/comments | https://api.github.com/repos/huggingface/transformers/issues/11874/events | https://github.com/huggingface/transformers/pull/11874 | 901,149,203 | MDExOlB1bGxSZXF1ZXN0NjUyNjM2NTE1 | 11,874 | [AutomaticSpeechRecognitionPipeline] Ensure input tensors are on device | {
"login": "francescorubbo",
"id": 5140987,
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescorubbo",
"html_url": "https://github.com/francescorubbo",
"followers_url": "https://api.github.com/users/francescorubbo/followers",
"following_url": "https://api.github.com/users/francescorubbo/following{/other_user}",
"gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions",
"organizations_url": "https://api.github.com/users/francescorubbo/orgs",
"repos_url": "https://api.github.com/users/francescorubbo/repos",
"events_url": "https://api.github.com/users/francescorubbo/events{/privacy}",
"received_events_url": "https://api.github.com/users/francescorubbo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
Enables using AutomaticSpeechRecognitionPipeline on GPU.
The feature extractor does not create tensors on the appropriate device, so we call `ensure_tensor_on_device` before feeding the processed inputs to the model.
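In rough strokes, the call sequence after the change looks like this (a paraphrased sketch, not the merged diff; attribute names follow the `Pipeline` API):
```python
# Inside the pipeline's forward path (sketch):
processed = self.feature_extractor(
    inputs, sampling_rate=self.feature_extractor.sampling_rate, return_tensors="pt"
)
# Move the extractor's CPU tensors to the pipeline's device before the model call.
processed = self.ensure_tensor_on_device(**processed)
outputs = self.model(**processed)
```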
Fixes #11829
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@LysandreJik are there tests running on GPU? The other pipelines do not seem to test GPU inference, either.
## Who can review?
@LysandreJik, @Narsil
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11874/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11874",
"html_url": "https://github.com/huggingface/transformers/pull/11874",
"diff_url": "https://github.com/huggingface/transformers/pull/11874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11874.patch",
"merged_at": 1622017177000
} |
https://api.github.com/repos/huggingface/transformers/issues/11873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11873/comments | https://api.github.com/repos/huggingface/transformers/issues/11873/events | https://github.com/huggingface/transformers/issues/11873 | 900,970,372 | MDU6SXNzdWU5MDA5NzAzNzI= | 11,873 | Errors in Quickstart Documentation related to GPT-2 | {
"login": "williamsdoug",
"id": 2381871,
"node_id": "MDQ6VXNlcjIzODE4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2381871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamsdoug",
"html_url": "https://github.com/williamsdoug",
"followers_url": "https://api.github.com/users/williamsdoug/followers",
"following_url": "https://api.github.com/users/williamsdoug/following{/other_user}",
"gists_url": "https://api.github.com/users/williamsdoug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/williamsdoug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/williamsdoug/subscriptions",
"organizations_url": "https://api.github.com/users/williamsdoug/orgs",
"repos_url": "https://api.github.com/users/williamsdoug/repos",
"events_url": "https://api.github.com/users/williamsdoug/events{/privacy}",
"received_events_url": "https://api.github.com/users/williamsdoug/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh, this page should not be there anymore, it's part of the old documentation. Out of curiosity, how did you get on it?",
"Hi @sgugger -\r\n\r\nI think I see the problem. If the user navigates to huggingface.co first, then follows the links, it points to the updated documentation. Also, the link associated with the Transformers github repo points to the current docs..\r\n\r\nHowever, if the user navigates via a Google search and clicks on Quickstart, it redirects to an obsolete version of the docs. See screenshot attached.\r\n\r\n\r\n",
"I have removed the page from the doc website. Hopefully Google will update its recommendation soon!",
"Note that there probably are some SEO-related optimisations to our doc site layout to make sure Sitelinks are kept up-to-date (and the latest released version is the one best ranked by Google):\r\n\r\nhttps://support.google.com/webmasters/answer/47334?hl=en\r\n\r\nIn terms of UX, one can also display a banner on top on the doc for every non-latest version. cc @gary149 ",
"Note that here it was trying to render that file in the current version (the version selector said stable doc for instance), so it was really confusing.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | To: @sgugger, @patrickvonplaten, @LysandreJik
## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger, @patrickvonplaten, @LysandreJik
## Information
Model I am using (gpt-2 ...):
The problem arises when using:
* Example code on the **Quickstart** page of the online [documentation](https://huggingface.co/transformers/quickstart.html), section **OpenAI GPT-2** / **Using the past**
### Existing Code
```
for i in range(100):
print(i)
output, past = model(context, past=past)
token = torch.argmax(output[..., -1, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(sequence)
```
### Errors Encountered in statement ```output, past = model(context, past=past)```
- Obsolete named parameter **past**, replaced by **past_key_values** in current release
- Assignment to ```output, past =``` does not assign expected values
- model() statement returns value of type ```transformers.modeling_outputs.CausalLMOutputWithCrossAttentions```
### Suggested Corrected Version
```
for i in range(100):
print(i)
ret = model(context, past_key_values=past)
output, past = ret.logits, ret.past_key_values
# or
# output, past = ret[0], ret[1]
token = torch.argmax(output[..., -1, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(sequence)
```
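As a side note, on current versions the same greedy loop can also be delegated to `generate`, which manages `past_key_values` internally. A minimal sketch (assuming `context` holds the full encoded prompt; greedy decoding mirrors the argmax loop above, and the length budget is illustrative):
```python
# Hypothetical equivalent using generate().
output_ids = model.generate(context, max_length=context.shape[-1] + 100, do_sample=False)
print(tokenizer.decode(output_ids[0]))
```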
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11873/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11872/comments | https://api.github.com/repos/huggingface/transformers/issues/11872/events | https://github.com/huggingface/transformers/pull/11872 | 900,938,902 | MDExOlB1bGxSZXF1ZXN0NjUyNDQ4NTc5 | 11,872 | modify qa-trainer | {
"login": "zhangfanTJU",
"id": 58031744,
"node_id": "MDQ6VXNlcjU4MDMxNzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/58031744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangfanTJU",
"html_url": "https://github.com/zhangfanTJU",
"followers_url": "https://api.github.com/users/zhangfanTJU/followers",
"following_url": "https://api.github.com/users/zhangfanTJU/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfanTJU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangfanTJU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfanTJU/subscriptions",
"organizations_url": "https://api.github.com/users/zhangfanTJU/orgs",
"repos_url": "https://api.github.com/users/zhangfanTJU/repos",
"events_url": "https://api.github.com/users/zhangfanTJU/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangfanTJU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I encountered an unexpected error in ci, could you help me to finish this pr? @LysandreJik ",
"Seems like a CircleCI failure, I just relaunched all tests.",
"the CircleCI encountered a HTTPError, what should I do? @LysandreJik "
] | 1,621 | 1,622 | 1,622 | CONTRIBUTOR | null | I fixed the evaluation failure for the TrainerQA-based script [`examples/pytorch/question-answering/run_qa.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) when using distributed training, which has been partially fixed in #11746.
```
Traceback (most recent call last):
File "run_qa.py", line 543, in <module>
main()
File "run_qa.py", line 509, in main
metrics = trainer.evaluate()
File "trainer_qa.py", line 44, in evaluate
ignore_keys=ignore_keys,
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
logits = self._nested_gather(logits)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer.py", line 2252, in _nested_gather
tensors = distributed_concat(tensors)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1863, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be non-overlapping and dense
```
This failure is similar to the previous commit (https://github.com/huggingface/transformers/pull/404/commits/fda2f623953bfe2290cd65429eb008f02ebdb152), but it also happens in PyTorch 1.8 now.
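My reading of the remedy, mirroring that commit (an assumption on my part; the exact change is in this PR's diff), is to gather a contiguous copy of the tensor:
```python
import torch.distributed as dist

def safe_all_gather(tensor):
    # all_gather rejects non-dense views, so pass a contiguous copy (sketch).
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor.contiguous())
    return output_tensors
```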
Meanwhile, I added **Evaluation** and **Prediction** logging for the script [`run_qa_no_trainer.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py), following the TrainerQA implementation.
Thanks! @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11872",
"html_url": "https://github.com/huggingface/transformers/pull/11872",
"diff_url": "https://github.com/huggingface/transformers/pull/11872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11872.patch",
"merged_at": 1622550522000
} |
https://api.github.com/repos/huggingface/transformers/issues/11871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11871/comments | https://api.github.com/repos/huggingface/transformers/issues/11871/events | https://github.com/huggingface/transformers/issues/11871 | 900,909,558 | MDU6SXNzdWU5MDA5MDk1NTg= | 11,871 | Want to use bert-base-uncased model without internet connection | {
"login": "IamSparky",
"id": 42636586,
"node_id": "MDQ6VXNlcjQyNjM2NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42636586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IamSparky",
"html_url": "https://github.com/IamSparky",
"followers_url": "https://api.github.com/users/IamSparky/followers",
"following_url": "https://api.github.com/users/IamSparky/following{/other_user}",
"gists_url": "https://api.github.com/users/IamSparky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IamSparky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IamSparky/subscriptions",
"organizations_url": "https://api.github.com/users/IamSparky/orgs",
"repos_url": "https://api.github.com/users/IamSparky/repos",
"events_url": "https://api.github.com/users/IamSparky/events{/privacy}",
"received_events_url": "https://api.github.com/users/IamSparky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This would be better suited on the Forum, but I would suggest doing (with git-lfs installed)\r\n\r\n```bash\r\ngit clone https://huggingface.co/bert-base-uncased\r\n```\r\n\r\nand then\r\n```python\r\nimport transformers\r\ntransformers.BertTokenizer.from_pretrained(\"./bert-base-uncased\", do_lower_case=True)\r\ntransformers.BertModel.from_pretrained(\"./bert-base-uncased\")\r\n```",
"> \r\n> \r\n> This would be better suited on the Forum, but I would suggest doing (with git-lfs installed)\r\n> \r\n> ```shell\r\n> git clone https://huggingface.co/bert-base-uncased\r\n> ```\r\n> \r\n> and then\r\n> \r\n> ```python\r\n> import transformers\r\n> transformers.BertTokenizer.from_pretrained(\"./bert-base-uncased\", do_lower_case=True)\r\n> transformers.BertModel.from_pretrained(\"./bert-base-uncased\")\r\n> ```\r\n\r\n@julien-c \r\nClone is failing with below errors\r\n\r\n```\r\nC:\\Users\\Karthik\\Desktop>git clone https://huggingface.co/bert-base-uncased\r\nCloning into 'bert-base-uncased'...\r\nremote: Enumerating objects: 52, done.\r\nremote: Counting objects: 100% (52/52), done.\r\nremote: Compressing objects: 100% (50/50), done.\r\nremote: Total 52 (delta 19), reused 0 (delta 0)\r\nUnpacking objects: 100% (52/52), 304.24 KiB | 61.00 KiB/s, done.\r\nUpdating files: 100% (10/10), done.\r\nwarning: Clone succeeded, but checkout failed.\r\nYou can inspect what was checked out with 'git status'\r\nand retry with 'git restore --source=HEAD :/'\r\n\r\nwarning: Clone succeeded, but checkout failed.\r\nYou can inspect what was checked out with 'git status'\r\nand retry with 'git restore --source=HEAD :/'\r\n\r\n\r\nExiting because of \"interrupt\" signal.\r\n```",
"@julien-c I also got the same error.",
"Can you add `GIT_CURL_VERBOSE=1 GIT_TRACE=1` to your command to get more info?",
"And also paste your `git --version` and `git lfs --version`\r\n",
"I was able to resolve this problem with the help of code :\r\n[LINK](https://www.kaggle.com/abhishek/bert-base-uncased)\r\n```\r\nBERT_MODEL_PATH = 'PATH FOR THE DATASET YOU SAVED IN YOUR LOCAL THROUGH THE LINK'\r\n\r\nTOKENIZER = transformers.BertTokenizer.from_pretrained(BERT_MODEL_PATH, do_lower_case=True, local_files_only=True)\r\nmodel = BertModel.from_pretrained(\"BERT_MODEL_PATH\")\r\n```",
"Posted my solution to my asked question here in the issue"
] | 1,621 | 1,622 | 1,622 | NONE | null | I want to use the bert-base-uncased model offline; for that I need the BERT tokenizer and BERT model packages saved locally. **I am unable to understand how I should achieve this locally without any internet connection.**
import transformers
transformers.BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
transformers.BertModel.from_pretrained("bert-base-uncased")
Currently I am getting the error:

What file should I use in place of ("bert-base-uncased") so that it can work correctly offline?
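For context, here is the fully offline pattern I am aiming for (a sketch based on the suggestions in the comments; the local path is an assumption):
```python
import transformers

# Assumes the model files were downloaded beforehand, e.g. with
# `git clone https://huggingface.co/bert-base-uncased`, into this folder.
local_path = "./bert-base-uncased"
tokenizer = transformers.BertTokenizer.from_pretrained(
    local_path, do_lower_case=True, local_files_only=True
)
model = transformers.BertModel.from_pretrained(local_path, local_files_only=True)
```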
here is the link to my [notebook](https://www.kaggle.com/soumochatterjee/inference-commonlit-readability) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11871/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11870/comments | https://api.github.com/repos/huggingface/transformers/issues/11870/events | https://github.com/huggingface/transformers/issues/11870 | 900,768,841 | MDU6SXNzdWU5MDA3Njg4NDE= | 11,870 | Issue: BART does not learn during fine-tuning for abstractive text summarization | {
"login": "DidiDerDenker",
"id": 31280364,
"node_id": "MDQ6VXNlcjMxMjgwMzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/31280364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DidiDerDenker",
"html_url": "https://github.com/DidiDerDenker",
"followers_url": "https://api.github.com/users/DidiDerDenker/followers",
"following_url": "https://api.github.com/users/DidiDerDenker/following{/other_user}",
"gists_url": "https://api.github.com/users/DidiDerDenker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DidiDerDenker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DidiDerDenker/subscriptions",
"organizations_url": "https://api.github.com/users/DidiDerDenker/orgs",
"repos_url": "https://api.github.com/users/DidiDerDenker/repos",
"events_url": "https://api.github.com/users/DidiDerDenker/events{/privacy}",
"received_events_url": "https://api.github.com/users/DidiDerDenker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @patrickvonplaten ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@patrickvonplaten Hi, unfortunately I have not been able to make any progress in the last month and would appreciate if you have a solution for the unexpected behavior. Thank you! :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey @DidiDerDenker,\r\n\r\nSorry it's very difficult to debug customized training runs that don't produce good results for us. Could you instead try to use the forum: https://discuss.huggingface.co"
] | 1,621 | 1,628 | 1,628 | NONE | null | ## Environment info
- transformers version: 4.5.1
- Python version: Python 3.7
- Using GPU in script? Yes
### Who can help
- @patrickvonplaten
## Information
I am currently working on abstractive text summarization. In the process I am trying to fine-tune BART on German text data. This works e.g. with bert-base-multilingual-cased and bert-base-german-cased. This does not work with e.g. deepset/gbert-base, deepset/gelectra-large and mbart-large-cc25: the training does not make any progress, and the loss converges to zero very quickly. Am I doing something wrong? Do I need to use other classes?
## To reproduce
Here are a few code snippets to reproduce this behavior:
```python
# Config
language = "german"
model_name = "facebook/mbart-large-cc25"
tokenizer_name = "facebook/mbart-large-cc25"
batch_size = 8

# Imports
import datasets
import transformers
import tf2tf_tud_gpu_config as config
import tf2tf_tud_gpu_helpers as helpers

# Main
tokenizer = transformers.AutoTokenizer.from_pretrained(
    config.tokenizer_name, strip_accent=False  # note: the standard kwarg is spelled strip_accents
)

if "mbart" in config.model_name:
    tf2tf = transformers.MBartForConditionalGeneration.from_pretrained(
        config.model_name
    )
else:
    tf2tf = transformers.EncoderDecoderModel.from_encoder_decoder_pretrained(
        config.model_name, config.model_name, tie_encoder_decoder=True
    )

train_data, val_data, test_data = helpers.load_data(
    language=config.language,
    ratio_corpus_wiki=config.ratio_corpus_wiki,
    ratio_corpus_news=config.ratio_corpus_news
)

if "mbart" in config.model_name:
    training_args = transformers.TrainingArguments(
        output_dir=config.path_output,
        logging_dir=config.path_output,
        per_device_train_batch_size=1,
        per_device_eval_batch_size=1,
        num_train_epochs=1,
        warmup_steps=500,
        weight_decay=0.01
    )

    trainer = transformers.Trainer(
        model=tf2tf,
        args=training_args,
        train_dataset=train_data,
        eval_dataset=val_data
    )
else:
    training_args = transformers.Seq2SeqTrainingArguments(
        predict_with_generate=True,
        evaluation_strategy="steps",
        per_device_train_batch_size=config.batch_size,
        per_device_eval_batch_size=config.batch_size,
        output_dir=config.path_output,
        warmup_steps=1000,
        save_steps=10000,
        logging_steps=2000,
        eval_steps=10000,
        save_total_limit=1,
        learning_rate=5e-5,
        adafactor=True,
        fp16=True
    )

    trainer = transformers.Seq2SeqTrainer(
        model=tf2tf,
        args=training_args,
        compute_metrics=compute_metrics,  # defined elsewhere in the original script
        train_dataset=train_data,
        eval_dataset=val_data,
        tokenizer=tokenizer
    )

trainer.train()
```
## Expected behaviour
I would like to fine-tune BART profitably. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11870/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11870/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11869/comments | https://api.github.com/repos/huggingface/transformers/issues/11869/events | https://github.com/huggingface/transformers/issues/11869 | 900,714,735 | MDU6SXNzdWU5MDA3MTQ3MzU= | 11,869 | Custom train file not supported in run_qa.py | {
"login": "thuan00",
"id": 55292740,
"node_id": "MDQ6VXNlcjU1MjkyNzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/55292740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thuan00",
"html_url": "https://github.com/thuan00",
"followers_url": "https://api.github.com/users/thuan00/followers",
"following_url": "https://api.github.com/users/thuan00/following{/other_user}",
"gists_url": "https://api.github.com/users/thuan00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thuan00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thuan00/subscriptions",
"organizations_url": "https://api.github.com/users/thuan00/orgs",
"repos_url": "https://api.github.com/users/thuan00/repos",
"events_url": "https://api.github.com/users/thuan00/events{/privacy}",
"received_events_url": "https://api.github.com/users/thuan00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there. As mentioned in the main [README](https://github.com/huggingface/transformers#why-shouldnt-i-use-transformers) examples are just that: examples. The script is intended to work on SQUAD or any file that is structured exactly the same way. To make it work on your own dataset, you will need to make some slight adjustments, in particular renaming the columns used.",
"Okay,\r\nThank you for your work."
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
- transformers version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Documentation, maintained examples (not research project or legacy): @sgugger
## Information
I am working on a QA task, fine-tuning on a SQuAD-like dataset.
The problem arises when using the example **run_qa.py** script with a custom --train_file (a SQuAD-like JSON file).
## To reproduce
Run the script with the param `--train_file (squad-like-dataset).json`.
This was **reported before** in [this issue](https://github.com/huggingface/transformers/issues/9370#issue-776942988).
But the traceback this time is:
```
Traceback (most recent call last):
File "run_qa.py", line 622, in <module>
main()
File "run_qa.py", line 321, in main
answer_column_name = "answers" if "answers" in column_names else column_names[2]
IndexError: list index out of range
```
## Expected behavior
As I debug, `column_names` = ['title', 'paragraphs'],
while `column_names` is expected to be ['context', 'question', 'answers'].
The [load_dataset()](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) call with the --train_file did not do the right job here.
As described in the script's [README](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/README.md),
I expected this script to handle --train_file like the legacy **run_squad.py** script, but
it seems that it only works with --dataset_name, i.e. datasets that are already on the hub, and does not behave the same as the old **run_squad.py**.
The documentation for the --train_file param may need to be clearer, or come with some examples that use --train_file and --validation_file.
## Workaround
As I read about the [load_dataset()](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) function,
I transformed the original SQuAD JSON file into a table-row form like this:
```
{"version": "0.1.0",
 "data": [{"id": 1, "context": "", "question": "", "answers": ...},
          {"id": 2, "context": "", "question": "", "answers": ...},
          ...]
}
```
with this snippet:
```python
import json

# Load the original SQuAD-style file (path is a placeholder).
with open("train-squad.json", encoding="utf-8") as f:
    data = json.load(f)["data"]

output_data = []
for article in data:
    for p in article["paragraphs"]:
        for qas in p["qas"]:
            answers = {
                "text": [],
                "answer_start": []
            }
            for ans in qas["answers"]:
                answers["text"].append(ans["text"])
                answers["answer_start"].append(ans["answer_start"])
            output_data.append({
                "id": qas["id"],
                "context": p["context"],
                "question": qas["question"],
                "answers": answers
            })

# Save in the flat, one-row-per-question form shown above.
with open("converted_train.json", "w", encoding="utf-8") as f:
    json.dump({"version": "0.1.0", "data": output_data}, f)
```
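A quick sanity-check sketch for the converted file (the output path `converted_train.json` comes from the snippet above; `field="data"` matches what run_qa.py itself passes to load_dataset):

```python
from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "converted_train.json"}, field="data")
print(ds["train"].column_names)  # should print ['id', 'context', 'question', 'answers']
```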
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11869/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11868/comments | https://api.github.com/repos/huggingface/transformers/issues/11868/events | https://github.com/huggingface/transformers/issues/11868 | 900,694,315 | MDU6SXNzdWU5MDA2OTQzMTU= | 11,868 | Wrong BlenderbotConfig description (max_position_embeddings) | {
"login": "shinishiho",
"id": 59284549,
"node_id": "MDQ6VXNlcjU5Mjg0NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/59284549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shinishiho",
"html_url": "https://github.com/shinishiho",
"followers_url": "https://api.github.com/users/shinishiho/followers",
"following_url": "https://api.github.com/users/shinishiho/following{/other_user}",
"gists_url": "https://api.github.com/users/shinishiho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shinishiho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shinishiho/subscriptions",
"organizations_url": "https://api.github.com/users/shinishiho/orgs",
"repos_url": "https://api.github.com/users/shinishiho/repos",
"events_url": "https://api.github.com/users/shinishiho/events{/privacy}",
"received_events_url": "https://api.github.com/users/shinishiho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | Hi there, the documentation page for BlenderbotConfig has a wrong parameter description:
https://huggingface.co/transformers/model_doc/blenderbot.html
max_position_embeddings (int, optional, defaults to 1024) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
But it's actually 128, as shown in the source code:
https://huggingface.co/transformers/_modules/transformers/models/blenderbot/configuration_blenderbot.html#BlenderbotConfig
`def __init__(self, vocab_size=8008, max_position_embeddings=128, encoder_layers=2, encoder_ffn_dim=10240, ...`
And by the way, does anyone know how to increase the maximum sequence length of this model? If I modify config.max_position_embeddings, loading the checkpoint results in an error (BlenderbotForConditionalGeneration):
`size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([128, 2560]) from checkpoint, the shape in current model is torch.Size([1024, 2560]).`
`size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([128, 2560]) from checkpoint, the shape in current model is torch.Size([1024, 2560]).`
With the length of 128 tokens, it will "forget" the conversation's topic quite fast, since the input has to be trimmed.
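For reference, one workaround sketch (no guarantee the model generates well beyond 128 tokens without further fine-tuning): build a model with a larger position table and copy the 128 trained rows into it. The checkpoint name and the target length of 256 are assumptions:

```python
from transformers import BlenderbotConfig, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"  # placeholder checkpoint
config = BlenderbotConfig.from_pretrained(name)
config.max_position_embeddings = 256  # assumed target length

new_model = BlenderbotForConditionalGeneration(config)  # fresh weights at the new size
state = BlenderbotForConditionalGeneration.from_pretrained(name).state_dict()

for key in ("model.encoder.embed_positions.weight",
            "model.decoder.embed_positions.weight"):
    grown = new_model.state_dict()[key].clone()
    grown[: state[key].size(0)] = state[key]  # keep the 128 trained rows
    state[key] = grown

new_model.load_state_dict(state)
```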
Thanks in advance.
@patrickvonplaten @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11868/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11868/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11867/comments | https://api.github.com/repos/huggingface/transformers/issues/11867/events | https://github.com/huggingface/transformers/pull/11867 | 900,626,329 | MDExOlB1bGxSZXF1ZXN0NjUyMTY5ODA4 | 11,867 | Fix incorrect TPU pricing in Flax GLUE README.md | {
"login": "marcvanzee",
"id": 180100,
"node_id": "MDQ6VXNlcjE4MDEwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcvanzee",
"html_url": "https://github.com/marcvanzee",
"followers_url": "https://api.github.com/users/marcvanzee/followers",
"following_url": "https://api.github.com/users/marcvanzee/following{/other_user}",
"gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions",
"organizations_url": "https://api.github.com/users/marcvanzee/orgs",
"repos_url": "https://api.github.com/users/marcvanzee/repos",
"events_url": "https://api.github.com/users/marcvanzee/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcvanzee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11867/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11867",
"html_url": "https://github.com/huggingface/transformers/pull/11867",
"diff_url": "https://github.com/huggingface/transformers/pull/11867.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11867.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11866/comments | https://api.github.com/repos/huggingface/transformers/issues/11866/events | https://github.com/huggingface/transformers/issues/11866 | 900,614,620 | MDU6SXNzdWU5MDA2MTQ2MjA= | 11,866 | # 🖥 Benchmarking `transformers` | {
"login": "turnertye74",
"id": 80142188,
"node_id": "MDQ6VXNlcjgwMTQyMTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/80142188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turnertye74",
"html_url": "https://github.com/turnertye74",
"followers_url": "https://api.github.com/users/turnertye74/followers",
"following_url": "https://api.github.com/users/turnertye74/following{/other_user}",
"gists_url": "https://api.github.com/users/turnertye74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turnertye74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turnertye74/subscriptions",
"organizations_url": "https://api.github.com/users/turnertye74/orgs",
"repos_url": "https://api.github.com/users/turnertye74/repos",
"events_url": "https://api.github.com/users/turnertye74/events{/privacy}",
"received_events_url": "https://api.github.com/users/turnertye74/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
__Originally posted by @turnertye74 in https://github.com/huggingface/transformers/issues/11865__ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11866/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11865/comments | https://api.github.com/repos/huggingface/transformers/issues/11865/events | https://github.com/huggingface/transformers/issues/11865 | 900,614,025 | MDU6SXNzdWU5MDA2MTQwMjU= | 11,865 | [Benchmark] | {
"login": "turnertye74",
"id": 80142188,
"node_id": "MDQ6VXNlcjgwMTQyMTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/80142188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turnertye74",
"html_url": "https://github.com/turnertye74",
"followers_url": "https://api.github.com/users/turnertye74/followers",
"following_url": "https://api.github.com/users/turnertye74/following{/other_user}",
"gists_url": "https://api.github.com/users/turnertye74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turnertye74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turnertye74/subscriptions",
"organizations_url": "https://api.github.com/users/turnertye74/orgs",
"repos_url": "https://api.github.com/users/turnertye74/repos",
"events_url": "https://api.github.com/users/turnertye74/events{/privacy}",
"received_events_url": "https://api.github.com/users/turnertye74/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11865/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11864/comments | https://api.github.com/repos/huggingface/transformers/issues/11864/events | https://github.com/huggingface/transformers/issues/11864 | 900,566,822 | MDU6SXNzdWU5MDA1NjY4MjI= | 11,864 | Bart tokenizer and bart model for conditional generation have different vocab size | {
"login": "clementbernardd",
"id": 44928643,
"node_id": "MDQ6VXNlcjQ0OTI4NjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/44928643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clementbernardd",
"html_url": "https://github.com/clementbernardd",
"followers_url": "https://api.github.com/users/clementbernardd/followers",
"following_url": "https://api.github.com/users/clementbernardd/following{/other_user}",
"gists_url": "https://api.github.com/users/clementbernardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clementbernardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clementbernardd/subscriptions",
"organizations_url": "https://api.github.com/users/clementbernardd/orgs",
"repos_url": "https://api.github.com/users/clementbernardd/repos",
"events_url": "https://api.github.com/users/clementbernardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/clementbernardd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there,\r\n\r\nThis is because the [orginal](https://github.com/pytorch/fairseq/tree/master/examples/bart#pre-trained-models) `bart-large-xsum` model uses 50264 for token_embedding, so you would probably need to extend the token embedding layer."
] | 1,621 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Google colab
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1
- Using GPU in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
Models:
- bart
Library:
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Model I am using (BART):
The problem arises when using the pretrained BART model 'bart-large-xsum'.
I tried to load the tokenizer and the model from `bart-large-xsum`.
I then tried to add masks to the inputs (as mentioned in section 5.1 of the original paper (https://arxiv.org/pdf/1910.13461.pdf)).
But the tokenizer and the BART model don't have the same vocab size.
Does it mean the `bart-large-xsum` model doesn't take masks as inputs? Do I need to add the mask token to the vocabulary myself?
## To reproduce
Steps to reproduce the behavior:
```python
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-xsum')
bart = BartForConditionalGeneration.from_pretrained('facebook/bart-large-xsum')
print(tokenizer.vocab_size)
print(bart.config.to_dict()['vocab_size'])
```

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11864/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11863/comments | https://api.github.com/repos/huggingface/transformers/issues/11863/events | https://github.com/huggingface/transformers/pull/11863 | 900,551,299 | MDExOlB1bGxSZXF1ZXN0NjUyMTAzNjAx | 11,863 | [Proposal] Adding infinite generation as an option to generate | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I get what you are saying, it's a valid concern.\r\n\r\n`position ids` tend to be very well correlated with their siblings, so it shouldn't matter (so much) for the first few iterations out of bounds.\r\nIt will matter more at full fledged `seq_length` drift, but I figure it's the callers responsibility to understand those caveats (hence the need for actual explicit decision required within `generate`). \r\nI ran some tests and the drift is much more substantial than what I expected for the full `seq_length`. the 20 topk tokens will share between 0 and 10 between non drifted and drifted versions. [gist](https://gist.github.com/Narsil/468af7fe59eaf1e20eb03ec0a4c9d249)\r\n\r\nWe also need to take into account that some models do not have position embeddings, or other position schemes (sinusoidal), that might change the perspective on this ?\r\n\r\nDisabling the cache might be an alternative, I guess the caller should know what are the tradeoffs he wants to make.\r\n\r\nAgain it is just that, for `pipelines` it is very hard to reason in number of tokens, and you can be hit by walls concerning number of tokens, without any recourse.\r\n\r\nAn example would be summarization, If I try to summarize some text too large to fit my model, I receive error 3051 tokens > 1024 tokens. It's fine, but now, how much of the string should I cut to actually get a summary ? It's impossible to know. cascading summaries is an option. It has some drawbacks but maybe that's still what I am looking for ?\r\n\r\nWhat I'm suggesting is coping mechanisms within `pipelines` than can be opted in to circumvent the issues mentioned above.\r\n`aggregation_strategy` is a good example of such a thing that was added in `token-classification`. It's a way to cope with incorrect/undesirable outputs from models to produce better end results to users, even if it's not perfectly representative of the model.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
`generate` is limited in a lot of models, where we can't generate
more tokens than the limit of a given model (`seq_length`,
`max_position_embeddings`).
This corresponds to the reality of how models were trained, but advanced
usage like inference might rely on using the models to generate longer
output than this. It is also quite inconvenient to hit that barrier when
using pipelines (where inputs are text and not tokens).
So the main goal is to make this behavior non-default but opted
in by users as they should understand what is happening and the
limits linked to this behavior.
The following proposal is to enable (model per model) infinite
generation. It *might* be doable generally; however, `past` seems
to be model-dependent, so it could be harder to do in general.
The implementation here simply clips the leftmost values if somehow
the `input_ids` (or `past`) grow larger than what the model can cope with.
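A rough sketch of what that clipping could look like (illustrative only, not the actual diff; the config attribute used as the limit is an assumption):

```python
# Inside the generation loop: keep only the most recent window of tokens.
max_len = getattr(model.config, "max_position_embeddings", None)
if max_len is not None and input_ids.shape[-1] > max_len:
    input_ids = input_ids[:, -max_len:]
    attention_mask = attention_mask[:, -max_len:]
```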
We also propose to enable that by default for `text-generation`
models (`text2text-generation` might also make use of it).
Happy to hear your thoughts on this:
1- Should we enable that kind of feature?
2- Is optional, with enabled by default for pipelines, correct?
3- Can we enable it for every model instead of on a model-per-model basis?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @LysandreJik @patil-suraj
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11863/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11863",
"html_url": "https://github.com/huggingface/transformers/pull/11863",
"diff_url": "https://github.com/huggingface/transformers/pull/11863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11863.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11862/comments | https://api.github.com/repos/huggingface/transformers/issues/11862/events | https://github.com/huggingface/transformers/issues/11862 | 900,528,866 | MDU6SXNzdWU5MDA1Mjg4NjY= | 11,862 | 'SequenceClassifierOutput' object has no attribute 'log_softmax' | {
"login": "AngThanos",
"id": 41022754,
"node_id": "MDQ6VXNlcjQxMDIyNzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/41022754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AngThanos",
"html_url": "https://github.com/AngThanos",
"followers_url": "https://api.github.com/users/AngThanos/followers",
"following_url": "https://api.github.com/users/AngThanos/following{/other_user}",
"gists_url": "https://api.github.com/users/AngThanos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AngThanos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngThanos/subscriptions",
"organizations_url": "https://api.github.com/users/AngThanos/orgs",
"repos_url": "https://api.github.com/users/AngThanos/repos",
"events_url": "https://api.github.com/users/AngThanos/events{/privacy}",
"received_events_url": "https://api.github.com/users/AngThanos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you share the code you have that led to this error? Thanks",
"This is my source code. I use google colab. Thank you for your fast reply.\r\n[https://drive.google.com/file/d/1zRKRolc-IuKAt_J96gTCoO801r6eXu2q/view?usp=sharing](url)\r\n",
"I think the issue is that you have:\r\n```py\r\n\r\nclass CassvaImgClassifier(nn.Module):\r\n def __init__(self, model_arch, n_class, pretrained=False):\r\n super().__init__()\r\n self.model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')\r\n ... \r\n\r\n def forward(self, x):\r\n x = self.model(x)\r\n return x\r\n\r\n```\r\nas a model; if I follow closely enough you're getting the outputs with:\r\n\r\n```py\r\nimage_preds = model(imgs)\r\n```\r\n\r\nThese will really be the outputs of the ViT model, which is a [`SequenceClassifierOutput`](https://huggingface.co/transformers/main_classes/output.html#transformers.modeling_outputs.SequenceClassifierOutput) as you can see from the [ViT docs](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTForImageClassification.forward)\r\n\r\nI suppose you're interested in the `logits`, so you would have to do:\r\n```py\r\nimage_preds = model(imgs).logits\r\n```\r\ninstead.\r\n\r\nHope that helps.",
"Thank you very much. It works for me. \r\n"
] | 1,621 | 1,621 | 1,621 | NONE | null | Hi there,
I'm trying to fine-tune the pre-trained ViT model (base, patch 16, image size 224) on the Cassava Leaf Disease dataset. However, when I started to train the model, I encountered the error 'SequenceClassifierOutput' object has no attribute 'log_softmax', which is shown in detail in the attached image.
Can someone help me with this error?
Many thanks.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11862/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11861/comments | https://api.github.com/repos/huggingface/transformers/issues/11861/events | https://github.com/huggingface/transformers/issues/11861 | 900,518,760 | MDU6SXNzdWU5MDA1MTg3NjA= | 11,861 | ONNX model conversion | {
"login": "fdlci",
"id": 73292708,
"node_id": "MDQ6VXNlcjczMjkyNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/73292708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdlci",
"html_url": "https://github.com/fdlci",
"followers_url": "https://api.github.com/users/fdlci/followers",
"following_url": "https://api.github.com/users/fdlci/following{/other_user}",
"gists_url": "https://api.github.com/users/fdlci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdlci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdlci/subscriptions",
"organizations_url": "https://api.github.com/users/fdlci/orgs",
"repos_url": "https://api.github.com/users/fdlci/repos",
"events_url": "https://api.github.com/users/fdlci/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdlci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | NONE | null | Hi,
I have been comparing inference speeds between PyTorch models and their ONNX versions. To convert a model from PyTorch to ONNX, I have used the code you provided in convert_graph_to_onnx.py.
I have built my ONNX model as follows, as I am applying it to QA: `python transformers/src/transformers/convert_graph_to_onnx.py --framework pt --model Camembert-base-ccnet-fquad11 --quantize cam_onnx/camembert-base.onnx --pipeline 'question-answering'`
This command outputs 3 models: camembert-base.onnx, camembert-base-optimized.onnx and camembert-base-optimized-quantize.onnx.
I ran inference with the three models and expected the quantized version to be much faster than camembert-base.onnx, but it was the complete opposite. I don't understand why quantization doesn't increase the speedup in this case.
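For reference, a minimal timing sketch I would use to compare the three files (input names and shapes are assumptions and may need adjusting to the exported graph):

```python
import time
import numpy as np
import onnxruntime as ort

def bench(path, runs=50):
    session = ort.InferenceSession(path)
    # Dummy QA-style inputs; check session.get_inputs() if names differ.
    feed = {
        "input_ids": np.ones((1, 384), dtype=np.int64),
        "attention_mask": np.ones((1, 384), dtype=np.int64),
    }
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, feed)
    return (time.perf_counter() - start) / runs

for name in ("camembert-base.onnx",
             "camembert-base-optimized.onnx",
             "camembert-base-optimized-quantize.onnx"):
    print(name, bench(f"cam_onnx/{name}"))
```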
Thank you for your answer! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11861/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11860/comments | https://api.github.com/repos/huggingface/transformers/issues/11860/events | https://github.com/huggingface/transformers/pull/11860 | 900,459,302 | MDExOlB1bGxSZXF1ZXN0NjUyMDIzNTkx | 11,860 | Add some tests to the slow suite | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | This PR adds the torchscript tests to the slow suite. The current CI isn't passing because it crashes and exceeds the 10-minute timeout; this PR is a first step toward fixing that.
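As a rough illustration of the mechanism (not the exact diff), moving a test to the slow suite amounts to gating it with the `@slow` decorator so it only runs when `RUN_SLOW=1` is set:

```python
from transformers.testing_utils import slow

class TorchScriptTests:  # illustrative class name
    @slow
    def test_torchscript(self):
        ...  # heavy tracing checks, now exercised only by the daily slow CI
```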
Will look into re-enabling the torchscript tests on each commit (they'll be tested every day for now) once we refactor the test suite to be less resource-hungry. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11860/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11860",
"html_url": "https://github.com/huggingface/transformers/pull/11860",
"diff_url": "https://github.com/huggingface/transformers/pull/11860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11860.patch",
"merged_at": 1621929966000
} |
https://api.github.com/repos/huggingface/transformers/issues/11859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11859/comments | https://api.github.com/repos/huggingface/transformers/issues/11859/events | https://github.com/huggingface/transformers/pull/11859 | 900,455,377 | MDExOlB1bGxSZXF1ZXN0NjUyMDE5OTQ5 | 11,859 | Enable memory metrics in tests that need it | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for catching and fixing @LysandreJik !"
] | 1,621 | 1,621 | 1,621 | MEMBER | null | PR https://github.com/huggingface/transformers/pull/11851 was merged without updating the tests to reflect the change in the argument default.
Explicitly specified the need for memory metrics for these tests.
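The adjustment is presumably along these lines (sketch, not the exact diff), since #11851 flipped the default to skipping memory metrics:

```python
from transformers import TrainingArguments

# Tests that assert on memory metrics now have to opt back in explicitly.
args = TrainingArguments(output_dir="./out", skip_memory_metrics=False)
```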
Merging now to have CI pass, feel free to comment if that's not the right approach @stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11859/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11859/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11859",
"html_url": "https://github.com/huggingface/transformers/pull/11859",
"diff_url": "https://github.com/huggingface/transformers/pull/11859.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11859.patch",
"merged_at": 1621929979000
} |
https://api.github.com/repos/huggingface/transformers/issues/11858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11858/comments | https://api.github.com/repos/huggingface/transformers/issues/11858/events | https://github.com/huggingface/transformers/pull/11858 | 900,370,756 | MDExOlB1bGxSZXF1ZXN0NjUxOTQyNjA0 | 11,858 | typo | {
"login": "WrRan",
"id": 7569098,
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WrRan",
"html_url": "https://github.com/WrRan",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"repos_url": "https://api.github.com/users/WrRan/repos",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
fix typo
## Before submitting
- [Y] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11858/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11858",
"html_url": "https://github.com/huggingface/transformers/pull/11858",
"diff_url": "https://github.com/huggingface/transformers/pull/11858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11858.patch",
"merged_at": 1621931026000
} |
https://api.github.com/repos/huggingface/transformers/issues/11857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11857/comments | https://api.github.com/repos/huggingface/transformers/issues/11857/events | https://github.com/huggingface/transformers/pull/11857 | 900,309,398 | MDExOlB1bGxSZXF1ZXN0NjUxODg3OTMz | 11,857 | [WIP] Fix cross attentions for TF T5 | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR fixes cross attentions for TF T5 model. This includes adding a new input argument `cross_attn_head_mask` and also adding `cross_attentions` to the model's output.
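A small usage sketch of what the fix enables (checkpoint name is illustrative):

```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer("translate English to German: Hello", return_tensors="tf")
dec = tokenizer("Hallo", return_tensors="tf")
outputs = model(
    input_ids=enc.input_ids,
    decoder_input_ids=dec.input_ids,
    output_attentions=True,
)
# New with this PR: decoder-over-encoder attention weights are exposed.
print(len(outputs.cross_attentions))  # one entry per decoder layer
```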
<hr>
**Reviewers:** @patrickvonplaten @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11857/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11857",
"html_url": "https://github.com/huggingface/transformers/pull/11857",
"diff_url": "https://github.com/huggingface/transformers/pull/11857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11857.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11856/comments | https://api.github.com/repos/huggingface/transformers/issues/11856/events | https://github.com/huggingface/transformers/pull/11856 | 900,294,521 | MDExOlB1bGxSZXF1ZXN0NjUxODc0ODI1 | 11,856 | fixed a small typo in the CONTRIBUTING doc | {
"login": "stsuchi",
"id": 8391010,
"node_id": "MDQ6VXNlcjgzOTEwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8391010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stsuchi",
"html_url": "https://github.com/stsuchi",
"followers_url": "https://api.github.com/users/stsuchi/followers",
"following_url": "https://api.github.com/users/stsuchi/following{/other_user}",
"gists_url": "https://api.github.com/users/stsuchi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stsuchi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stsuchi/subscriptions",
"organizations_url": "https://api.github.com/users/stsuchi/orgs",
"repos_url": "https://api.github.com/users/stsuchi/repos",
"events_url": "https://api.github.com/users/stsuchi/events{/privacy}",
"received_events_url": "https://api.github.com/users/stsuchi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
I found a small typo in CONTRIBUTING.md. The fix is at the start of the second sentence in the fifth paragraph after the four bullet points, as in: "In particular there is a special [Good First
Issue](https://github.com/huggingface/transformers/contribute) listing. *It* will give you ..."
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11856",
"html_url": "https://github.com/huggingface/transformers/pull/11856",
"diff_url": "https://github.com/huggingface/transformers/pull/11856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11856.patch",
"merged_at": 1621930735000
} |
https://api.github.com/repos/huggingface/transformers/issues/11855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11855/comments | https://api.github.com/repos/huggingface/transformers/issues/11855/events | https://github.com/huggingface/transformers/pull/11855 | 900,148,509 | MDExOlB1bGxSZXF1ZXN0NjUxNzQ5NTE1 | 11,855 | [lm examples] fix overflow in perplexity calc | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | This PR fixes an overflow exception in the perplexity calculation, triggered when running eval on an untrained model whose loss is huge.
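A minimal sketch of the kind of guard this adds (hedged: paraphrased from this description rather than copied from the diff, with illustrative variable names):
```python
import math

eval_loss = 1000.0  # e.g. the eval loss of an untrained model

try:
    perplexity = math.exp(eval_loss)
except OverflowError:
    # math.exp overflows for very large losses; report infinity instead of crashing
    perplexity = float("inf")

print(perplexity)  # inf
```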
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11855/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11855",
"html_url": "https://github.com/huggingface/transformers/pull/11855",
"diff_url": "https://github.com/huggingface/transformers/pull/11855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11855.patch",
"merged_at": 1621955486000
} |
https://api.github.com/repos/huggingface/transformers/issues/11854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11854/comments | https://api.github.com/repos/huggingface/transformers/issues/11854/events | https://github.com/huggingface/transformers/issues/11854 | 900,047,463 | MDU6SXNzdWU5MDAwNDc0NjM= | 11,854 | Permission denied for cardiffnlp/twitter-roberta-base-emotion | {
"login": "StephenQuirolgico",
"id": 4974765,
"node_id": "MDQ6VXNlcjQ5NzQ3NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4974765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephenQuirolgico",
"html_url": "https://github.com/StephenQuirolgico",
"followers_url": "https://api.github.com/users/StephenQuirolgico/followers",
"following_url": "https://api.github.com/users/StephenQuirolgico/following{/other_user}",
"gists_url": "https://api.github.com/users/StephenQuirolgico/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephenQuirolgico/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephenQuirolgico/subscriptions",
"organizations_url": "https://api.github.com/users/StephenQuirolgico/orgs",
"repos_url": "https://api.github.com/users/StephenQuirolgico/repos",
"events_url": "https://api.github.com/users/StephenQuirolgico/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephenQuirolgico/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The model seems accessible: https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion \r\n\r\nAnd running the example code locally correctly loads the model, and outputs the following:\r\n\r\n```py\r\n1) joy 0.9382\r\n2) optimism 0.0362\r\n3) anger 0.0145\r\n4) sadness 0.0112\r\n```\r\n\r\nCould you try again to make sure it wasn't a connection issue?",
"@LysandreJik, thanks - it's working now."
] | 1,621 | 1,622 | 1,622 | NONE | null | @patrickvonplaten
When trying to access `cardiffnlp/twitter-roberta-base-emotion` using the example code, it can't seem to find the model. I also tried calling the model from an NLP framework (AdaptNLP) and it gave a Permission denied error. However, I don't get this error when using `cardiffnlp/twitter-roberta-base-sentiment`. Any suggestions? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11854/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11853/comments | https://api.github.com/repos/huggingface/transformers/issues/11853/events | https://github.com/huggingface/transformers/issues/11853 | 899,912,428 | MDU6SXNzdWU4OTk5MTI0Mjg= | 11,853 | Multi-node training for causal language modeling example does not work | {
"login": "hamid-ramezani",
"id": 73587486,
"node_id": "MDQ6VXNlcjczNTg3NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/73587486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamid-ramezani",
"html_url": "https://github.com/hamid-ramezani",
"followers_url": "https://api.github.com/users/hamid-ramezani/followers",
"following_url": "https://api.github.com/users/hamid-ramezani/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-ramezani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamid-ramezani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-ramezani/subscriptions",
"organizations_url": "https://api.github.com/users/hamid-ramezani/orgs",
"repos_url": "https://api.github.com/users/hamid-ramezani/repos",
"events_url": "https://api.github.com/users/hamid-ramezani/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamid-ramezani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are you sure the port is open between the two machines? Not having any output is usually a symptom of that. I've tried on my side and I can run the script on multi-nodes.",
"@sgugger Thanks for the reply :)\r\n\r\n> Are you sure the port is open between the two machines?\r\n\r\nyes I made sure by giving different port numbers; none of them worked and I got this message after 15 mins:\r\n\r\n`RuntimeError: connect() timed out.` \r\n\r\n",
"No they need to have the same port number, otherwise they can't connect to each other.",
"Thank you very much. \r\n\r\n> No they need to have the same port number\r\n\r\nHere I meant I gave the same port number to both sides, but I tried multiple times with some numbers to make sure the port is open :) \r\n\r\nBut no worries, the issue is solved. I tried with the actual IP address of one of the machines and that solved the issue. ",
"Glad you solved your issue!"
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0.dev0
- Platform: Linux-4.19.0-14-amd64-x86_64-with-debian-10.8
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@sgugger
@patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT-2
The problem arises when using:
* [x] my own modified scripts:
```
nproc_per_node=4
python -m torch.distributed.launch \
--nproc_per_node=$nproc_per_node \
--nnodes=2 \
--node_rank=0 \
--master_addr="192.168.1.1" \
--master_port=1234 run_clm.py \
--model_name_or_path gpt2 \
--block_size 256 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--overwrite_output_dir \
--num_train_epochs 1 \
--output_dir /tmp/test-clm
```
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: language modeling
* [x] my own task or dataset: wikitext
## To reproduce
Steps to reproduce the behavior:
1. Have two nodes with at least 4 GPUs each.
2. In the first machine, run the above script.
3. In the second machine, run the same script, except with the flag `--node_rank=1` instead of `--node_rank=0` (see the sketch below).
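For reference, a sketch of the matching rank-1 launch, assembled from the script above (only `--node_rank` changes; per the resolution in the comments, `--master_addr` must be the actual reachable IP of the rank-0 machine and `--master_port` must be identical on both nodes):
```
python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --nnodes=2 \
    --node_rank=1 \
    --master_addr="192.168.1.1" \
    --master_port=1234 run_clm.py \
    --model_name_or_path gpt2 \
    --block_size 256 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --overwrite_output_dir \
    --num_train_epochs 1 \
    --output_dir /tmp/test-clm
```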
I waited for almost 15 minutes; nothing happened and training never started.
## Expected behavior
Training starts.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11853/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11852/comments | https://api.github.com/repos/huggingface/transformers/issues/11852/events | https://github.com/huggingface/transformers/pull/11852 | 899,887,384 | MDExOlB1bGxSZXF1ZXN0NjUxNTIyNzg3 | 11,852 | Fix two typos in docs | {
"login": "nickls",
"id": 363928,
"node_id": "MDQ6VXNlcjM2MzkyOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/363928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickls",
"html_url": "https://github.com/nickls",
"followers_url": "https://api.github.com/users/nickls/followers",
"following_url": "https://api.github.com/users/nickls/following{/other_user}",
"gists_url": "https://api.github.com/users/nickls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickls/subscriptions",
"organizations_url": "https://api.github.com/users/nickls/orgs",
"repos_url": "https://api.github.com/users/nickls/repos",
"events_url": "https://api.github.com/users/nickls/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickls/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
Fixed two minor typos.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11852/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11852",
"html_url": "https://github.com/huggingface/transformers/pull/11852",
"diff_url": "https://github.com/huggingface/transformers/pull/11852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11852.patch",
"merged_at": 1621880763000
} |
https://api.github.com/repos/huggingface/transformers/issues/11851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11851/comments | https://api.github.com/repos/huggingface/transformers/issues/11851/events | https://github.com/huggingface/transformers/pull/11851 | 899,813,633 | MDExOlB1bGxSZXF1ZXN0NjUxNDYxNDM0 | 11,851 | Switch mem metrics flag | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
As flagged in #11485, the memory metrics cost a bit of performance, so this PR flips the flag that enables them, giving the best performance by default (users can still activate the metrics manually whenever they want them!)
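With the new default, a minimal sketch of opting back into the memory metrics (the argument name matches the `TrainingArguments` field this PR touches):
```python
from transformers import TrainingArguments

# Memory metrics are now skipped by default for speed; re-enable them explicitly
args = TrainingArguments(output_dir="test_trainer", skip_memory_metrics=False)
```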
Fixes #11845 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11851/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11851",
"html_url": "https://github.com/huggingface/transformers/pull/11851",
"diff_url": "https://github.com/huggingface/transformers/pull/11851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11851.patch",
"merged_at": 1621877440000
} |
https://api.github.com/repos/huggingface/transformers/issues/11850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11850/comments | https://api.github.com/repos/huggingface/transformers/issues/11850/events | https://github.com/huggingface/transformers/issues/11850 | 899,808,935 | MDU6SXNzdWU4OTk4MDg5MzU= | 11,850 | Gradient is None after deepspeed backward | {
"login": "EarthXP",
"id": 15072042,
"node_id": "MDQ6VXNlcjE1MDcyMDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/15072042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EarthXP",
"html_url": "https://github.com/EarthXP",
"followers_url": "https://api.github.com/users/EarthXP/followers",
"following_url": "https://api.github.com/users/EarthXP/following{/other_user}",
"gists_url": "https://api.github.com/users/EarthXP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EarthXP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EarthXP/subscriptions",
"organizations_url": "https://api.github.com/users/EarthXP/orgs",
"repos_url": "https://api.github.com/users/EarthXP/repos",
"events_url": "https://api.github.com/users/EarthXP/events{/privacy}",
"received_events_url": "https://api.github.com/users/EarthXP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"**edit** looks like Deepspeed needs to add an API to do that: https://github.com/microsoft/DeepSpeed/issues/1098\r\n\r\nMy original suggestion is very likely not to work. I have just never tried it in this context:\r\n\r\n---------\r\n\r\nSince params are sharded, you need to gather them before you can read their values. https://deepspeed.readthedocs.io/en/latest/zero3.html#gathering-parameters\r\n\r\nHere is an untested code:\r\n```\r\nimport deepspeed\r\nfor name, param in model.named_parameters():\r\n with deepspeed.zero.GatheredParameters(param, modifier_rank=None):\r\n if param.requires_grad: ....\r\n```\r\n",
"meanwhile please edit the OP to include the ds config file you used, so that we know what setup you're running it under.",
"> **edit** looks like Deepspeed needs to add an API to do that: [microsoft/DeepSpeed#1098](https://github.com/microsoft/DeepSpeed/issues/1098)\r\n> \r\n> My original suggestion is very likely not to work. I have just never tried it in this context:\r\n> \r\n> Since params are sharded, you need to gather them before you can read their values. https://deepspeed.readthedocs.io/en/latest/zero3.html#gathering-parameters\r\n> \r\n> Here is an untested code:\r\n> \r\n> ```\r\n> import deepspeed\r\n> for name, param in model.named_parameters():\r\n> with deepspeed.zero.GatheredParameters(param, modifier_rank=None):\r\n> if param.requires_grad: ....\r\n> ```\r\n\r\nthanks for asking this feature in deepspeed and confrimed None gradient is expected for now",
"> meanwhile please edit the OP to include the ds config file you used, so that we know what setup you're running it under.\r\n\r\n"
] | 1,621 | 1,621 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: GeForce RTX 3090
- Using distributed or parallel set-up in script?: Deepspeed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. !-->
- @stas00
- @sgugger
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
I want to check whether the gradients in different GPU processes are the same after backward, so I print the gradients in trainer.py:

The output shows that all gradients are None. I reproduced this in another script that I believe has been working as expected for a long time. So my questions are:
1. Is this by design? Why is it None?
2. How can I output the real gradients of the model's parameters after they have been calculated? (see the sketch below)
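A sketch of the gathering approach suggested in the comments below (also untested there; under ZeRO the optimizer shards gradients across ranks, so `param.grad` being `None` is apparently expected, and microsoft/DeepSpeed#1098 tracks a proper API for reading full gradients — `model` here stands for the model wrapped by the Trainer):
```python
import deepspeed

for name, param in model.named_parameters():
    # Parameters are sharded under ZeRO, so gather them before inspecting
    with deepspeed.zero.GatheredParameters(param, modifier_rank=None):
        if param.requires_grad:
            print(name, param.grad)  # may still print None: the grads themselves stay sharded
```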
deepspeed config:
```
{
"gradient_accumulation_steps": 1,
"train_batch_size": 16,
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1,
"initial_scale_power": 16
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": false,
"find_unused_parameters": true
},
"zero_allow_untested_optimizer": true,
"optimizer": {
"type": "AdamW",
"params": {
"lr": 2e-5,
"betas": [0.9, 0.999],
"eps": 1e-6,
"weight_decay": 0.01
}
},
"scheduler": {
"type": "OneCycle",
"params": {
"cycle_first_step_size": 5000,
"cycle_first_stair_count": 500,
"cycle_second_step_size": 5000,
"cycle_second_stair_count": 500,
"decay_step_size": 1000,
"cycle_min_lr": 4e-5,
"cycle_max_lr": 1e-4,
"decay_lr_rate": 0.001,
"cycle_momentum": true,
"cycle_min_mom": 0.85,
"cycle_max_mom": 0.99,
"decay_mom_rate": 0.0
}
},
"steps_per_print": 500,
"wall_clock_breakdown": false
}
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11850/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11849/comments | https://api.github.com/repos/huggingface/transformers/issues/11849/events | https://github.com/huggingface/transformers/pull/11849 | 899,796,716 | MDExOlB1bGxSZXF1ZXN0NjUxNDQ3Mzgw | 11,849 | Add simple ByteTokenizer for Reformer | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Cool, thank you for opening this PR!\r\n> \r\n> Before merging, I think two issues should be resolved:\r\n> \r\n> * I'm not sure what that `ReformerByteTokenizer` can be used for - so if I'm unaware, I suppose users will be too. Adding a bit of documentation regarding what is that tokenizer and why it's here would be nice\r\n\r\nTried to add some documentation, is it better ? Ultimately I don't really have background for this model, so I added more or less what was said in the model card (and left a link to it just in case, I'm also unaware of other models that are Byte level too.).\r\n\r\n> \r\n> * According to what it can be used for, the appropriate tests should be put in place. For example there's no testing for saving/loading while there's a bit of a workaround to enable that - it would be nice to test that and all the other expected behaviors.\r\n\r\nI added it. Because it has only 1 special token (pad), which I arbitrarily set to 0 (does not seem to be explained in the model card either currently, but why would there be a 2 shift for input_ids otherwise ?) and it does not really make sense for this Tokenizer to try to use \"tokens\" in the current Tokenizer meaning (substrings that are part of the main string). It is impossible to do because of how utf-8 works, If we did we would run sooner or later into other issues:\r\n\r\n`len(chr(255).encode(\"utf-8\")) == 2` for instance which is `chr(255) == [195, 195]`, not `[255]`.\r\n\r\n@LysandreJik would love a small re-review, but we should keep this PR low profile, it's not that important anyway.\r\n",
"@patrickvonplaten can you give this one a look and merge if it looks good to you?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11649
Adds a ReformerByteTokenizer for (https://huggingface.co/google/reformer-enwik8).
- Everything is a bit hardcoded in that tokenizer as very little configuration is expected
- no fast tokenizer (a bit useless as this manipulates raw bytes
and has very little python logic)
Added to the docs
Added very simple tests. For this tokenizer, the usual mixin is debatable, as
"tokens" are raw bytes and cannot be strings (255, for instance, is not a valid string).
Using b"\xff" instead of 255 is possible, yet might not be any clearer.
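To make the byte-level behaviour concrete, a hedged round-trip sketch (assumptions: special tokens occupy the lowest ids and bytes are shifted by +2, as discussed in the review comments; this mirrors the model card's description, not the PR's exact code):
```python
OFFSET = 2  # ids below OFFSET are reserved for special tokens (e.g. pad)

def encode(text):
    return [b + OFFSET for b in text.encode("utf-8")]

def decode(ids):
    return bytes(i - OFFSET for i in ids if i >= OFFSET).decode("utf-8", errors="replace")

assert decode(encode("Hugging Face é")) == "Hugging Face é"
```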
This requires some modifications within the "google/reformer-enwik8" config.
Namely:
- Adding a `tokenizer_class` to the config
- Adding a dummy file so that AutoTokenizer won't fail because no files are needed.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11849/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11849",
"html_url": "https://github.com/huggingface/transformers/pull/11849",
"diff_url": "https://github.com/huggingface/transformers/pull/11849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11849.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11848/comments | https://api.github.com/repos/huggingface/transformers/issues/11848/events | https://github.com/huggingface/transformers/issues/11848 | 899,773,395 | MDU6SXNzdWU4OTk3NzMzOTU= | 11,848 | Token Classification OOM | {
"login": "mirfan899",
"id": 3822565,
"node_id": "MDQ6VXNlcjM4MjI1NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3822565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mirfan899",
"html_url": "https://github.com/mirfan899",
"followers_url": "https://api.github.com/users/mirfan899/followers",
"following_url": "https://api.github.com/users/mirfan899/following{/other_user}",
"gists_url": "https://api.github.com/users/mirfan899/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mirfan899/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mirfan899/subscriptions",
"organizations_url": "https://api.github.com/users/mirfan899/orgs",
"repos_url": "https://api.github.com/users/mirfan899/repos",
"events_url": "https://api.github.com/users/mirfan899/events{/privacy}",
"received_events_url": "https://api.github.com/users/mirfan899/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should use `eval_accumulation_steps=n` (for instance 20) to have the predictions be moved to the CPU every n steps during evaluation (n should be lower than the step you get an OOM error)",
"Thanks, it worked perfectly.",
"> You should use `eval_accumulation_steps=n` (for instance 20) to have the predictions be moved to the CPU every n steps during evaluation (n should be lower than the step you get an OOM error)\r\n\r\nThank you so much. I have been stuck at the point for several days until I landed here."
] | 1,621 | 1,653 | 1,622 | NONE | null | I am using the Token Classification example on my dataset, which has around 20k lines for training, 2k for validation, and 2k for testing. I followed this example:
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb
After training the model, calling evaluation eats up all the GPU memory, even with batch size 1.
Here is the output.
```shell
Some weights of the model checkpoint at /home/irfan/Downloads/bert-base-uncased were not used when initializing BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForTokenClassification were not initialized from the model checkpoint at /home/irfan/Downloads/bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Reusing dataset gector (/home/irfan/.cache/huggingface/datasets/gector/gector config/1.0.0/410838f652242763258c1912a4316a2d985c51d27bbb94a3301ad15cde38de06)
100%|ββββββββββ| 29/29 [00:01<00:00, 24.00ba/s]
100%|ββββββββββ| 7/7 [00:00<00:00, 25.27ba/s]
100%|ββββββββββ| 6/6 [00:00<00:00, 25.62ba/s]
2%|β | 500/28347 [00:44<42:04, 11.03it/s]{'loss': 1.3763, 'learning_rate': 1.9647228983666704e-05, 'epoch': 0.02}
4%|β | 1000/28347 [01:27<40:33, 11.24it/s]{'loss': 0.8006, 'learning_rate': 1.9294457967333407e-05, 'epoch': 0.04}
5%|β | 1500/28347 [02:10<37:32, 11.92it/s]{'loss': 0.8302, 'learning_rate': 1.8941686951000106e-05, 'epoch': 0.05}
7%|β | 2000/28347 [02:52<35:53, 12.23it/s]{'loss': 0.7262, 'learning_rate': 1.858891593466681e-05, 'epoch': 0.07}
9%|β | 2500/28347 [03:35<36:11, 11.90it/s]{'loss': 0.7766, 'learning_rate': 1.823614491833351e-05, 'epoch': 0.09}
11%|β | 3000/28347 [04:19<37:29, 11.27it/s]{'loss': 0.7891, 'learning_rate': 1.7883373902000214e-05, 'epoch': 0.11}
12%|ββ | 3500/28347 [05:03<36:17, 11.41it/s]{'loss': 0.7966, 'learning_rate': 1.7530602885666913e-05, 'epoch': 0.12}
14%|ββ | 4000/28347 [05:47<35:28, 11.44it/s]{'loss': 0.7518, 'learning_rate': 1.7177831869333615e-05, 'epoch': 0.14}
16%|ββ | 4500/28347 [06:31<34:30, 11.52it/s]{'loss': 0.6913, 'learning_rate': 1.6825060853000318e-05, 'epoch': 0.16}
18%|ββ | 5000/28347 [07:15<33:57, 11.46it/s]{'loss': 0.7688, 'learning_rate': 1.647228983666702e-05, 'epoch': 0.18}
19%|ββ | 5500/28347 [08:01<37:43, 10.09it/s]{'loss': 0.8129, 'learning_rate': 1.6119518820333723e-05, 'epoch': 0.19}
21%|ββ | 6000/28347 [08:48<33:05, 11.25it/s]{'loss': 0.7222, 'learning_rate': 1.5766747804000426e-05, 'epoch': 0.21}
23%|βββ | 6500/28347 [09:37<31:53, 11.42it/s]{'loss': 0.7068, 'learning_rate': 1.5413976787667128e-05, 'epoch': 0.23}
25%|βββ | 7000/28347 [10:22<31:18, 11.36it/s]{'loss': 0.6917, 'learning_rate': 1.5061205771333829e-05, 'epoch': 0.25}
26%|βββ | 7500/28347 [11:07<30:42, 11.31it/s]{'loss': 0.7271, 'learning_rate': 1.4708434755000532e-05, 'epoch': 0.26}
28%|βββ | 8000/28347 [11:52<32:11, 10.53it/s]{'loss': 0.6844, 'learning_rate': 1.435566373866723e-05, 'epoch': 0.28}
30%|βββ | 8500/28347 [12:40<30:40, 10.78it/s]{'loss': 0.7325, 'learning_rate': 1.4002892722333933e-05, 'epoch': 0.3}
32%|ββββ | 9000/28347 [13:27<31:27, 10.25it/s]{'loss': 0.7981, 'learning_rate': 1.3650121706000636e-05, 'epoch': 0.32}
34%|ββββ | 9500/28347 [14:15<28:28, 11.03it/s]{'loss': 0.7432, 'learning_rate': 1.3297350689667338e-05, 'epoch': 0.34}
35%|ββββ | 10000/28347 [15:00<26:41, 11.45it/s]{'loss': 0.6548, 'learning_rate': 1.294457967333404e-05, 'epoch': 0.35}
37%|ββββ | 10500/28347 [15:44<26:11, 11.35it/s]{'loss': 0.7042, 'learning_rate': 1.2591808657000742e-05, 'epoch': 0.37}
39%|ββββ | 11000/28347 [16:27<25:18, 11.42it/s]{'loss': 0.6836, 'learning_rate': 1.2239037640667444e-05, 'epoch': 0.39}
41%|ββββ | 11500/28347 [17:11<24:17, 11.56it/s]{'loss': 0.7011, 'learning_rate': 1.1886266624334147e-05, 'epoch': 0.41}
42%|βββββ | 12000/28347 [17:55<24:45, 11.01it/s]{'loss': 0.6519, 'learning_rate': 1.153349560800085e-05, 'epoch': 0.42}
44%|βββββ | 12500/28347 [18:38<22:20, 11.83it/s]{'loss': 0.7423, 'learning_rate': 1.1180724591667549e-05, 'epoch': 0.44}
46%|βββββ | 13000/28347 [19:22<22:19, 11.46it/s]{'loss': 0.7353, 'learning_rate': 1.0827953575334251e-05, 'epoch': 0.46}
48%|βββββ | 13500/28347 [20:06<21:15, 11.64it/s]{'loss': 0.6847, 'learning_rate': 1.0475182559000954e-05, 'epoch': 0.48}
49%|βββββ | 14000/28347 [20:48<20:00, 11.95it/s]{'loss': 0.6356, 'learning_rate': 1.0122411542667656e-05, 'epoch': 0.49}
51%|βββββ | 14500/28347 [21:30<19:01, 12.14it/s]{'loss': 0.6993, 'learning_rate': 9.769640526334357e-06, 'epoch': 0.51}
53%|ββββββ | 15000/28347 [22:11<18:06, 12.29it/s]{'loss': 0.7461, 'learning_rate': 9.416869510001058e-06, 'epoch': 0.53}
55%|ββββββ | 15500/28347 [22:53<18:01, 11.88it/s]{'loss': 0.723, 'learning_rate': 9.06409849366776e-06, 'epoch': 0.55}
56%|ββββββ | 16000/28347 [23:35<17:18, 11.89it/s]{'loss': 0.7091, 'learning_rate': 8.711327477334463e-06, 'epoch': 0.56}
58%|ββββββ | 16500/28347 [24:16<16:33, 11.93it/s]{'loss': 0.7283, 'learning_rate': 8.358556461001164e-06, 'epoch': 0.58}
60%|ββββββ | 17000/28347 [24:58<15:44, 12.01it/s]{'loss': 0.6658, 'learning_rate': 8.005785444667867e-06, 'epoch': 0.6}
62%|βββββββ | 17500/28347 [25:39<15:14, 11.87it/s]{'loss': 0.7333, 'learning_rate': 7.65301442833457e-06, 'epoch': 0.62}
63%|βββββββ | 18000/28347 [26:21<15:00, 11.50it/s]{'loss': 0.6953, 'learning_rate': 7.300243412001271e-06, 'epoch': 0.63}
65%|βββββββ | 18500/28347 [27:07<14:39, 11.19it/s]{'loss': 0.6792, 'learning_rate': 6.947472395667973e-06, 'epoch': 0.65}
67%|βββββββ | 19000/28347 [27:53<15:01, 10.37it/s]{'loss': 0.7156, 'learning_rate': 6.594701379334675e-06, 'epoch': 0.67}
69%|βββββββ | 19500/28347 [28:41<13:53, 10.62it/s]{'loss': 0.6574, 'learning_rate': 6.241930363001376e-06, 'epoch': 0.69}
71%|βββββββ | 20000/28347 [29:28<13:19, 10.44it/s]{'loss': 0.724, 'learning_rate': 5.889159346668079e-06, 'epoch': 0.71}
72%|ββββββββ | 20500/28347 [30:13<11:11, 11.69it/s]{'loss': 0.6687, 'learning_rate': 5.5363883303347795e-06, 'epoch': 0.72}
74%|ββββββββ | 21000/28347 [30:56<10:29, 11.67it/s]{'loss': 0.6612, 'learning_rate': 5.183617314001482e-06, 'epoch': 0.74}
76%|ββββββββ | 21500/28347 [31:39<09:29, 12.03it/s]{'loss': 0.6861, 'learning_rate': 4.830846297668184e-06, 'epoch': 0.76}
78%|ββββββββ | 22000/28347 [32:23<08:31, 12.40it/s]{'loss': 0.6709, 'learning_rate': 4.478075281334886e-06, 'epoch': 0.78}
79%|ββββββββ | 22500/28347 [33:04<08:03, 12.08it/s]{'loss': 0.6689, 'learning_rate': 4.125304265001588e-06, 'epoch': 0.79}
81%|ββββββββ | 23000/28347 [33:46<07:14, 12.31it/s]{'loss': 0.5955, 'learning_rate': 3.7725332486682897e-06, 'epoch': 0.81}
83%|βββββββββ | 23500/28347 [34:27<06:42, 12.05it/s]{'loss': 0.7265, 'learning_rate': 3.4197622323349914e-06, 'epoch': 0.83}
85%|βββββββββ | 24000/28347 [35:09<06:01, 12.03it/s]{'loss': 0.6603, 'learning_rate': 3.0669912160016936e-06, 'epoch': 0.85}
86%|βββββββββ | 24500/28347 [35:50<05:11, 12.36it/s]{'loss': 0.5817, 'learning_rate': 2.7142201996683953e-06, 'epoch': 0.86}
88%|βββββββββ | 25000/28347 [36:32<04:34, 12.20it/s]{'loss': 0.6164, 'learning_rate': 2.3614491833350974e-06, 'epoch': 0.88}
90%|βββββββββ | 25500/28347 [37:13<03:57, 11.98it/s]{'loss': 0.6458, 'learning_rate': 2.0086781670017996e-06, 'epoch': 0.9}
92%|ββββββββββ| 26000/28347 [37:55<03:19, 11.77it/s]{'loss': 0.6093, 'learning_rate': 1.6559071506685013e-06, 'epoch': 0.92}
93%|ββββββββββ| 26500/28347 [38:36<02:29, 12.33it/s]{'loss': 0.6713, 'learning_rate': 1.3031361343352032e-06, 'epoch': 0.93}
95%|ββββββββββ| 27000/28347 [39:18<01:52, 11.97it/s]{'loss': 0.6891, 'learning_rate': 9.503651180019051e-07, 'epoch': 0.95}
97%|ββββββββββ| 27500/28347 [39:59<01:09, 12.19it/s]{'loss': 0.7025, 'learning_rate': 5.975941016686069e-07, 'epoch': 0.97}
99%|ββββββββββ| 28000/28347 [40:43<00:29, 11.58it/s]{'loss': 0.6524, 'learning_rate': 2.4482308533530886e-07, 'epoch': 0.99}
100%|ββββββββββ| 28347/28347 [41:11<00:00, 11.47it/s]
{'train_runtime': 2471.9347, 'train_samples_per_second': 11.468, 'epoch': 1.0}
35%|ββββ | 2276/6574 [01:49<06:12, 11.55it/s]Traceback (most recent call last):
File "/home/irfan/PycharmProjects/GecPytorch/token_classification.py", line 62, in <module>
trainer.evaluate()
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer.py", line 1764, in evaluate
metric_key_prefix=metric_key_prefix,
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer.py", line 1900, in prediction_loop
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 98, in nested_concat
return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in torch_pad_and_concatenate
result = tensor1.new_full(new_shape, padding_index)
RuntimeError: CUDA out of memory. Tried to allocate 2.12 GiB (GPU 0; 7.80 GiB total capacity; 3.81 GiB already allocated; 2.18 GiB free; 3.97 GiB reserved in total by PyTorch)
35%|ββββ | 2276/6574 [01:49<03:26, 20.77it/s]
```
I'm using Python 3.6, GPU RTX 2060 super, OS Ubuntu 18.04
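A sketch of the fix suggested in the comments below — `eval_accumulation_steps` moves the accumulated logits to the CPU every n evaluation steps so they stop piling up on the GPU (20 is an arbitrary value below the step where the OOM occurs):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    "test_trainer",
    per_device_eval_batch_size=8,
    eval_accumulation_steps=20,  # offload predictions to CPU every 20 eval steps
)
```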
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11848/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/11848/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11847/comments | https://api.github.com/repos/huggingface/transformers/issues/11847/events | https://github.com/huggingface/transformers/issues/11847 | 899,724,165 | MDU6SXNzdWU4OTk3MjQxNjU= | 11,847 | Request addition of 'GPT2ForwardBackward' models | {
"login": "GenTxt",
"id": 22547261,
"node_id": "MDQ6VXNlcjIyNTQ3MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/22547261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GenTxt",
"html_url": "https://github.com/GenTxt",
"followers_url": "https://api.github.com/users/GenTxt/followers",
"following_url": "https://api.github.com/users/GenTxt/following{/other_user}",
"gists_url": "https://api.github.com/users/GenTxt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GenTxt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GenTxt/subscriptions",
"organizations_url": "https://api.github.com/users/GenTxt/orgs",
"repos_url": "https://api.github.com/users/GenTxt/repos",
"events_url": "https://api.github.com/users/GenTxt/events{/privacy}",
"received_events_url": "https://api.github.com/users/GenTxt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,621 | 1,621 | null | NONE | null | # π Request addition of 'GPT2ForwardBackward' models
## Model description
Code for running forward and backward versions of GPT-2 XL. These were trained for the paper:
**Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models**; Peter West, Ximing Lu, Ari Holtzman, Chandra Bhagavatula, Jena Hwang, and Yejin Choi; ACL (2021)
https://arxiv.org/abs/2010.08566
## Open source status
* [ X] the model implementation is available: (https://github.com/peterwestuw/GPT2ForwardBackward)
* [ X] the model weights are available: (same link as above)
* [ X] who are the authors: (see the arXiv credits above)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11847/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11846/comments | https://api.github.com/repos/huggingface/transformers/issues/11846/events | https://github.com/huggingface/transformers/pull/11846 | 899,654,089 | MDExOlB1bGxSZXF1ZXN0NjUxMzI1NzI0 | 11,846 | Fix reference to XLNet | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | COLLABORATOR | null | # What does this PR do?
Fixes a reference to the XLNet page in the documentation of TrainingArguments.
Fixes #11831 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11846/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11846",
"html_url": "https://github.com/huggingface/transformers/pull/11846",
"diff_url": "https://github.com/huggingface/transformers/pull/11846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11846.patch",
"merged_at": 1621862800000
} |
https://api.github.com/repos/huggingface/transformers/issues/11845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11845/comments | https://api.github.com/repos/huggingface/transformers/issues/11845/events | https://github.com/huggingface/transformers/issues/11845 | 899,630,055 | MDU6SXNzdWU4OTk2MzAwNTU= | 11,845 | Regression in training speed since 4.4.0 | {
"login": "yaysummeriscoming",
"id": 11413145,
"node_id": "MDQ6VXNlcjExNDEzMTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/11413145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaysummeriscoming",
"html_url": "https://github.com/yaysummeriscoming",
"followers_url": "https://api.github.com/users/yaysummeriscoming/followers",
"following_url": "https://api.github.com/users/yaysummeriscoming/following{/other_user}",
"gists_url": "https://api.github.com/users/yaysummeriscoming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaysummeriscoming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaysummeriscoming/subscriptions",
"organizations_url": "https://api.github.com/users/yaysummeriscoming/orgs",
"repos_url": "https://api.github.com/users/yaysummeriscoming/repos",
"events_url": "https://api.github.com/users/yaysummeriscoming/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaysummeriscoming/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging! I used your reproducer (very nice by the way!) and a git bisect and it comes from #10937, which reworks the way memory metrics are computed. You just have to set `skip_memory_metrics=True` in your `TrainingArguments` to skip the memory metric computation and you will get your performance back.",
"Thanks, that did the trick! Turns out that setting skip_memory_metrics also improves performance ~5% on 4.4.0.\r\n\r\nCould I suggest that this parameter is enabled by default? It seems to me that this is debugging functionality, and shouldn't be enabled normally.",
"Yes, this is done by the PR mentioned above!",
"Quick comment on the already closed issue. Had to debug this issue independently, due to an older fork. \r\n\r\nIt seems that the the issue is not just slowdown when enabling memory metrics, but also, there is performance variability from run to run. \r\n\r\nOne of the signatures of the issue is that there is no performance loss or variability in backward() call (run in the Autograd C engine). The optimizer.step() had the greatest impact, followed by forward propagation. Based on those observations, the issue is suspected to be due to fast I/O ( gpu kernel calls during optimizer.step() and forward) affecting multithreading in the Python GIL. \r\n\r\nSkipping memory metrics fixes issue sufficiently. There is still a logging thread, and a progress-bar (tqdm) thread. Adding this note here as a warning that multithreading during forward or optimizer.step() might cause performance loss/variability. ",
"Ouch, hope that it didn't cost you too much time! Thanks for the further info on the problem"
] | 1,621 | 1,625 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 4.4.0/4.6.1
- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes, RTX 3090
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @patrickvonplaten
## Information
I've noticed a training speed regression between 4.4.0 and 4.6.1:
4.4.0 BS8:
{'train_runtime': 21.363, 'train_samples_per_second': 5.851, 'epoch': 1.0}
{'train_runtime': 21.6148, 'train_samples_per_second': 5.783, 'epoch': 1.0}
{'train_runtime': 21.7867, 'train_samples_per_second': 5.737, 'epoch': 1.0}
4.6.1 BS8:
{'train_runtime': 23.7011, 'train_samples_per_second': 5.274, 'epoch': 1.0}
{'train_runtime': 24.2845, 'train_samples_per_second': 5.147, 'epoch': 1.0}
{'train_runtime': 23.5801, 'train_samples_per_second': 5.301, 'epoch': 1.0}
4.4.0 BS4:
{'train_runtime': 25.4107, 'train_samples_per_second': 9.838, 'epoch': 1.0}
4.6.1 BS4:
{'train_runtime': 31.2902, 'train_samples_per_second': 7.99, 'epoch': 1.0}
I'm running the PyTorch 1.8.1 release on my RTX 3090 / Ryzen 3700X workstation.
The performance loss seems to increase with smaller batch sizes, leading me to think it's something in Trainer.
Although I found the regression with Sequence Classification, the slowdown transfers to other tasks as well.
## To reproduce
```
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
BATCH_SIZE = 4
raw_datasets = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
training_args = TrainingArguments("test_trainer", num_train_epochs=1, per_device_train_batch_size=BATCH_SIZE)
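# Note from the comments on this issue: the slowdown was traced to memory-metric
# tracking introduced after 4.4.0; passing skip_memory_metrics=True in the
# TrainingArguments call above restores (and even improves) throughput.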
trainer = Trainer(
model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset
)
trainer.train()
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11845/timeline | completed | null | null |