Dataset schema (column: type and value range):
- url: stringlengths 62-66
- repository_url: stringclasses 1 value
- labels_url: stringlengths 76-80
- comments_url: stringlengths 71-75
- events_url: stringlengths 69-73
- html_url: stringlengths 50-56
- id: int64 377M-2.15B
- node_id: stringlengths 18-32
- number: int64 1-29.2k
- title: stringlengths 1-487
- user: dict
- labels: list
- state: stringclasses 2 values
- locked: bool 2 classes
- assignee: dict
- assignees: list
- comments: list
- created_at: int64 1.54k-1.71k
- updated_at: int64 1.54k-1.71k
- closed_at: int64 1.54k-1.71k ⌀
- author_association: stringclasses 4 values
- active_lock_reason: stringclasses 2 values
- body: stringlengths 0-234k ⌀
- reactions: dict
- timeline_url: stringlengths 71-75
- state_reason: stringclasses 3 values
- draft: bool 2 classes
- pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/25322
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25322/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25322/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25322/events
|
https://github.com/huggingface/transformers/issues/25322
| 1,837,090,414 |
I_kwDOCUB6oc5tf8Zu
| 25,322 |
Explanation of the default "auto" values for DeepSpeed stage 3?
|
{
"login": "garrett361",
"id": 44747910,
"node_id": "MDQ6VXNlcjQ0NzQ3OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/44747910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garrett361",
"html_url": "https://github.com/garrett361",
"followers_url": "https://api.github.com/users/garrett361/followers",
"following_url": "https://api.github.com/users/garrett361/following{/other_user}",
"gists_url": "https://api.github.com/users/garrett361/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garrett361/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garrett361/subscriptions",
"organizations_url": "https://api.github.com/users/garrett361/orgs",
"repos_url": "https://api.github.com/users/garrett361/repos",
"events_url": "https://api.github.com/users/garrett361/events{/privacy}",
"received_events_url": "https://api.github.com/users/garrett361/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"Ah, thank you! Will ask there, I wasn't aware of the forums.",
"@garrett361,\r\n\r\n- The only real magical `auto` configs are documented here: https://huggingface.co/docs/transformers/main/en/main_classes/deepspeed#zero3-config - the document explains why those values were chosen and the know how came via oral recommendations from Deepspeed developers.\r\n- The rest `auto` values are just there to pass through `TrainingArguments` so that they match on both sides.",
"Thanks @stas00 , please ignore the similar thread I started in the forums.\r\n\r\n> the know how came via oral recommendations from Deepspeed developers\r\n\r\nAs I feared. Appreciate the confirmation, though. ",
"Please don't hesitate to file an Issue with Deepspeed to ask for documenting such essential nuances in the Deepspeed core documentation. I'm pretty sure there are other such tune up nuances that I don't think are documented anywhere on their website/repo. "
] | 1,691 | 1,692 | 1,691 |
NONE
| null |
Hi, I would like to know how the default values of the various [default DeepSpeed stage 3 parameters](https://huggingface.co/docs/transformers/main_classes/deepspeed?_sm_vck=D7sD5KLsJ4kfrtW4NKbQJqJWqH4bkFR7JkHK7fQtHk65Kst1nq7r#zero3-config) were determined when using `"auto"` fields. They seem to work quite well, but I can't find any documentation of their origins.
What experiments or computations were done to land on the `reduce_bucket_size`, `stage3_prefetch_bucket_size`, and `stage3_param_persistence_threshold` formulas below?
Maybe @stas00 ? I see that you have written most of this code. Thank you in advance!
https://github.com/huggingface/transformers/blob/fdd81aea12f06e24ab5cf5ba3c7316df3ab1a779/src/transformers/deepspeed.py#L208-L212
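For reference, a minimal sketch (not the library code) of what those `auto` formulas resolve to, based on the linked snippet and the Transformers DeepSpeed docs; the `hidden_size` value below is only an illustrative assumption:
```python
# Hedged sketch: the "auto" values resolve to simple functions of the model's
# hidden_size (read from the model config). The value here is just an example.
hidden_size = 4096

zero3_auto_defaults = {
    "reduce_bucket_size": hidden_size * hidden_size,
    "stage3_prefetch_bucket_size": int(0.9 * hidden_size * hidden_size),
    "stage3_param_persistence_threshold": 10 * hidden_size,
}
print(zero3_auto_defaults)
```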
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25322/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25321
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25321/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25321/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25321/events
|
https://github.com/huggingface/transformers/issues/25321
| 1,837,082,718 |
I_kwDOCUB6oc5tf6he
| 25,321 |
Adding GPU devices to Trainer.train
|
{
"login": "Ofir408",
"id": 33639234,
"node_id": "MDQ6VXNlcjMzNjM5MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/33639234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ofir408",
"html_url": "https://github.com/Ofir408",
"followers_url": "https://api.github.com/users/Ofir408/followers",
"following_url": "https://api.github.com/users/Ofir408/following{/other_user}",
"gists_url": "https://api.github.com/users/Ofir408/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ofir408/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ofir408/subscriptions",
"organizations_url": "https://api.github.com/users/Ofir408/orgs",
"repos_url": "https://api.github.com/users/Ofir408/repos",
"events_url": "https://api.github.com/users/Ofir408/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ofir408/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@Ofir408 thanks for raising this feature request!\r\n\r\n> This extension can be implemented by setting the environment variable CUDA_VISIBLE_DEVICES appropriately before the training process begins.\r\n\r\nIt's already possible to set the GPUs used during training with the `CUDA_VISIBLE_DEVICES` argument. See the docs here: https://huggingface.co/docs/transformers/main_classes/trainer#specific-gpus-selection\r\n ",
"@amyeroberts Thank you for your answer. I know it's possible, but I think it will be more convenient if we add this parameter to the `train` method of `Trainer`. What do you think?",
"Will verify today if that's even possible after spawning with Accelerate. If not, it's a limit of python and we can't support that :)",
"Yes, this isn't really going to be possible, it has to be set before `torchrun` or `accelerate launch` is performed, otherwise nothing will really happen. See below where I tried setting `CUDA_VISIBLE_DEVICES` before spinning up torch distributed via `Accelerate`'s `PartialState`:\r\n\r\n```python\r\nimport os\r\nfrom accelerate import PartialState\r\n\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0,1\"\r\nstate = PartialState()\r\nprint(state)\r\n```\r\n\r\nRunning via:\r\n```bash\r\ntorchrun --nproc_per_node 4 test.py\r\n```\r\n\r\nOutput:\r\n```python\r\nDistributed environment: MULTI_GPU Backend: nccl\r\nNum processes: 4\r\nProcess index: 1\r\nLocal process index: 1\r\nDevice: cuda:1\r\n\r\nDistributed environment: MULTI_GPU Backend: nccl\r\nNum processes: 4\r\nProcess index: 3\r\nLocal process index: 3\r\nDevice: cuda:3\r\n\r\nDistributed environment: MULTI_GPU Backend: nccl\r\nNum processes: 4\r\nProcess index: 2\r\nLocal process index: 2\r\nDevice: cuda:2\r\n\r\nDistributed environment: MULTI_GPU Backend: nccl\r\nNum processes: 4\r\nProcess index: 0\r\nLocal process index: 0\r\nDevice: cuda:0\r\n```\r\n\r\nSo you must set it either when running `torchrun`, or when doing `accelerate config` it will prompt and ask if there are device ID's you'd like to run",
"OK, thank you very much for checking it"
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
### Feature request
It would be helpful to extend the `train` method of the `Trainer` class with additional parameters to specify the GPU devices we want to use during training. This extension can be implemented by setting the environment variable ``CUDA_VISIBLE_DEVICES`` appropriately before the training process begins.
### Motivation
This would give users greater flexibility in specifying the GPUs or devices they want to use during model training. This enhancement simplifies the process of training models on specific hardware configurations, making it more accessible and convenient for users to leverage available resources efficiently.
### Your contribution
I can submit a PR if you prefer.
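For illustration only (not part of the original request): a minimal sketch of the existing way to restrict visible GPUs in a single-process script. For `torchrun`/`accelerate launch` the variable has to be set on the launcher command line instead, as discussed in the comments above.
```python
# Illustrative sketch: restrict the GPUs visible to PyTorch before CUDA is
# initialized. This works for a single-process script; distributed launchers
# need CUDA_VISIBLE_DEVICES set before torchrun / accelerate launch is invoked.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose only GPUs 0 and 1

import torch

print(torch.cuda.device_count())  # reports 2 on a machine with >= 2 GPUs
```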
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25321/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25320
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25320/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25320/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25320/events
|
https://github.com/huggingface/transformers/pull/25320
| 1,836,902,132 |
PR_kwDOCUB6oc5XNLcd
| 25,320 |
[MusicGen] Add streamer to generate
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review @gante! We now yield successive audio chunks as suggested",
"Ready for core-maintainer review! What are your thoughts about adding a streamer example to the docs @ArthurZucker @gante? The code is quite involved, so I was thinking that maybe this is better as a standalone gradio example\r\n\r\n<details>\r\n\r\n<summary> Streamer code: </summary>\r\n\r\n```python\r\nfrom queue import Queue\r\nfrom threading import Thread\r\nfrom typing import Optional\r\n\r\nimport numpy as np\r\nimport torch\r\n\r\nfrom transformers import MusicgenForConditionalGeneration, MusicgenProcessor, set_seed\r\nfrom transformers.generation.streamers import BaseStreamer\r\n\r\nimport gradio as gr\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\n\r\nmodel = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\")\r\nprocessor = MusicgenProcessor.from_pretrained(\"facebook/musicgen-small\")\r\n\r\nif device == \"cuda:0\":\r\n model.to(device).half();\r\n\r\nclass MusicgenStreamer(BaseStreamer):\r\n def __init__(\r\n self,\r\n model: MusicgenForConditionalGeneration,\r\n device: Optional[str] = None,\r\n play_steps: Optional[int] = 10,\r\n stride: Optional[int] = None,\r\n timeout: Optional[float] = None,\r\n ):\r\n \"\"\"\r\n Streamer that stores playback-ready audio in a queue, to be used by a downstream application as an iterator. This is\r\n useful for applications that benefit from acessing the generated audio in a non-blocking way (e.g. in an interactive\r\n Gradio demo).\r\n\r\n Parameters:\r\n model (`MusicgenForConditionalGeneration`):\r\n The MusicGen model used to generate the audio waveform.\r\n device (`str`, *optional*):\r\n The torch device on which to run the computation. If `None`, will default to the device of the model.\r\n play_steps (`int`, *optional*, defaults to 10):\r\n The number of generation steps with which to return the generated audio array. Using fewer steps will \r\n mean the first chunk is ready faster, but will require more codec decoding steps overall. This value \r\n should be tuned to your device and latency requirements.\r\n stride (`int`, *optional*):\r\n The window (stride) between adjacent audio samples. Using a stride between adjacent audio samples reduces\r\n the hard boundary between them, giving smoother playback. If `None`, will default to a value equivalent to \r\n play_steps // 6 in the audio space.\r\n timeout (`int`, *optional*):\r\n The timeout for the audio queue. If `None`, the queue will block indefinitely. 
Useful to handle exceptions\r\n in `.generate()`, when it is called in a separate thread.\r\n \"\"\"\r\n self.decoder = model.decoder\r\n self.audio_encoder = model.audio_encoder\r\n self.generation_config = model.generation_config\r\n self.device = device if device is not None else model.device\r\n\r\n # variables used in the streaming process\r\n self.play_steps = play_steps\r\n if stride is not None:\r\n self.stride = stride\r\n else:\r\n hop_length = np.prod(self.audio_encoder.config.upsampling_ratios)\r\n self.stride = hop_length * (play_steps - self.decoder.num_codebooks) // 6\r\n self.token_cache = None\r\n self.to_yield = 0\r\n\r\n # varibles used in the thread process\r\n self.audio_queue = Queue()\r\n self.stop_signal = None\r\n self.timeout = timeout\r\n\r\n def apply_delay_pattern_mask(self, input_ids):\r\n # build the delay pattern mask for offsetting each codebook prediction by 1 (this behaviour is specific to MusicGen)\r\n _, decoder_delay_pattern_mask = self.decoder.build_delay_pattern_mask(\r\n input_ids[:, :1],\r\n pad_token_id=self.generation_config.decoder_start_token_id,\r\n max_length=input_ids.shape[-1],\r\n )\r\n # apply the pattern mask to the input ids\r\n input_ids = self.decoder.apply_delay_pattern_mask(input_ids, decoder_delay_pattern_mask)\r\n\r\n # revert the pattern delay mask by filtering the pad token id\r\n input_ids = input_ids[input_ids != self.generation_config.pad_token_id].reshape(\r\n 1, self.decoder.num_codebooks, -1\r\n )\r\n\r\n # append the frame dimension back to the audio codes\r\n input_ids = input_ids[None, ...]\r\n\r\n # send the input_ids to the correct device\r\n input_ids = input_ids.to(self.audio_encoder.device)\r\n\r\n output_values = self.audio_encoder.decode(\r\n input_ids,\r\n audio_scales=[None],\r\n )\r\n audio_values = output_values.audio_values[0, 0]\r\n return audio_values.cpu().float().numpy()\r\n\r\n def put(self, value):\r\n batch_size = value.shape[0] // self.decoder.num_codebooks\r\n if batch_size > 1:\r\n raise ValueError(\"MusicgenStreamer only supports batch size 1\")\r\n\r\n if self.token_cache is None:\r\n self.token_cache = value\r\n else:\r\n self.token_cache = torch.concatenate([self.token_cache, value[:, None]], dim=-1)\r\n\r\n if self.token_cache.shape[-1] % self.play_steps == 0:\r\n audio_values = self.apply_delay_pattern_mask(self.token_cache)\r\n self.on_finalized_audio(audio_values[self.to_yield : -self.stride])\r\n self.to_yield += len(audio_values) - self.to_yield - self.stride\r\n\r\n def end(self):\r\n \"\"\"Flushes any remaining cache and appends the stop symbol.\"\"\"\r\n if self.token_cache is not None:\r\n audio_values = self.apply_delay_pattern_mask(self.token_cache)\r\n else:\r\n audio_values = np.zeros(self.to_yield)\r\n\r\n self.on_finalized_audio(audio_values[self.to_yield :], stream_end=True)\r\n\r\n def on_finalized_audio(self, audio: np.ndarray, stream_end: bool = False):\r\n \"\"\"Put the new audio in the queue. 
If the stream is ending, also put a stop signal in the queue.\"\"\"\r\n self.audio_queue.put(audio, timeout=self.timeout)\r\n if stream_end:\r\n self.audio_queue.put(self.stop_signal, timeout=self.timeout)\r\n\r\n def __iter__(self):\r\n return self\r\n\r\n def __next__(self):\r\n value = self.audio_queue.get(timeout=self.timeout)\r\n if not isinstance(value, np.ndarray) and value == self.stop_signal:\r\n raise StopIteration()\r\n else:\r\n return value\r\n\r\nsampling_rate = model.audio_encoder.config.sampling_rate\r\nframe_rate = model.audio_encoder.config.frame_rate\r\n\r\ndef generate_audio(text_prompt, audio_length_in_s=10.0, play_steps_in_s=2.0):\r\n inputs = processor(\r\n text=text_prompt,\r\n padding=True,\r\n return_tensors=\"pt\",\r\n )\r\n\r\n max_new_tokens = int(frame_rate * audio_length_in_s)\r\n play_steps = int(frame_rate * play_steps_in_s)\r\n\r\n streamer = MusicgenStreamer(model, device=device, play_steps=play_steps)\r\n\r\n generation_kwargs = dict(\r\n **inputs.to(device),\r\n streamer=streamer,\r\n max_new_tokens=max_new_tokens,\r\n )\r\n thread = Thread(target=model.generate, kwargs=generation_kwargs)\r\n thread.start()\r\n\r\n set_seed(0)\r\n for new_audio in streamer:\r\n yield (sampling_rate, new_audio)\r\n\r\ngenerator = generate_audio(\"Techno music with euphoric melodies\")\r\n\r\nfor chunk in generator:\r\n yield (sampling_rate, chunk)\r\n```\r\n\r\n</details>\r\n",
"Gently pinging @ArthurZucker for a review!",
"This is great @sanchit-gandhi, do you think it's possible to showcase the feature in a Space ?",
"Merging as the test showed that the streamer worked, and we'll showcase this directly in a gradio demo once streaming outputs are confirmed as working!"
] | 1,691 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds the `streamer` to MusicGen's generate, along with an example test for returning chunks of numpy audio arrays _on-the-fly_ as they are generated.
Facilitates using MusicGen with streaming mode as per the Gradio update: https://github.com/gradio-app/gradio/pull/5077
cc @Vaibhavs10 @ylacombe @aliabid94 @abidlabs
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25320/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25320/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25320",
"html_url": "https://github.com/huggingface/transformers/pull/25320",
"diff_url": "https://github.com/huggingface/transformers/pull/25320.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25320.patch",
"merged_at": 1694703549000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25319
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25319/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25319/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25319/events
|
https://github.com/huggingface/transformers/pull/25319
| 1,836,692,485 |
PR_kwDOCUB6oc5XMdK4
| 25,319 |
Document toc check and doctest check scripts
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
This PR continues to document the scripts used in our quality tooling, covering the one that checks the table of contents and the one that checks the doctest list. For the second one, I added the option to auto-fix (as for everything else).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25319/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25319",
"html_url": "https://github.com/huggingface/transformers/pull/25319",
"diff_url": "https://github.com/huggingface/transformers/pull/25319.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25319.patch",
"merged_at": 1691159044000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25318
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25318/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25318/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25318/events
|
https://github.com/huggingface/transformers/pull/25318
| 1,836,676,011 |
PR_kwDOCUB6oc5XMZmu
| 25,318 |
Load state in else
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger good for another look here!"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
If we just want to load the most recent checkpoint by passing `resume_from_checkpoint`, we need to actually load it in.
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/25269
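For context, a minimal sketch of the user-facing call path this touches (assuming an already-constructed `Trainer`; not code from the PR itself):
```python
# Sketch: resuming training from the most recent checkpoint in output_dir by
# passing a boolean, which is the code path this PR fixes.
trainer.train(resume_from_checkpoint=True)  # picks up the latest checkpoint-* dir

# Or resume from an explicit (illustrative) checkpoint directory:
trainer.train(resume_from_checkpoint="output_dir/checkpoint-500")
```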
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25318/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25318",
"html_url": "https://github.com/huggingface/transformers/pull/25318",
"diff_url": "https://github.com/huggingface/transformers/pull/25318.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25318.patch",
"merged_at": 1691487661000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25317
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25317/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25317/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25317/events
|
https://github.com/huggingface/transformers/pull/25317
| 1,836,655,311 |
PR_kwDOCUB6oc5XMVIO
| 25,317 |
add docstring examples to Encoder repetition penalty logits processor
|
{
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@gante , I will test the examples and make suitable changes in the files!",
"@gante Will you suggest something for this PR?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25317). All of your documentation changes will be reflected on that endpoint.",
"Hi @rajveer43 👋 \r\n\r\nThis logits processor has to be applied to an encoder-decoder model, like T5. I'd like to ask for a very short example: a 20 line example would be perfect :)\r\n\r\nYou would need to find a prompt that results in an output with repetition. Then, applying this logits processor, the resulting generation should not repeat as much",
"> Hi @rajveer43 👋\r\n> \r\n> This logits processor has to be applied to an encoder-decoder model, like T5. I'd like to ask for a very short example: a 20 line example would be perfect :)\r\n> \r\n> You would need to find a prompt that results in an output with repetition. Then, applying this logits processor, the resulting generation should not repeat as much\r\n\r\nSure, gotcha. and I will make sure it works perfectly. also will try to incorporate more examples .",
"Hi @rajveer43 👋 \r\n\r\nAs per [this comment](https://github.com/huggingface/transformers/issues/24783#issuecomment-1693225365), I will no longer accept this PR. Thank you for participating, and my apologies for the inconvenience :)",
"Sure"
] | 1,691 | 1,695 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Related issue: #24783
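For illustration, a rough sketch of the kind of short encoder-decoder example the reviewers asked for, using the `encoder_repetition_penalty` generation argument (which instantiates this logits processor internally); the model choice and penalty value are assumptions, not code from this PR:
```python
# Hedged sketch: bias generation back toward tokens from the encoder input.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = "summarize: studies have shown that owning a dog is good for you"
inputs = tokenizer(text, return_tensors="pt")

# Values above 1.0 penalize tokens that are not in the original input, which is
# what EncoderRepetitionPenaltyLogitsProcessor implements.
outputs = model.generate(**inputs, max_new_tokens=32, encoder_repetition_penalty=2.0)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```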
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25317/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25317",
"html_url": "https://github.com/huggingface/transformers/pull/25317",
"diff_url": "https://github.com/huggingface/transformers/pull/25317.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25317.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25316
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25316/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25316/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25316/events
|
https://github.com/huggingface/transformers/issues/25316
| 1,836,579,210 |
I_kwDOCUB6oc5td_mK
| 25,316 |
PreTrainedTokenizerFast converted from SentencePiece Unigram behaviour difference
|
{
"login": "meliksahturker",
"id": 67103746,
"node_id": "MDQ6VXNlcjY3MTAzNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/67103746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meliksahturker",
"html_url": "https://github.com/meliksahturker",
"followers_url": "https://api.github.com/users/meliksahturker/followers",
"following_url": "https://api.github.com/users/meliksahturker/following{/other_user}",
"gists_url": "https://api.github.com/users/meliksahturker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meliksahturker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meliksahturker/subscriptions",
"organizations_url": "https://api.github.com/users/meliksahturker/orgs",
"repos_url": "https://api.github.com/users/meliksahturker/repos",
"events_url": "https://api.github.com/users/meliksahturker/events{/privacy}",
"received_events_url": "https://api.github.com/users/meliksahturker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hey @meliksahturker sorry I did not have time to look at this. I ll have a look. \r\ncc @Narsil I think you already solved something like this! ",
"You can check this out: https://github.com/huggingface/tokenizers/pull/401\r\n\r\nBasically 100% parity is not possible. `...` -> `..` + `.` vs `.` + `..` is strictly equivalently OK from unigram algorithm perspective. Sentencepiece is not using stable sorts there so their own order will depend on the rest of the string.\r\n`tokenizers` also uses `f64` everywhere, where sentencepiece uses a mix of `f32` and `f64` leading to also float imprecision and tiny difference in ordering.\r\n\r\nIf you check the linked PR you'll sentencepiece is not even consistent with itself in the same string.\r\n\r\nIf you want 100% the same tokens, you have to use sentencepiece all the way. However the differences shouldn't be that significant in the grand scheme of things.\r\n",
"> You can check this out: [huggingface/tokenizers#401](https://github.com/huggingface/tokenizers/pull/401)\r\n> \r\n> Basically 100% parity is not possible. `...` -> `..` + `.` vs `.` + `..` is strictly equivalently OK from unigram algorithm perspective. Sentencepiece is not using stable sorts there so their own order will depend on the rest of the string. `tokenizers` also uses `f64` everywhere, where sentencepiece uses a mix of `f32` and `f64` leading to also float imprecision and tiny difference in ordering.\r\n> \r\n> If you check the linked PR you'll sentencepiece is not even consistent with itself in the same string.\r\n> \r\n> If you want 100% the same tokens, you have to use sentencepiece all the way. However the differences shouldn't be that significant in the grand scheme of things.\r\n\r\nHow about the first case that occurs for strings that end with \"\\n\", \"\\ufeff\" or \"�\" ?",
"Might be unigram bytefallback not being set in the conversion script. It was part of the latest release only ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,697 | 1,697 |
NONE
| null |
### System Info
transformers version: 4.28.1 (tested on 4.31.0 as well)
platform: windows
python version: 3.8.10
We have converted a pre-trained SentencePiece unigram tokenizer to PreTrainedTokenizerFast, setting vocabulary, special tokens and so on as below:
```
import sentencepiece as spm
from transformers import PreTrainedTokenizerFast, convert_slow_tokenizer
spm_tokenizer = spm.SentencePieceProcessor('SentencePiece_32k_Tokenizer.model')
spm_tokenizer.vocab_file = 'SentencePiece_32k_Tokenizer.model'
spm_converter = convert_slow_tokenizer.SpmConverter(spm_tokenizer)
converted = spm_converter.converted()
converted.save('converted.json')
tok = PreTrainedTokenizerFast.from_pretrained(pretrained_model_name_or_path='converted.json', clean_up_tokenization_spaces=False, pad_token='<PAD>', unk_token='<UNK>', bos_token='<BOS>', eos_token='<EOS>', mask_token='<MASK>', model_max_length=1024, padding_side='right', truncation_side='right')
tok.save_pretrained('ConvertedTokenizer')
```
You can download the original sentencepiece tokenizer from [here](https://vnlp-model-weights.s3.eu-west-1.amazonaws.com/SentencePiece_32k_Tokenizer.model) to reproduce.
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Then along with the original sentencepiece tokenizer (for comparison), we can load and use it as below:
```
spm_tokenizer = spm.SentencePieceProcessor('SentencePiece_32k_Tokenizer.model')
conv_tokenizer = PreTrainedTokenizerFast.from_pretrained('ConvertedTokenizer')
```
The converted tokenizer produces the exact same result as the original SentencePiece tokenizer in 99.8% of cases.
For the remaining 0.2% of cases, the differences come from two classes:
1) Strings that end with "\n", "\ufeff" or "�"
```
t = "some string\ufeff"
print(spm_tokenizer.encode(t))
print(conv_tokenizer.encode(t))
```
produces
```
[2827, 167, 7285, 8181]
[2827, 167, 7285, 8181, 9]
```
Where 9 corresponds to the empty string.
Replacing "\ufeff" with "\n" or "�" produces the same result.
2) Strings that contain repetitive characters
```
t = "eeeee"
print(spm_tokenizer.encode(t))
print(conv_tokenizer.encode(t))
```
produces
```
[152, 23128, 22]
[152, 22, 23128]
```
```
t = "Ürünün Videosu Detaylar Bölümünde mevcuttur...."
print(spm_tokenizer.encode(t))
print(conv_tokenizer.encode(t))
```
produces
```
[23708, 3965, 424, 20350, 18, 26330, 3288, 63, 4]
[23708, 3965, 424, 20350, 18, 26330, 3288, 4, 63]
```
However, in most cases this happens only if the run of repeated characters is part of a longer string. Compare the example above with the one below:
```
t = "Bölümünde mevcuttur...."
print(spm_tokenizer.encode(t))
print(conv_tokenizer.encode(t))
```
produces
```
[26330, 3288, 4, 63]
[26330, 3288, 4, 63]
```
### Expected behavior
Expected behaviour is that the two tokenizers should produce the same tokens no matter what.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25316/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25315
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25315/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25315/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25315/events
|
https://github.com/huggingface/transformers/pull/25315
| 1,836,575,303 |
PR_kwDOCUB6oc5XMDip
| 25,315 |
Give more memory in test_disk_offload
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
Makes more memory available in `disk_offload_test` by using the second split (0.7) instead of the first (0.5). I haven't tried all of them, but it seems to fix all the ones I tried. We will know more at the next GPU CI run.
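Roughly, the change amounts to something like the following sketch; the helper call and names are assumptions about the test setup, not the actual diff:
```python
# Hedged sketch: allot a fraction of the full model size as max_memory so fewer
# weights spill to disk. `model` is whichever test model is being offloaded.
from accelerate.utils import compute_module_sizes

model_size = compute_module_sizes(model)[""]  # total size in bytes; "" is the whole model

max_size = int(0.7 * model_size)  # previously the first split (0.5) was used
max_memory = {0: max_size, "cpu": max_size}
```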
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25315/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25315",
"html_url": "https://github.com/huggingface/transformers/pull/25315",
"diff_url": "https://github.com/huggingface/transformers/pull/25315.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25315.patch",
"merged_at": 1691151031000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25314
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25314/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25314/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25314/events
|
https://github.com/huggingface/transformers/issues/25314
| 1,836,478,015 |
I_kwDOCUB6oc5tdm4_
| 25,314 |
Reducing CPU usage during decoding
|
{
"login": "bloodraven66",
"id": 23132495,
"node_id": "MDQ6VXNlcjIzMTMyNDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23132495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bloodraven66",
"html_url": "https://github.com/bloodraven66",
"followers_url": "https://api.github.com/users/bloodraven66/followers",
"following_url": "https://api.github.com/users/bloodraven66/following{/other_user}",
"gists_url": "https://api.github.com/users/bloodraven66/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bloodraven66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bloodraven66/subscriptions",
"organizations_url": "https://api.github.com/users/bloodraven66/orgs",
"repos_url": "https://api.github.com/users/bloodraven66/repos",
"events_url": "https://api.github.com/users/bloodraven66/events{/privacy}",
"received_events_url": "https://api.github.com/users/bloodraven66/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @bloodraven66, thanks for raising this issue. \r\n\r\nFor us to be able to help you'll need to provide some more information and context: \r\n* How and where is `num_workers=1` being set?\r\n* Could you provide a code snippet which produces the conditions seen? We should be able to copy-paste the code and run it directly. \r\n* What do you mean by 'does not reduce the CPU load'? Is it the number of processes or RAM being used which doesn't reduce? \r\n* Is the memory spike being seeing during processing? Forward pass?\r\n\r\nYou can control the max memory to use on CPU for the model when loading with `from_pretrained` with `max_memory`. You'll also need to provide an offload folder. \r\n\r\n```\r\nmodel = Wav2VecForCTC.from_pretrained(checkpoint, max_memory={\"cpu\": max_cpu_ram}, offload_folder=offload_folder, offload_state_dict=True)\r\n```\r\n\r\n",
"Indeed an end-to-end reproducible code snippet to emulate the behaviour would be most useful! Note that you can also easily reduce the amount of CPU usage through the better transformers integration: https://huggingface.co/docs/transformers/perf_infer_cpu#bettertransformer-for-faster-inference\r\n\r\nBetter transformers will enable flash attention, which gives a nice memory gain and latency improvement during inference",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-1032-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in
### Who can help?
@sanchit-gandhi I'm looking at doing CPU decoding with Wav2Vec2ForCTC, and Wav2Vec2Processor, using a fixed number of CPU cores. I've tried setting num_workers = 1 but it does not decrease the CPU load. I could not find any solution by looking at the documentation.
Are there any inbuilt args/kwargs to define CPU load? Or should I use external tooling for it?
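Not from the thread, but for reference: PyTorch itself exposes thread-count controls that are the usual way to cap CPU usage for inference, independent of `transformers`:
```python
# Hedged sketch: limit the intra-op / inter-op thread pools before the heavy
# work starts. These are plain PyTorch knobs, not transformers arguments.
import torch

torch.set_num_threads(1)          # threads used inside ops (matmul, conv, ...)
torch.set_num_interop_threads(1)  # threads used across independent ops
```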
### Information
- [X] The official example scripts
### Reproduction
```
processor = Wav2Vec2Processor.from_pretrained(wav2vec_hf_key, repo_type="model")
model = Wav2Vec2ForCTC.from_pretrained(wav2vec_hf_key)
audio_input, sample_rate = sf.read(wav_file)
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
```
### Expected behavior
I'm hoping that there is a way to control CPU load while decoding with transformers. I do not want the code to access all system resources.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25314/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25313
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25313/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25313/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25313/events
|
https://github.com/huggingface/transformers/issues/25313
| 1,836,410,947 |
I_kwDOCUB6oc5tdWhD
| 25,313 |
Consumes too much memory when setting eval_accumulation_steps
|
{
"login": "SingL3",
"id": 20473466,
"node_id": "MDQ6VXNlcjIwNDczNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SingL3",
"html_url": "https://github.com/SingL3",
"followers_url": "https://api.github.com/users/SingL3/followers",
"following_url": "https://api.github.com/users/SingL3/following{/other_user}",
"gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SingL3/subscriptions",
"organizations_url": "https://api.github.com/users/SingL3/orgs",
"repos_url": "https://api.github.com/users/SingL3/repos",
"events_url": "https://api.github.com/users/SingL3/events{/privacy}",
"received_events_url": "https://api.github.com/users/SingL3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'm not sure how you want us to help without even telling us which arguments you used to launch this script or which exact model on the Hub you are using.",
"@sgugger, sorry that I didnt provide enough info\r\nLaunch script:\r\n```\r\ntorchrun --nproc_per_node=4 --master_port=30500 train.py \\\r\n --model_name_or_path <path_to_pythia_6.9b> \\\r\n --data_path ./alpaca_data.json \\\r\n --bf16 True \\\r\n --output_dir ./output_dir/ \\\r\n --num_train_epochs 3 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 8 \\\r\n --evaluation_strategy \"steps\" \\\r\n --eval_steps 1 \\\r\n --save_strategy \"steps\" \\\r\n --save_steps 10000 \\\r\n --save_total_limit 1 \\\r\n --learning_rate 2e-5 \\\r\n --weight_decay 0. \\\r\n --warmup_ratio 0.03 \\\r\n --lr_scheduler_type \"cosine\" \\\r\n --logging_steps 1 \\\r\n --fsdp \"full_shard auto_wrap\" \\\r\n --fsdp_transformer_layer_cls_to_wrap 'GPTNeoXLayer' \\\r\n --tf32 True \\\r\n --model_max_length 2048 \\\r\n --eval_accumulation_steps 1\r\n```\r\n(Need to set `use_fast=True` at [L196](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L196) and set `eval_dataset=training_dataset` at [L179](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L179))\r\nI have also using a metric func and add it to the trainer by `trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, compute_metrics=build_acc_metric_fn(), **data_module)` at [L215](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L215)\r\n```\r\ndef build_acc_metric_fn(**kwargs) -> Callable[[EvalPrediction], Dict]:\r\n\r\n def acc_metric(pred: EvalPrediction):\r\n label_ids = pred.label_ids\r\n logits = pred.predictions\r\n\r\n # Get output tokens\r\n greedy_tokens = logits.argmax(axis=-1)\r\n\r\n # Shift label and logits to match\r\n label_ids = label_ids[:, 1:]\r\n greedy_tokens = greedy_tokens[:, :-1]\r\n\r\n token_match = 0\r\n valid_len = 0\r\n seq_match = 0\r\n for i in range(label_ids.shape[0]):\r\n # Mask out ignore\r\n mask = (label_ids[i] != IGNORE_INDEX)\r\n valid_len += mask.sum()\r\n token_match += (label_ids[i][mask] == greedy_tokens[i][mask]).sum()\r\n seq_match += (label_ids[i][mask] == greedy_tokens[i][mask]).all()\r\n\r\n return {\r\n 'seq_acc': float(seq_match / label_ids.shape[0]),\r\n 'token_acc': float(token_match / valid_len)\r\n }\r\n\r\n return acc_metric\r\n```",
"You cannot use such a metric function with the Trainer without using a large amount of memory: the logits (once accumulated) have a size of your dataset length x max sequence length x vocab size, which is huge. You should do this evaluation batch by batch outside of the Trainer (using Accelerate for instance).",
"@sgugger \r\nI see. So I actually using a small eval_dataset.\r\nHowever, this didnt explain why doing that using LoRA work on single gpu but dont work using all param finetune on eight gpus."
] | 1,691 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Machine Info: 8xA100(80G), Memory 960GB
Codebase: [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
Model: Pythia-6.9b
Dataset: [c-s-ale/dolly-15k-instruction-alpaca-format](https://huggingface.co/datasets/c-s-ale/dolly-15k-instruction-alpaca-format)
The eval dataset and the training dataset are the same (the whole dataset).
Leaving `eval_accumulation_steps=None` would cause GPU OOM, so I set `eval_accumulation_steps` to a valid number (I have tried 1 and 8).
The eval process then gets stuck after evaluating about 20 samples, takes all the CPU memory, and raises a CPU memory OOM.
I have tried the above process using deepspeed and torchrun on all 8 GPUs.
However, when I tried running LoRA tuning on 1 GPU without deepspeed or torchrun under the same settings and datasets, the CPU memory never OOMs.
### Expected behavior
Running without OOM
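A hedged sketch of one way to avoid accumulating full vocab-sized logits, using the `preprocess_logits_for_metrics` hook of `Trainer`; the other names mirror the snippet quoted in the comments above and are otherwise assumptions:
```python
# Hedged sketch: shrink what gets gathered during evaluation.
from transformers import Trainer

def preprocess_logits_for_metrics(logits, labels):
    # Reduce (batch, seq_len, vocab_size) logits to (batch, seq_len) token ids
    # before they are gathered and accumulated on CPU.
    return logits.argmax(dim=-1)

trainer = Trainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    compute_metrics=build_acc_metric_fn(),  # compute_metrics then receives ids, not logits
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
    **data_module,
)
```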
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25313/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25311
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25311/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25311/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25311/events
|
https://github.com/huggingface/transformers/issues/25311
| 1,836,280,586 |
I_kwDOCUB6oc5tc2sK
| 25,311 |
Llama 2: NaN values when torch_dtype=torch.float16 and padding_side="left"
|
{
"login": "ValeKnappich",
"id": 39188710,
"node_id": "MDQ6VXNlcjM5MTg4NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/39188710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ValeKnappich",
"html_url": "https://github.com/ValeKnappich",
"followers_url": "https://api.github.com/users/ValeKnappich/followers",
"following_url": "https://api.github.com/users/ValeKnappich/following{/other_user}",
"gists_url": "https://api.github.com/users/ValeKnappich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ValeKnappich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ValeKnappich/subscriptions",
"organizations_url": "https://api.github.com/users/ValeKnappich/orgs",
"repos_url": "https://api.github.com/users/ValeKnappich/repos",
"events_url": "https://api.github.com/users/ValeKnappich/events{/privacy}",
"received_events_url": "https://api.github.com/users/ValeKnappich/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Closing as duplicate of #25065, but thanks a lot for putting effort in having a nice small reproducer! "
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
Llama 2 seems to produce NaN values when `torch_dtype=torch.float16` and `padding_side="left"`. The behavior seems very odd to me. See the MWE below.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
WITH_DTYPE_KWARG = True
LEFT_PADDING = True
model_id = "meta-llama/Llama-2-7b-hf"
if WITH_DTYPE_KWARG:
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, torch_dtype=torch.float16).cuda()
else:
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True).cuda()
if LEFT_PADDING:
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
else:
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="right")
tokenizer.pad_token = tokenizer.eos_token
inputs = [
"Short input",
"Long long long input with lots of tokens so that there is a padding",
"Short input",
"Long long long input with lots of tokens",
]
enc = tokenizer(inputs, padding=True, return_tensors="pt").to(model.device)
o = model(**enc)
assert o.logits.isnan().sum().item() == 0
```
The assertion fails only when both WITH_DTYPE_KWARG and LEFT_PADDING are true. However, there also seems to be some interaction with the input: the problem does not appear for every input.
### Expected behavior
All variants should work or a meaningful error message should be shown.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25311/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25310
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25310/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25310/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25310/events
|
https://github.com/huggingface/transformers/pull/25310
| 1,836,194,673 |
PR_kwDOCUB6oc5XKyAb
| 25,310 |
Move usage of deprecated logging.warn to logging.warning
|
{
"login": "PeterJCLaw",
"id": 336212,
"node_id": "MDQ6VXNlcjMzNjIxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/336212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterJCLaw",
"html_url": "https://github.com/PeterJCLaw",
"followers_url": "https://api.github.com/users/PeterJCLaw/followers",
"following_url": "https://api.github.com/users/PeterJCLaw/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterJCLaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterJCLaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterJCLaw/subscriptions",
"organizations_url": "https://api.github.com/users/PeterJCLaw/orgs",
"repos_url": "https://api.github.com/users/PeterJCLaw/repos",
"events_url": "https://api.github.com/users/PeterJCLaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterJCLaw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
The former spelling is deprecated and has been discouraged for a while. The latter spelling seems to be more common in this project anyway, so this change ought to be safe.
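For illustration (not part of the original description), the change is a mechanical rename of the deprecated alias to the canonical method:
```python
import logging

logger = logging.getLogger(__name__)

# Before: `warn` is a deprecated alias kept only for backwards compatibility
logger.warn("something unexpected happened")

# After: `warning` is the documented spelling
logger.warning("something unexpected happened")
```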
Does this project have testing for things like warnings? Or a developer style guide we could add a note to? (Is this worth it?)
Fixes https://github.com/huggingface/transformers/issues/25283
cc @amyeroberts as you commented on the issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25310/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25310",
"html_url": "https://github.com/huggingface/transformers/pull/25310",
"diff_url": "https://github.com/huggingface/transformers/pull/25310.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25310.patch",
"merged_at": 1691149326000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25309
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25309/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25309/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25309/events
|
https://github.com/huggingface/transformers/issues/25309
| 1,836,019,521 |
I_kwDOCUB6oc5tb29B
| 25,309 |
The code in the main README file isn't working
|
{
"login": "akintola4",
"id": 61349895,
"node_id": "MDQ6VXNlcjYxMzQ5ODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/61349895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akintola4",
"html_url": "https://github.com/akintola4",
"followers_url": "https://api.github.com/users/akintola4/followers",
"following_url": "https://api.github.com/users/akintola4/following{/other_user}",
"gists_url": "https://api.github.com/users/akintola4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akintola4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akintola4/subscriptions",
"organizations_url": "https://api.github.com/users/akintola4/orgs",
"repos_url": "https://api.github.com/users/akintola4/repos",
"events_url": "https://api.github.com/users/akintola4/events{/privacy}",
"received_events_url": "https://api.github.com/users/akintola4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It indeed performs its intended function, and the issues you mentioned appear to be merely log messages.",
"I was expecting it to display a Image like the one in the main Readme file, correct me if am mistaken. this my first time using it, am new to all this.",
"For that, you have to write a script that would display all the bounding boxes on top of the image, For reference do have a look at [this](https://huggingface.co/docs/datasets/v2.6.1/en/object_detection). Do let me know if there are more related queries ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
### System Info
It produces this error when I try to run the code for the cat and remote identification example:
No model was supplied, defaulted to facebook/detr-resnet-50 and revision 2729413 (https://huggingface.co/facebook/detr-resnet-50).
Using a pipeline without specifying a model name and revision in production is not recommended.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
The `max_size` parameter is deprecated and will be removed in v4.26. Please specify in `size['longest_edge'] instead`.
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import requests
from PIL import Image
from transformers import pipeline

# Download an image with cute cats
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
image_data = requests.get(url, stream=True).raw
image = Image.open(image_data)

# Allocate a pipeline for object detection
object_detector = pipeline('object-detection')
object_detector(image)
```
```
[{'score': 0.9982201457023621,
  'label': 'remote',
  'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
 {'score': 0.9960021376609802,
  'label': 'remote',
  'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
 {'score': 0.9954745173454285,
  'label': 'couch',
  'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
 {'score': 0.9988006353378296,
  'label': 'cat',
  'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
 {'score': 0.9986783862113953,
  'label': 'cat',
  'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
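As a side note (not in the original report), a minimal sketch of how the detections above could be drawn onto the image with Pillow; the drawing code is purely illustrative:
```python
import requests
from PIL import Image, ImageDraw
from transformers import pipeline

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
image = Image.open(requests.get(url, stream=True).raw)

object_detector = pipeline("object-detection")
detections = object_detector(image)

draw = ImageDraw.Draw(image)
for det in detections:
    box = det["box"]
    # Draw the bounding box and its label/score for each detection
    draw.rectangle((box["xmin"], box["ymin"], box["xmax"], box["ymax"]), outline="red", width=3)
    draw.text((box["xmin"], box["ymin"]), f'{det["label"]}: {det["score"]:.2f}', fill="red")

image.save("annotated_cats.png")
```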
### Expected behavior
It should identify the objects in the image, as described in the README file.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25309/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25308
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25308/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25308/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25308/events
|
https://github.com/huggingface/transformers/pull/25308
| 1,835,980,981 |
PR_kwDOCUB6oc5XKFnr
| 25,308 |
Fixed "Dynamic" issue in LlamaDynamicNTKScalingRotaryEmbedding
|
{
"login": "LetianLee",
"id": 73881739,
"node_id": "MDQ6VXNlcjczODgxNzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/73881739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LetianLee",
"html_url": "https://github.com/LetianLee",
"followers_url": "https://api.github.com/users/LetianLee/followers",
"following_url": "https://api.github.com/users/LetianLee/following{/other_user}",
"gists_url": "https://api.github.com/users/LetianLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LetianLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LetianLee/subscriptions",
"organizations_url": "https://api.github.com/users/LetianLee/orgs",
"repos_url": "https://api.github.com/users/LetianLee/repos",
"events_url": "https://api.github.com/users/LetianLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/LetianLee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker and @gante ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25308). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/25306
In "[LlamaDynamicNTKScalingRotaryEmbedding](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L147C8-L147C8)" , when the Llama model infers a long context, the cached values of `cos_cached` and `sin_cached` are updated to adapt to the longer context. This causes the issue when the model infers a shorter context again.
This PR rewrites the `forward` function in the `LlamaDynamicNTKScalingRotaryEmbedding` class. It ensures that the `_set_cos_sin_cache` function is executed as long as the input length is not equal to the cached length. Meanwhile, the `inv_freq` will not be saved when it's changed to adapt to the long context. Here is my code for this class:
```
class LlamaDynamicNTKScalingRotaryEmbedding(LlamaRotaryEmbedding):
"""LlamaRotaryEmbedding extended with Dynamic NTK scaling. Credits to the Reddit users /u/bloc97 and /u/emozilla"""
def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
self.scaling_factor = scaling_factor
super().__init__(dim, max_position_embeddings, base, device)
def _set_cos_sin_cache(self, seq_len, device, dtype):
self.max_seq_len_cached = seq_len
inv_freq = self.inv_freq.to(device)
if seq_len > self.max_position_embeddings:
base = self.base * (
(self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
) ** (self.dim / (self.dim - 2))
inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
freqs = torch.einsum("i,j->ij", t, inv_freq)
# Different from paper, but it uses a different permutation in order to obtain the same calculation
emb = torch.cat((freqs, freqs), dim=-1)
self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)
def forward(self, x, seq_len=None):
# x: [bs, num_attention_heads, seq_len, head_size]
if seq_len != self.max_seq_len_cached:
self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)
return (
self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/25306
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Hi @sgugger , would you please help me review it? Thanks!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25308/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25308",
"html_url": "https://github.com/huggingface/transformers/pull/25308",
"diff_url": "https://github.com/huggingface/transformers/pull/25308.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25308.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25307
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25307/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25307/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25307/events
|
https://github.com/huggingface/transformers/issues/25307
| 1,835,972,226 |
I_kwDOCUB6oc5tbraC
| 25,307 |
Return updated attention mask from Wav2Vec 2.0
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false | null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @gau-nernst - thanks for the great issue report. Having gone through the fariseq code, I could see this being a valuable contribution, since the original implementation does indeed return the attention mask for downstream use cases. However, I'm not entirely sure what such use cases would be? Perhaps you could provide a few examples based on your use cast?\r\n\r\nSince Wav2Vec2 is quite a used model, we'd want to be quite certain on the improvements that we'd get by changing the signature of the model output before jumping into a PR!\r\n",
"Virtually any processing to be done on `last_hidden_state` requires the updated attention mask to avoid including padded data. Some use cases\r\n- (my current use case) Extract embeddings from audio samples using various wav2vec 2.0 models. After getting `last_hidden_state`, I need to do pooling (either mean or max). Padded data should be excluded, thus the updated attention mask is required.\r\n- Add a seq2seq module on top of wav2vec 2.0 e.g. transformer, ECAPA-TDNN. Again, the updated attention mask is required.\r\n\r\nIn HF, almost (if not all) derived Wav2Vec 2.0 models (e.g. Wav2Vec2ForCTC, Wav2Vec2ForSequenceClassification, Wav2Vec2ForXVector) require calling the private method, because \"any processing to be done on `last_hidden_state` requires the updated attention mask\". Although HF provides some custom Wav2Vec2 for downstream tasks, power users like myself prefer writing our own extra modules, either for more flexibility or because we want to use new/SOTA modules that are not available in HF.\r\n\r\nRegarding changing the model signature, I have always thought that HF wraps model output in a dataclass so that you can always extend it later without breaking old code. There is no change to current fields. The breaking change would be when a tuple is returned (`return_dict=False`).",
"Thanks for the descriptive explanation @gau-nernst - I agree that this definitely does help power users like yourself, and facilitates the community building on-top of HF Wav2Vec2 models.\r\n\r\nLet's go for it and add the `attention_mask` field in a PR! Would you like to open a PR to do this?",
"@sanchit-gandhi yes, sure, I will make a PR for this and request you to review."
] | 1,691 | 1,703 | null |
CONTRIBUTOR
| null |
### Feature request
In Wav2Vec 2.0, the initial convolution layers downsample the input, so the model's attention mask no longer matches the length of the hidden states. Thus, if I want to use all Wav2Vec 2.0 outputs (`last_hidden_state`), I need access to the updated attention mask. Currently the workaround is to call the private method `._get_feature_vector_attention_mask()`. Returning this attention mask directly would make the experience much better.
Concretely, it can be an additional field in Wav2Vec2BaseModelOutput.
Additionally, data2vec-audio and HuBERT also have the same problem, since they are also based on Wav2Vec 2.0.
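For reference, a minimal sketch of the current workaround (masked mean pooling over `last_hidden_state`); the checkpoint name is only an example, and the call to the private helper is exactly the part this request aims to make unnecessary:
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

checkpoint = "facebook/wav2vec2-base-960h"  # illustrative checkpoint
extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)

# Two clips of different lengths, so padding is required
waveforms = [np.random.randn(3 * 16000), np.random.randn(5 * 16000)]
inputs = extractor(
    waveforms, sampling_rate=16000, padding=True, return_attention_mask=True, return_tensors="pt"
)

with torch.no_grad():
    out = model(**inputs)

# Current workaround: recompute the downsampled mask via the private helper
feat_mask = model._get_feature_vector_attention_mask(
    out.last_hidden_state.shape[1], inputs["attention_mask"]
)

# Masked mean pooling that excludes padded frames
mask = feat_mask.unsqueeze(-1).to(out.last_hidden_state.dtype)
embeddings = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
```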
### Motivation
Fairseq implementation returns the updated attention mask for downstream uses.
https://github.com/facebookresearch/fairseq/blob/100cd91db19bb27277a06a25eb4154c805b10189/fairseq/models/wav2vec/wav2vec2.py#L696-L700
Then their Wav2Vec 2.0 for classification implementation is more elegant.
https://github.com/facebookresearch/fairseq/blob/100cd91db19bb27277a06a25eb4154c805b10189/fairseq/models/wav2vec/wav2vec2_classification.py#L84-L93
In HF, Wav2Vec 2.0 for classification requires calling the private method.
https://github.com/huggingface/transformers/blob/641adca55832ed9c5648f54dcd8926d67d3511db/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2127
### Your contribution
I can help to submit a PR for this. It will be an additional field in Wav2Vec2BaseModelOutput. I'm not quite sure about other types of Wav2Vec2 output objects.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25307/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25306
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25306/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25306/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25306/events
|
https://github.com/huggingface/transformers/issues/25306
| 1,835,894,897 |
I_kwDOCUB6oc5tbYhx
| 25,306 |
"Dynamic" Issue in LlamaDynamicNTKScalingRotaryEmbedding - Long context inference will impact short context inference.
|
{
"login": "LetianLee",
"id": 73881739,
"node_id": "MDQ6VXNlcjczODgxNzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/73881739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LetianLee",
"html_url": "https://github.com/LetianLee",
"followers_url": "https://api.github.com/users/LetianLee/followers",
"following_url": "https://api.github.com/users/LetianLee/following{/other_user}",
"gists_url": "https://api.github.com/users/LetianLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LetianLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LetianLee/subscriptions",
"organizations_url": "https://api.github.com/users/LetianLee/orgs",
"repos_url": "https://api.github.com/users/LetianLee/repos",
"events_url": "https://api.github.com/users/LetianLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/LetianLee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante @ArthurZucker ",
"Hey! Thanks for reporting, this is a duplicate of #25104. Will link it in the PR as well",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hey! Thanks for reporting, this is a duplicate of #25104. Will link it in the PR as well\r\n\r\nNo, they're not same. I understand #25104 is about the trade off between using kv cache and rotary embed inconsistence. But when you freeze everything during generation including random seeds, same input should give same output sequence. \r\n\r\nThe dynamic ntk rotary will only recalculate if input seq is longer than cached. What if the longest sequence is predicted at first? Cached embed will never change again. PR #25308 is a correct fix without extra calculate. I think it should be merged.\r\n@gante ",
"I see. Makes sense for me @gante if you can have a look! 🤗 ",
"@i4never I agree, it is a limitation of the technique when implemented as the authors suggest. #25308 is not the correct fix either -- we should only resize the `sin` and `cos` caches down to the original size, as smaller values will likely have a negative impact.\r\n\r\nWould you like to open a PR to fix it? :)",
"> @i4never I agree, it is a limitation of the technique when implemented as the authors suggest. #25308 is not the correct fix either -- we should only resize the `sin` and `cos` caches down up to the original size, as smaller values will likely have a negative impact.\r\n> \r\n> Would you like to open a PR to fix it? :)\r\n\r\n#27033"
] | 1,691 | 1,698 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Please see my colab code:
https://colab.research.google.com/drive/1SnQQxW7WMHgSOvAwF_HIlIDrAuXZ4IKp?usp=sharing
I asked the same prompt twice, with a long-context prompt inserted in between. However, this intermediate long-context inference resulted in different answers for the same question before and after it.
### Expected behavior
Since the input length of the tested prompts is within the maximum input token capacity the model can handle, the significance of "Dynamic" lies in ensuring that the embeddings for the inputs before and after remain the same, and consequently, the output results should also be the same.
I reviewed the code of the class "[LlamaDynamicNTKScalingRotaryEmbedding](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L147C8-L147C8)" and I think that due to caching, when the model infers a long context, the cached values of `cos_cached` and `sin_cached` are updated to adapt to the longer context. This causes the issue when the model infers a shorter context again.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25306/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25305
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25305/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25305/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25305/events
|
https://github.com/huggingface/transformers/issues/25305
| 1,835,859,392 |
I_kwDOCUB6oc5tbP3A
| 25,305 |
Unable to change default cache folders despite setting environment variables
|
{
"login": "kiasar",
"id": 23178294,
"node_id": "MDQ6VXNlcjIzMTc4Mjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/23178294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiasar",
"html_url": "https://github.com/kiasar",
"followers_url": "https://api.github.com/users/kiasar/followers",
"following_url": "https://api.github.com/users/kiasar/following{/other_user}",
"gists_url": "https://api.github.com/users/kiasar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiasar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiasar/subscriptions",
"organizations_url": "https://api.github.com/users/kiasar/orgs",
"repos_url": "https://api.github.com/users/kiasar/repos",
"events_url": "https://api.github.com/users/kiasar/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiasar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @kiasar, thanks for raising this issue! \r\n\r\nCould you provide information about the hugging face libraries installed (run `transformers-cli env` in the terminal and copy-paste the output)? \r\n\r\nWhen setting the environment variables, are you running in the same python session i.e. are the `os.environ` commands in the same script? \r\n\r\nFor setting the cache, if you're just wanting to control where models and their files e.g. `config.json` are downloaded to, you only need to set one of these variables:\r\n\r\n```\r\nos.environ['TRANSFORMERS_CACHE'] = '/MyFolder/.cache/hub'\r\n```",
"Hi @amyeroberts, I hope you are doing great today.\r\nhere it is:\r\n\r\n```\r\n- `transformers` version: 4.31.0\r\n- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.1\r\n- Accelerate version: 0.21.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\n",
"Hi @kiasar, thanks for providing information about the environment. Could you also answer the other questions about how the env variables are set, and confirm if setting just `os.environ['TRANSFORMERS_CACHE'] = '/MyFolder/.cache/hub'` works for you?",
"Hi. I confirm that. It still does not work!",
"@kiasar OK, that's useful to know. Could you answer the other questions about how the environment variables are being set within the script? They will need to be set before any `transformers` imports e.g.:\r\n\r\n```python\r\nimport os\r\nimport torch\r\n\r\nos.environ['TRANSFORMERS_CACHE'] = \"/MyFolder/.cache/hub\"\r\n\r\nfrom transformers import pipeline, AutoTokenizer\r\n\r\ncheckpoint = \"google/flan-t5-small\"\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\ngenerator = pipeline(...)\r\n```\r\n\r\n\r\n\r\n\r\n",
"Thank you. Solved.\r\nI recommend others reading this to do [this ](https://stackoverflow.com/a/76748390/7339624)instead. It will be cleaner.",
"it is need to set this variables before import transformers and torch\r\nlike this\r\n\r\n```\r\nimport os\r\nPATH = '/home/user/NEW_MODEL_CACHE/'\r\nos.environ['TRANSFORMERS_CACHE'] = PATH\r\nos.environ['HF_HOME'] = PATH\r\nos.environ['HF_DATASETS_CACHE'] = PATH\r\nos.environ['TORCH_HOME'] = PATH\r\n\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM\r\nimport torch\r\n\r\n```"
] | 1,691 | 1,702 | 1,691 |
NONE
| null |
### System Info
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
GPU 4: NVIDIA GeForce RTX 2080 Ti
GPU 5: NVIDIA GeForce RTX 2080 Ti
GPU 6: NVIDIA GeForce RTX 2080 Ti
GPU 7: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1- Set the following environment variables:
```
import os
os.environ['XDG_CACHE_HOME'] = '/MyFolder/.cache'
os.environ['HF_HOME'] = '/MyFolder/.cache/huggingface'
os.environ['HF_DATASETS_CACHE'] = '/MyFolder/.cache/datasets'
os.environ['TRANSFORMERS_CACHE'] = '/MyFolder/.cache/models'
os.environ['HUGGINGFACE_HUB_CACHE'] = '/MyFolder/.cache/hub'
```
2- Try to download a model. In my case, I do this:
```
model = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text2text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
```
### Expected behavior
The caches should be saved to the custom directories specified in the environment variables.
**Actual behavior**
The caches continue to be saved to the default locations and do not use the custom directories.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25305/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25304
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25304/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25304/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25304/events
|
https://github.com/huggingface/transformers/issues/25304
| 1,835,842,238 |
I_kwDOCUB6oc5tbLq-
| 25,304 |
Tokenizer failing to encode chatml correctly
|
{
"login": "ozreact",
"id": 130388602,
"node_id": "U_kgDOB8WSeg",
"avatar_url": "https://avatars.githubusercontent.com/u/130388602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozreact",
"html_url": "https://github.com/ozreact",
"followers_url": "https://api.github.com/users/ozreact/followers",
"following_url": "https://api.github.com/users/ozreact/following{/other_user}",
"gists_url": "https://api.github.com/users/ozreact/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ozreact/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ozreact/subscriptions",
"organizations_url": "https://api.github.com/users/ozreact/orgs",
"repos_url": "https://api.github.com/users/ozreact/repos",
"events_url": "https://api.github.com/users/ozreact/events{/privacy}",
"received_events_url": "https://api.github.com/users/ozreact/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi ! Could you demonstrate this issue with a small but complete code snippet. That will help us a lot, thanks in advance.",
"Yep, I had it linked above.\n\nhttps://gist.github.com/ozreact/a4b565cd2c7fac65d6cb76c78dbdf9e2\n\nJust replace the model path.",
"OK, thank you. I thought it was a full repository 😅 ",
"You can also try #25224, should fix it (deals with extra space, unk and decoding extra space)",
"Gave #25224 a shot. The slow tokenizer still outputs spaces around special tokens. The fast tokenizer is pretty close:\r\n\r\n```\r\n# IN\r\n<|im_start|>user Hello world<|im_end|><|im_start|>assistant\r\n# OUT\r\n<|im_start|> user Hello world<|im_end|><|im_start|> assistant\r\n```\r\n\r\nLooks like it still wants to emit a space after a BOS token. I _think_ this may be expected behavior?\r\n\r\nEdit: Also seeing odd behavior with newlines:\r\n\r\n```\r\n# IN\r\n<|im_start|> user Hello world<|im_end|>\\n<|im_start|> assistant\\n\r\n# OUT\r\n<|im_start|> user Hello world<|im_end|> \\n<|im_start|> assistant\\n\r\n```",
"It seems that you are not using `spaces_between_special_tokens`, the following is what I got for `use_fast=True` \r\n\r\n```python \r\n>>> tokenizer.decode(tokenized[\"input_ids\"], spaces_between_special_tokens = False)\r\n'<|im_start|> user Hello world<|im_end|><|im_start|> assistant Hello user<|im_end|>'\r\n```\r\nSo this solves part of the space issue. This argument is set to `True` by default. ",
"`spaces_between_special_tokens` brings it closer, I get the same results as you for an example string with no newlines.\r\n\r\nThe newlines still throw it off, e.g.:\r\n\r\n```\r\n'<|im_start|>system\\nYou a AI<|im_end|>\\n<|im_start|>user\\nHi<|im_end|>\\n<|im_start|>assistant\\nHello<|im_end|>'\r\n'<|im_start|> system\\nYou a AI<|im_end|> \\n<|im_start|> user\\nHi<|im_end|> \\n<|im_start|> assistant\\nHello<|im_end|>'\r\n```\r\n\r\nobtained via:\r\n\r\n```\r\n>>> type(t)\r\n<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>\r\n>>> cml\r\n'<|im_start|>system\\nYou a AI<|im_end|>\\n<|im_start|>user\\nHi<|im_end|>\\n<|im_start|>assistant\\nHello<|im_end|>'\r\n>>> tokenized = t(cml, add_special_tokens=False)\r\n>>> tokenized\r\n{'input_ids': [32000, 1788, 13, 3492, 263, 319, 29902, 32001, 29871, 13, 32000, 1404, 13, 18567, 32001, 29871, 13, 32000, 20255, 13, 10994, 32001], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n>>> t.decode(tokenized[\"input_ids\"], spaces_between_special_tokens = False)\r\n'<|im_start|> system\\nYou a AI<|im_end|> \\n<|im_start|> user\\nHi<|im_end|> \\n<|im_start|> assistant\\nHello<|im_end|>'\r\n```\r\n\r\nI've somewhat solved this by manipulating the output. These two `replace` calls result in the encode/decode for all my test cases to pass (absent `spaces_between_special_tokens`):\r\n\r\n```\r\ndef decode_to_str(self, tokenized: BatchEncoding) -> str:\r\n ret = self.tokenizer.decode(tokenized[\"input_ids\"])\r\n ret = ret.replace(\"<|im_start|> \", \"<|im_start|>\")\r\n ret = ret.replace(\"<|im_end|> \\n\", \"<|im_end|>\\n\")\r\n return ret\r\n```\r\n\r\nIt would ofc still be ideal for a somewhat-naive encode to == decode.",
"Not sure if decode is also very important when you are training you compare the genereated ids.\r\nBut yes decoding adds spaces. Will try to adresse this 😉 ",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nthis issue still needs to be addressed ",
"This was fixed on main for `transformers`: with `meta-llama/Llama-2-7b-hf` and `use_fast=False, legacy=False` I am getting the correct decoded output.\r\nFor `tokenizers`, it's a different issue that is gonna take more time to get fixed",
"> This was fixed on main for `transformers`: with `meta-llama/Llama-2-7b-hf` and `use_fast=False, legacy=False` I am getting the correct decoded output. For `tokenizers`, it's a different issue that is gonna take more time to get fixed\r\n\r\nSo to clarify, we must use_fast=False and legacy=False for it to work?",
"For decoding yes, for encoding fast and slow should give the same results ",
"I also recommend you to use the new ChatTemplating introduced in #25323",
"@winglian please see for axolotl ^",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"BTW we'll add the `add_prefix_space` to the tokenizer config and encode call to easily de-activate this behavior.\r\nOther fix is also here: #26678 it will no longer add random spaces if you tokenize a full sequence. "
] | 1,691 | 1,702 | 1,698 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.14.0-284.18.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Note: also tested and broken on:
- 641adca
- 4.30.2
- 4.30.1
- 4.30.0
- 4.29.2
- 4.29.1
- 4.29.0
- 4.28.1
- 4.28.0
- 4.27.4
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm attempting to finetune Llama2 with a ChatML format. No matter how I approach it, it seems to be failing to encode/decode correctly. I see multiple issues and PRs that are related, but this specific format seems to be hitting all of them with none of the workarounds being effective.
A repro is available here:
https://gist.github.com/ozreact/a4b565cd2c7fac65d6cb76c78dbdf9e2
#24565 recommends setting `legacy=false`, and further says that this only addresses a subset of issues with the slow tokenizer only. It also mentions that `decode` isn't fixed, so validating that the encoding step is working is fiddly.
This format, when newlines are used, is also impacted by #21120.
#25073 also breaks this.
#25176 recommends setting `legacy=True` to fix an invalid unk token that effectively over-writes a final token in a partial ChatML response, but this conflicts with attempting to fix the issues in #24565.
### Expected behavior
ChatML instruction format should 'just work', tokenize correctly, and decode correctly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25304/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25304/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25303
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25303/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25303/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25303/events
|
https://github.com/huggingface/transformers/issues/25303
| 1,835,759,416 |
I_kwDOCUB6oc5ta3c4
| 25,303 |
loss reduction for `Llama2ForCausalLM.forward`
|
{
"login": "ain-soph",
"id": 13214530,
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ain-soph",
"html_url": "https://github.com/ain-soph",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ain-soph \r\n\r\nThis is not supported. But you can choose `not to pass labels` to the `model forward`, but compute the loss in you own code 🤗 .",
"@ydshieh Thanks for your reply. Yeah, so my current workaround is to copy the implementation of `LlamaForCausalLM.forward` and calculate in my own, and it works.\r\n\r\nFeel free close the issue if maintainers think it's not appropriate to pass `reduction` argument to `forward`.",
"Glad it works!"
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
### Feature request
In the `forward` method, the model outputs `loss` when `labels` are provided, but the `loss` shape is always `(1,)` because CrossEntropy uses `reduction='mean'`. I wonder if I could pass `reduction='none'` and get a `(batch_size,)`-shaped loss tensor.
https://github.com/huggingface/transformers/blob/641adca55832ed9c5648f54dcd8926d67d3511db/src/transformers/models/llama/modeling_llama.py#L837
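As a point of reference (not part of the original request), a per-sample loss can already be computed outside `forward` by not passing `labels` and applying `reduction='none'` manually; the checkpoint is only illustrative:
```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

enc = tokenizer(["Hello world", "A longer example sentence"], padding=True, return_tensors="pt")
labels = enc["input_ids"].clone()
labels[enc["attention_mask"] == 0] = -100  # ignore padded positions

with torch.no_grad():
    logits = model(**enc).logits  # labels deliberately not passed

# Same shift as in LlamaForCausalLM.forward, but with reduction='none'
shift_logits = logits[:, :-1, :]
shift_labels = labels[:, 1:]
per_token = F.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)),
    shift_labels.reshape(-1),
    reduction="none",
    ignore_index=-100,
).view(shift_labels.shape)

# Average over valid tokens -> one loss value per example, shape (batch_size,)
valid = (shift_labels != -100).float()
per_sample_loss = (per_token * valid).sum(dim=1) / valid.sum(dim=1)
```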
### Motivation
I'm using this loss for reward-based learning.
### Your contribution
I could make a PR if needed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25303/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25302
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25302/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25302/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25302/events
|
https://github.com/huggingface/transformers/pull/25302
| 1,835,662,498 |
PR_kwDOCUB6oc5XJDFj
| 25,302 |
Fix typo: Roberta -> RoBERTa
|
{
"login": "MrGeislinger",
"id": 9027783,
"node_id": "MDQ6VXNlcjkwMjc3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9027783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrGeislinger",
"html_url": "https://github.com/MrGeislinger",
"followers_url": "https://api.github.com/users/MrGeislinger/followers",
"following_url": "https://api.github.com/users/MrGeislinger/following{/other_user}",
"gists_url": "https://api.github.com/users/MrGeislinger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrGeislinger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrGeislinger/subscriptions",
"organizations_url": "https://api.github.com/users/MrGeislinger/orgs",
"repos_url": "https://api.github.com/users/MrGeislinger/repos",
"events_url": "https://api.github.com/users/MrGeislinger/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrGeislinger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
Small typo in docs: "Roberta" should have the correct capitalization "RoBERTa".
Fixes #25301
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
<!--
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
-->
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25302/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25302",
"html_url": "https://github.com/huggingface/transformers/pull/25302",
"diff_url": "https://github.com/huggingface/transformers/pull/25302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25302.patch",
"merged_at": 1691097451000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25301
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25301/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25301/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25301/events
|
https://github.com/huggingface/transformers/issues/25301
| 1,835,655,434 |
I_kwDOCUB6oc5taeEK
| 25,301 |
Minor typo referencing RoBERTa
|
{
"login": "MrGeislinger",
"id": 9027783,
"node_id": "MDQ6VXNlcjkwMjc3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9027783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrGeislinger",
"html_url": "https://github.com/MrGeislinger",
"followers_url": "https://api.github.com/users/MrGeislinger/followers",
"following_url": "https://api.github.com/users/MrGeislinger/following{/other_user}",
"gists_url": "https://api.github.com/users/MrGeislinger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrGeislinger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrGeislinger/subscriptions",
"organizations_url": "https://api.github.com/users/MrGeislinger/orgs",
"repos_url": "https://api.github.com/users/MrGeislinger/repos",
"events_url": "https://api.github.com/users/MrGeislinger/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrGeislinger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
"Roberta" should use the correct capitalization: "RoBERTa"
https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/docs/source/en/tokenizer_summary.md?plain=1#L144
Should be a simple fix.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25301/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25300
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25300/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25300/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25300/events
|
https://github.com/huggingface/transformers/issues/25300
| 1,835,650,285 |
I_kwDOCUB6oc5taczt
| 25,300 |
Add zero-shot classification task for BLIP-2
|
{
"login": "youssefadr",
"id": 104783077,
"node_id": "U_kgDOBj7c5Q",
"avatar_url": "https://avatars.githubusercontent.com/u/104783077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youssefadr",
"html_url": "https://github.com/youssefadr",
"followers_url": "https://api.github.com/users/youssefadr/followers",
"following_url": "https://api.github.com/users/youssefadr/following{/other_user}",
"gists_url": "https://api.github.com/users/youssefadr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youssefadr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youssefadr/subscriptions",
"organizations_url": "https://api.github.com/users/youssefadr/orgs",
"repos_url": "https://api.github.com/users/youssefadr/repos",
"events_url": "https://api.github.com/users/youssefadr/events{/privacy}",
"received_events_url": "https://api.github.com/users/youssefadr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
open
| false | null |
[] |
[
"Yes so ideally you can add `get_image_feature` and `get_text_feature` to the Blip2ForConditionalGeneration class. For that you can refer to the [original implementation ](https://github.com/salesforce/LAVIS/blob/f982acc73288408bceda2d35471a8fcf55aa04ca/lavis/models/blip2_models/blip2_qformer.py#L387).",
"@youssefadr let me know if you need any help in this PR, I am also in need of adding multimodal feature extraction from the Blip2Qformer",
"Hello, thanks for your message, I will tackle it this week 👍 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry, I have been caught be in work. Will finalize the PR today!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Yes so ideally you can add `get_image_feature` and `get_text_feature` to the Blip2ForConditionalGeneration class. For that you can refer to the [original implementation ](https://github.com/salesforce/LAVIS/blob/f982acc73288408bceda2d35471a8fcf55aa04ca/lavis/models/blip2_models/blip2_qformer.py#L387).\r\n\r\nHi I want to know if this has been done?\r\nbecause I am trying to use get_image_feature but I am getting this error `AttributeError: 'Blip2ForConditionalGeneration' object has no attribute 'get_image_feature'`\r\n\r\nand I can not use Blip2Model because I have to use `load_in_8bit` that come with Blip2ForConditionalGeneration",
"Hi, no this feature hasn't been added yet.",
"> Hi, no this feature hasn't been added yet.\r\n\r\nThank you for your prompt response\r\nI have the following questions I would appreciate your input: \r\nQ1: is there any way to extract the feature of an image using BLIP-2 from hugging face checkpoints with `load_in_8bit`?\r\nQ2: is the feature extraction in this notebook https://github.com/salesforce/LAVIS/blob/main/examples/blip2_feature_extraction.ipynb works in the same way as `get_image_feature` ? \r\nQ3: if I want to extract or convert an image into a Victor so I can use it by another model and do you have any recommendation of the best way to do this other than using Clip model because it did not give me a good result.",
"@youssefadr hi, lmk please if help is needed here, would love to give a try to push things forward. It would actually be my first contribution, but I'm quite familiar with the BLIP2 model."
] | 1,691 | 1,705 | null |
CONTRIBUTOR
| null |
### Feature request
I would like to add support for the zero-shot classification task using BLIP-2, computing text-image similarities with the normalized embeddings that would be accessed from the BLIP-2 feature extractor.
The idea is to enable calling the zero-shot classification pipeline with BLIP-2 by implementing the `get_image_feature` and `get_text_features` methods.
I would love more guidance, if possible, on the criteria for accepting the PR.
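To make the proposal concrete, here is a minimal sketch of how zero-shot classification could work once such methods exist. Note that `get_image_features` / `get_text_features` are not implemented on the BLIP-2 classes yet, so every call to them below is an assumption about the proposed API, not existing `transformers` functionality.
```python
def blip2_zero_shot(model, processor, image, candidate_labels, device="cpu"):
    # Encode the image. `get_image_features` is the *proposed* method; it does
    # not exist on the BLIP-2 classes yet.
    image_inputs = processor(images=image, return_tensors="pt").to(device)
    image_embeds = model.get_image_features(**image_inputs)

    # Encode every candidate label. `get_text_features` is also only proposed here.
    text_inputs = processor(text=candidate_labels, padding=True, return_tensors="pt").to(device)
    text_embeds = model.get_text_features(**text_inputs)

    # Depending on the final API, a pooling step over the query tokens may be
    # needed first. Normalize and score with cosine similarity, CLIP-style.
    image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
    text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
    scores = (image_embeds @ text_embeds.T).softmax(dim=-1)
    return dict(zip(candidate_labels, scores.squeeze(0).tolist()))
```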
### Motivation
This is related to the discussion on this issue on the Hub, and the comment left by @NielsRogge here: https://huggingface.co/Salesforce/blip2-opt-2.7b/discussions/3#64cbe5e487ec96aa473a1f54 .
### Your contribution
I would like to submit a PR to contribute for this feature.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25300/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25300/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25299
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25299/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25299/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25299/events
|
https://github.com/huggingface/transformers/issues/25299
| 1,835,580,863 |
I_kwDOCUB6oc5taL2_
| 25,299 |
cannot import name 'Module' from '_pytest.doctest'
|
{
"login": "jingyanwangms",
"id": 47403504,
"node_id": "MDQ6VXNlcjQ3NDAzNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/47403504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jingyanwangms",
"html_url": "https://github.com/jingyanwangms",
"followers_url": "https://api.github.com/users/jingyanwangms/followers",
"following_url": "https://api.github.com/users/jingyanwangms/following{/other_user}",
"gists_url": "https://api.github.com/users/jingyanwangms/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jingyanwangms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jingyanwangms/subscriptions",
"organizations_url": "https://api.github.com/users/jingyanwangms/orgs",
"repos_url": "https://api.github.com/users/jingyanwangms/repos",
"events_url": "https://api.github.com/users/jingyanwangms/events{/privacy}",
"received_events_url": "https://api.github.com/users/jingyanwangms/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You might need a `pip install --upgrade pytest`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
### System Info
transformers 4.32.0.dev0
torch 2.1.0.dev20230523+cu117
Error:
Traceback (most recent call last):
File "/workspace/transformers/examples/pytorch/language-modeling/run_clm.py", line 52, in <module>
Traceback (most recent call last):
File "/workspace/transformers/examples/pytorch/language-modeling/run_clm.py", line 52, in <module>
from transformers.testing_utils import CaptureLogger
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.32.0.dev0-py3.8.egg/transformers/testing_utils.py", line 111, in <module>
from transformers.testing_utils import CaptureLogger
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.32.0.dev0-py3.8.egg/transformers/testing_utils.py", line 111, in <module>
from _pytest.doctest import (
ImportError: cannot import name 'Module' from '_pytest.doctest' (/opt/conda/envs/ptca/lib/python3.8/site-packages/_pytest/doctest.py)
from _pytest.doctest import (
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python -m torch.distributed.launch --nproc_per_node=8 --use-env /workspace/transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path xlnet-base-cased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --label_smoothing 0.1 --do_train --output_dir /dev/shm --overwrite_output_dir --max_steps 200 --logging_steps 20 --per_device_train_batch_size 8 --fp16
### Expected behavior
example runs without error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25299/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25298
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25298/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25298/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25298/events
|
https://github.com/huggingface/transformers/pull/25298
| 1,835,494,991 |
PR_kwDOCUB6oc5XIgRa
| 25,298 |
[Whisper] Better error message for outdated generation config
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
Gives a better error message in the case that a user tries using an outdated generation config with the new generation arguments `language` and `task` (as described in https://github.com/huggingface/transformers/issues/25084#issuecomment-1653722724).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25298/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25298",
"html_url": "https://github.com/huggingface/transformers/pull/25298",
"diff_url": "https://github.com/huggingface/transformers/pull/25298.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25298.patch",
"merged_at": 1691160837000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25297
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25297/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25297/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25297/events
|
https://github.com/huggingface/transformers/pull/25297
| 1,835,484,593 |
PR_kwDOCUB6oc5XIeHd
| 25,297 |
MaskFormer, Mask2Former - replace einsum for tracing
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, let me check something first before merge 🙏 .",
"Hi @amyeroberts \r\n\r\nSo the issue only happens at the 2nd `out = traced_model(torch.randn((2,3,640,640)).to(device))`. Before that (last) line, everything is fine. I am a dumb on this topic, but is it from the enisum? That's really weird (to me).\r\n\r\nStill good for me, I am just going to run the torchscript tests and let you know --> ✅ .\r\n\r\n",
"@ydshieh Yes, it's bizarre, although it does work on torch nightly. If you run the script above, it complains about the batch dimension in the einsum operation, and removing einsum resolved it, so I'm pretty sure that's the cause. I just don't know why 🤷♀️ \r\n\r\nFor setting `fx_compatible = True`, I didn't add initially because the tests would fail on the torchscipt runs even if I added it to one of the [compatible models](https://github.com/huggingface/transformers/blob/a6e6b1c622d8d08e2510a82cb6266d7b654f1cbf/src/transformers/utils/fx.py#L113). \r\n\r\nEssentially, the model can be traced using `torch.jit.trace` if you accept many warnings, but fails when we use the `HFTracer` to trace. The tracer warnings and failings when using `HFTracer` are the same: Mask2Former tries to iterate over tensors which breaks things e.g. [here](https://github.com/huggingface/transformers/blob/a6e6b1c622d8d08e2510a82cb6266d7b654f1cbf/src/transformers/models/mask2former/modeling_mask2former.py#L815). Resolving these is a bigger piece of work - I'm happy to add them to this PR or in a follow up. It's possible these are related to the [difference between traced and non-traced models on GPU](https://github.com/huggingface/transformers/issues/25261#issuecomment-1663526119).\r\n\r\nWere you able to run the tests successfully? There could be something funny in my env too. ",
"Hi @amyeroberts Thanks for the info.\r\n\r\nForget about my comment about `fx_compatible = True`. The passed CI I mentioned is just the `test_torchscript_` tests (not the `test_torch_fx` - I didn't try as I saw it's not the raw `torch.jit.trace`)"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
MaskFormer cannot currently be traced because of einsum operations. This PR replaces the einsum operations with standard matmuls.
With this PR, the following now runs:
```python
import torch
from transformers import Mask2FormerForUniversalSegmentation
device = torch.device("cuda")
model = Mask2FormerForUniversalSegmentation.from_pretrained(
"facebook/mask2former-swin-tiny-coco-instance",
torchscript=True
).eval().to(device)
dummy_input = torch.randn((1,3,640,640)).to(device)
traced_model = torch.jit.trace(model, dummy_input)
with torch.no_grad():
out = traced_model(torch.randn((2,3,640,640)).to(device))
out = traced_model(torch.randn((2,3,640,640)).to(device))
```
Partially resolves #25261 - enables tracing but does not resolve the issue of different results between traced and non-traced model on GPU
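For reviewers unfamiliar with the change, the snippet below is a self-contained illustration of the kind of rewrite being made. The tensor names and shapes are invented for the example; the exact expressions in `modeling_mask2former.py` differ, this only shows that the matmul form is numerically equivalent to the einsum it replaces.
```python
import torch

b, q, c, h, w = 2, 100, 256, 20, 20
queries = torch.randn(b, q, c)        # (batch, num_queries, channels)
features = torch.randn(b, c, h, w)    # (batch, channels, height, width)

# einsum form (the kind of op that trips up torch.jit.trace here)
out_einsum = torch.einsum("bqc,bchw->bqhw", queries, features)

# equivalent matmul form: flatten the spatial dims, batched matmul, reshape back
out_matmul = (queries @ features.flatten(2)).view(b, q, h, w)

assert torch.allclose(out_einsum, out_matmul, atol=1e-5)
```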
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25297/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25297",
"html_url": "https://github.com/huggingface/transformers/pull/25297",
"diff_url": "https://github.com/huggingface/transformers/pull/25297.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25297.patch",
"merged_at": 1691487434000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25296
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25296/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25296/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25296/events
|
https://github.com/huggingface/transformers/issues/25296
| 1,835,422,058 |
I_kwDOCUB6oc5tZlFq
| 25,296 |
BertForSequenceClassification does not support 'device_map':"auto" yet
|
{
"login": "goodaytar",
"id": 65249001,
"node_id": "MDQ6VXNlcjY1MjQ5MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/65249001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goodaytar",
"html_url": "https://github.com/goodaytar",
"followers_url": "https://api.github.com/users/goodaytar/followers",
"following_url": "https://api.github.com/users/goodaytar/following{/other_user}",
"gists_url": "https://api.github.com/users/goodaytar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goodaytar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goodaytar/subscriptions",
"organizations_url": "https://api.github.com/users/goodaytar/orgs",
"repos_url": "https://api.github.com/users/goodaytar/repos",
"events_url": "https://api.github.com/users/goodaytar/events{/privacy}",
"received_events_url": "https://api.github.com/users/goodaytar/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
open
| false | null |
[] |
[
"Hi @goodaytar, thanks for raising this issue! \r\n\r\nYes, the BERT models don't support the use of `device_map=xxx` yet. In the full error message, you should have seen: \r\n\r\n```\r\nBertForSequenceClassification not support `device_map=\"auto\"`. To implement support, the model class needs to implement the `_no_split_modules` attribute.\r\n```\r\n\r\nIn order to enable this the `_no_split_modules` attribute needs to be implemented for the model. If you or anyone else in the community would like to open a PR to add this, we'd be very happy to review! ",
"Thanks for the reply Amy. If you could give me a little bit more info on what needs adding, I'd be happy to.\r\n\r\nGet Outlook for Android<https://aka.ms/AAb9ysg>\r\n________________________________\r\nFrom: amyeroberts ***@***.***>\r\nSent: Friday, August 4, 2023 12:26:00 PM\r\nTo: huggingface/transformers ***@***.***>\r\nCc: goodaytar ***@***.***>; Mention ***@***.***>\r\nSubject: Re: [huggingface/transformers] BertForSequenceClassification does not support 'device_map':\"auto\" yet (Issue #25296)\r\n\r\n\r\nHi @goodaytar<https://github.com/goodaytar>, thanks for raising this issue!\r\n\r\nYes, the BERT models don't support the use of device_map=xxx yet. In the full error message, you should have seen:\r\n\r\nBertForSequenceClassification not support `device_map=\"auto\"`. To implement support, the model class needs to implement the `_no_split_modules` attribute.\r\n\r\n\r\nIn order to enable this the _no_split_modules attribute needs to be implemented for the model. If you or anyone else in the community would like to open a PR to add this, we'd be very happy to review!\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/issues/25296#issuecomment-1665455659>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/APRZ52NZV7QNND7TR7TMJDTXTTL4RANCNFSM6AAAAAA3DCB7KY>.\r\nYou are receiving this because you were mentioned.Message ID: ***@***.***>\r\n",
"In order to know how to properly place the model onto difference devices, the models need to have `_no_split_modules` implemented in their `PreTrainedModel` class e.g. [like here for Roberta](https://github.com/huggingface/transformers/blob/fdaef3368b7495f6d3f26739fece0ee370fa7ce6/src/transformers/models/roberta/modeling_roberta.py#L596).\r\n\r\nFor some modules, it's necessary to place all of the weights on the same device e.g. like [`Pix2StructVisionLayer` for Pix2Struct](https://github.com/huggingface/transformers/blob/fdaef3368b7495f6d3f26739fece0ee370fa7ce6/src/transformers/models/pix2struct/modeling_pix2struct.py#L552).\r\n\r\nIn order to add, it'll be a case of iterating to find the modules that should be split or not. Once implemented, the [accelerate tests should be run and pass](https://github.com/huggingface/transformers/blob/fdaef3368b7495f6d3f26739fece0ee370fa7ce6/src/transformers/models/pix2struct/modeling_pix2struct.py#L552). This should be tested with 1 and 2 GPUs. \r\n",
"And how do I find the modules that should be split or not?",
"@goodaytar You'll need to experiment with the model to find out which modules should be split. I suggest starting with an empty list and looking at similar models to see how they set `_no_split_modules`. \r\n\r\nYou can inspect where the layers are allocated by using `infer_auto_device_map`:\r\n\r\n```python\r\ndevice_map = infer_auto_device_map(model, no_split_module_classes=[])\r\n```\r\n\r\nThe modules that can be added will be the layers defined in the modeling file e.g. `\"BertEmbeddings\"`\r\n\r\nOnce set, you can try running the accelerate tests (with GPUs!) to confirm the mapping works. If not, then inspect the device map. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @amyeroberts, I would like to add the `'device_map': \"auto\"` functionality to BERT Models!",
"@tanaymeh Great! From next week, I'll be off for a few weeks. Please ping @younesbelkada for review in that time. ",
"@tanaymeh that would be really great, in few words, you just need to make sure to add the module names that contain any skip connection to avoid potential device mismatch issues\r\nCheck for instance what has been done for RoBERTa here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py#L596",
"> @tanaymeh that would be really great, in a few words, you just need to make sure to add the module names that contain any skip connection to avoid potential device mismatch issues Check for instance what has been done for RoBERTa here: [`main`/src/transformers/models/roberta/modeling_roberta.py#L596](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py?rgh-link-date=2023-09-13T10%3A58%3A02Z#L596)\r\n\r\nThat makes sense @younesbelkada! Will create a PR for this.\r\nOne question: Will the CI tests on Github also test my implementation of `device_map` (with 1 and 2 GPUs) every time I push a commit?",
"Hi @tanaymeh , \r\nThanks, will look into it! \r\nThe CI will not directly test it, we run \"slow\" tests every 24h on GPUs that will run those tests",
"@younesbelkada \r\nHi Younes\r\nCould you make it work for xlm_roberta_xl too?\r\nThanks \r\nRegards\r\nDragan",
"@younesbelkada Any updates? We can't wait to use this great feature.",
"@Hambaobao I am working on the PR for this feature but waiting for a revert from @younesbelkada!",
"any update on this issue? or anyone fixed it?\r\n"
] | 1,691 | 1,708 | null |
NONE
| null |
### System Info
I have trained a model and am now trying to load and quantise it, but am getting the error:
BertForSequenceClassification does not support 'device_map':"auto" yet
Code for loading is simply:
` model = AutoModelForSequenceClassification.from_pretrained(model_dir, device_map='auto', load_in_8bit=True)`
Help would be greatly appreciated!
Thanks,
Lee
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = AutoModelForSequenceClassification.from_pretrained(model_dir, device_map='auto', load_in_8bit=True)
### Expected behavior
The model would load and be usable.
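For anyone picking this up, a rough sketch of the change the error message asks for is below. The module names in `_no_split_modules` are unverified guesses and would need to be validated against `modeling_bert.py` and the accelerate tests before opening a PR.
```python
# Sketch only - roughly what the edit in src/transformers/models/bert/modeling_bert.py
# could look like (there these names are already imported in the module).
from transformers import BertConfig, PreTrainedModel

class BertPreTrainedModel(PreTrainedModel):
    config_class = BertConfig
    base_model_prefix = "bert"
    supports_gradient_checkpointing = True
    # modules whose weights must stay together on a single device, so that
    # residual/skip connections inside them never cross devices
    _no_split_modules = ["BertEmbeddings", "BertLayer"]

# The resulting placement could then be inspected with accelerate, e.g.
#   from accelerate import infer_auto_device_map
#   device_map = infer_auto_device_map(model, no_split_module_classes=["BertLayer"])
```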
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25296/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25296/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25295
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25295/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25295/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25295/events
|
https://github.com/huggingface/transformers/pull/25295
| 1,835,410,228 |
PR_kwDOCUB6oc5XIOnn
| 25,295 |
[small] llama2.md typo
|
{
"login": "H-Huang",
"id": 14858254,
"node_id": "MDQ6VXNlcjE0ODU4MjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14858254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/H-Huang",
"html_url": "https://github.com/H-Huang",
"followers_url": "https://api.github.com/users/H-Huang/followers",
"following_url": "https://api.github.com/users/H-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/H-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/H-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/H-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/H-Huang/orgs",
"repos_url": "https://api.github.com/users/H-Huang/repos",
"events_url": "https://api.github.com/users/H-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/H-Huang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
`groupe` -> `grouped`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25295/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25295",
"html_url": "https://github.com/huggingface/transformers/pull/25295",
"diff_url": "https://github.com/huggingface/transformers/pull/25295.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25295.patch",
"merged_at": 1691097427000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25294
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25294/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25294/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25294/events
|
https://github.com/huggingface/transformers/pull/25294
| 1,835,406,815 |
PR_kwDOCUB6oc5XIN5O
| 25,294 |
Generate: remove Marian hack
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
MEMBER
| null |
# What does this PR do?
Removes `adjust_logits_during_generation`, clearing a temporary hack for `Marian` that has long since been fixed.
- In TF: `adjust_logits_during_generation` is defined but not called anywhere, and is not a public function, so it can be deleted.
- In PT: `adjust_logits_during_generation` is redundant with the `NoBadWordsLogitsProcessor` applied to the pad token, both set the log probability of the pad token to `-inf`. Marian models set the `bad_word_ids` to their pad token in their generation config file (e.g. [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh/blob/main/generation_config.json#L5)), so `adjust_logits_during_generation` is redundant. It is also not a public function, so all traces can be safely removed.
- No further traces of `adjust_logits_during_generation` exist in the codebase, after these changes.
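To illustrate the second bullet above, here is a small, hedged sanity check (the vocab size and token id are made up, not Marian's real values) showing that `NoBadWordsLogitsProcessor` already pushes the pad token's score to `-inf`, which is exactly what the removed hack did.
```python
import torch
from transformers import NoBadWordsLogitsProcessor

pad_token_id = 99  # illustrative id, not Marian's actual pad token
processor = NoBadWordsLogitsProcessor(bad_words_ids=[[pad_token_id]], eos_token_id=0)

input_ids = torch.tensor([[2, 5, 7]])  # dummy decoder input ids
scores = torch.zeros(1, 100)           # dummy next-token scores over a 100-token vocab
scores = processor(input_ids, scores)

print(scores[0, pad_token_id])  # tensor(-inf): same effect as the removed hack
```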
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25294/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25294",
"html_url": "https://github.com/huggingface/transformers/pull/25294",
"diff_url": "https://github.com/huggingface/transformers/pull/25294.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25294.patch",
"merged_at": 1691419105000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25293
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25293/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25293/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25293/events
|
https://github.com/huggingface/transformers/issues/25293
| 1,835,396,657 |
I_kwDOCUB6oc5tZe4x
| 25,293 |
MassFormer
|
{
"login": "yunyicheng",
"id": 55462866,
"node_id": "MDQ6VXNlcjU1NDYyODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/55462866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yunyicheng",
"html_url": "https://github.com/yunyicheng",
"followers_url": "https://api.github.com/users/yunyicheng/followers",
"following_url": "https://api.github.com/users/yunyicheng/following{/other_user}",
"gists_url": "https://api.github.com/users/yunyicheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yunyicheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yunyicheng/subscriptions",
"organizations_url": "https://api.github.com/users/yunyicheng/orgs",
"repos_url": "https://api.github.com/users/yunyicheng/repos",
"events_url": "https://api.github.com/users/yunyicheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/yunyicheng/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hey @amyeroberts, could I implement this?\r\nI think the implemented link is [here](https://github.com/Roestlab/massformer)",
"Hi Adithya, thank you for your interest in this project! However, this is my undergraduate research project in Röst Lab, and I am working with Adamo Young now, planning to optimize the original implementation and make it more suitable for hugging face.",
"Oh that's alright @yunyicheng, good luck on the implementation!!",
"Hi @yunyicheng, thanks for opening this new model feature request! \r\n\r\nThe recommended and easiest way to add a model is adding the model code to the hub directly using [this tutorial](https://huggingface.co/docs/transformers/custom_models). This avoids the lengthy review process and your model is still available through the typical transformers API e.g. `AutoModel.from_pretrained(...)`. \r\n\r\ncc @clefourrier for graph models :) ",
"Hi! I agree that adding the code to the hub directly would be the best way to go :)"
] | 1,691 | 1,691 | null |
NONE
| null |
### Model description
We propose adding a new model, MassFormer, to predict tandem mass spectra accurately. MassFormer uses a graph transformer architecture to model long-distance relationships between atoms in the molecule. The transformer module is initialized with parameters obtained through a chemical pre-training task, then fine-tuned on spectral data. MassFormer outperforms competing approaches for spectrum prediction on multiple datasets and is able to recover prior knowledge about the effect of collision energy on the spectrum. We demonstrate that the model can identify relationships between fragment peaks by employing gradient-based attribution methods. To further highlight MassFormer’s utility, we show that it can match or exceed existing prediction-based methods on two spectrum identification tasks. Our code is the first open-source implementation of a deep-learning MS/MS spectrum predictor and may encourage future research in this area.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
This model will be implemented according to the paper by @adamoyoung as listed below.
Reference:
Young, A., Wang, B. and Röst, H., 2021. MassFormer: Tandem mass spectrum prediction with graph transformers. arXiv preprint arXiv:2111.04824.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25293/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25292
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25292/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25292/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25292/events
|
https://github.com/huggingface/transformers/pull/25292
| 1,835,383,654 |
PR_kwDOCUB6oc5XII65
| 25,292 |
Generate: get generation mode as an enum
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
MEMBER
| null |
# What does this PR do?
Currently, generate gets several `is_XXX_mode` flags to determine the generation mode. This was cool when there were a handful of generation modes, but now it means we have many variables. This PR replaces that part of the logic with a single variable -- an enum.
In a future PR, I will use the enum to efficiently perform generate kwarg validation and throw informative warnings/exceptions -- for instance, all beam methods (with "beam" in the name) share a large set of restrictions!
Related PR: #24575
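As a rough sketch of the idea (the names and decision logic below are illustrative, not the exact ones in the PR), the many boolean flags collapse into a single enum value that later validation code can branch on:
```python
from enum import Enum

class GenerationMode(str, Enum):
    GREEDY_SEARCH = "greedy_search"
    SAMPLE = "sample"
    BEAM_SEARCH = "beam_search"
    BEAM_SAMPLE = "beam_sample"
    GROUP_BEAM_SEARCH = "group_beam_search"

def get_generation_mode(num_beams: int, do_sample: bool) -> GenerationMode:
    # simplified decision logic, for illustration only
    if num_beams == 1:
        return GenerationMode.SAMPLE if do_sample else GenerationMode.GREEDY_SEARCH
    return GenerationMode.BEAM_SAMPLE if do_sample else GenerationMode.BEAM_SEARCH
```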
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25292/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25292",
"html_url": "https://github.com/huggingface/transformers/pull/25292",
"diff_url": "https://github.com/huggingface/transformers/pull/25292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25292.patch",
"merged_at": 1691152510000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25291
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25291/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25291/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25291/events
|
https://github.com/huggingface/transformers/pull/25291
| 1,835,335,118 |
PR_kwDOCUB6oc5XH-vS
| 25,291 |
Document check copies
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Little late but great fix and the diff is really nice too!"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
This PR documents a little better how our `Copied from` framework works, adds comments in the actual scripts, and reworks the test a bit.
In passing I added a requested feature, which was to make sure `make fix-copies` takes the function definition or the superclass into account: currently it ignores the whole first line, but if we change the signature of a function / the superclass of a class which is copied from, that modification is not propagated (cc @Rocketknight1 who last requested it).
As you can see from the diff, that feature was direly needed... I had to add `BartPreTrainedModel` (right spelling to be consistent with other models) or break multiple copies, and you can see a lot of signatures and copied-from statements being fixed.
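For context, the mechanism being documented looks like this (the class and module names are illustrative): a `# Copied from` comment ties a copy to its original, and `make fix-copies` re-syncs the body, and now also the first line (signature / superclass), whenever they drift apart.
```python
from torch import nn

# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->MyModel
class MyModelAttention(nn.Module):
    """Kept in sync with the Bart original by `make fix-copies`."""
```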
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25291/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25291",
"html_url": "https://github.com/huggingface/transformers/pull/25291",
"diff_url": "https://github.com/huggingface/transformers/pull/25291.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25291.patch",
"merged_at": 1691153789000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25290
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25290/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25290/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25290/events
|
https://github.com/huggingface/transformers/pull/25290
| 1,835,297,393 |
PR_kwDOCUB6oc5XH2wU
| 25,290 |
Make `bark` could have tiny model
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Hi @ydshieh, many thanks for your help in that matter! Looks great to me, I will implement some actual tests ASAP\r\n\r\nI think you already tests in your PR. I will try to merger your PR with my own, see how those tests go and let you know.",
"> > Hi @ydshieh, many thanks for your help in that matter! Looks great to me, I will implement some actual tests ASAP\r\n> \r\n> I think you already tests in your PR. I will try to merger your PR with my own, see how those tests go and let you know.\r\n\r\nOh, you mean the `TODO` I put in this PR. Nice, thanks in advance",
"Going to merge. But `bark` is kind special and the tiny model for it still has some issue."
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
Make it possible for `bark` to have a tiny model. This is mainly for #24952
cc @ylacombe
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25290/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25290",
"html_url": "https://github.com/huggingface/transformers/pull/25290",
"diff_url": "https://github.com/huggingface/transformers/pull/25290.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25290.patch",
"merged_at": 1691154795000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25289
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25289/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25289/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25289/events
|
https://github.com/huggingface/transformers/issues/25289
| 1,835,268,080 |
I_kwDOCUB6oc5tY_fw
| 25,289 |
Quantized models + PEFT + multi-gpu setup failing during training
|
{
"login": "cassianlewis",
"id": 131266258,
"node_id": "U_kgDOB9L20g",
"avatar_url": "https://avatars.githubusercontent.com/u/131266258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cassianlewis",
"html_url": "https://github.com/cassianlewis",
"followers_url": "https://api.github.com/users/cassianlewis/followers",
"following_url": "https://api.github.com/users/cassianlewis/following{/other_user}",
"gists_url": "https://api.github.com/users/cassianlewis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cassianlewis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cassianlewis/subscriptions",
"organizations_url": "https://api.github.com/users/cassianlewis/orgs",
"repos_url": "https://api.github.com/users/cassianlewis/repos",
"events_url": "https://api.github.com/users/cassianlewis/events{/privacy}",
"received_events_url": "https://api.github.com/users/cassianlewis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@younesbelkada maybe you can have a look at it? ",
"`ddp_find_unused_parameters=False` has fixed it for now :) ",
"I am having the same issue and in my case even \r\n`ddp_find_unused_parameters=False` does not fix it\r\n\r\n",
"I run into the same issues even after setting `ddp_find_unused_parameters=False` and `accelerator.prepare(model)`",
"Hi there, please have a look at my comment here: https://github.com/huggingface/accelerate/issues/1840#issuecomment-1683105994 that summarizes the solution and explains why you are getting the issue",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,695 | 1,695 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.8
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce:
(Note, this is related to https://github.com/huggingface/accelerate/pull/1523)
```
accelerator = Accelerator()
model_id = "t5-base"
# Load tokenizer of FLAN-t5-XL
tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir = 'model_cache')
dataset = get_data()
tokenized_dataset = dataset.map(lambda sample: preprocess_function(sample, tokenizer), batched=True, remove_columns=["source", "target"])
# print(dist.get_rank())
model = AutoModelForSeq2SeqLM.from_pretrained(
model_id,
load_in_8bit=True,
device_map='auto',
cache_dir='model_cache')
# Define LoRA Config
lora_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM
)
# prepare int-8 model for training
model = prepare_model_for_int8_training(model)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model = accelerator.prepare(model)
label_pad_token_id = -100
data_collator = DataCollatorForSeq2Seq(
tokenizer,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=None,
padding=False
)
# Define training args
training_args = TrainingArguments(
per_device_train_batch_size=1,
learning_rate=1e-3,
num_train_epochs=10,
logging_strategy='steps',
logging_steps=5,
weight_decay=0,
output_dir = 'weights',
seed=22
)
# Create Trainer instance
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset['train'].select(range(10)),
data_collator=data_collator,
)
train_result = trainer.train()
```
`tokenized_dataset` can be an arbitrary dataset.
The problem arises when running `python -m torch.distributed.launch --nproc_per_node=4 multi-gpu.py`.
Note that it works fine if just using `python multi-gpu.py` (since only 1 GPU is used here).
I am running with four T4s.
### Expected behavior
Error message:
```
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/training/scripts/multi-gpu.py", line 131, in <module>
main()
File "/home/ec2-user/SageMaker/training/scripts/multi-gpu.py", line 125, in main
train_result = trainer.train()
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/trainer.py", line 1656, in _inner_training_loop
model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py", line 1202, in prepare
result = tuple(
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py", line 1203, in <genexpr>
self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py", line 1030, in _prepare_one
return self.prepare_model(obj, device_placement=device_placement)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/accelerate/accelerator.py", line 1270, in prepare_model
raise ValueError(
ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode. In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism. Therefore you should not specify that you are under any distributed regime in your accelerate config.
```
Some notes:
- this works if I remove 8 bit training
- I have tried this with and without `accelerator.prepare(model)` and this makes no difference (although when I remove 8bit training but keep this line, I get another error. When I remove the line, it trains fine).
Any help appreciated!
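For anyone hitting the same error, a sketch of the workaround discussed in the comments is below. It is hedged: the per-process `device_map` recipe comes from the linked accelerate discussion and should be double-checked against that thread; the idea is to give every DDP process its own full copy of the 8-bit model instead of sharding it with `device_map="auto"`, and to turn off unused-parameter detection in the Trainer.
```python
from accelerate import Accelerator
from transformers import AutoModelForSeq2SeqLM, TrainingArguments

accelerator = Accelerator()

# One full copy of the 8-bit model per DDP process, instead of device_map="auto"
model = AutoModelForSeq2SeqLM.from_pretrained(
    "t5-base",
    load_in_8bit=True,
    device_map={"": accelerator.local_process_index},
)

training_args = TrainingArguments(
    output_dir="weights",
    per_device_train_batch_size=1,
    ddp_find_unused_parameters=False,  # avoids DDP errors with frozen/PEFT parameters
)
```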
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25289/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/transformers/issues/25289/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25288
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25288/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25288/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25288/events
|
https://github.com/huggingface/transformers/issues/25288
| 1,835,116,493 |
I_kwDOCUB6oc5tYafN
| 25,288 |
device_map="auto" -> uninitialized parameters
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I think this should have been fixed by #25101 Could you try again with a source install?\r\n(Yes it is a false positive, just tied weights where the copies are not present in the state dict.)",
"Awesome, that works. Was afraid that I was messing something up with converting to safetensors. Glad that that is not the case.\r\n\r\nThanks for the prompt response! @sgugger "
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
### Who can help?
@ArthurZucker @younesbelkada
Maybe also @sgugger because this is a general use-case about PyTorch models
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am encountering an issue that worries me slightly. When I load a model without `device_map`, everything goes fine - no warnings.
```python
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("BramVanroy/flan-t5-small-amr-en")
```
However, when I do use `device_map`, I get the warning that some weights are not initialized
```python
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("BramVanroy/flan-t5-small-amr-en", device_map="auto")
```
Result:
> Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at BramVanroy/flan-t5-small-amr-en and are newly initialized: ['decoder.embed_tokens.weight', 'encoder.embed_tokens.weight']
> You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
However, I am wondering whether this isn't a false positive because the model performance seems the same with/without. My model repo contains both safetensors and the PyTorch *.bin, if that has something to do with it?
### Expected behavior
Either a warning in both or no warning in either.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25288/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25287
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25287/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25287/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25287/events
|
https://github.com/huggingface/transformers/issues/25287
| 1,835,034,516 |
I_kwDOCUB6oc5tYGeU
| 25,287 |
Transformers Agent suggesting it should use text_generator although it is not provided.
|
{
"login": "tordbb",
"id": 11409776,
"node_id": "MDQ6VXNlcjExNDA5Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/11409776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tordbb",
"html_url": "https://github.com/tordbb",
"followers_url": "https://api.github.com/users/tordbb/followers",
"following_url": "https://api.github.com/users/tordbb/following{/other_user}",
"gists_url": "https://api.github.com/users/tordbb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tordbb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tordbb/subscriptions",
"organizations_url": "https://api.github.com/users/tordbb/orgs",
"repos_url": "https://api.github.com/users/tordbb/repos",
"events_url": "https://api.github.com/users/tordbb/events{/privacy}",
"received_events_url": "https://api.github.com/users/tordbb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'm not too sure why you are reporting a bug. The agent is an LLM which sometimes hallucinate content (in this case, a tool that does not exist). If your prompt does not work, you should try refining it. You should also try using another model and see if it performs better.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
### System Info
I am running a version of [your notebook on Transformers Agent](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj), to which I have added a cell asking the StarCoder agent to generate a sentence for me.
I am using StarCoder, as you can see:
```
#@title Agent init
agent_name = "StarCoder (HF Token)" #@param ["StarCoder (HF Token)", "OpenAssistant (HF Token)", "OpenAI (API Key)"]
import getpass
if agent_name == "StarCoder (HF Token)":
from transformers.tools import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
print("StarCoder is initialized 💪")
elif agent_name == "OpenAssistant (HF Token)":
from transformers.tools import HfAgent
agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")
print("OpenAssistant is initialized 💪")
if agent_name == "OpenAI (API Key)":
from transformers.tools import OpenAiAgent
pswd = getpass.getpass('OpenAI API key:')
agent = OpenAiAgent(model="text-davinci-003", api_key=pswd)
print("OpenAI is initialized 💪")
```
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Based on the notebook mentioned, I have added a cell where I prompt the following:
```
agent.run("Write a sentence of the form 'A_ V_ at P_', where A_ should be replaced by the name of an animal, V_ should be replaced by a verb, and P_ should be replaced by the name of a place. Examples for valid sentences are 'Dog eating at macdonalds', 'Horse jumping at a gym', 'Duck fishing at a supermarket'. ")
```
As you can see in the printout below, the agent says it will use a `text_generator` tool, but then stops because it does not have access to it.
```
==Explanation from the agent==
I will use the following tools: `text_classifier` to classify the sentence, then `text_generator` to generate the sentence.
==Code generated by the agent==
sentence = text_generator(prompt="A_ V_ at P_")
print(f"The sentence is {sentence}.")
sentence_class = text_classifier(sentence)
print(f"The sentence class is {sentence_class}.")
==Result==
Evaluation of the code stopped at line 0 before the end because of the following error:
It is not permitted to evaluate other functions than the provided tools (tried to execute text_generator).
```
### Expected behavior
Either the agent should not even consider using `text_generator` as a tool, or it should have access to such a tool by default.
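If providing the tool is the preferred route, a rough sketch of registering a custom tool could look something like this — the tool name, description and dummy body below are mine (not an existing transformers tool), and it assumes `HfAgent` accepts `additional_tools`:
```python
from transformers.tools import HfAgent, Tool

class SentenceGeneratorTool(Tool):
    # illustrative attributes; a real tool would wrap an actual text-generation model
    name = "text_generator"
    description = "Generates a short sentence from a prompt."
    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, prompt: str) -> str:
        # dummy implementation, just to show the agent something callable
        return f"Duck fishing at a supermarket (prompt was: {prompt})"

agent = HfAgent(
    "https://api-inference.huggingface.co/models/bigcode/starcoder",
    additional_tools=[SentenceGeneratorTool()],
)
```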
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25287/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25286
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25286/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25286/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25286/events
|
https://github.com/huggingface/transformers/pull/25286
| 1,835,007,607 |
PR_kwDOCUB6oc5XG4Bd
| 25,286 |
[JAX] Bump min version
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
Bumps the minimum version of JAX to [0.4.1](https://jax.readthedocs.io/en/latest/changelog.html#jax-0-4-1-dec-13-2022), the earliest version where the new `jax.Array` API is introduced, replacing the deprecated `jax.numpy.DeviceArray` API. This allows compatibility with the latest JAX version [0.4.14](https://jax.readthedocs.io/en/latest/changelog.html#jax-0-4-14-july-27-2023), where `jax.numpy.DeviceArray` is removed entirely.
Related: #24875
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25286/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25286",
"html_url": "https://github.com/huggingface/transformers/pull/25286",
"diff_url": "https://github.com/huggingface/transformers/pull/25286.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25286.patch",
"merged_at": 1691075103000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25284
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25284/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25284/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25284/events
|
https://github.com/huggingface/transformers/pull/25284
| 1,834,924,179 |
PR_kwDOCUB6oc5XGll1
| 25,284 |
Fix Llama's attention map handling for left padding which causes numerical instability and performance drops
|
{
"login": "Randolph-zeng",
"id": 11933185,
"node_id": "MDQ6VXNlcjExOTMzMTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/11933185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Randolph-zeng",
"html_url": "https://github.com/Randolph-zeng",
"followers_url": "https://api.github.com/users/Randolph-zeng/followers",
"following_url": "https://api.github.com/users/Randolph-zeng/following{/other_user}",
"gists_url": "https://api.github.com/users/Randolph-zeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Randolph-zeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Randolph-zeng/subscriptions",
"organizations_url": "https://api.github.com/users/Randolph-zeng/orgs",
"repos_url": "https://api.github.com/users/Randolph-zeng/repos",
"events_url": "https://api.github.com/users/Randolph-zeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Randolph-zeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hi Arthur, thanks for the comment. Yes this PR should affect the inference because otherwise it causes performance drops or error. I think it applies both to training issue you mentioned and inference. I will do the checks you mentioned above locally and push again today after some meetings. Will keep you posted ! ",
"Hi Arthur, I tried the make style command and committed the changes. However, for the make fix-copies command, it simply cut my changes out from file ` src/transformers/models/llama/modeling_llama.py` and paste those lines to `src/transformers/models/deprecated/open_llama/modeling_open_llama.py`. I am not sure why script `python utils/check_copies.py --fix_and_overwrite` do this, so I did not include that in the commit. Please let me know if there is any other concern. Thanks!",
"You can probably just remove the `# Copied from` line above this function in the open llama modeling file! 😉 \r\n",
"Yes that works! I removed the copy statement from both functions in llama and openllama so that this copy script won't be triggered. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25284). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @ArthurZucker , it seems this issue still needs to be fixed, at least for inference. Running the batched MMLU evaluation script [here ](https://github.com/allenai/open-instruct/blob/503407ee89553b4e4c09f349052aeeac296eb91e/scripts/eval/mmlu.sh#L5-L13) on llama2 7B checkpoint results in very bad performance with the latest transformers version `4.34.0`, but @Randolph-zeng 's PR gives me a good performance, matching the numbers reported in the llama2 paper. \r\n\r\nWill this PR be merged soon?",
"Hey pinging @Rocketknight1 as well, I think this is of interest yes, we had a hard time reproducing the nan but it seams that it is mostly related to beam search. We'll investigate and keep you posted! ",
"@Randolph-zeng do you mind explaining how inf/nan attention values on pad tokens breaks sampling? The values of those tokens should never be read by \"real\" non-pad tokens anyway, so I don't understand why this would make a difference."
] | 1,691 | 1,701 | 1,694 |
NONE
| null |
Hi, this PR addresses the performance drop and potential numerical instability caused by vanilla left padding in Llama.
Here is the explanation:
1. If we initialize the tokenizer with left padding and call `model.generate` without passing in the corresponding `attention_mask`, the code runs, but for instances that are left padded, the unpadded tokens will "see" the padded tokens. This causes a large performance drop: in my case, the performance of Llama 2 on SocialQA drops from 55% to around 20% when I use left-padded batch inference instead of generating one example at a time.
2. If instead I pass the attention mask produced by the left-padding tokenizer to `model.generate`, the model throws an error during sampling because some values in the hidden states are inf or nan. This numerical instability appears because of a train-test mismatch: **by examining the locations of these infs/nans, I found that they only show up at the positions of the padded tokens and are caused by the attention mask.**
3. The attention mask causes the numerical instability because the current way of generating it does not consider the left-padded situation, so left-padded tokens end up with a fully masked attention row. Since the model was never trained with a token that cannot see any token (including itself), it produces anomalous values and creates nan/inf.
So this PR tries to fix two bugs I observed:
1. The attention mask created for left-padded inputs contains -inf values due to the operation `expanded_attn_mask + combined_attention_mask`. Consider an attention mask that looks like `[[1, 1, 1, 1, 1], [0, 0, 0, 1, 1]]`. The `combined_attention_mask` created by line 585 looks like this (in float16):
```
tensor([[[[ 0., -65504., -65504., -65504., -65504.],
[ 0., 0., -65504., -65504., -65504.],
[ 0., 0., 0., -65504., -65504.],
[ 0., 0., 0., 0., -65504.],
[ 0., 0., 0., 0., 0.]]],
[[[ 0., -65504., -65504., -65504., -65504.],
[ 0., 0., -65504., -65504., -65504.],
[ 0., 0., 0., -65504., -65504.],
[ 0., 0., 0., 0., -65504.],
[ 0., 0., 0., 0., 0.]]]], device='cuda:0',
dtype=torch.float16)
```
and the `expanded_attn_mask` created looks like this:
```
tensor([[[[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]]],
[[[-65504., -65504., -65504., 0., 0.],
[-65504., -65504., -65504., 0., 0.],
[-65504., -65504., -65504., 0., 0.],
[-65504., -65504., -65504., 0., 0.],
[-65504., -65504., -65504., 0., 0.]]]], device='cuda:0',
dtype=torch.float16)
```
In line 598 these two variables are added together. I believe it is now clear why left padding causes the attention mask itself to contain -inf values and why some tokens end up with a fully masked attention tensor (a minimal numeric sketch is included at the end of this description).
2. My solution is straightforward: I clamp the values so they do not overflow, and I force the left-padded positions to at least attend to themselves. Although the hidden states of the left-padded positions are not used by the unpadded tokens thanks to the attention mask, keeping them clean of inf/nan does not break the generation process.
3. I tested this on my local cases and did not observe any performance drop or nan errors during sampling, though I am not sure whether my patches break any other use cases.
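For completeness, here is a minimal, self-contained sketch of the float16 overflow described in point 1 and of the clamping idea in point 2. It is not the actual transformers code; shapes and names are simplified:
```python
import torch

min_value = torch.finfo(torch.float16).min  # -65504 in float16
seq_len = 5

# causal mask: 0 on/below the diagonal, min_value above it
causal = torch.triu(torch.full((seq_len, seq_len), min_value), diagonal=1).to(torch.float16)

# padding mask for a row that is left padded with 3 pad tokens: [0, 0, 0, 1, 1]
attention_mask = torch.tensor([0, 0, 0, 1, 1], dtype=torch.float16)
expanded = ((1.0 - attention_mask) * min_value).expand(seq_len, seq_len)

combined = causal + expanded        # -65504 + -65504 overflows to -inf in float16
print(torch.isinf(combined).any())  # tensor(True)

# the idea of the fix: clamp to the dtype minimum and let padded positions attend to themselves
fixed = combined.clamp(min=min_value)
fixed.fill_diagonal_(0.0)           # simplified: in the PR only the padded rows are touched
print(torch.isinf(fixed).any())     # tensor(False)
```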
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25284/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25284",
"html_url": "https://github.com/huggingface/transformers/pull/25284",
"diff_url": "https://github.com/huggingface/transformers/pull/25284.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25284.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25283
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25283/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25283/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25283/events
|
https://github.com/huggingface/transformers/issues/25283
| 1,834,888,888 |
I_kwDOCUB6oc5tXi64
| 25,283 |
Use of logging.warn is deprecated in favour of logging.warning
|
{
"login": "PeterJCLaw",
"id": 336212,
"node_id": "MDQ6VXNlcjMzNjIxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/336212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterJCLaw",
"html_url": "https://github.com/PeterJCLaw",
"followers_url": "https://api.github.com/users/PeterJCLaw/followers",
"following_url": "https://api.github.com/users/PeterJCLaw/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterJCLaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterJCLaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterJCLaw/subscriptions",
"organizations_url": "https://api.github.com/users/PeterJCLaw/orgs",
"repos_url": "https://api.github.com/users/PeterJCLaw/repos",
"events_url": "https://api.github.com/users/PeterJCLaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterJCLaw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@PeterJCLaw Indeed! Happy to review a PR :) "
] | 1,691 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
There are a few places where `transformers` uses the deprecated `warn` method on a logger, while most of the library uses `warning`. While this works for now, it will presumably be removed at some point (calling it emits a `DeprecationWarning`) and it means that strict test runners (such as `pytest`) complain about some codepaths.
As far as I can tell, all versions of Python supported by `transformers` support the new spelling (`warning` has been around for a _long_ time) so the upgrade should be simple.
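For illustration, the change is just the method name (minimal sketch):
```python
import logging

logger = logging.getLogger(__name__)

# deprecated spelling: works, but emits a DeprecationWarning (and upsets strict test runners)
logger.warn("some message")

# preferred spelling: identical behaviour, no deprecation
logger.warning("some message")
```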
I'd be happy to have a go at a PR for this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25283/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25282
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25282/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25282/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25282/events
|
https://github.com/huggingface/transformers/issues/25282
| 1,834,637,369 |
I_kwDOCUB6oc5tWlg5
| 25,282 |
Timm models Safetensor weights give 'NoneType' object has no attribute 'get', weight re-initialization and wrong num_labels
|
{
"login": "sawradip",
"id": 67541368,
"node_id": "MDQ6VXNlcjY3NTQxMzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/67541368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sawradip",
"html_url": "https://github.com/sawradip",
"followers_url": "https://api.github.com/users/sawradip/followers",
"following_url": "https://api.github.com/users/sawradip/following{/other_user}",
"gists_url": "https://api.github.com/users/sawradip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sawradip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sawradip/subscriptions",
"organizations_url": "https://api.github.com/users/sawradip/orgs",
"repos_url": "https://api.github.com/users/sawradip/repos",
"events_url": "https://api.github.com/users/sawradip/events{/privacy}",
"received_events_url": "https://api.github.com/users/sawradip/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@sawradip `timm` weights on the hub work in timm, unless I'm missing something (some automatic conversion was added that I'm not aware) I don't think there is any expectation you can load them in `transformers`? I feel the pytorch native weights is a bug that it doesn't crash and it's probably not loading any keys...\r\n\r\n\r\n",
"Hello @rwightman, I have been a fan of `timm` models for years and having a response from you really made my day. Thank you for doing so much for the pytorch ecosystem, and building one of the prominent model libraries in deep learning space.\r\n\r\nI know that `timm` works perfectly with `timm.create_models`, but what I was expecting is, as all the models are already in `huggingface-hub`, why not add support for them for also working with in `transformers' ecosystem, so that they can be loaded with the same syntax like `AutoModel` and `AutoModelForImageClassification`.\r\n\r\nThe plan is to use the officially suggested `timm.create_model` inside, and abstract it in Hugingface style, but keep most of the flexibilities of `timm`.\r\n\r\nThis is what I have prepared for now,\r\n\r\n```\r\n\r\nclass TimmConfig(PretrainedConfig):\r\n model_type = \"timm\"\r\n\r\n @classmethod\r\n def from_pretrained(\r\n cls,\r\n pretrained_model_name_or_path: Union[str, os.PathLike],\r\n cache_dir: Optional[Union[str, os.PathLike]] = None,\r\n force_download: bool = False,\r\n local_files_only: bool = False,\r\n token: Optional[Union[str, bool]] = None,\r\n revision: str = \"main\",\r\n **kwargs,\r\n ) -> \"PretrainedConfig\":\r\n\r\n kwargs[\"cache_dir\"] = cache_dir\r\n kwargs[\"force_download\"] = force_download\r\n kwargs[\"local_files_only\"] = local_files_only\r\n kwargs[\"revision\"] = revision\r\n\r\n config_dict = load_model_config_from_hf(pretrained_model_name_or_path)[0]\r\n config_dict[\"num_labels\"] = config_dict.pop(\"num_classes\")\r\n config_dict[\"image_size\"] = config_dict.get(\"input_size\")[-1]\r\n\r\n return cls.from_dict(config_dict, **kwargs)\r\n\r\n\r\nclass TimmOnnxConfig(ViTOnnxConfig):\r\n DEFAULT_TIMM_ONNX_OPSET = 13\r\n outputs= OrderedDict([('logits', {0: 'batch_size'})])\r\n\r\n\r\n\r\nclass TimmPreTrainedModel(PreTrainedModel):\r\n config_class = TimmConfig\r\n base_model_prefix = \"timm\"\r\n main_input_name = \"pixel_values\"\r\n\r\n\r\nclass TimmModel(TimmPreTrainedModel):\r\n def __init__(self, \r\n config: TimmConfig, \r\n feature_only : bool = True, \r\n pretrained : bool = True, \r\n in_chans : int = 3, \r\n **kwargs):\r\n super().__init__(config)\r\n\r\n self.config = config\r\n if feature_only:\r\n self.timm_model = timm.create_model(\"hf-hub:\" + self.config.hf_hub_id,\r\n num_classes = 0,\r\n pretrained = pretrained,\r\n in_chans = in_chans)\r\n else:\r\n self.timm_model = timm.create_model(\"hf-hub:\" + self.config.hf_hub_id,\r\n num_classes = self.config.num_labels,\r\n pretrained = pretrained,\r\n in_chans = in_chans)\r\n self.timm_model.eval()\r\n\r\n @classmethod\r\n def from_pretrained(cls, model_name_or_path, **kwargs):\r\n config = TimmConfig.from_pretrained(model_name_or_path, **kwargs)\r\n return cls(config, **kwargs)\r\n\r\n def forward(\r\n self,\r\n pixel_values: Optional[torch.Tensor] = None,\r\n return_dict: Optional[bool] = None,\r\n ) -> Union[Tuple, BaseModelOutput]:\r\n\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n\r\n if pixel_values is None:\r\n raise ValueError(\"You have to specify pixel_values\")\r\n\r\n # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?)\r\n # expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype\r\n # if pixel_values.dtype != expected_dtype:\r\n # pixel_values = pixel_values.to(expected_dtype)\r\n\r\n model_output = self.timm_model(pixel_values)\r\n\r\n if not return_dict:\r\n return model_output\r\n\r\n return 
BaseModelOutput(\r\n last_hidden_state=model_output,\r\n hidden_states= None\r\n )\r\n\r\n\r\nclass TimmForImageClassification(TimmPreTrainedModel):\r\n def __init__(self, config: TimmConfig, num_labels: int = None, **kwargs) -> None:\r\n super().__init__(config, **kwargs)\r\n\r\n if num_labels:\r\n config.num_labels = num_labels\r\n self.timm = TimmModel(config, feature_only = False)\r\n\r\n @classmethod\r\n def from_pretrained(cls, model_name_or_path, **kwargs):\r\n config = TimmConfig.from_pretrained(model_name_or_path, **kwargs)\r\n return cls(config, **kwargs)\r\n\r\n def forward(\r\n self,\r\n pixel_values: Optional[torch.Tensor] = None,\r\n labels: Optional[torch.Tensor] = None,\r\n return_dict: Optional[bool] = None,\r\n ) -> Union[tuple, ImageClassifierOutput]:\r\n r\"\"\"\r\n labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):\r\n Labels for computing the image classification/regression loss. Indices should be in `[0, ...,\r\n config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If\r\n `config.num_labels > 1` a classification loss is computed (Cross-Entropy).\r\n \"\"\"\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n\r\n logits = self.timm(\r\n pixel_values,\r\n return_dict=return_dict,\r\n )\r\n\r\n loss = None\r\n if labels is not None:\r\n # move labels to correct device to enable model parallelism\r\n labels = labels.to(logits.device)\r\n if self.config.problem_type is None:\r\n if self.num_labels == 1:\r\n self.config.problem_type = \"regression\"\r\n elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):\r\n self.config.problem_type = \"single_label_classification\"\r\n else:\r\n self.config.problem_type = \"multi_label_classification\"\r\n\r\n if self.config.problem_type == \"regression\":\r\n loss_fct = MSELoss()\r\n if self.num_labels == 1:\r\n loss = loss_fct(logits.squeeze(), labels.squeeze())\r\n else:\r\n loss = loss_fct(logits, labels)\r\n elif self.config.problem_type == \"single_label_classification\":\r\n loss_fct = CrossEntropyLoss()\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n elif self.config.problem_type == \"multi_label_classification\":\r\n loss_fct = BCEWithLogitsLoss()\r\n loss = loss_fct(logits, labels)\r\n\r\n if not return_dict:\r\n return logits\r\n\r\n return ImageClassifierOutput(\r\n loss=loss,\r\n logits = logits.last_hidden_state,\r\n )\r\n\r\n```\r\n\r\nI would like to prepare a PR(linking my implemented class with `AutoModel` and `AutoModelForImageCLassification`) if you agree that this could be useful. Have a great day.",
"@sawradip you could certainly do something like this (wrap timm in a transformers interface). I'm all for anything that improves the HF ecosystem experience whether it's `timm` or `transformers`, and definitely like to see use of `timm` grow. For this one though it wouldn't be my call, it's up to the transformers maintainers if they see value and feel it fits the long term plans for the library...",
"@amyeroberts Any thoughts?",
"@sawradip Thanks for opening this issue and for sharing an example code snippet so quickly! We do already have a way loading in timm checkpoints in the library, however this is [specifically for backbones](https://github.com/huggingface/transformers/blob/5ee9693a1c77c617ebc43ef20194b6d3b674318e/src/transformers/models/timm_backbone/modeling_timm_backbone.py#L4). \r\n\r\nMy instinct is that at the moment we don't want to couple the two libraries with a more generic `TimmModel` class as they're both built for different things and it would be hard to guarantee that the models would behave as expected in the transformers environment e.g. when setting certain config values. I'd anticipate requiring a lot of if/else code to properly map things. \r\n\r\nIt would be good to have access to all of the timm checkpoints. Perhaps we could do the following: \r\n* Add this model on the hub, following [this tutorial](https://huggingface.co/docs/transformers/custom_models)\r\n* Add a warning on the model page and / or when loading the model that this is experimental and behaviour with config values isn't guaranteed\r\n\r\n@sgugger @rwightman What are you thoughts on this suggestion? \r\n\r\n",
"Works for me using the code on the Hub API for this class.\r\n\r\nNote that ultimately, the compatibility with `safetensors` weight can only work if `timm` adds the same metadata as Transformers to tell us the weights are in PyTorch format.",
"Hmm, all timm safetensors weights are in pytorch format, updating all safetensor weights with metadata isn't going to happen anytime soon. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
### System Info
My env information:
```
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For a GSoC project under the [OpenVINO Toolkit](https://summerofcode.withgoogle.com/archive/2022/organizations/openvino-toolkit), I have been working with timm models through Transformers.
As we know, most of the timm models (on the HF Hub) are trained or fine-tuned on some variation of the ImageNet dataset, and are thus effectively image classification models. If I attempt to load a timm model using `AutoModelForImageClassification`,
```
import torch
from transformers import AutoModelForImageClassification
model_id = "timm/vit_tiny_r_s16_p8_224.augreg_in21k"
hf_model = AutoModelForImageClassification.from_pretrained( model_id)
out = hf_model(pixel_values = torch.zeros((5, 3, hf_model.config.image_size, hf_model.config.image_size)))
print(out.logits.shape)
```
I get this error:
```
Traceback (most recent call last):
File "/home/sawradip/Desktop/practice_code/practice_gsoc/optimum-intel/../demo.py", line 10, in <module>
hf_model = AutoModelForImageClassification.from_pretrained( model_id,
File "/home/sawradip/miniconda3/envs/gsoc_env/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
return model_class.from_pretrained(
File "/home/sawradip/miniconda3/envs/gsoc_env/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2629, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "/home/sawradip/miniconda3/envs/gsoc_env/lib/python3.9/site-packages/transformers/modeling_utils.py", line 449, in load_state_dict
if metadata.get("format") not in ["pt", "tf", "flax"]:
AttributeError: 'NoneType' object has no attribute 'get'
```
I find that this issue doesn't occur if I force transformers to use the PyTorch weights and avoid `.safetensors`.
```
import torch
from transformers import AutoModelForImageClassification
model_id = "timm/vit_tiny_r_s16_p8_224.augreg_in21k"
hf_model = AutoModelForImageClassification.from_pretrained( model_id,
use_safetensors = False
)
out = hf_model(pixel_values = torch.zeros((5, 3, hf_model.config.image_size, hf_model.config.image_size)))
print(out.logits.shape)
```
But I still get this warning in the output, saying that a lot of weights were not loaded and were newly initialized.
```
Some weights of ViTForImageClassification were not initialized from the model checkpoint at timm/vit_tiny_r_s16_p8_224.augreg_in21k and are newly initialized: ['encoder.layer.0.layernorm_before.bias', 'encoder.layer.11.attention.attention.query.weight', 'encoder.layer.1.attention.attention.query.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.layernorm_before.bias', 'encoder.layer.10.attention.attention.query.weight', 'encoder.layer.6.attention.attention.key.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.0.attention.attention.key.bias', 'encoder.layer.2.layernorm_after.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.10.layernorm_after.bias', 'layernorm.bias', 'encoder.layer.0.attention.attention.key.weight', 'encoder.layer.1.attention.attention.value.bias', 'encoder.layer.4.output.dense.weight', 'embeddings.patch_embeddings.projection.weight', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.1.layernorm_after.weight', 'encoder.layer.2.attention.attention.query.weight', 'encoder.layer.3.attention.attention.key.bias', 'encoder.layer.11.layernorm_after.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.2.layernorm_before.weight', 'encoder.layer.4.attention.attention.query.bias', 'encoder.layer.6.layernorm_after.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.7.layernorm_before.weight', 'encoder.layer.8.attention.attention.value.bias', 'encoder.layer.6.attention.attention.query.weight', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.10.layernorm_before.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.9.attention.attention.key.weight', 'encoder.layer.6.layernorm_after.bias', 'classifier.bias', 'encoder.layer.1.layernorm_before.bias', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.attention.query.bias', 'encoder.layer.3.layernorm_before.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.5.attention.attention.value.bias', 'encoder.layer.6.attention.attention.value.weight', 'encoder.layer.0.layernorm_after.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.7.layernorm_after.weight', 'encoder.layer.8.output.dense.bias', 'layernorm.weight', 'encoder.layer.0.output.dense.weight', 'encoder.layer.11.attention.attention.key.weight', 'encoder.layer.2.attention.attention.query.bias', 'encoder.layer.11.attention.attention.value.weight', 'encoder.layer.3.layernorm_after.bias', 'classifier.weight', 'encoder.layer.4.attention.attention.value.weight', 'encoder.layer.8.layernorm_after.weight', 'encoder.layer.9.attention.attention.query.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.1.attention.attention.value.weight', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.5.attention.attention.query.bias', 'encoder.layer.6.attention.attention.key.bias', 'encoder.layer.9.layernorm_before.bias', 'encoder.layer.7.attention.attention.query.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.8.layernorm_after.bias', 'encoder.layer.2.attention.attention.key.weight', 
'encoder.layer.5.layernorm_after.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.7.layernorm_after.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.9.attention.attention.value.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.2.attention.attention.value.bias', 'encoder.layer.5.attention.attention.key.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.attention.attention.query.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.0.attention.attention.value.weight', 'encoder.layer.3.attention.attention.value.bias', 'encoder.layer.2.layernorm_before.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.1.output.dense.weight', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.5.attention.attention.value.weight', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.8.attention.attention.key.weight', 'encoder.layer.3.attention.attention.value.weight', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.7.attention.attention.key.weight', 'encoder.layer.0.attention.attention.value.bias', 'encoder.layer.2.attention.attention.value.weight', 'encoder.layer.5.layernorm_before.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.5.layernorm_before.weight', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.7.attention.attention.value.weight', 'encoder.layer.6.layernorm_before.weight', 'encoder.layer.3.attention.attention.key.weight', 'encoder.layer.11.attention.attention.query.bias', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.6.layernorm_before.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.10.attention.attention.value.weight', 'encoder.layer.7.attention.attention.key.bias', 'encoder.layer.10.attention.attention.value.bias', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.4.attention.attention.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.2.attention.attention.key.bias', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.8.attention.attention.query.weight', 'encoder.layer.3.attention.attention.query.bias', 'encoder.layer.1.attention.attention.key.weight', 'encoder.layer.4.layernorm_after.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.attention.attention.value.bias', 'encoder.layer.3.layernorm_before.weight', 'encoder.layer.11.attention.attention.key.bias', 'encoder.layer.10.output.dense.bias', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.3.output.dense.bias', 'encoder.layer.4.attention.attention.key.weight', 'encoder.layer.10.attention.attention.key.weight', 'encoder.layer.4.layernorm_before.weight', 'encoder.layer.9.attention.attention.value.weight', 'encoder.layer.5.attention.attention.query.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.0.attention.attention.query.weight', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.8.attention.attention.value.weight', 'encoder.layer.4.attention.attention.key.bias', 'encoder.layer.4.layernorm_after.bias', 'encoder.layer.6.intermediate.dense.weight', 
'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.0.layernorm_after.bias', 'encoder.layer.9.attention.attention.query.bias', 'encoder.layer.11.attention.attention.value.bias', 'encoder.layer.8.attention.attention.key.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.9.layernorm_after.bias', 'encoder.layer.11.layernorm_after.weight', 'encoder.layer.6.attention.attention.value.bias', 'encoder.layer.2.layernorm_after.bias', 'encoder.layer.9.layernorm_after.weight', 'encoder.layer.1.attention.attention.key.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.7.attention.attention.query.bias', 'embeddings.cls_token', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.11.layernorm_before.weight', 'encoder.layer.0.attention.attention.query.bias', 'encoder.layer.1.layernorm_after.bias', 'encoder.layer.3.attention.attention.query.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.10.layernorm_after.weight', 'encoder.layer.5.layernorm_after.weight', 'encoder.layer.1.layernorm_before.weight', 'encoder.layer.0.layernorm_before.weight', 'encoder.layer.5.attention.attention.key.bias', 'encoder.layer.8.layernorm_before.weight', 'encoder.layer.3.layernorm_after.weight', 'encoder.layer.10.layernorm_before.bias', 'embeddings.position_embeddings', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.7.layernorm_before.bias', 'encoder.layer.1.attention.attention.query.bias', 'encoder.layer.10.attention.attention.key.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.9.layernorm_before.weight', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.4.attention.attention.query.weight', 'encoder.layer.8.attention.attention.query.bias', 'encoder.layer.7.output.dense.bias', 'encoder.layer.8.layernorm_before.bias', 'encoder.layer.9.output.dense.bias', 'encoder.layer.8.attention.output.dense.bias', 'embeddings.patch_embeddings.projection.bias', 'encoder.layer.11.layernorm_before.bias', 'encoder.layer.9.attention.attention.key.bias']
```
This means the models cannot be used directly for classification on ImageNet. I still get an output, but its shape implies only 2 output classes, which is not the expected number of classes for this model:
```
torch.Size([5, 2])
```
The model name `timm/vit_tiny_r_s16_p8_224.augreg_in21k` indicates that the weights were trained for `imagenet-21k`, i.e. 21843 classes.
This happens because the model `config` files for timm models on the Hub store the number of output classes in the `num_classes` field, whereas `AutoConfig` expects a `num_labels` field; not finding one, it falls back to the default value 2, as can be seen [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/configuration_utils.py#L331).
So in the loaded model we can see:
```
print(hf_model.config.num_classes)
-> 21843
print(hf_model.config.num_labels)
->2
```
### I know there are a number of issues here, but it is not possible to reproduce the later ones without fixing the earlier ones, so creating separate issues for each one would be more cumbersome for the reader.
Let me summarize the points I am making:
1. Timm models cannot be loaded through `AutoModelForImageClassification` because loading from the `safetensors` weights fails.
2. If we explicitly pass `use_safetensors = False`, the PyTorch weights are loaded, but huge numbers of weights are initialized randomly, so the models are not usable out of the box.
3. For all models, the number of output classes is 2, and unlike timm's `create_model`, there is no way for users to specify `num_classes` without modifying the config file.
Is this behaviour expected?
@amyeroberts @rwightman
### Expected behavior
The expected behavior is that the code block mentioned above outputs:
```
torch.Size([5, 21843])
```
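In the meantime, a possible workaround sketch (assuming `timm` is installed) is to load the checkpoint with `timm` directly, which sidesteps both the safetensors metadata check and the `num_labels` default:
```python
import timm
import torch

# load the checkpoint straight from the Hub with timm, bypassing the AutoModel path
model = timm.create_model("hf-hub:timm/vit_tiny_r_s16_p8_224.augreg_in21k", pretrained=True)
model.eval()

with torch.no_grad():
    out = model(torch.zeros((5, 3, 224, 224)))
print(out.shape)  # expected: torch.Size([5, 21843])
```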
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25282/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25281
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25281/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25281/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25281/events
|
https://github.com/huggingface/transformers/pull/25281
| 1,834,591,089 |
PR_kwDOCUB6oc5XFbri
| 25,281 |
Docs: Update list of `report_to` logging integrations in docstring
|
{
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
MEMBER
| null |
# What does this PR do?
## Pull Request overview
* Add missing `dagshub`, `codecarbon` and `flyte` integrations to `TrainingArguments` docstring.
* Update `report_to` type hint to allow strings.
## Details
I also converted the ordering back to alphabetical.
I considered using a typing `Literal` as the type hint to help users via their IDE, but I haven't implemented it here so as not to clash with the existing style.
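For reference, the `Literal` variant I considered would look roughly like this (the exact set of integration names is illustrative, not the full list):
```python
from typing import List, Literal, Optional, Union

ReportTo = Literal[
    "azure_ml", "clearml", "codecarbon", "comet_ml", "dagshub",
    "flyte", "mlflow", "neptune", "tensorboard", "wandb", "all", "none",
]

report_to: Optional[Union[ReportTo, List[ReportTo]]] = None
```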
## Before submitting
- [x] This PR fixes a typo or improves the docs
## Who can review?
@sgugger
- Tom Aarsen
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25281/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25281",
"html_url": "https://github.com/huggingface/transformers/pull/25281",
"diff_url": "https://github.com/huggingface/transformers/pull/25281.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25281.patch",
"merged_at": 1691055285000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25280
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25280/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25280/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25280/events
|
https://github.com/huggingface/transformers/issues/25280
| 1,834,418,422 |
I_kwDOCUB6oc5tVwD2
| 25,280 |
How to download files from HF spaces
|
{
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @andysingal, \r\n\r\nThere is a typo in the repo_id. The correct command is: \r\n\r\n```\r\nmodel_path = hf_hub_download(repo_id=\"xinyu1205/recognize_anything_model\", filename=\"tag2text_swin_14m.pth\", local_dir = \"/content\")\r\n```\r\n\r\nIf you receive an error that a repo doesn't exist, the best thing to do is check directly on the hub for the repo and file name. ",
"The file exists in the space\r\n\r\nOn Thu, Aug 3, 2023 at 15:41 amyeroberts ***@***.***> wrote:\r\n\r\n> Hi @andysingal <https://github.com/andysingal>,\r\n>\r\n> There is a typo in the repo_id. The correct command is:\r\n>\r\n> model_path = hf_hub_download(repo_id=\"xinyu1205/recognize_anything_model\", filename=\"tag2text_swin_14m.pth\", local_dir = \"/content\")\r\n>\r\n> If you receive an error that a repo doesn't exist, the best thing to do is\r\n> check directly on the hub for the repo and file name.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/25280#issuecomment-1663711815>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNPJ7VV53GDNHXAUTCLXTN2N7ANCNFSM6AAAAAA3CJWHSU>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"If downloading from the space, then you should specify the repo type in the `hf_hub_download` command\r\n\r\n```\r\nmodel_path = hf_hub_download(repo_id=\"xinyu1205/recognize-anything\", filename=\"tag2text_swin_14m.pth\", local_dir = \"/content\", repo_type=\"space\")\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
NONE
| null |
### System Info
google colab
### Who can help?
@sanchit-gandhi @rock
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried:
```
from huggingface_hub import hf_hub_download,hf_hub_url
# model_path = hf_hub_download(repo_id="xinyu1205/recognize-anything", filename="tag2text_swin_14m.pth", local_dir = "/content")
```
but it throws an error saying the repo is not present
### Expected behavior
download the file
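A hedged sketch of the variant I would expect to work, assuming the file actually lives in the Space rather than in a model repo (`repo_type` guessed from the Space URL):
```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="xinyu1205/recognize-anything",
    filename="tag2text_swin_14m.pth",
    repo_type="space",   # the checkpoint is hosted in a Space, not a model repo
    local_dir="/content",
)
print(model_path)
```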
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25280/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25279
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25279/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25279/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25279/events
|
https://github.com/huggingface/transformers/pull/25279
| 1,834,334,251 |
PR_kwDOCUB6oc5XEj_5
| 25,279 |
CI 🚀 even more
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Well, request a review too quickly, sorry, but just a few tiny thing to fix ...",
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, fair point. At least a (closed) PR is in the history for reference if we ever need it in the future. Thanks!",
"(we will need to keep an eye on the `torch_job` if something strange happens - mostly hanging in a full run: likely an OOM and some workers are killed.)",
"We can then go back to 6 workers instead of 8 if it happens."
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
A follow up of #25274:
- Reduce `torch_job` RAM usage: it previously reached `95%`; with this PR, it reaches only `82%`.
- Also smaller RAM usage for: `tf_job`: `60%` | `flax_job`: `86%`
- Avoid testing the non-modeling files redundantly
- we save ~ 2 x 8 = 16 min of runtime.
Now all the jobs of the full CI suite run in < 10 minutes (except the new `non_modeling_job`, but it takes ~2 min just to restore the cache!)
<img width="206" alt="Screenshot 2023-08-03 081339" src="https://github.com/huggingface/transformers/assets/2521628/07a8b1b5-7521-4d8c-8d7e-11b176c427c4">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25279/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25279",
"html_url": "https://github.com/huggingface/transformers/pull/25279",
"diff_url": "https://github.com/huggingface/transformers/pull/25279.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25279.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25278
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25278/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25278/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25278/events
|
https://github.com/huggingface/transformers/pull/25278
| 1,834,212,897 |
PR_kwDOCUB6oc5XEKDh
| 25,278 |
Llama tokenizer add_prefix_space
|
{
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25278). All of your documentation changes will be reflected on that endpoint.",
"Hi @sgugger , I have the same request here. My problem is as follows: \r\n\r\n\"\\nObservation\" is a substring of \"!\\nObservation\", but in the encoded version by the `LlamaTokenizerFast` tokenizer, it is not the case anymore. This can be solved if we enable passing the `add_prefix_space` parameter to the tokenizer. \r\n\r\nHere is my code:\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel_name = 'lmsys/vicuna-13b-v1.3'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, add_special_tokens=False, padding=True, use_fast=True)\r\nprint(tokenizer)\r\nfor stop_word in ['\\nObservation', '!\\nObservation']:\r\n print(f'++++++++++{stop_word}+++++++++++++')\r\n tokens = tokenizer.tokenize(stop_word, add_special_tokens=False)\r\n print(tokens)\r\n ids = tokenizer.convert_tokens_to_ids(tokens)\r\n print(ids) \r\n```\r\n\r\nAnd here is the output:\r\n```bash\r\nYou are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565, and set the legacy attribute accordingly.\r\nLlamaTokenizerFast(name_or_path='lmsys/vicuna-13b-v1.3', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': '<unk>'}, clean_up_tokenization_spaces=False)\r\n++++++++++\r\nObservation+++++++++++++\r\n['▁', '<0x0A>', 'Ob', 'serv', 'ation']\r\n[29871, 13, 6039, 2140, 362]\r\n\r\n++++++++++!\r\nObservation+++++++++++++\r\n['▁!', '<0x0A>', 'Ob', 'serv', 'ation']\r\n[1738, 13, 6039, 2140, 362]\r\n```\r\n\r\nAs you can see, [29871, 13, 6039, 2140, 362] is not a subset of [1738, 13, 6039, 2140, 362] anymore. This is because the LlamaTokenizerFast always adds a prefix space before a word. \r\n\r\n",
"cc @ArthurZucker ",
"Hey! Not entirely I understand the example you provided @faany, you say that `\"\\nObservation\" is a substring of \"!\\nObservation\",` but I why? \r\n\r\nI am actually going to add this argument (following #25224 ), as we now control the addition of the SPIECE_UNDERLINE by hand. \r\n",
"@ArthurZucker maybe a bit more context to my example above: I want my model to stop generating when the string `\\nObservation` occurs and I used a self-defined StoppingCriteria for it. So when the model generates \"\\nObservation\", it should stop generating new texts. But what I found is that my model doesn't stop generating, even though it produces \"\\nObservation\". So I dig a bit deeper into this and found that it produces a `!` in front of `\\nObservation` and the token ids of `\\nObservation` are not the same as the token ids of `\\nObservation` in `!\\nObservation`. \r\n\r\nIf the token id of SPIECE_UNDERLINE disappears from [29871, 13, 6039, 2140, 362], it will work, because [13, 6039, 2140, 362] is now a sub-list of [1738, 13, 6039, 2140, 362]. \r\n\r\n",
"> Hey! Not entirely I understand the example you provided @Faany, you say that `\"\\nObservation\" is a substring of \"!\\nObservation\",` but I why?\r\n> \r\n> I am actually going to add this argument (following #25224 ), as we now control the addition of the SPIECE_UNDERLINE by hand.\r\n\r\nHi @ArthurZucker . Thanks for your reply. I use your patch [25224]( https://github.com/huggingface/transformers/pull/25224) and set `legacy=False; add_dummy_prefix=False`, but the issue proposed by @faaany still exists.\r\n\r\n```\r\n#########\r\nObservation##########\r\n['▁', '<0x0A>', 'Ob', 'serv', 'ation']\r\n[29871, 13, 6039, 2140, 362]\r\n\r\nObservation\r\n\r\n#########\"\r\nObservation##########\r\n['▁\"', '<0x0A>', 'Ob', 'serv', 'ation']\r\n[376, 13, 6039, 2140, 362]\r\n\"\r\nObservation\r\n```\r\n\r\nWe see that `\\nObservation` and `\"\\nObservation` are still recognized as two different tokens. We want the `token_ids` of `\\nObservation` to be the sub-list of the `token_ids` of `\"\\nObservation`, so I submitted this PR. The issue can be solved with my PR. The results are as follows\r\n```\r\n#########\r\nObservation##########\r\n['<0x0A>', 'Ob', 'serv', 'ation']\r\n[13, 6039, 2140, 362]\r\n\r\nObservation\r\n\r\n#########\"\r\nObservation##########\r\n['\"', '<0x0A>', 'Ob', 'serv', 'ation']\r\n[29908, 13, 6039, 2140, 362]\r\n\"\r\nObservation\r\n```",
"The patch does not include `add_dummy_prefix_space` yet, so it's quite normal that the dummy prefix space is sill added. Once it is supported, the patch will behave similarly as this PR 😉\r\n\r\nNow the issue that you mention:\r\n> the token ids of \\nObservation are not the same as the token ids of \\nObservation in !\\nObservation\r\n\r\nIt's expected no? They are different tokens. What you need to do is actually stop the generation when `\\nObserbation` is generated. This means that the stopping criteria should look like this:\r\n```python\r\nfrom transformers import StoppingCriteria\r\n\r\nclass VicunaStoppingCriteria(StoppingCriteria):\r\n def __init__(self, eos_sequence = [13, 6039, 2140, 362]):\r\n self.eos_sequence = eos_sequence\r\n\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n last_2_ids = input_ids[:,-2:].tolist()\r\n return self.eos_sequence in last_2_ids\r\n``` \r\nBecause the actual tokenization of `\\nObservation` is `[13, 6039, 2140, 362]`:\r\n```python \r\n>>> tokenizer.sp_model.decode([13, 6039, 2140, 362])\r\n>>> '\\nObservation'\r\n```",
"> The patch does not include `add_dummy_prefix_space` yet, so it's quite normal that the dummy prefix space is sill added. Once it is supported, the patch will behave similarly as this PR 😉\r\n> \r\n> Now the issue that you mention:\r\n> \r\n> > the token ids of \\nObservation are not the same as the token ids of \\nObservation in !\\nObservation\r\n> \r\n> It's expected no? They are different tokens. What you need to do is actually stop the generation when `\\nObserbation` is generated. This means that the stopping criteria should look like this:\r\n> \r\n> ```python\r\n> from transformers import StoppingCriteria\r\n> \r\n> class VicunaStoppingCriteria(StoppingCriteria):\r\n> def __init__(self, eos_sequence = [13, 6039, 2140, 362]):\r\n> self.eos_sequence = eos_sequence\r\n> \r\n> def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n> last_2_ids = input_ids[:,-2:].tolist()\r\n> return self.eos_sequence in last_2_ids\r\n> ```\r\n> \r\n> Because the actual tokenization of `\\nObservation` is `[13, 6039, 2140, 362]`:\r\n> \r\n> ```python\r\n> >>> tokenizer.sp_model.decode([13, 6039, 2140, 362])\r\n> >>> '\\nObservation'\r\n> ```\r\n\r\nThanks for the explanation. I see your point, but what if I want to pass the actual stop word in a string, e.g. `\\nObservation`, into `VicunaStoppingCriteria` instead of the `eos_sequence` in your example? \r\n\r\n```python\r\nfrom transformers import StoppingCriteria\r\n\r\nclass VicunaStoppingCriteria(StoppingCriteria):\r\n def __init__(self, tokenizer, stop_words = [\"\\nObservation\"]):\r\n encoded_stop_words = tokenizer(stop_words, add_special_tokens=False, padding=True, return_tensors=\"pt\")\r\n self.stop_words = encoded_stop_words.input_ids.to('cpu')\r\n\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n last_2_ids = input_ids[:,-2:].tolist()\r\n return self.stop_words in last_2_ids\r\n```\r\n\r\nThen the VicunaStoppingCriteriawould will return False due to the added prefix space.",
"@ArthurZucker Maybe the way how I define my `StoppingCriteria` is not recommended in general? ",
"Yes @faaany, I don't think using a string is the best way to go for two reasons: \r\n1. The `generate` function does not have access to the tokenizer\r\n2. If you encode `\"\\nObservation\"`, as you noticed, whether there is a prefix space or not will change the encoding. You can mitigate this using the `add_dummy_prefix` indeed, but generally it's better to have the actual ids. \r\n\r\nBut I'll make sure to include the `add_prefix_space` argument anyway I think it's important. ",
"> Yes @faaany, I don't think using a string is the best way to go for two reasons:\r\n> \r\n> 1. The `generate` function does not have access to the tokenizer\r\n> 2. If you encode `\"\\nObservation\"`, as you noticed, whether there is a prefix space or not will change the encoding. You can mitigate this using the `add_dummy_prefix` indeed, but generally it's better to have the actual ids.\r\n> \r\n> But I'll make sure to include the `add_prefix_space` argument anyway I think it's important.\r\n\r\nAwesome, thank you! Since @jiqing-feng has tested that the current PR #25224 still doesn't support the argument `add_prefix_space`, can we leave this PR open for the `add_prefix_space` argument? Once your PR gets merged with the support, we will be glad to test it again and we can close this one. ",
"Hi @ArthurZucker , I saw that you have merged the other PR without the new `add_prefix_space` argument. Would you consider this PR from @jiqing-feng then? Or do you have any other plans or thoughts? ",
"For now I will not consider this PR as it is very much incomplete, especially for the fast tokenization. \r\n- If you use a slow tokenizer, the prefix space is added in `prepare_for_tokenization`, which means you can use `tokenizer._tokenizer` to achieve what you are looking for. \r\n- If you are using a fast tokenizer, you will need to modify the backend tokenizer, and if you test your script, it will not work for now. You need to change the state of the `fast_tokenizer.backen_tokenizer`, to update the `normalizer` and the `decoder`. This is a bit tricky for now, and I would recommend going through another road: use a fast tokenizer and don't lookup strings but rather inputs ids. ",
"> For now I will not consider this PR as it is very much incomplete, especially for the fast tokenization.\r\n> \r\n> * If you use a slow tokenizer, the prefix space is added in `prepare_for_tokenization`, which means you can use `tokenizer._tokenizer` to achieve what you are looking for.\r\n> * If you are using a fast tokenizer, you will need to modify the backend tokenizer, and if you test your script, it will not work for now. You need to change the state of the `fast_tokenizer.backen_tokenizer`, to update the `normalizer` and the `decoder`. This is a bit tricky for now, and I would recommend going through another road: use a fast tokenizer and don't lookup strings but rather inputs ids.\r\n\r\nThanks a lot for the explanation! Just to make sure that I understood you correctly, do you mean that I can use the slow tokenizer instead of fast tokenizer to achieve my goal, e.g. like this:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel_name = 'lmsys/vicuna-13b-v1.3'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, add_special_tokens=False, padding=True, use_fast=False)\r\n\r\nfor stop_word in ['Observation', '!\\nObservation']:\r\n print(f'++++++++++{stop_word}+++++++++++++')\r\n tokens = tokenizer._tokenize(stop_word)\r\n print(tokens)\r\n ids = tokenizer.convert_tokens_to_ids(tokens)\r\n print(ids) \r\n```\r\n\r\nBut the output I got for the above code is this:\r\n\r\n```shell\r\n++++++++++Observation+++++++++++++\r\n['▁Observ', 'ation']\r\n[21651, 362]\r\n++++++++++!\r\nObservation+++++++++++++\r\n['▁!', '<0x0A>', 'Ob', 'serv', 'ation']\r\n[1738, 13, 6039, 2140, 362]\r\n```",
"I mean that the way you wanted to use the fast tokenizer would not have worked unless you convert it again (so the `tokenization_llama_fast` is wrong yes. \r\nFor the snippet did you set `legacy = False`? ",
"> I mean that the way you wanted to use the fast tokenizer would not have worked unless you convert it again (so the `tokenization_llama_fast` is wrong yes. For the snippet did you set `legacy = False`?\r\n\r\nAhh, thanks for the hint, it worked with `legacy=False`!!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
Hi @sgugger
This PR enables the llama tokenizer to support `add_prefix_space`.
Would you please help me review it? Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25278/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25278",
"html_url": "https://github.com/huggingface/transformers/pull/25278",
"diff_url": "https://github.com/huggingface/transformers/pull/25278.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25278.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25277
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25277/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25277/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25277/events
|
https://github.com/huggingface/transformers/issues/25277
| 1,834,064,623 |
I_kwDOCUB6oc5tUZrv
| 25,277 |
Unable to quantize Meta's new AudioCraft MusicGen model
|
{
"login": "xNul",
"id": 894305,
"node_id": "MDQ6VXNlcjg5NDMwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/894305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xNul",
"html_url": "https://github.com/xNul",
"followers_url": "https://api.github.com/users/xNul/followers",
"following_url": "https://api.github.com/users/xNul/following{/other_user}",
"gists_url": "https://api.github.com/users/xNul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xNul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xNul/subscriptions",
"organizations_url": "https://api.github.com/users/xNul/orgs",
"repos_url": "https://api.github.com/users/xNul/repos",
"events_url": "https://api.github.com/users/xNul/events{/privacy}",
"received_events_url": "https://api.github.com/users/xNul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I figured out a fix by adding the line\r\n```python\r\ninputs_embeds = inputs_embeds.to(torch.float16)\r\n```\r\nright after line 776, but I noticed commit https://github.com/huggingface/transformers/commit/03f98f96836477f6f5b86957d3ce98778cad5d94 which also fixes this bug. So the second bug is fixed if you're using a version of transformers since that commit a week ago.\r\n\r\nNow we are down to two problems: the original `deepcopy` bug and the fact that for some reason the quantized MusicGen model runs over 2x as slow as the non-quantized one. Not sure why that is because quantized models should be faster. I can't do anything about it so I'm at a dead end here.",
"Also, non-quantized, normal musicgen-large is about 2x slower on Transformers than Meta's own code. Interestingly musicgen-small is a bit faster than Meta's own code. About 10% faster.",
"cc @younesbelkada @sanchit-gandhi ",
"For benchmarking `transformers` vs `audiocraft` - could you ensure that the `transformers` model is put in half (fp16) precision? By default, we always load in fp32 precision on CPU, whereas `audiocraft` always loads the model in fp16 precision on the GPU. Running the `transformers` model in fp16 half precision should give a considerable speed-up vs fp32 full precision:\r\n\r\n```python\r\nmodel = MusicGenForConditionalGeneration.from_pretrained(\"facebook/musicgen-large\", torch_dtype=torch.float16)\r\n```\r\n\r\nWe can make this faster still by adding Flash Attention with a Better Transformers integration! This should give a further 10-15% speed-up",
"Regarding the quantisation, I was **not** able to load the model using bitsandbytes==0.40.0 using the following code snippet:\r\n```python\r\nfrom transformers import MusicgenForConditionalGeneration\r\n\r\nmodel = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", load_in_8bit=True)\r\n```\r\n\r\n<details>\r\n\r\n<summary> Traceback </summary>\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[6], line 1\r\n----> 1 model = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", load_in_8bit=True)\r\n\r\nFile ~/transformers/src/transformers/models/musicgen/modeling_musicgen.py:1595, in MusicgenForConditionalGeneration.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1589 logger.warning(\r\n 1590 \"Fast initialization is currently not supported for MusicgenForConditionalGeneration. \"\r\n 1591 \"Falling back to slow initialization...\"\r\n 1592 )\r\n 1593 kwargs[\"_fast_init\"] = False\r\n-> 1595 return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)\r\n\r\nFile ~/transformers/src/transformers/modeling_utils.py:2744, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)\r\n 2742 # We keep some modules such as the lm_head in their original dtype for numerical stability reasons\r\n 2743 if llm_int8_skip_modules is None:\r\n-> 2744 modules_to_not_convert = get_keys_to_not_convert(model)\r\n 2745 else:\r\n 2746 modules_to_not_convert = llm_int8_skip_modules\r\n\r\nFile ~/transformers/src/transformers/utils/bitsandbytes.py:257, in get_keys_to_not_convert(model)\r\n 245 r\"\"\"\r\n 246 An utility function to get the key of the module to keep in full precision if any For example for CausalLM modules\r\n 247 we may want to keep the lm_head in full precision for numerical stability reasons. 
For other architectures, we want\r\n (...)\r\n 253 Input model\r\n 254 \"\"\"\r\n 255 # Create a copy of the model and tie the weights, then\r\n 256 # check if it contains tied weights\r\n--> 257 tied_model = deepcopy(model) # this has 0 cost since it is done inside `init_empty_weights` context manager`\r\n 258 tied_model.tie_weights()\r\n 260 tied_params = find_tied_parameters(tied_model)\r\n\r\nFile /usr/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 174 # If is its own copy, don't memoize.\r\n 175 if y is not x:\r\n\r\nFile /usr/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 269 if state is not None:\r\n 270 if deep:\r\n--> 271 state = deepcopy(state, memo)\r\n 272 if hasattr(y, '__setstate__'):\r\n 273 y.__setstate__(state)\r\n\r\nFile /usr/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nFile /usr/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)\r\n 229 memo[id(x)] = y\r\n 230 for key, value in x.items():\r\n--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n 232 return y\r\n\r\nFile /usr/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 174 # If is its own copy, don't memoize.\r\n 175 if y is not x:\r\n\r\nFile /usr/lib/python3.10/copy.py:297, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 295 for key, value in dictiter:\r\n 296 key = deepcopy(key, memo)\r\n--> 297 value = deepcopy(value, memo)\r\n 298 y[key] = value\r\n 299 else:\r\n\r\n [... skipping similar frames: deepcopy at line 172 (1 times)]\r\n\r\nFile /usr/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 269 if state is not None:\r\n 270 if deep:\r\n--> 271 state = deepcopy(state, memo)\r\n 272 if hasattr(y, '__setstate__'):\r\n 273 y.__setstate__(state)\r\n\r\nFile /usr/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nFile /usr/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)\r\n 229 memo[id(x)] = y\r\n 230 for key, value in x.items():\r\n--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n 232 return y\r\n\r\n [... skipping similar frames: deepcopy at line 172 (1 times)]\r\n\r\nFile /usr/lib/python3.10/copy.py:297, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 295 for key, value in dictiter:\r\n 296 key = deepcopy(key, memo)\r\n--> 297 value = deepcopy(value, memo)\r\n 298 y[key] = value\r\n 299 else:\r\n\r\n [... 
skipping similar frames: deepcopy at line 172 (6 times), _deepcopy_dict at line 231 (3 times), _reconstruct at line 271 (3 times), deepcopy at line 146 (3 times), _reconstruct at line 297 (2 times)]\r\n\r\nFile /usr/lib/python3.10/copy.py:297, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 295 for key, value in dictiter:\r\n 296 key = deepcopy(key, memo)\r\n--> 297 value = deepcopy(value, memo)\r\n 298 y[key] = value\r\n 299 else:\r\n\r\nFile /usr/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 174 # If is its own copy, don't memoize.\r\n 175 if y is not x:\r\n\r\nFile /usr/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 269 if state is not None:\r\n 270 if deep:\r\n--> 271 state = deepcopy(state, memo)\r\n 272 if hasattr(y, '__setstate__'):\r\n 273 y.__setstate__(state)\r\n\r\nFile /usr/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nFile /usr/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)\r\n 229 memo[id(x)] = y\r\n 230 for key, value in x.items():\r\n--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n 232 return y\r\n\r\nFile /usr/lib/python3.10/copy.py:153, in deepcopy(x, memo, _nil)\r\n 151 copier = getattr(x, \"__deepcopy__\", None)\r\n 152 if copier is not None:\r\n--> 153 y = copier(memo)\r\n 154 else:\r\n 155 reductor = dispatch_table.get(cls)\r\n\r\nFile ~/hf/lib/python3.10/site-packages/torch/_tensor.py:86, in Tensor.__deepcopy__(self, memo)\r\n 84 return handle_torch_function(Tensor.__deepcopy__, (self,), self, memo)\r\n 85 if not self.is_leaf:\r\n---> 86 raise RuntimeError(\r\n 87 \"Only Tensors created explicitly by the user \"\r\n 88 \"(graph leaves) support the deepcopy protocol at the moment\"\r\n 89 )\r\n 90 if id(self) in memo:\r\n 91 return memo[id(self)]\r\n\r\nRuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment\r\n```\r\n\r\n</details>\r\n\r\nHowever, I was with:\r\n```python\r\nfrom transformers import MusicgenForConditionalGeneration\r\nimport torch\r\n\r\nwith torch.no_grad():\r\n model = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", load_in_8bit=True)\r\n```\r\n\r\nI can take a deeper look into why the bnb conversion is failing unless @younesbelkada has an idea from this behaviour!\r\n\r\nNote that if you care about inference speed, your best bet is to stick with fp16 inference here:\r\n```python\r\nfrom transformers import MusicgenForConditionalGeneration\r\n\r\nmodel = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", torch_dtype=torch.float16)\r\n```\r\n",
"@sanchit-gandhi \r\n\r\n> For benchmarking `transformers` vs `audiocraft` - could you ensure that the `transformers` model is put in half (fp16) precision? By default, we always load in fp32 precision on CPU, whereas `audiocraft` always loads the model in fp16 precision on the GPU. Running the `transformers` model in fp16 half precision should give a considerable speed-up vs fp32 full precision:\r\n> \r\n> ```python\r\n> model = MusicGenForConditionalGeneration.from_pretrained(\"facebook/musicgen-large\", torch_dtype=torch.float16)\r\n> ```\r\n> \r\n> We can make this faster still by adding Flash Attention with a Better Transformers integration! This should give a further 10-15% speed-up\r\n\r\nThis sped up generation _enormously_ for me with Transformers! The large model went from taking over 2 minutes to 50 seconds which is 20 seconds faster than I get with Meta's audiocraft code so thank you.\r\n\r\n> Regarding the quantisation, I was **not** able to load the model using bitsandbytes==0.40.0 using the following code snippet:\r\n> \r\n> ```python\r\n> from transformers import MusicgenForConditionalGeneration\r\n> \r\n> model = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", load_in_8bit=True)\r\n> ```\r\n> \r\n> Traceback\r\n> \r\n> However, I was with:\r\n> \r\n> ```python\r\n> from transformers import MusicgenForConditionalGeneration\r\n> import torch\r\n> \r\n> with torch.no_grad():\r\n> model = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", load_in_8bit=True)\r\n> ```\r\n> \r\n> I can take a deeper look into why the bnb conversion is failing unless @younesbelkada has an idea from this behaviour!\r\n> \r\n> Note that if you care about inference speed, your best bet is to stick with fp16 inference here:\r\n> \r\n> ```python\r\n> from transformers import MusicgenForConditionalGeneration\r\n> \r\n> model = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\", torch_dtype=torch.float16)\r\n> ```\r\n\r\nI can confirm that `with torch.no_grad():` resolves that bug for me too :)\r\n\r\nAny idea why 8bit quantized inference speed is 4-5x worse than the normal 16bit? (50 seconds on fp16 vs 4 minutes 30 seconds on bitsandbytes 8bit) Recent LLMs like Llama, MPT, Falcon, etc all have faster inference when quantized. I thought the same would be true for MusicGen as well. My goal is to have musicgen-large generate audio with a length longer than it takes to inference, even if it degrades the quality a bit.",
"Also, this is probably a limitation of the model itself, but just in case it isn't, trying to generate over 2048 tokens with MusicGen causes this error:\r\n\r\n```python\r\n>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=2049).to('cpu')\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [4,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n....a lot of this...\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [5,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [5,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [5,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [5,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [5,0,0], thread: [93,0,0] 
Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [5,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nC:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu:1093: block: [5,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\models\\musicgen\\modeling_musicgen.py\", line 2427, in generate\r\n outputs = self.sample(\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\generation\\utils.py\", line 2642, in sample\r\n outputs = self(\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\models\\musicgen\\modeling_musicgen.py\", line 1913, in forward\r\n decoder_outputs = self.decoder(\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\models\\musicgen\\modeling_musicgen.py\", line 1026, in forward\r\n outputs = self.model(\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\models\\musicgen\\modeling_musicgen.py\", line 935, in forward\r\n decoder_outputs = self.decoder(\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\models\\musicgen\\modeling_musicgen.py\", line 845, in forward\r\n layer_outputs = decoder_layer(\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\models\\musicgen\\modeling_musicgen.py\", line 400, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\transformers\\models\\musicgen\\modeling_musicgen.py\", line 249, in forward\r\n key_states = self._shape(self.k_proj(hidden_states), -1, bsz)\r\n File \"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File 
\"C:\\Users\\fkdlam\\anaconda3\\envs\\audiocraft2\\lib\\site-packages\\torch\\nn\\modules\\linear.py\", line 114, in forward\r\n return F.linear(input, self.weight, self.bias)\r\nRuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasGemmEx( handle, opa, opb, m, n, k, &falpha, a, CUDA_R_16F, lda, b, CUDA_R_16F, ldb, &fbeta, c, CUDA_R_16F, ldc, CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP)`\r\n```",
"@sanchit-gandhi Another thing, any audio I generate has some weird noise in the final second. I was able to get rid of it by cutting off the last second of audio, but when I tried using the [Audio-Prompted Generation](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen#audioprompted-generation) feature, it came back.\r\n\r\nLet's say I generate 31 seconds of audio. I cut off the last second making it 30 seconds. The weird noise is gone. I take those audio values, pass it in for audio-prompted generation to generate the next 10 seconds of audio. The output of the audio-prompted generation will have that weird noise again at the 30 second mark.\r\n\r\nAt first, I thought it was because I wasn't generating the number of tokens to finish a full second of samples, but even when I made it so that the number of tokens outputted the exact number of samples, such that the number of samples in the 30 second audio was 30\\*32000=960000, nothing changed. I still had that weird noise at the end and had to cut it off and it would keep appearing with audio-prompted generation. Now I'm not sure how to get rid of it. Removing the padding doesn't seem to make a difference. Switching to musicgen-small doesn't seem to make a difference either.\r\n\r\nCode to reproduce:\r\n```python\r\nfrom transformers import AutoProcessor, MusicgenForConditionalGeneration\r\nimport torch, scipy\r\n\r\n# Load the model in fp16 precision and get sampling rate\r\nprocessor = AutoProcessor.from_pretrained(\"facebook/musicgen-large\")\r\nwith torch.no_grad():\r\n model = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-large\", torch_dtype=torch.float16, device_map=\"cuda\")\r\nsampling_rate = model.config.audio_encoder.sampling_rate\r\n\r\n# Generate 30.66 seconds of audio based on the text prompt and save audio to disk. You can hear the weird noise in the last second. If you cut off the .66, the weird noise will not happen when playing the file saved to disk.\r\ninputs = processor(text=[\"80s pop track with bassy drums and synth\"], padding=True, return_tensors=\"pt\").to(\"cuda\")\r\naudio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=1536).to(\"cpu\")\r\n#audio_values = audio_values[0, 0, :-21120] # Uncomment to cut off the last .66 and remove the weird noise\r\naudio_values_wav = audio_values.to(torch.float32) # fp16 -> fp32 conversion for wav file compatibility\r\nscipy.io.wavfile.write(\"musicgen_out.wav\", rate=sampling_rate, data=audio_values_wav[0, 0].numpy())\r\n\r\n# Generate the final 10 seconds (40 seconds total) of audio based on Audio-Prompted Generation and save audio to disk. You can hear the weird noise at 30 seconds, even though it should be gone.\r\ninputs = processor(text=[\"80s pop track with bassy drums and synth\"], padding=True, return_tensors=\"pt\", sampling_rate=sampling_rate, audio=audio_values[0, 0]).to(\"cuda\")\r\ninputs[\"input_values\"] = inputs[\"input_values\"].to(torch.float16) # For some reason audio_values is cast back into fp32 which does not work in fp16 precision so we must recast before passing to the model\r\naudio_values2 = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=512).to(\"cpu\")\r\naudio_values2_wav = audio_values2.to(torch.float32) # fp16 -> fp32 conversion for wav file compatibility\r\nscipy.io.wavfile.write(\"musicgen_out2.wav\", rate=sampling_rate, data=audio_values2_wav[0, 0].numpy())\r\n```",
"Ok I think I figured it out. MusicGen can't actually generate more than 30 seconds of audio so the max number of tokens is 1506 and going over that number will generate the weird noise. 502 tokens per 10 seconds. You input the first 20 seconds and then auto-prompt generate the last 10.",
"> Any idea why 8bit quantized inference speed is 4-5x worse than the normal 16bit?\r\n\r\nThe slowness can have multiple reasons. One might be your GPU (if its a not a Turning/Ampere). Otherwise, if your inputs are relatively large compared to your weights, things will also be slow. Another reason might be the implementation currently has a large overhead if the matmul is very small (e.g. many layers of small matmuls), which explains why there's a slowdown for small models like MusicGen (<1B params), but not for LLMs like LLaMA (>10B params)\r\n\r\nYou can also try the new 4bit quantisation by installing transformers from main and setting `load_in_4bit=True` -> this claims to be much faster for smaller models, but I haven't tried it out yet for MusicGen",
"Indeed - the sinusoidal positional embeddings limit the model to 30s inputs. Nice job on finding this! It's not super well documented actually - would you like to open a PR to update the docs? https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/musicgen.md",
"> \r\n\r\n\r\n\r\n> > Any idea why 8bit quantized inference speed is 4-5x worse than the normal 16bit?\r\n> \r\n> The slowness can have multiple reasons. One might be your GPU (if its a not a Turning/Ampere). Otherwise, if your inputs are relatively large compared to your weights, things will also be slow. Another reason might be the implementation currently has a large overhead if the matmul is very small (e.g. many layers of small matmuls), which explains why there's a slowdown for small models like MusicGen (<1B params), but not for LLMs like LLaMA (>10B params)\r\n\r\nIt's not Turing/Ampere, but it's the 4090 so Ada Lovelace. Is that an issue? I wouldn't expect so since it's the latest except for the H100's Hopper architecture.\r\n\r\n> You can also try the new 4bit quantisation by installing transformers from main and setting `load_in_4bit=True` -> this claims to be much faster for smaller models, but I haven't tried it out yet for MusicGen\r\n\r\nOh great! I'll try it out!",
"> Indeed - the sinusoidal positional embeddings limit the model to 30s inputs. Nice job on finding this! It's not super well documented actually - would you like to open a PR to update the docs? https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/musicgen.md\r\n\r\nSure! How about I make a PR with these https://github.com/xNul/transformers/commit/905dd488e73ed046a76c39f08ef3cab94fb9f1a6 changes?",
"Yes that hardware should be fine! You can double check on the bitsandbytes repo: https://github.com/TimDettmers/bitsandbytes#requirements--installation\r\n\r\nThat change looks great - feel free to open a PR and I can review/merge asap!",
"Opened! https://github.com/huggingface/transformers/pull/25510",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @xNul, hope you're doing well! Just circling back to this issue after a little while. I believe we fixed the 30s limit problem with your PR: #25510. Regarding the quantisation issue, are you still experiencing any difficulties? Happy to take a look if so! Otherwise, feel free to close this issue :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @sanchit-gandhi , @xNul.\r\nAny updates on the original `deepcopy` issue? I'm having the same error with \"facebook/musicgen-small\" and following versions of transformers and bitsandbytes\r\n`transformers==4.37.1` and `bitsandbytes==0.42.0`"
] | 1,691 | 1,706 | 1,697 |
CONTRIBUTOR
| null |
### System Info
- Windows 11 64bit
- Python 3.10.12
- Torch v2.0.1+cu117
- Transformers v4.31.0
- audiocraft v0.0.2
- bitsandbytes v0.41.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I'm attempting to quantize Meta's new MusicGen model with bitsandbytes (through the Transformers library) and I've run into a bug with the `deepcopy` function. I'm not familiar with PyTorch's deepcopy function or why this error may be occurring, but I am able to side-step it with a hack and get a bit further until I reach another error, this time with the Transformers library.
The first error:
```python
>>> from transformers import AutoProcessor, MusicgenForConditionalGeneration
bin C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small", load_in_8bit=True)
>>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small", load_in_8bit=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\models\musicgen\modeling_musicgen.py", line 1599, in from_pretrained
return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\modeling_utils.py", line 2719, in from_pretrained
modules_to_not_convert = get_keys_to_not_convert(model)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\utils\bitsandbytes.py", line 257, in get_keys_to_not_convert
tied_model = deepcopy(model) # this has 0 cost since it is done inside `init_empty_weights` context manager`
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 297, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 297, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 297, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 297, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 297, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\copy.py", line 153, in deepcopy
y = copier(memo)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\_tensor.py", line 86, in __deepcopy__
raise RuntimeError(
RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment
```
The hack:
```python
torch.save(model, "temp.pt")
tied_model = torch.load("temp.pt")
```
The second error after using the hack:
```python
>>> from transformers import AutoProcessor, MusicgenForConditionalGeneration
bin C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small", load_in_8bit=True)
>>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small", load_in_8bit=True)
>>> inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\models\musicgen\modeling_musicgen.py", line 2430, in generate
outputs = self.sample(
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\generation\utils.py", line 2642, in sample
outputs = self(
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\models\musicgen\modeling_musicgen.py", line 1916, in forward
decoder_outputs = self.decoder(
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\models\musicgen\modeling_musicgen.py", line 1029, in forward
outputs = self.model(
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\models\musicgen\modeling_musicgen.py", line 938, in forward
decoder_outputs = self.decoder(
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\models\musicgen\modeling_musicgen.py", line 848, in forward
layer_outputs = decoder_layer(
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\transformers\models\musicgen\modeling_musicgen.py", line 394, in forward
hidden_states = self.self_attn_layer_norm(hidden_states)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
return F.layer_norm(
File "C:\Users\fkdlam\anaconda3\envs\audiocraft\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type Float but found Half
```
This is the same code provided in [an example](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen#textconditional-generation) for generating music in the Transformers documentation, except I've added the `load_in_8bit` flag. I'm not sure how to fix this one though. I've created [an issue](https://github.com/TimDettmers/bitsandbytes/issues/669) in the bitsandbytes repository too.
### Expected behavior
Being able to run the MusicGen quantized model with bitsandbytes and obtain audio data output.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25277/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25276
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25276/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25276/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25276/events
|
https://github.com/huggingface/transformers/pull/25276
| 1,833,894,412 |
PR_kwDOCUB6oc5XDFaW
| 25,276 |
vectorize PrefixConstrainedLogitsProcessor
|
{
"login": "erip",
"id": 2348806,
"node_id": "MDQ6VXNlcjIzNDg4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2348806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erip",
"html_url": "https://github.com/erip",
"followers_url": "https://api.github.com/users/erip/followers",
"following_url": "https://api.github.com/users/erip/following{/other_user}",
"gists_url": "https://api.github.com/users/erip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erip/subscriptions",
"organizations_url": "https://api.github.com/users/erip/orgs",
"repos_url": "https://api.github.com/users/erip/repos",
"events_url": "https://api.github.com/users/erip/events{/privacy}",
"received_events_url": "https://api.github.com/users/erip/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25276). All of your documentation changes will be reflected on that endpoint.",
"There's a silly shape thing happening here which I'll try to debug ASAP (unless others are interested). Unfortunately testing locally is not working since I'm on Silicon and some dependencies for dev aren't available ☹️ but this looks close. I'll want to think hard about the vectorization of the function (which is slightly different and hopefully not breaking).",
"@erip thank you for jumping into the issue 💪 LMK when it is ready for review (assuming it yields speedups)",
"I believe it'll yield some improvements since there will be much less CPU<->GPU with masking ops. Whether they're significant will be hard to measure. My big concern is that the semantics of the prefix fn will change slightly (reflected in the test); whether this is acceptable is unclear.",
"Worst case scenario, a flag could be set at init time (of the logits processor), if the function supports vectorization",
"cc @gante I think this is ready for review. Nothing too controversial here, but I can add a fallback to original behavior in case the fn doesn't support vectorization. I'd like to test the speedup eventually, but I think this won't incur regressions at the very least.",
"Hmm, I just tested this hoping to benchmark against the original unvectorized impl and it seems like it's not quite ready.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,691 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25217 (in part).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25276/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25276/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25276",
"html_url": "https://github.com/huggingface/transformers/pull/25276",
"diff_url": "https://github.com/huggingface/transformers/pull/25276.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25276.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25275
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25275/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25275/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25275/events
|
https://github.com/huggingface/transformers/pull/25275
| 1,833,832,593 |
PR_kwDOCUB6oc5XC36I
| 25,275 |
Replace jnp.DeviceArray with jax.Array in FLAX models
|
{
"login": "akhilgoe",
"id": 114951738,
"node_id": "U_kgDOBtoGOg",
"avatar_url": "https://avatars.githubusercontent.com/u/114951738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akhilgoe",
"html_url": "https://github.com/akhilgoe",
"followers_url": "https://api.github.com/users/akhilgoe/followers",
"following_url": "https://api.github.com/users/akhilgoe/following{/other_user}",
"gists_url": "https://api.github.com/users/akhilgoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akhilgoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akhilgoe/subscriptions",
"organizations_url": "https://api.github.com/users/akhilgoe/orgs",
"repos_url": "https://api.github.com/users/akhilgoe/repos",
"events_url": "https://api.github.com/users/akhilgoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/akhilgoe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for the fix @akhilgoe - believe this is a duplicate of #24875?",
"\r\n\r\n\r\n> Thanks for the fix @akhilgoe - believe this is a duplicate of #24875?\r\n\r\nYes correct! ",
"If it's okay with you can we give @mariecwhite the opportunity to finish their PR since they've worked on it since last week? (should be merged asap, just requires CircleCI authentication) Very much appreciate you opening this PR to fix the deprecation though!",
"I'm still running into CircleCI issues with https://github.com/huggingface/transformers/pull/24875. Feel free to merge this PR instead.",
"Hey guys...Thanks for the update! I don't have a preference, We can use either of the 2 PRs.\r\n",
"Closing since the fix has been merged. Thanks!"
] | 1,691 | 1,691 | 1,691 |
NONE
| null |
## What does this PR do?
Recent JAX versions have dropped support for `jax.numpy.DeviceArray`. Many Flax models refer to `jax.numpy.DeviceArray`, which causes a crash. This PR replaces all references to `jax.numpy.DeviceArray` with `jax.Array`.
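Below is a minimal sketch of the kind of substitution this PR performs; the function is a hypothetical example, not code from the repository:
```python
import jax
import jax.numpy as jnp

# Before: type hints / isinstance checks referred to the removed jnp.DeviceArray.
# After: jax.Array is the unified array type in recent JAX releases.
def scale_logits(logits: jax.Array, temperature: float) -> jax.Array:
    return logits / jnp.asarray(temperature, dtype=logits.dtype)

assert isinstance(scale_logits(jnp.ones((2, 4)), 2.0), jax.Array)
```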
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25275/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25275",
"html_url": "https://github.com/huggingface/transformers/pull/25275",
"diff_url": "https://github.com/huggingface/transformers/pull/25275.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25275.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25274
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25274/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25274/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25274/events
|
https://github.com/huggingface/transformers/pull/25274
| 1,833,772,102 |
PR_kwDOCUB6oc5XCqcG
| 25,274 |
CI with `pytest_num_workers=8` for torch/tf jobs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
We set `pytest_num_workers` to `3` for `torch_job` and `6` for `tf_job` to avoid OOM. With the recent efforts to reduce model sizes in CI, we can actually set `pytest_num_workers=8`.
- The full suite: all 3 jobs (PT/TF/Flax): `12-15 minutes`
- On the latest nightly CI (without all of today's PRs merged): `PT: 37 min | TF: 25 min | Flax: 20 min`
The `torch_job` reaches `95%` of RAM (peak), and `tf_job` is at `80%` of RAM. The `torch_job` with `n8` is a bit dangerous, but I think I have a way to further improve things in follow-up PR(s).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25274/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25274",
"html_url": "https://github.com/huggingface/transformers/pull/25274",
"diff_url": "https://github.com/huggingface/transformers/pull/25274.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25274.patch",
"merged_at": 1691006433000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25273
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25273/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25273/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25273/events
|
https://github.com/huggingface/transformers/pull/25273
| 1,833,767,739 |
PR_kwDOCUB6oc5XCpfI
| 25,273 |
use `pytest_num_workers=8` for `torch_job` and `tf_job`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25273). All of your documentation changes will be reflected on that endpoint."
] | 1,691 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
We set `pytest_num_workers` to `3` for `torch_job` and `6` for `tf_job` to avoid OOM. With the recent efforts of reducing model size in CI, we can actually set `pytest_num_workers=8`.
The full suite: all 3 jobs (PT/TF/Flax) 12-15 minutes
(on the latest nightly CI without all PRs merged today: PT: 37 min | TF: 25 min | Flax: 20 min)
The `torch_job` reaches 95% of RAM (peak), and `tf_job` is at 80% of RAM. The `torch_job` with `n8` is a bit dangerous, but I think I have a way to further improve things in follow-up PR(s).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25273/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25273",
"html_url": "https://github.com/huggingface/transformers/pull/25273",
"diff_url": "https://github.com/huggingface/transformers/pull/25273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25273.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25272
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25272/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25272/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25272/events
|
https://github.com/huggingface/transformers/issues/25272
| 1,833,591,983 |
I_kwDOCUB6oc5tSmSv
| 25,272 |
Question about generate method for AutoModelForCausalLM
|
{
"login": "alimirgh75",
"id": 31241070,
"node_id": "MDQ6VXNlcjMxMjQxMDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/31241070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alimirgh75",
"html_url": "https://github.com/alimirgh75",
"followers_url": "https://api.github.com/users/alimirgh75/followers",
"following_url": "https://api.github.com/users/alimirgh75/following{/other_user}",
"gists_url": "https://api.github.com/users/alimirgh75/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alimirgh75/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alimirgh75/subscriptions",
"organizations_url": "https://api.github.com/users/alimirgh75/orgs",
"repos_url": "https://api.github.com/users/alimirgh75/repos",
"events_url": "https://api.github.com/users/alimirgh75/events{/privacy}",
"received_events_url": "https://api.github.com/users/alimirgh75/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports."
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
Hi,
I am trying to use the pretrained GIT model and pass it to the Captum API to compute attribution scores.
```python
### Initialize the attribution algorithm
from transformers import AutoModelForCausalLM
from captum.attr import IntegratedGradients

model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")
ig = IntegratedGradients(model)
```
However, for the IG algorithm to work, the "model" argument should be the forward function of the model.
I need to understand how the output of the model
```python
outputs = model(input_ids=training_batch["input_ids"],
                attention_mask=training_batch["attention_mask"],
                pixel_values=training_batch["pixel_values"],
                labels=training_batch["input_ids"])
```
corresponds to the output of the generate method `generated_ids = model.generate(pixel_values=pixel_values, max_length=80)`?
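For reference, a hedged sketch of one way to adapt this for Captum, assuming the goal is to attribute the next-token logits of a single forward pass (`generate` essentially calls `forward` repeatedly and selects a token from `outputs.logits[:, -1, :]` at each step); the wrapper name is made up for illustration:
```python
# Hypothetical wrapper around the model's forward pass for IntegratedGradients.
def next_token_logits(input_ids, attention_mask, pixel_values):
    outputs = model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        pixel_values=pixel_values,
    )
    return outputs.logits[:, -1, :]  # scores for the next generated token

ig = IntegratedGradients(next_token_logits)
```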
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25272/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25271
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25271/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25271/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25271/events
|
https://github.com/huggingface/transformers/issues/25271
| 1,833,358,501 |
I_kwDOCUB6oc5tRtSl
| 25,271 |
EncoderDecoder does not automatically create decoder_attention_mask to match decoder_input_ids
|
{
"login": "StevenSong",
"id": 26208374,
"node_id": "MDQ6VXNlcjI2MjA4Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/26208374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenSong",
"html_url": "https://github.com/StevenSong",
"followers_url": "https://api.github.com/users/StevenSong/followers",
"following_url": "https://api.github.com/users/StevenSong/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenSong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenSong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenSong/subscriptions",
"organizations_url": "https://api.github.com/users/StevenSong/orgs",
"repos_url": "https://api.github.com/users/StevenSong/repos",
"events_url": "https://api.github.com/users/StevenSong/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenSong/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
closed
| false | null |
[] |
[
"somewhat related, it seems like in the notebook, the `decoder_input_ids` nor the `labels` are shifted; Patrick claims it's because:\r\n> `\"labels\"` are shifted automatically to the left for language modeling training.\r\n\r\nbut I don't see any evidence of this in the implementation. Was this behavior changed at some point? The notebook seems like it might be out of date?\r\n\r\nMy current solution to the original `decoder_attention_mask` issue is to manually pass in `decoder_input_ids` shifted 1 to the right with matching `decoder_attention_mask`, while `labels` remains unchanged.",
"cc @ArthurZucker @younesbelkada ",
"Sorry @StevenSong did not really have the time to look at this, will do so when I can! ",
"Edit, as this is not super high priority, I'll leave the community work on it. It's tagged as a good second issue. \r\nMain \"concern\" is that the decoder attention masks are not always the shifted labels and can be model specific, but we can still have a default! \r\n🤗 ",
"Hi, I've noticed this seems to be the same for other model classes, e.g. BART/mBART and T5. For all of them, the documentation states:\r\n```\r\ndecoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):\r\n Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also\r\n be used by default.\r\n```\r\n\r\nbut then it seems only a causal mask is used if no attention mask is passed to the model explicitly, see e.g. https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/models/bart/modeling_bart.py#L932-L953).\r\nIn comparison, the original fairseq implementation for BART/mBART takes padding into account by default: https://github.com/facebookresearch/fairseq/blob/7409af7f9a7b6ddac4cbfe7cafccc715b3c1b21e/fairseq/models/transformer/transformer_decoder.py#L327-L329. I would think this is the same for T5. \r\n\r\nThe fact this doesn't seem to be done here is a bit misleading. Users might not be aware they need to pass the correct attention masks themselves, especially considering none of the examples in the respective model docs or training scripts like https://github.com/huggingface/transformers/blob/v4.32.0/examples/pytorch/translation/run_translation_no_trainer.py pass decoder attention masks either.\r\n\r\n",
"Re: @xplip - I believe there was a discussion around why they don't automatically create the attention mask [here](https://github.com/huggingface/transformers/issues/15479#issuecomment-1066639938). There is usually a warning if they find a padding token when attention_mask = None, but I think it's missing in BART (I can look into adding the warning to BART in a separate change).\r\n\r\nIn this particular case, since we are automatically creating decoder_input_ids from labels, I think it makes sense to automatically create a default corresponding attention_mask as well (if it's not already supplied by the user). Seems like the warning is working as intended since it raised this particular issue.\r\n",
"I created a pull request to fix this here: #26752 ",
"Re: @StevenSong regarding \"labels\" are shifted automatically to the left for language modeling training.\r\n\r\nThe colab you linked seems to use BertLMHeadModel for the decoder. I believe the left shifting behavior is [here](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/models/bert/modeling_bert.py#L1258C48-L1258C48)."
] | 1,690 | 1,698 | 1,698 |
NONE
| null |
### System Info
```
- `transformers` version: 4.31.0
- Platform: Linux-4.15.0-192-generic-x86_64-with-glibc2.27
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@ArthurZucker @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm using a pretrained BERT model to make a bert2bert model using an EncoderDecoderModel. According to the [documentation](https://huggingface.co/docs/transformers/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward.decoder_input_ids) and a deprecation warning in the [source code](https://github.com/huggingface/transformers/blob/bef02fd6b9cde975c51607fb936050ef706ff6d8/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L42-L47), it says that you no longer need to pass in `decoder_input_ids` as they'll be automatically generated using `labels`. In the docs specifically, [it also goes on to say](https://huggingface.co/docs/transformers/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward.decoder_attention_mask) that the default behavior of `decoder_attention_mask` is to automatically generate it based on padded tokens in `decoder_input_ids`, so you don't need to pass the decoder attention mask either, as expected.
However, when trying to just pass `input_ids + attention_mask` for the encoder and `labels`, I get a warning that says something to the effect of "we strongly recommend passing an attention mask". If I explicitly pass `input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, and labels`, the warning goes away. Looking at the implementation of creating the `decoder_input_ids` from `labels`, it does indeed seem to skip the generation of `decoder_attention_mask` and simply passes through the value from the arguments, in this case `None`:
https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L619-L637
You can recreate the warning in the notebook that Patrick made for the blog (https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Leveraging_Pre_trained_Checkpoints_for_Encoder_Decoder_Models.ipynb#scrollTo=yoN2q0hZUbXN&line=11&uniqifier=1). Specifically, in the `process_data_to_model_inputs` function, you can just comment out the lines which explicitly set `decoder_input_ids` and `decoder_attention_mask`.
### Expected behavior
I'd expect that if you can just pass `labels` to the forward call of EncoderDecoder and it will create `decoder_input_ids`, it would also create `decoder_attention_mask`. The fix is probably a few lines:
```python
if (labels is not None) and (decoder_input_ids is None and decoder_inputs_embeds is None):
    decoder_input_ids = shift_tokens_right(
        labels, self.config.pad_token_id, self.config.decoder_start_token_id
    )
    if decoder_attention_mask is not None:
        # passing a decoder_attention_mask without decoder_input_ids is ambiguous
        raise ValueError(
            "decoder_attention_mask was passed without decoder_input_ids; "
            "either pass both or let them be generated from labels"
        )
    # ignore the pad tokens of the auto-generated decoder_input_ids
    decoder_attention_mask = torch.where(decoder_input_ids == self.config.pad_token_id, 0, 1)
```
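For reference, a hedged sketch of the manual workaround mentioned in the comments (explicitly building the shifted decoder inputs and the matching mask before calling `forward`); variable names are placeholders and `shift_tokens_right` is the helper defined in `modeling_encoder_decoder.py`:
```python
import torch
from transformers.models.encoder_decoder.modeling_encoder_decoder import shift_tokens_right

decoder_input_ids = shift_tokens_right(
    labels, model.config.pad_token_id, model.config.decoder_start_token_id
)
# mask out the padding positions so the decoder does not attend to them
decoder_attention_mask = (decoder_input_ids != model.config.pad_token_id).long()

outputs = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    decoder_input_ids=decoder_input_ids,
    decoder_attention_mask=decoder_attention_mask,
    labels=labels,
)
```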
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25271/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25270
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25270/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25270/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25270/events
|
https://github.com/huggingface/transformers/issues/25270
| 1,833,213,811 |
I_kwDOCUB6oc5tRJ9z
| 25,270 |
Device errors when loading in 8 bit
|
{
"login": "cassianlewis",
"id": 131266258,
"node_id": "U_kgDOB9L20g",
"avatar_url": "https://avatars.githubusercontent.com/u/131266258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cassianlewis",
"html_url": "https://github.com/cassianlewis",
"followers_url": "https://api.github.com/users/cassianlewis/followers",
"following_url": "https://api.github.com/users/cassianlewis/following{/other_user}",
"gists_url": "https://api.github.com/users/cassianlewis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cassianlewis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cassianlewis/subscriptions",
"organizations_url": "https://api.github.com/users/cassianlewis/orgs",
"repos_url": "https://api.github.com/users/cassianlewis/repos",
"events_url": "https://api.github.com/users/cassianlewis/events{/privacy}",
"received_events_url": "https://api.github.com/users/cassianlewis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You cannot re-dispatch a model that was loaded in 8bit. You need to pass along your `max_memory` or `device_map` to the call to `from_pretrained`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The explanation above makes sense to me! Feel free to re-open the issue if you think that doesn't answer your quesiton"
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.31.0
- Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (4 GPUs)
- Using distributed or parallel set-up in script?:
### Who can help?
@younesbelkada
@sgugger
@mue
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This error occurs when trying to split a quantised `t5-large` model (or any t5 model for that matter) across 4 GPUs using a custom device map (which works when it is not quantised)!
Steps to reproduce:
1.
```
from transformers import AutoTokenizer, DataCollatorWithPadding, TrainingArguments, Trainer, AutoModelForCausalLM, AutoModelForSeq2SeqLM
from peft import get_peft_config, get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType, PeftType
from torch.utils.data import TensorDataset, DataLoader,Dataset
from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights
from accelerate.utils import get_balanced_memory
model_name = "t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, cache_dir = 'models', load_in_8bit=True)
```
2.
```
max_memory = get_balanced_memory(
model,
max_memory=None,
no_split_module_classes=["T5Block"],
dtype='float16',
low_zero=False,
)
```
max_memory:
`{0: 263982848, 1: 263982848, 2: 263982848, 3: 13860929536, 'cpu': 189321494528}`
3.
```
device_map = infer_auto_device_map(
model,
max_memory=max_memory,
no_split_module_classes=["T5Block"],
dtype='float16'
)
```
I won't show the entire device_map, just the important part:
```
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder.embed_tokens': 0,
'lm_head': 0,
'encoder.block.0': 0,
'encoder.block.1': 0,
'encoder.block.2': 0,
'encoder.block.3': 0,
'encoder.block.4': 0,
'encoder.block.5': 0,
'encoder.block.6': 0,
'encoder.block.7': 0,
'encoder.block.8': 0,
'encoder.block.9': 0,
'encoder.block.10': 1,
'encoder.block.11': 1,
'encoder.block.12': 1,
```
4.
```
model = dispatch_model(model, device_map=device_map)
for i in model.named_parameters():
print(f"{i[0]} -> {i[1].device}")
```
Again, just the pertinent part:
```
encoder.block.10.layer.0.SelfAttention.q.weight -> cuda:0
encoder.block.10.layer.0.SelfAttention.k.weight -> cuda:0
encoder.block.10.layer.0.SelfAttention.v.weight -> cuda:0
encoder.block.10.layer.0.SelfAttention.o.weight -> cuda:0
encoder.block.10.layer.0.layer_norm.weight -> cuda:0
encoder.block.10.layer.1.DenseReluDense.wi.weight -> cuda:0
encoder.block.10.layer.1.DenseReluDense.wo.weight -> cuda:0
encoder.block.10.layer.1.layer_norm.weight -> cuda:0
encoder.block.11.layer.0.SelfAttention.q.weight -> cuda:1
encoder.block.11.layer.0.SelfAttention.k.weight -> cuda:1
encoder.block.11.layer.0.SelfAttention.v.weight -> cuda:1
encoder.block.11.layer.0.SelfAttention.o.weight -> cuda:1
encoder.block.11.layer.0.layer_norm.weight -> cuda:1
encoder.block.11.layer.1.DenseReluDense.wi.weight -> cuda:1
encoder.block.11.layer.1.DenseReluDense.wo.weight -> cuda:1
encoder.block.11.layer.1.layer_norm.weight -> cuda:1
```
5.
```
batch = tokenizer("Hello World", return_tensors="pt")
model(**batch, decoder_input_ids = batch['input_ids'])
```
### Expected behavior
Error:
```
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:260, in T5LayerNorm.forward(self, hidden_states)
257 if self.weight.dtype in [torch.float16, torch.bfloat16]:
258 hidden_states = hidden_states.to(self.weight.dtype)
--> 260 return self.weight * hidden_states
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
```
Note that repeating this with `load_in_8bit = False` works normally.
Thanks!
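Following the maintainer's comment above, a hedged sketch of the suggested alternative — pass the device map (or `max_memory`) directly to `from_pretrained` instead of re-dispatching the already-quantized model; the variables reuse the names from the reproduction steps:
```python
# Load the 8-bit weights directly onto the desired devices; do not call
# dispatch_model afterwards on a model loaded with load_in_8bit=True.
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    cache_dir="models",
    load_in_8bit=True,
    device_map=device_map,   # or device_map="auto" / max_memory=max_memory
)
```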
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25270/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25269
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25269/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25269/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25269/events
|
https://github.com/huggingface/transformers/issues/25269
| 1,833,213,138 |
I_kwDOCUB6oc5tRJzS
| 25,269 |
run_clm_no_trainer.py example - problem with most recent checkpoint loading
|
{
"login": "TomerRonen34",
"id": 38310481,
"node_id": "MDQ6VXNlcjM4MzEwNDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/38310481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomerRonen34",
"html_url": "https://github.com/TomerRonen34",
"followers_url": "https://api.github.com/users/TomerRonen34/followers",
"following_url": "https://api.github.com/users/TomerRonen34/following{/other_user}",
"gists_url": "https://api.github.com/users/TomerRonen34/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomerRonen34/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomerRonen34/subscriptions",
"organizations_url": "https://api.github.com/users/TomerRonen34/orgs",
"repos_url": "https://api.github.com/users/TomerRonen34/repos",
"events_url": "https://api.github.com/users/TomerRonen34/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomerRonen34/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @TomerRonen34, thanks for raising this issue! \r\n\r\nCan you make sure to follow the issue template and include: \r\n* A reproducible code snippet\r\n* Details of the expected and observed behaviour including the full traceback if it exists\r\n* Information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output",
"Thanks, this has now been fixed via #25318 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,693 | 1,693 |
NONE
| null |
The example has code for finding the latest checkpoint, but `accelerator.load_state` isn't called.
https://github.com/huggingface/transformers/blob/1baeed5bdf3c58b723a6125632567f97bdf322c6/examples/pytorch/language-modeling/run_clm_no_trainer.py#L561C15-L561C15
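A hedged sketch of what the missing resume step could look like, reusing the script's existing objects (`accelerator`, the checkpoint path found by the surrounding code); variable names are illustrative, not the exact code from the example:
```python
# Restore model/optimizer/scheduler state from the most recent checkpoint
# located by the code linked above, then continue training from there.
if args.resume_from_checkpoint:
    accelerator.print(f"Resuming from checkpoint {checkpoint_path}")
    accelerator.load_state(checkpoint_path)
```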
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25269/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25268
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25268/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25268/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25268/events
|
https://github.com/huggingface/transformers/pull/25268
| 1,833,200,046 |
PR_kwDOCUB6oc5XAtCn
| 25,268 |
recommend DeepSpeed's Argument Parsing documentation
|
{
"login": "BurnzZ",
"id": 3449761,
"node_id": "MDQ6VXNlcjM0NDk3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3449761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BurnzZ",
"html_url": "https://github.com/BurnzZ",
"followers_url": "https://api.github.com/users/BurnzZ/followers",
"following_url": "https://api.github.com/users/BurnzZ/following{/other_user}",
"gists_url": "https://api.github.com/users/BurnzZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BurnzZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BurnzZ/subscriptions",
"organizations_url": "https://api.github.com/users/BurnzZ/orgs",
"repos_url": "https://api.github.com/users/BurnzZ/repos",
"events_url": "https://api.github.com/users/BurnzZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/BurnzZ/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Clarify how to properly handle the arguments passed by `deepspeed` when running from the CLI.
For example, the following errors might be raised when running something like `deepspeed --num_gpus=2 fine-tune.py google/flan-t5-xxl`, due to the extra args injected by `deepspeed`:
```
usage: fine-tune.py [-h] model_id
fine-tune.py: error: unrecognized arguments: --local_rank=0 --deepspeed llms/flan-t5-fp16-z3.json
usage: fine-tune.py [-h] model_id
fine-tune.py: error: unrecognized arguments: --local_rank=1 --deepspeed llms/flan-t5-fp16-z3.json
```
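For illustration, a minimal sketch of an argument parser that tolerates the extra flags injected by the `deepspeed` launcher (argument names mirror the error above; using `HfArgumentParser`/`TrainingArguments` handles this out of the box):
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("model_id")
# flags added automatically by the deepspeed launcher
parser.add_argument("--local_rank", type=int, default=-1)
parser.add_argument("--deepspeed", type=str, default=None)  # path to the DS config file
args = parser.parse_args()
```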
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stas00 @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25268/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25268/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25268",
"html_url": "https://github.com/huggingface/transformers/pull/25268",
"diff_url": "https://github.com/huggingface/transformers/pull/25268.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25268.patch",
"merged_at": 1690991320000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25267
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25267/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25267/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25267/events
|
https://github.com/huggingface/transformers/pull/25267
| 1,833,188,471 |
PR_kwDOCUB6oc5XAqfm
| 25,267 |
[MMS] Fix mms
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh ok to merge or should we run some more tests?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25267). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25260.
The problem is that the model state_dict is retrieved before the weights are tied, which in the case of MMS/Wav2Vec2 means before the state dict is rewritten to the expected structure, since MMS/Wav2Vec2 loads adapter weights when modeling_utils calls `tie_weights`.
I'm not 100% sure if moving `model.tie_weights()` up a couple of lines here is OK, but it's necessary to fix MMS.
I'm pretty sure it's fine because `tie_weights` should not fundamentally change the state_dict structure for models other than MMS.
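Conceptually, the reorder amounts to the following (a simplified sketch, not the actual `modeling_utils` code):
```python
# Before: the state dict was read first, so the adapter rewrite performed inside
# tie_weights() for MMS/Wav2Vec2 was not reflected in it.
# After: tie the weights first, then read the state dict.
model.tie_weights()              # may rewrite the expected state-dict layout (MMS adapters)
state_dict = model.state_dict()  # now matches the tied / adapter-loaded structure
```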
I'm not able to fully pinpoint how this bug came to be, but as stated in #25260, loading MMS worked on the PR branch, and without `accelerate` installed it also worked on main.
There were a couple of PRs that touched similar logic around the same time (or a bit earlier/later), which might have caused the issue.
- https://github.com/huggingface/transformers/pull/24200
- https://github.com/huggingface/transformers/pull/24505
- https://github.com/huggingface/transformers/pull/24310
I might also have accidentally not synced my PR branch with `main` before merging, so that different logic crept in between starting the work and merging.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25267/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25267/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25267",
"html_url": "https://github.com/huggingface/transformers/pull/25267",
"diff_url": "https://github.com/huggingface/transformers/pull/25267.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25267.patch",
"merged_at": 1690992676000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25266
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25266/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25266/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25266/events
|
https://github.com/huggingface/transformers/pull/25266
| 1,833,157,441 |
PR_kwDOCUB6oc5XAjsZ
| 25,266 |
CI with layers=2
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
Running a (sub) set of 24315 tests (given by test fetcher) - only tests in `test_modeling_xxx.py`.
(for a full run like the nightly run, it doesn't seem to change the running time - needs more investigation)
Running time:
- num_layers = mixed (2, 3, 4, 5, 6) - currently `main`
- torch: 16m
- tf: : 8m
- flax: 11m30
- num_layers = 2
- torch: 12m30
- tf: 8m (not sure anything changed)
- flax: 8m30
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25266/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25266",
"html_url": "https://github.com/huggingface/transformers/pull/25266",
"diff_url": "https://github.com/huggingface/transformers/pull/25266.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25266.patch",
"merged_at": 1691000556000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25265
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25265/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25265/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25265/events
|
https://github.com/huggingface/transformers/pull/25265
| 1,833,141,863 |
PR_kwDOCUB6oc5XAgS9
| 25,265 |
[`Docs` / `BetterTransformer` ] Added more details about flash attention + SDPA
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for the extensive review @stevhliu ! 🎉 ",
"Thanks @fxmarty for all the reviews, @stevhliu this is ready for another pass !"
] | 1,690 | 1,692 | 1,692 |
CONTRIBUTOR
| null |
# What does this PR do?
as discussed offline with @LysandreJik
This PR clarifies to users how it is possible to use Flash Attention as a backend for the most-used models in transformers. We have seen some questions from users asking whether it is possible to integrate Flash Attention into HF models, whereas you can already benefit from it by using `model.to_bettertransformer()`, leveraging the `BetterTransformer` API from 🤗 optimum.
The information is based on the [official documentation of `torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html?highlight=scaled_dot_product_attention#torch.nn.functional.scaled_dot_product_attention)
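As a rough usage sketch of what the docs describe (model name and settings are only examples; `optimum` must be installed):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16).to("cuda")
model = model.to_bettertransformer()  # attention now goes through torch's scaled_dot_product_attention

# optionally force the Flash Attention kernel during generation
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    input_ids = torch.tensor([[464, 3290]], device="cuda")
    output = model.generate(input_ids, max_new_tokens=20)
```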
In the near future, we could also have a small blogpost explaining this as well
To do list / To clarify list:
- Clarify that it is possible to do that for training as well (I did not add much to the training section)
- Maybe add a few lines in the performance and scalability overview to emphasize this?
Let me know if I missed anything else
cc @fxmarty @MKhalusova @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25265/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25265/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25265",
"html_url": "https://github.com/huggingface/transformers/pull/25265",
"diff_url": "https://github.com/huggingface/transformers/pull/25265.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25265.patch",
"merged_at": 1692347549000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25264
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25264/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25264/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25264/events
|
https://github.com/huggingface/transformers/issues/25264
| 1,833,087,035 |
I_kwDOCUB6oc5tQrA7
| 25,264 |
[Question] How to load AutoFeatureExtractor on GPU?
|
{
"login": "treya-lin",
"id": 86940562,
"node_id": "MDQ6VXNlcjg2OTQwNTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/86940562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/treya-lin",
"html_url": "https://github.com/treya-lin",
"followers_url": "https://api.github.com/users/treya-lin/followers",
"following_url": "https://api.github.com/users/treya-lin/following{/other_user}",
"gists_url": "https://api.github.com/users/treya-lin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/treya-lin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/treya-lin/subscriptions",
"organizations_url": "https://api.github.com/users/treya-lin/orgs",
"repos_url": "https://api.github.com/users/treya-lin/repos",
"events_url": "https://api.github.com/users/treya-lin/events{/privacy}",
"received_events_url": "https://api.github.com/users/treya-lin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @treya-lin, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nYou can move arrays prepared by the feature extractor to the GPU using the `to` method on its outputs: \r\n\r\n```\r\ndef preprocess_function(examples):\r\n audio_arrays = [x[\"array\"] for x in tqdm(examples[\"audio\"])]\r\n inputs = feature_extractor(\r\n audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True\r\n ).to(\"cuda\")\r\n return inputs\r\n``` ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,694 | 1,694 |
NONE
| null |
Hi, I am following this guide to learn how to do audio classification with wav2vec2: https://huggingface.co/docs/transformers/main/tasks/audio_classification
I intend to extract features from my data with the following code:
```
feature_extractor = AutoFeatureExtractor.from_pretrained("/workspace/models/wav2vec2-large-robust")
def preprocess_function(examples):
audio_arrays = [x["array"] for x in tqdm(examples["audio"])]
inputs = feature_extractor(
audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
)
return inputs
encoded_audio_dataset_train = audio_dataset_train.map(preprocess_function, remove_columns="audio", batched=True)
```
But it seems the extractor runs on CPU instead of GPU, and I didn't find in the documentation how to set the device when loading the feature extractor. I assume the feature extraction is done by the wav2vec2 model itself, right? If so, how can I do this on GPU? Or is it mentioned in some documentation that I didn't notice?
This is my first time using the transformers library for audio processing, so please forgive my clumsiness.
Any help is much appreciated.
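For illustration, a minimal sketch of moving the extracted features onto the GPU (assuming `return_tensors="pt"` so the outputs are PyTorch tensors; the checkpoint name and `padding=True` are illustrative additions, not part of the original snippet):

```python
import torch
from transformers import AutoFeatureExtractor

# Hypothetical checkpoint for the sketch; substitute your local wav2vec2 path.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    # return_tensors="pt" gives torch tensors, which can then be moved to GPU.
    inputs = feature_extractor(
        audio_arrays,
        sampling_rate=feature_extractor.sampling_rate,
        max_length=16000,
        truncation=True,
        padding=True,
        return_tensors="pt",
    )
    return inputs.to("cuda") if torch.cuda.is_available() else inputs
```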
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25264/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25263
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25263/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25263/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25263/events
|
https://github.com/huggingface/transformers/pull/25263
| 1,833,007,052 |
PR_kwDOCUB6oc5XAC3l
| 25,263 |
Remove `pytest_options={"rA": None}` in CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"\r\n> For reference, I think `-rA` generates a [detailed summary report for all groups](https://docs.pytest.org/en/6.2.x/usage.html#detailed-summary-report).\r\n\r\nOh yes, my memory mixed the `--make-reports` and `-rA` things. Thanks!\r\n",
"> As it was removed for the torch job a long time ago, I'm happy for it to be removed here :)\r\n\r\nIf you were not happy, we will have to spend more🤑 on CircleCI credits 💸 😆 (and for nothing)\r\n\r\n"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
This option causes the (TF/Flax) jobs to spend 6-8 minutes (for a full test run) preparing report output after the actual tests have finished.
Taking [this TF job (nightly run)](https://app.circleci.com/pipelines/github/huggingface/transformers/69562/workflows/8fd9db08-9730-4d57-90b5-660c8a48a55c/jobs/872686/steps) for example, we can see the situation in the following screenshot
<img width="1044" alt="Screenshot 2023-08-02 132209" src="https://github.com/huggingface/transformers/assets/2521628/67e6bc89-d0d3-4d6a-9090-f3e1042be639">
Note that the torch job doesn't have this option, as it was removed ~3 years ago by Stas in #7995. Also, we still have all the reports we need in the artifact tab. (I don't remember the details about `-rA` though - Stas is the expert on this)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25263/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25263",
"html_url": "https://github.com/huggingface/transformers/pull/25263",
"diff_url": "https://github.com/huggingface/transformers/pull/25263.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25263.patch",
"merged_at": 1690980785000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25262
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25262/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25262/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25262/events
|
https://github.com/huggingface/transformers/issues/25262
| 1,832,981,522 |
I_kwDOCUB6oc5tQRQS
| 25,262 |
model.push_to_hub not working for gtr-large while loading with 8-bit using bnb
|
{
"login": "nss-programmer",
"id": 3127373,
"node_id": "MDQ6VXNlcjMxMjczNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3127373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nss-programmer",
"html_url": "https://github.com/nss-programmer",
"followers_url": "https://api.github.com/users/nss-programmer/followers",
"following_url": "https://api.github.com/users/nss-programmer/following{/other_user}",
"gists_url": "https://api.github.com/users/nss-programmer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nss-programmer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nss-programmer/subscriptions",
"organizations_url": "https://api.github.com/users/nss-programmer/orgs",
"repos_url": "https://api.github.com/users/nss-programmer/repos",
"events_url": "https://api.github.com/users/nss-programmer/events{/privacy}",
"received_events_url": "https://api.github.com/users/nss-programmer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @nss-programmer, thanks for raising this issue. \r\n\r\nThere's been quite a few updates between bitsandbytes and transformers recently. Could you update your local transformers version to the most recent release `pip install --upgrade transformers` and try again? If that doesn't work, then could you try from source `pip install git+https://github.com/huggingface/transformers` and let us know if either of these work? This way, we can figure out if the issue has already been resolved. \r\n\r\nCould you also share more information about the running environment )run `transformers-cli env` in the terminal and copy-paste the output) specifically, the bitsandbytes and huggingface_hub versions installed? \r\n\r\ncc @younesbelkada ",
"Thanks for the ping! The issue you are describing is really close to what I have described in https://github.com/huggingface/transformers/pull/24416 I believe installing the lib from source as @amyeroberts mentioned should resolve it!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,694 | 1,694 |
NONE
| null |
### System Info
Issue :- I want to load the gtr-large model in 8-bit using bitsandbytes and save it for future usage
model = T5ForConditionalGeneration.from_pretrained('sentence-transformers/gtr-t5-large',load_in_8bit=True)
model.push_to_hub("snigdhachandan/gtr_large_8bit")
Error :-
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/glide/anaconda/envs/llm/lib/python3.11/site-packages/transformers/utils/hub.py", line 814, in push_to_hub
self.save_pretrained(work_dir, max_shard_size=max_shard_size, safe_serialization=safe_serialization)
File "/glide/anaconda/envs/llm/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1820, in save_pretrained
shards, index = shard_checkpoint(state_dict, max_shard_size=max_shard_size, weights_name=weights_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/glide/anaconda/envs/llm/lib/python3.11/site-packages/transformers/modeling_utils.py", line 318, in shard_checkpoint
storage_id = id_tensor_storage(weight)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/glide/anaconda/envs/llm/lib/python3.11/site-packages/transformers/pytorch_utils.py", line 290, in id_tensor_storage
return tensor.device, storage_ptr(tensor), storage_size(tensor)
^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'device'
Transformers Version :- 4.30.2
Torch Version :- 2.0.1+cu117
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = T5ForConditionalGeneration.from_pretrained('sentence-transformers/gtr-t5-large',load_in_8bit=True)
model.push_to_hub("snigdhachandan/gtr_large_8bit")
### Expected behavior
It should have been pushed to the Hugging Face Hub.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25262/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25261
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25261/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25261/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25261/events
|
https://github.com/huggingface/transformers/issues/25261
| 1,832,964,496 |
I_kwDOCUB6oc5tQNGQ
| 25,261 |
Mask2Former broadcasting issue when running inference on model traced with GPU device
|
{
"login": "matteot11",
"id": 15927868,
"node_id": "MDQ6VXNlcjE1OTI3ODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/15927868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matteot11",
"html_url": "https://github.com/matteot11",
"followers_url": "https://api.github.com/users/matteot11/followers",
"following_url": "https://api.github.com/users/matteot11/following{/other_user}",
"gists_url": "https://api.github.com/users/matteot11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matteot11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matteot11/subscriptions",
"organizations_url": "https://api.github.com/users/matteot11/orgs",
"repos_url": "https://api.github.com/users/matteot11/repos",
"events_url": "https://api.github.com/users/matteot11/events{/privacy}",
"received_events_url": "https://api.github.com/users/matteot11/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @matteot11, thanks for reporting this and for providing such a detailed and clean issue report ❤️ \r\n\r\nLooking into it 🔍 ",
"@matteot11 I'm going to open up a PR soon to resolve this and remove the einsum operations. In the meantime, if you need to be able to run a compiled model now, it will run on torch nightly (with a bunch of tracer warnings). ",
"Hi @amyeroberts, thanks for your fast reply.\r\nWith torch nightly I am able to correctly forward the `traced_model` multiple times (even if it was exported using `torch==2.0.1`). Thanks for the hint!\r\n\r\nI don't know if this is expected, but when running the model traced on GPU, the following assert sometimes fails:\r\n```\r\ndevice = torch.device(\"cuda\")\r\ndummy_input = torch.randn((2,3,640,640)).to(device)\r\nassert torch.isclose(model(dummy_input)[0], traced_model(dummy_input)[0]).all()\r\n```\r\nThis does not happen when exporting the model to the CPU.\r\n\r\nWaiting for your PR!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,694 | 1,694 |
NONE
| null |
### System Info
```
- System information: x86_64 GNU/Linux
- Ubuntu version: 18.04
- Python version: 3.8.12
- CUDA version: 11.1
- PyTorch version: 2.0.1
- transformers version: 4.31.0
```
### Who can help?
@amyeroberts
@sgugger
@muellerzr
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import Mask2FormerForUniversalSegmentation
device = torch.device("cuda")
model = Mask2FormerForUniversalSegmentation.from_pretrained(
"facebook/mask2former-swin-tiny-coco-instance",
torchscript=True
).eval().to(device)
dummy_input = torch.randn((1,3,640,640)).to(device)
traced_model = torch.jit.trace(model, dummy_input)
with torch.no_grad():
out = traced_model(torch.randn((2,3,640,640)).to(device))
out = traced_model(torch.randn((2,3,640,640)).to(device))
```
The above code generates the following error when calling the **second** forward of `traced_model` (last line):
```
Traceback (most recent call last):
File "mask2former_trace.py", line 14, in <module>
out = traced_model(torch.randn((2,3,640,640)).to(device))
File "~/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
~/python3.8/site-packages/torch/functional.py(378): einsum
~/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py(2015): forward
~/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
~/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
~/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py(1852): forward
~/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
~/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
~/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py(2080): forward
~/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
~/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
~/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py(2271): forward
~/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
~/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
~/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py(2496): forward
~/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
~/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
~/python3.8/site-packages/torch/jit/_trace.py(1056): trace_module
~/python3.8/site-packages/torch/jit/_trace.py(794): trace
mask2former_trace.py(10): <module>
RuntimeError: einsum(): subscript b has size 2 for operand 1 which does not broadcast with previously seen size 400
```
If I trace the model with batch size 2, i.e. `dummy_input = torch.randn((2,3,640,640)).to(device)`, the same error arises at the **first** forward call of `traced_model`.
The issue seems to be [here](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/models/mask2former/modeling_mask2former.py#L2015)
### Expected behavior
When tracing on CPU, i.e. in the code above:
```
device = torch.device("cpu")
```
everything works fine. I would expect similar behaviour when tracing on GPU device.
**Additional notes**:
I already tried tracing the model on CPU device, then moving `traced_model` (as well as the input tensors) to GPU, and running inference, but I got the following error:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
I know this is a known issue:
https://github.com/huggingface/transformers/issues/5664
https://github.com/huggingface/transformers/issues/22038
so I guess there are some tensors in Mask2Former created at forward time on the same device as the input, and torchscript does not change that device when running on GPU.
This is the reason why I need to trace the model on GPU.
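As a side note, a simple way to sanity-check a traced model against the eager one is to compare outputs on the same input. This is a generic sketch (it assumes the model outputs are indexable, as they are here with `torchscript=True`), not something specific to Mask2Former internals:

```python
import torch

def traced_matches_eager(model, traced_model, device="cuda", atol=1e-5):
    """Compare eager vs. traced outputs on a random batch."""
    dummy = torch.randn((2, 3, 640, 640), device=device)
    with torch.no_grad():
        eager_out = model(dummy)[0]
        traced_out = traced_model(dummy)[0]
    return torch.allclose(eager_out, traced_out, atol=atol)
```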
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25261/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25261/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25260
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25260/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25260/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25260/events
|
https://github.com/huggingface/transformers/issues/25260
| 1,832,893,935 |
I_kwDOCUB6oc5tP73v
| 25,260 |
⚠️ [Wav2Vec2-MMS] `pipeline` and `from_pretrained` fail to load the Wav2Vec2 MMS checkpoints
|
{
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @patrickvonplaten ",
"It looks like it's related to some recent changes and accelerate.\r\n\r\nIf you checkout this commit:\r\nhttps://github.com/huggingface/transformers/commit/b0513b013b10939a2b47ab94933c2cca909716a2\r\n\r\nand uninstall accelerate the code snippet works fine for me.",
"IIRC, fast loading with accelerate never worked with Wav2Vec2 before because Wav2Vec2 has a weird weight norm parameter, so load adapter was not tested with it. It seems like there were a couple of recent changes though with accelerate and loading with might be related.\r\n\r\nI'm sadly not going to have the time to dive deeper here I think. @amyeroberts or @sanchit-gandhi could you try to take this one maybe?",
"Also: cc: @muellerzr for accelerate!",
"#25267 should fix it, but it'd be good to get a review from @sgugger and @ydshieh here."
] | 1,690 | 1,690 | 1,690 |
MEMBER
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: `No`
- Using distributed or parallel set-up in script?: `No`
### Who can help?
@sanchit-gandhi @patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Put together a quick colab to run the model as mentioned in [our documentation](https://huggingface.co/docs/transformers/model_doc/mms#loading) - [colab notebook](https://github.com/Vaibhavs10/scratchpad/blob/main/wav2vec2_mms_repro.ipynb)
code snippets:
`Pipeline`
```python
from transformers import pipeline
model_id = "facebook/mms-1b-all"
target_lang = "fra"
pipe = pipeline(model=model_id, model_kwargs={"target_lang": target_lang, "ignore_mismatched_sizes": True})
```
Error (full traceback in the [colab notebook](https://github.com/Vaibhavs10/scratchpad/blob/main/wav2vec2_mms_repro.ipynb)):
```
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([154, 1280]) from checkpoint, the shape in current model is torch.Size([314, 1280]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([154]) from checkpoint, the shape in current model is torch.Size([314]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
```
`Processor` + `Model`
```python
from transformers import Wav2Vec2ForCTC, AutoProcessor
model_id = "facebook/mms-1b-all"
target_lang = "fra"
processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
```
Error (full traceback in the [colab notebook](https://github.com/Vaibhavs10/scratchpad/blob/main/wav2vec2_mms_repro.ipynb)):
```
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([154, 1280]) from checkpoint, the shape in current model is torch.Size([314, 1280]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([154]) from checkpoint, the shape in current model is torch.Size([314]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
```
Similar issues reported by @xenova here: https://github.com/huggingface/transformers/issues/24223#issuecomment-1661174505
### Expected behavior
The expected behaviour would be that, despite the mismatch, the model weights are loaded and the mismatch is rectified via `load_adapter` for the pipeline (as mentioned here: https://github.com/huggingface/transformers/issues/24223#issuecomment-1595856093)
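For context, the adapter-based flow described in the MMS docs would roughly look like the sketch below (checkpoint id and language code taken from the snippets above; this is a sketch of the intended usage, not a confirmed workaround for the bug):

```python
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"
target_lang = "fra"

# Load with the default adapter/head first...
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# ...then switch both the tokenizer and the model to the target language.
processor.tokenizer.set_target_lang(target_lang)
model.load_adapter(target_lang)
```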
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25260/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25260/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25259
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25259/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25259/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25259/events
|
https://github.com/huggingface/transformers/pull/25259
| 1,832,861,030 |
PR_kwDOCUB6oc5W_jD1
| 25,259 |
Update rescale tests - cast to float after rescaling to reflect #25229
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
In #25229 - the casting to float was moved back to after rescaling. This wasn't reflected in the specific rescaling tests for EfficientNet and ViVit, resulting in failing tests.
This PR resolves this.
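For intuition, the ordering being tested is roughly the following. This is a schematic numpy sketch, not the actual image processor code:

```python
import numpy as np

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

# Rescale first (numpy promotes the result to float64 here)...
rescaled = image * (1 / 255)

# ...then cast to float32 afterwards, mirroring the order the updated tests expect.
rescaled = rescaled.astype(np.float32)
```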
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25259/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25259",
"html_url": "https://github.com/huggingface/transformers/pull/25259",
"diff_url": "https://github.com/huggingface/transformers/pull/25259.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25259.patch",
"merged_at": 1690972196000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25258
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25258/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25258/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25258/events
|
https://github.com/huggingface/transformers/issues/25258
| 1,832,802,857 |
I_kwDOCUB6oc5tPlop
| 25,258 |
Why can't I assign a new parameter to the Whisper pretrained config?
|
{
"login": "teinhonglo",
"id": 7367516,
"node_id": "MDQ6VXNlcjczNjc1MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7367516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teinhonglo",
"html_url": "https://github.com/teinhonglo",
"followers_url": "https://api.github.com/users/teinhonglo/followers",
"following_url": "https://api.github.com/users/teinhonglo/following{/other_user}",
"gists_url": "https://api.github.com/users/teinhonglo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teinhonglo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teinhonglo/subscriptions",
"organizations_url": "https://api.github.com/users/teinhonglo/orgs",
"repos_url": "https://api.github.com/users/teinhonglo/repos",
"events_url": "https://api.github.com/users/teinhonglo/events{/privacy}",
"received_events_url": "https://api.github.com/users/teinhonglo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @teinhonglo, thanks for raising this issue! \r\n\r\nThe reason for not being able to assign through the `from_pretrained` call is a safety check. Unknown kwargs are not applied: their application is ambigious - should they control the `from_pretrained` behaviour or be set as a config attribute? You can see which kwargs weren't set using `return_unused_kwargs` argument c.f. [here](https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/configuration#transformers.PretrainedConfig.from_pretrained.return_unused_kwargs) and [here](https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/configuration#transformers.PretrainedConfig.from_pretrained.kwargs) in the docs.\r\n\r\nAfter loading in the config, you can set attributes e.g.:\r\n```\r\nfrom transformers import AutoConfig, WhisperModel\r\nconfig = AutoConfig.from_pretrained(\"openai/whisper-small\")\r\nconfig.final_dropout = 0.1\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Why can I not assign a new parameter to the Whisper pretrained config?
Note that the parameter "final_dropout" is not in the config of "openai/whisper-small".
I used the following code snippet:
```
from transformers import AutoConfig, WhisperModel
config = AutoConfig.from_pretrained("openai/whisper-small", final_dropout=0.1)
config.final_dropout
```
The error is shown below:
```
AttributeError: 'WhisperConfig' object has no attribute 'final_dropout'
```
### Expected behavior
config.final_dropout=0.1
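For completeness, a small sketch of achieving this by setting the attribute after loading (mirroring the approach suggested in the replies above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("openai/whisper-small")
# Unknown kwargs passed to from_pretrained are not applied, so set the
# extra attribute explicitly after loading.
config.final_dropout = 0.1
print(config.final_dropout)  # 0.1
```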
Any guidance would be appreciated.
Tien-Hong
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25258/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25257
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25257/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25257/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25257/events
|
https://github.com/huggingface/transformers/issues/25257
| 1,832,775,037 |
I_kwDOCUB6oc5tPe19
| 25,257 |
How to print out the data loaded in each epoch during trainer.train() training?
|
{
"login": "ahong007007",
"id": 22077027,
"node_id": "MDQ6VXNlcjIyMDc3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/22077027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahong007007",
"html_url": "https://github.com/ahong007007",
"followers_url": "https://api.github.com/users/ahong007007/followers",
"following_url": "https://api.github.com/users/ahong007007/following{/other_user}",
"gists_url": "https://api.github.com/users/ahong007007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahong007007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahong007007/subscriptions",
"organizations_url": "https://api.github.com/users/ahong007007/orgs",
"repos_url": "https://api.github.com/users/ahong007007/repos",
"events_url": "https://api.github.com/users/ahong007007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahong007007/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @ahong007007, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,694 | 1,694 |
NONE
| null |
### Feature request
Please tell me: how can I print out the data loaded in each epoch during trainer.train() training?
### Motivation
How can I print out the data loaded in each epoch during trainer.train() training?
### Your contribution
How can I print out the data loaded in each epoch during trainer.train() training?
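One possible approach, sketched below and not taken from this thread, is to subclass `Trainer` and override `training_step` so the incoming batch is printed before each step (what you print from `inputs` is up to you):

```python
from transformers import Trainer

class LoggingTrainer(Trainer):
    def training_step(self, model, inputs):
        # `inputs` is the batch produced by the dataloader for this step;
        # print (or log) whatever fields you care about before training on it.
        print({k: v.shape for k, v in inputs.items() if hasattr(v, "shape")})
        return super().training_step(model, inputs)
```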
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25257/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25256
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25256/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25256/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25256/events
|
https://github.com/huggingface/transformers/issues/25256
| 1,832,745,919 |
I_kwDOCUB6oc5tPXu_
| 25,256 |
Using 'transformers.BertModel.from_pretrained', the code is blocked
|
{
"login": "yangh0597",
"id": 86940083,
"node_id": "MDQ6VXNlcjg2OTQwMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/86940083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangh0597",
"html_url": "https://github.com/yangh0597",
"followers_url": "https://api.github.com/users/yangh0597/followers",
"following_url": "https://api.github.com/users/yangh0597/following{/other_user}",
"gists_url": "https://api.github.com/users/yangh0597/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangh0597/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangh0597/subscriptions",
"organizations_url": "https://api.github.com/users/yangh0597/orgs",
"repos_url": "https://api.github.com/users/yangh0597/repos",
"events_url": "https://api.github.com/users/yangh0597/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangh0597/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, are you running the script/command in some particular setting?\r\n\r\nLooks like it's in a multiprocessing setting? Could you provide a self-complete code snippet instead of just uploading screenshot? Thanks in advance.",
"if not use pyrocketmq is ok. but use pyrocketmq not ok. the code is:\r\n```\r\nimport jpype.imports\r\n\r\njpype.startJVM(classpath=['D:\\\\soft\\\\rocketmq-all-4.3.2-bin-release\\\\lib\\\\*', ])\r\nfrom pyrocketmq import *\r\n\r\n# import json\r\n# from pyrocketmq.common.message import Message\r\n# from pyrocketmq.client.producer import Producer, SendStatus\r\n# pr = Producer('test_producer')\r\n# pr.setNamesrvAddr('10.2.10.6:9876')\r\n# pr.start()\r\n# body = json.dumps({'name':'Alice', 'age':1}).encode('utf-8')\r\n# msg = Message(topic='test_topic', body=body, tags='girl')\r\n# # send, tcp-like, return sendStatus\r\n# sr = pr.send(msg)\r\n# assert(sr.sendStatus == SendStatus.SEND_OK)\r\n# pr.shutdown()\r\n\r\n\r\n\r\n\r\nfrom multiprocessing import Pool\r\nimport json\r\nimport time\r\nfrom typing import List\r\nfrom pyrocketmq.client.consumer.listener import ConsumeConcurrentlyContext, ConsumeConcurrentlyStatus, MessageListenerConcurrently\r\nfrom pyrocketmq.client.consumer.consumer import MessageSelector, PushConsumer\r\nfrom pyrocketmq.common.common import ConsumeFromWhere\r\nfrom pyrocketmq.common.message import MessageExt\r\n\r\n\r\ndef from_pretrained():\r\n print('--from_pretrained1--')\r\n transformers.BertModel.from_pretrained('/opt/model-service/volume/resource/bert_base')\r\n print('--from_pretrained2--')\r\n\r\n return True\r\n\r\n\r\n# subclass MessageListenerConcurrently to write your own consume action\r\nclass MyMessageListenerConcurrently(MessageListenerConcurrently):\r\n def _consumeMessage(self, msgs:List[MessageExt], context:ConsumeConcurrentlyContext) -> ConsumeConcurrentlyStatus:\r\n print('Concurrently', context.ackIndex)\r\n for msg in msgs:\r\n print(msg.body)\r\n print('--_main--')\r\n pool = Pool(processes=2)\r\n bert_res_future = pool.apply_async(func=from_pretrained)\r\n res = bert_res_future.get()\r\n print(res)\r\n return ConsumeConcurrentlyStatus.CONSUME_SUCCESS\r\n\r\ncs = PushConsumer('test_push_consumer')\r\ncs.setNamesrvAddr('10.2.10.6:9876')\r\nselector = MessageSelector.byTag('model')\r\nml = MyMessageListenerConcurrently()\r\ncs.registerMessageListener(ml)\r\ncs.subscribe('test_topic', selector)\r\ncs.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET)\r\ncs.start()\r\n```\r\n\r\n\r\nThe code below is problematic, the code above is not\r\n```\r\nimport transformers\r\n\r\n\r\ndef from_pretrained():\r\n print('--from_pretrained1--')\r\n\r\n transformers.BertModel.from_pretrained('/opt/model-service/volume/resource/bert_base')\r\n print('--from_pretrained2--')\r\n\r\n return True\r\n\r\n\r\n\r\n\r\n\r\nif __name__ == '__main__':\r\n from multiprocessing import Pool\r\n\r\n print('--_main--')\r\n pool = Pool(processes=2)\r\n bert_res_future = pool.apply_async(func=from_pretrained)\r\n res=bert_res_future.get()\r\n print(res)\r\n\r\n```\r\n\r\n",
"Thanks for clarification @yangh0597, appreciated. This is more `pyrocketmq` issue (or the way it works) rather than `transformers`.\r\n\r\nIn general, when doing such multiprocessing thing or inter-communication stuff between processes, we should not pass large objects (inputs, models) etc., but rather creating the necessary objects in the target process(es). It's on the users to take care what would be necessary steps to avoid the blocking.\r\n\r\nWe wouldn't be able to help with the details, especially it involves 3rd party library `pyrocketmq`. But I hope the above comment give you some hint(s) to figure out a working solution.",
"thakns very much",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"done"
] | 1,690 | 1,694 | 1,694 |
NONE
| null |

This is the py-spy result:

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25256/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25255
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25255/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25255/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25255/events
|
https://github.com/huggingface/transformers/pull/25255
| 1,832,722,517 |
PR_kwDOCUB6oc5W_FDd
| 25,255 |
fix bad URL to Llama 2
|
{
"login": "fangli80",
"id": 9782948,
"node_id": "MDQ6VXNlcjk3ODI5NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9782948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fangli80",
"html_url": "https://github.com/fangli80",
"followers_url": "https://api.github.com/users/fangli80/followers",
"following_url": "https://api.github.com/users/fangli80/following{/other_user}",
"gists_url": "https://api.github.com/users/fangli80/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fangli80/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fangli80/subscriptions",
"organizations_url": "https://api.github.com/users/fangli80/orgs",
"repos_url": "https://api.github.com/users/fangli80/repos",
"events_url": "https://api.github.com/users/fangli80/events{/privacy}",
"received_events_url": "https://api.github.com/users/fangli80/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@fangli80 Running`make fix-copies` and pushing the changes will resolve the failing quality CI checks",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,694 | 1,694 |
NONE
| null |
# What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25255/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25255/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25255",
"html_url": "https://github.com/huggingface/transformers/pull/25255",
"diff_url": "https://github.com/huggingface/transformers/pull/25255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25255.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25254
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25254/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25254/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25254/events
|
https://github.com/huggingface/transformers/pull/25254
| 1,832,688,911 |
PR_kwDOCUB6oc5W-9s5
| 25,254 |
Add FlaxCLIPTextModelWithProjection
|
{
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Should we maybe for now just add it in a subfolder of sdxl in diffusers here: https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion_xl instead of having to rely on `transformers` here? I'm not 100% convinced this model is really needed for core transformers usage.\r\n\r\nWould also not force the user to have to install transformers from main :-) ",
"> Should we maybe for now just add it in a subfolder of sdxl in diffusers here: https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion_xl instead of having to rely on `transformers` here? I'm not 100% convinced this model is really needed for core transformers usage.\r\n\r\nThe [PyTorch version of the same model was added 9 months ago](https://github.com/huggingface/transformers/blob/bd90cda9a6bb4723515c17df1192e53abc8e36e3/src/transformers/models/clip/modeling_clip.py#L1198), so I assumed it was ok.\r\n\r\nBut sure, we can do that. In that case, how do we deal with it?\r\n- Change the library to `diffusers` here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/model_index.json#L15. Unless I'm mistaken, then we'd need to distribute the flax weights separately, or use a branch.\r\n- Create a hack in diffusers to map the library.\r\n\r\n> \r\n> Would also not force the user to have to install transformers from main :-)\r\n\r\nYes, of course, this was meant as the long-term solution.\r\n\r\n",
"Ah yeah good point JAX & PyTorch share the same config - this will become complicated indeed then. Ok let's try to get it merged here. CLIP is important enough to be merged to `transformers` indeed ",
"Thanks a lot for your in-depth review @sanchit-gandhi! 🙌 I couldn't get back to this PR until today, but I think I addressed all your comments and fixed an error in the example docstring. I was getting some seemingly unrelated CI failures so I just merged the latest `main` to see if they pass.",
"cc @patrickvonplaten too, in case we want to consider other alternatives :) Otherwise feel free to merge when appropriate, as I can't do it."
] | 1,690 | 1,692 | 1,692 |
MEMBER
| null |
# What does this PR do?
`FlaxCLIPTextModelWithProjection` is necessary to support the Flax port of Stable Diffusion XL: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/fb6d705fb518524cabc79c77f13a0e7921bcab3a/text_encoder_2/config.json#L3
I can add some tests, if necessary, after this approach is validated.
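For reference, usage would presumably mirror the existing PyTorch `CLIPTextModelWithProjection`, something along these lines (the checkpoint name is illustrative and assumes Flax weights are available for it):

```python
from transformers import AutoTokenizer, FlaxCLIPTextModelWithProjection

model = FlaxCLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="np")
outputs = model(**inputs)
text_embeds = outputs.text_embeds  # projected text embeddings
```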
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj @sanchit-gandhi @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25254/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25254",
"html_url": "https://github.com/huggingface/transformers/pull/25254",
"diff_url": "https://github.com/huggingface/transformers/pull/25254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25254.patch",
"merged_at": 1692953894000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25253
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25253/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25253/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25253/events
|
https://github.com/huggingface/transformers/issues/25253
| 1,832,621,154 |
I_kwDOCUB6oc5tO5Ri
| 25,253 |
RWKV-WORLD-4
|
{
"login": "CosmoLM",
"id": 138301484,
"node_id": "U_kgDOCD5QLA",
"avatar_url": "https://avatars.githubusercontent.com/u/138301484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CosmoLM",
"html_url": "https://github.com/CosmoLM",
"followers_url": "https://api.github.com/users/CosmoLM/followers",
"following_url": "https://api.github.com/users/CosmoLM/following{/other_user}",
"gists_url": "https://api.github.com/users/CosmoLM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CosmoLM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CosmoLM/subscriptions",
"organizations_url": "https://api.github.com/users/CosmoLM/orgs",
"repos_url": "https://api.github.com/users/CosmoLM/repos",
"events_url": "https://api.github.com/users/CosmoLM/events{/privacy}",
"received_events_url": "https://api.github.com/users/CosmoLM/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @CosmoLM, thanks for opening this model request! \r\n\r\nThe RWKV-4 model already exists in transformers -- [PR](https://github.com/huggingface/transformers/pull/22797), [docs](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/rwkv#rwkv-attention-and-the-recurrent-formulas). To enable loading the model through `Rwkv.from_pretrained`, the checkpoints would need to be converted and model configs push to the hub using [the conversion script.](https://github.com/huggingface/transformers/blob/8021c684ec3023295513be36bdc30e27e6f28cfc/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py#L4) \r\n\r\nI'd suggest opening a discussion on the hub to see if the repo owners would be interested in doing this. \r\n ",
"The RWKV-pile models are available but not the RWKV-world models because\r\nits tokenizer is not in the json format it is in txt format.\r\n\r\nOn Wed, 2 Aug, 2023, 4:24 pm amyeroberts, ***@***.***> wrote:\r\n\r\n> Hi @CosmoLM <https://github.com/CosmoLM>, thanks for opening this model\r\n> request!\r\n>\r\n> The RWKV-4 model already exists in transformers -- PR\r\n> <https://github.com/huggingface/transformers/pull/22797>, docs\r\n> <https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/rwkv#rwkv-attention-and-the-recurrent-formulas>.\r\n> To enable loading the model through Rwkv.from_pretrained, the checkpoints\r\n> would need to be converted and model configs push to the hub using the\r\n> conversion script.\r\n> <https://github.com/huggingface/transformers/blob/8021c684ec3023295513be36bdc30e27e6f28cfc/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py#L4>\r\n>\r\n> I'd suggest opening a discussion on the hub to see if the repo owners\r\n> would be interested in doing this.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/25253#issuecomment-1661993346>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/BA7FALGYW7ERQ3LODEA6NADXTIWVPANCNFSM6AAAAAA3A3B6CY>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Is there an ETA for this?"
] | 1,690 | 1,699 | null |
NONE
| null |
### Model description
BlinkDL/rwkv-4-world is a repo present on the Hugging Face Hub. I want the model's tokenizer and the model to be added to the Transformers library.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25253/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25252
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25252/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25252/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25252/events
|
https://github.com/huggingface/transformers/issues/25252
| 1,832,606,551 |
I_kwDOCUB6oc5tO1tX
| 25,252 |
run_mae.py cannot be used directly on one's own dir
|
{
"login": "CheungZeeCn",
"id": 2025362,
"node_id": "MDQ6VXNlcjIwMjUzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2025362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CheungZeeCn",
"html_url": "https://github.com/CheungZeeCn",
"followers_url": "https://api.github.com/users/CheungZeeCn/followers",
"following_url": "https://api.github.com/users/CheungZeeCn/following{/other_user}",
"gists_url": "https://api.github.com/users/CheungZeeCn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CheungZeeCn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CheungZeeCn/subscriptions",
"organizations_url": "https://api.github.com/users/CheungZeeCn/orgs",
"repos_url": "https://api.github.com/users/CheungZeeCn/repos",
"events_url": "https://api.github.com/users/CheungZeeCn/events{/privacy}",
"received_events_url": "https://api.github.com/users/CheungZeeCn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The error\r\n\r\n> FileNotFoundError: Unable to find '/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/' at /\r\n\r\nshows you don't have local datasets (or there is some issue to locate it). Could you verify this on your own side? Thanks.",
"Hi @CheungZeeCn, thanks for raising this issue! \r\n\r\nSo that we can best help you, could you:\r\n* make sure code snippets and errors are properly formatted - placed between pairs of three backticks e.g. ` ``` code here ``` `. \r\n* Add information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n\r\nAs @ydshieh mentions, it looks like the issue is coming from the paths being passed in for `train_dir` and `validation_dir`. They should be the names of folders containing the train and validation datasets relative to `dataset_name`. Based on the paths, the arguments should be:\r\n\r\n```\r\n--dataset_name /home/ana/data4/datasets/rvl_cdip/data/pretrain_images\r\n--train_dir train\r\n--validation_dir eval\r\n```",
"@ydshieh @amyeroberts thank's for your replies, \r\n\r\n```\r\n--dataset_name /home/ana/data4/datasets/rvl_cdip/data/pretrain_images\r\n--train_dir train\r\n--validation_dir eval\r\n```\r\ncan not solve my problem.\r\n\r\nThat's how I fix it:\r\n\r\nstep1: download dataset python file from: https://huggingface.co/datasets/nateraw/imagefolder/tree/main/ than put it in \r\nmy local diretory: /home/ana/data4/datasets/rvl_cdip/data/pretrain_images \r\n\r\nstep2: use the following params:\r\n```\r\n--dataset_name \\\r\n/home/ana/data4/datasets/rvl_cdip/data/pretrain_images \\\r\n--train_dir \\\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/*\" \\\r\n--validation_dir \\\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/eval/*\" \r\n```\r\nIt's not the same as the doc.",
"Hi @CheungZeeCn \r\n\r\nGlad that you managed to make it work.\r\n\r\nJust to make sure, what is works it with `--dataset_name nateraw/image-folder ` like the following\r\n\r\n```bash\r\n--dataset_name nateraw/image-folder \r\n--train_dir \\\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/*\" \\\r\n--validation_dir \\\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/eval/*\" \r\n```\r\n\r\nor the one with `/home/ana/data4/datasets/rvl_cdip/data/pretrain_images \\\r\n--train_dir \\`?\r\n\r\nThanks in advance!",
"Hi, @ydshieh \r\n\r\nThat's how my local dataset directory looks like:\r\n\r\n```\r\n(torch2) ana@pts-m1:~/data4/datasets/rvl_cdip/data/pretrain_images$ pwd\r\n/home/ana/data4/datasets/rvl_cdip/data/pretrain_images\r\n(torch2) ana@pts-m1:~/data4/datasets/rvl_cdip/data/pretrain_images$ ls\r\neval imagefolder.py train\r\n(torch2) ana@pts-m1:~/data4/datasets/rvl_cdip/data/pretrain_images$ ls eval |head -10\r\n0000298044.jpg\r\n0000553824.jpg\r\n0012197285.jpg\r\n0060128913.jpg\r\n```\r\nand the imagefolder.py is the same as this one https://huggingface.co/datasets/nateraw/imagefolder/blob/main/imagefolder.py \r\n\r\nusing the following is OK:\r\n```\r\nexport WANDB_DISABLED=true\r\npython run_mae.py \\\r\n--model_name_or_path \\\r\n/home/ana/data4/models/vit-mae-base \\\r\n--dataset_name \\\r\n/home/ana/data4/datasets/rvl_cdip/data/pretrain_images \\\r\n--train_dir \\\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/*\" \\\r\n--validation_dir \\\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/eval/*\" \\\r\n--output_dir \\\r\n/home/ana/data4/output_models/rvl_mae_pretrain_demo_10k_100 \\\r\n--remove_unused_columns \\\r\nFalse \\\r\n--label_names \\\r\npixel_values \\\r\n--mask_ratio \\\r\n0.5 \\\r\n--base_learning_rate \\\r\n1.5e-4 \\\r\n--lr_scheduler_type \\\r\ncosine \\\r\n--weight_decay \\\r\n0.05 \\\r\n--num_train_epochs \\\r\n800 \\\r\n--warmup_ratio \\\r\n0.05 \\\r\n--per_device_train_batch_size \\\r\n32 \\\r\n--gradient_accumulation_steps \\\r\n8 \\\r\n--per_device_eval_batch_size \\\r\n8 \\\r\n--logging_strategy \\\r\nsteps \\\r\n--logging_steps \\\r\n10 \\\r\n--evaluation_strategy \\\r\nepoch \\\r\n--save_strategy \\\r\nepoch \\\r\n--load_best_model_at_end \\\r\nTrue \\\r\n--save_total_limit \\\r\n5 \\\r\n--seed \\\r\n1337 \\\r\n--do_train \\\r\n--do_eval \\\r\n--overwrite_output_dir\r\n```\r\n\r\nHowever, if I tried this:\r\n```\r\npython run_mae.py\r\n--model_name_or_path\r\n/home/ana/data4/models/vit-mae-base\r\n--dataset_name nateraw/image-folder\r\n--train_dir\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/*\"\r\n--validation_dir\r\n\"/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/eval/*\"\r\n--output_dir\r\n/home/ana/data4/output_models/rvl_mae_pretrain_demo_10k_100_tmp\r\n--remove_unused_columns\r\nFalse\r\n--label_names\r\npixel_values\r\n--mask_ratio\r\n0.5\r\n--base_learning_rate\r\n1.5e-4\r\n--lr_scheduler_type\r\ncosine\r\n--weight_decay\r\n0.05\r\n--num_train_epochs\r\n800\r\n--warmup_ratio\r\n0.05\r\n--per_device_train_batch_size\r\n32\r\n--gradient_accumulation_steps\r\n8\r\n--per_device_eval_batch_size\r\n8\r\n--logging_strategy\r\nsteps\r\n--logging_steps\r\n10\r\n--evaluation_strategy\r\nepoch\r\n--save_strategy\r\nepoch\r\n--load_best_model_at_end\r\nTrue\r\n--save_total_limit\r\n5\r\n--seed\r\n1337\r\n--do_train\r\n--do_eval\r\n```\r\nthe output is:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ana/data4/projects/hf_mae/run_mae.py\", line 397, in <module>\r\n main()\r\n File \"/home/ana/data4/projects/hf_mae/run_mae.py\", line 222, in main\r\n ds = load_dataset(\r\n File \"/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/load.py\", line 1773, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/load.py\", line 1528, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File 
\"/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/builder.py\", line 350, in __init__\r\n info.update(self._info())\r\n File \"/home/ana/.cache/huggingface/modules/datasets_modules/datasets/nateraw--image-folder/a2b5eb21064d8bd9b44c3b3fc91ae8205c3002a441852e1b02da78e8025c332e/image-folder.py\", line 30, in _info\r\n classes = sorted([x.name.lower() for x in Path(folder).glob('*/**')])\r\n File \"/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/pathlib.py\", line 1041, in __new__\r\n self = cls._from_parts(args, init=False)\r\n File \"/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/pathlib.py\", line 682, in _from_parts\r\n drv, root, parts = self._parse_args(args)\r\n File \"/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/pathlib.py\", line 666, in _parse_args\r\n a = os.fspath(a)\r\nTypeError: expected str, bytes or os.PathLike object, not DataFilesList\r\n\r\n```\r\n",
"Thanks a lot, we will take a look and update the doc if necessary!",
"Hi, I'm facing the same problem. \r\n\r\nThe goal is to train a ViT-MAE from scratch using medical images. \r\n\r\nI've followed the recc and put the folder of the images as the dataset_name. However, it returns a `FileNotFoundError: Couldn't find a dataset script at /Desktop/pRCC_nolabel/pRCC_nolabel.py or any data file in the same directory.`\r\n\r\nI've also tried `--dataset_name nateraw/image-folder` and got the same error: TypeError: expected str, bytes or os.PathLike object, not DataFilesList\r\n\r\nAny advice on how to solve this?\r\n```\r\npython run_mae.py \\\r\n --dataset_name /Desktop/pRCC_nolabel/ \\\r\n --train_dir \"/Desktop/pRCC_nolabel/*\" \\\r\n --output_dir ./demo \\\r\n --remove_unused_columns \\\r\n False \\\r\n --label_names \\\r\n pixel_values \\\r\n --mask_ratio \\\r\n 0.5 \\\r\n --base_learning_rate \\\r\n 1.5e-4 \\\r\n --lr_scheduler_type \\\r\n cosine \\\r\n --weight_decay \\\r\n 0.05 \\\r\n --num_train_epochs \\\r\n 800 \\\r\n --warmup_ratio \\\r\n 0.05 \\\r\n --per_device_train_batch_size \\\r\n 32 \\\r\n --gradient_accumulation_steps \\\r\n 8 \\\r\n --per_device_eval_batch_size \\\r\n 8 \\\r\n --logging_strategy \\\r\n steps \\\r\n --logging_steps \\\r\n 10 \\\r\n --evaluation_strategy \\\r\n epoch \\\r\n --save_strategy \\\r\n epoch \\\r\n --load_best_model_at_end \\\r\n True \\\r\n --save_total_limit \\\r\n 5 \\\r\n --seed \\\r\n 1337 \\\r\n--do_train \\\r\n--do_eval \\\r\n--overwrite_output_dir\r\n```\r\n"
] | 1,690 | 1,707 | null |
NONE
| null |
### System Info
ref: https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining
```bash
python run_mae.py \
    --model_type vit_mae \
    --dataset_name nateraw/image-folder \
    --train_dir <path-to-train-root> \
    --output_dir ./outputs/ \
    --remove_unused_columns False \
    --label_names pixel_values \
    --do_train \
    --do_eval
```
My params:
```
--model_name_or_path /home/ana/data4/models/vit-mae-base
--dataset_name nateraw/image-folder
--train_dir /home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/
--validation_dir /home/ana/data4/datasets/rvl_cdip/data/pretrain_images/eval/
--output_dir /home/ana/data4/output_models/rvl_mae_pretrain_demo_10k_100
--remove_unused_columns False
--label_names pixel_values
--mask_ratio 0.75
--norm_pix_loss
--base_learning_rate 1.5e-4
--lr_scheduler_type cosine
--weight_decay 0.05
--num_train_epochs 800
--warmup_ratio 0.05
--per_device_train_batch_size 8
--per_device_eval_batch_size 8
--logging_strategy steps
--logging_steps 10
--evaluation_strategy epoch
--save_strategy epoch
--load_best_model_at_end True
--save_total_limit 5
--seed 1337
--do_train
--do_eval
```
output:
```
Traceback (most recent call last):
  File "/home/ana/data4/projects/hf_mae/run_mae.py", line 397, in <module>
    main()
  File "/home/ana/data4/projects/hf_mae/run_mae.py", line 222, in main
    ds = load_dataset(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/load.py", line 1773, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/load.py", line 1528, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/builder.py", line 329, in __init__
    data_files = DataFilesDict.from_local_or_remote(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 783, in from_local_or_remote
    DataFilesList.from_local_or_remote(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 751, in from_local_or_remote
    data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 349, in resolve_patterns_locally_or_by_urls
    for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 293, in _resolve_single_pattern_locally
    raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/' at /
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
build a dir like:
```
dataset/
    train/
        1.jpg
        2.jpg
    eval/
        1.jpg
        2.jpg
```
run:
```bash
python run_mae.py \
    --model_name_or_path /home/ana/data4/models/vit-mae-base \
    --dataset_name nateraw/image-folder \
    --train_dir /home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/ \
    --validation_dir /home/ana/data4/datasets/rvl_cdip/data/pretrain_images/eval/ \
    --output_dir /home/ana/data4/output_models/rvl_mae_pretrain_demo_10k_100 \
    --remove_unused_columns False \
    --label_names pixel_values \
    --mask_ratio 0.75 \
    --norm_pix_loss \
    --base_learning_rate 1.5e-4 \
    --lr_scheduler_type cosine \
    --weight_decay 0.05 \
    --num_train_epochs 800 \
    --warmup_ratio 0.05 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --logging_strategy steps \
    --logging_steps 10 \
    --evaluation_strategy epoch \
    --save_strategy epoch \
    --load_best_model_at_end True \
    --save_total_limit 5 \
    --seed 1337 \
    --do_train \
    --do_eval
```
### Expected behavior
output:
```
Traceback (most recent call last):
  File "/home/ana/data4/projects/hf_mae/run_mae.py", line 397, in <module>
    main()
  File "/home/ana/data4/projects/hf_mae/run_mae.py", line 222, in main
    ds = load_dataset(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/load.py", line 1773, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/load.py", line 1528, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/builder.py", line 329, in __init__
    data_files = DataFilesDict.from_local_or_remote(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 783, in from_local_or_remote
    DataFilesList.from_local_or_remote(
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 751, in from_local_or_remote
    data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 349, in resolve_patterns_locally_or_by_urls
    for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
  File "/home/ana/data1/anaconda3/envs/torch2/lib/python3.8/site-packages/datasets/data_files.py", line 293, in _resolve_single_pattern_locally
    raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/' at /
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25252/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25251
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25251/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25251/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25251/events
|
https://github.com/huggingface/transformers/issues/25251
| 1,832,446,081 |
I_kwDOCUB6oc5tOOiB
| 25,251 |
Defining top_k within pipeline changes output from list to nested list
|
{
"login": "Harjas123",
"id": 107530287,
"node_id": "U_kgDOBmjILw",
"avatar_url": "https://avatars.githubusercontent.com/u/107530287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harjas123",
"html_url": "https://github.com/Harjas123",
"followers_url": "https://api.github.com/users/Harjas123/followers",
"following_url": "https://api.github.com/users/Harjas123/following{/other_user}",
"gists_url": "https://api.github.com/users/Harjas123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Harjas123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Harjas123/subscriptions",
"organizations_url": "https://api.github.com/users/Harjas123/orgs",
"repos_url": "https://api.github.com/users/Harjas123/repos",
"events_url": "https://api.github.com/users/Harjas123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Harjas123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @Harjas123 thank you for reporting! Our team will take a look.",
"also cc @Narsil ",
"I agree that this is inconsistent but I don't think there is much to do about it now since this has been the case for the past three years, and making any change would break a lot of users code.",
"I understand. Would it at least be possible to add a mention of this somewhere in the docs?",
"Harmonizing outputs of pipelines is definitely in my mind for V5 if/when it happens :)"
] | 1,690 | 1,691 | 1,691 |
NONE
| null |
### System Info
```
- `transformers` version: 4.30.2
- Platform: Linux-5.14.0-162.22.2.el9_1.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@Narsil
@sgugger
### Reproduction
Was trying to output all scores for a single-label classification problem. Initially tried to use `return_all_scores` as written in the docs for TextClassificationPipeline, which returned this error:
```UserWarning: return_all_scores is now deprecated, if want a similar funcionality use top_k=None instead of return_all_scores=True or top_k=1 instead of return_all_scores=False.```
Switched to top_k, but some of my code broke in strange ways. Eventually realized that it was because calling pipeline without top_k returns a list containing a dictionary, but calling it with top_k returns a list containing a list containing a dictionary, regardless of what value top_k is set to.
Without `top_k` (default):
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert")
classifier("Inflation Remains Risk Confronting Financial Markets")
```
Resulting output:
```python
[{'label': 'negative', 'score': 0.8932788372039795}]
```
With `top_k=1`:
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert", top_k=1)
classifier("Inflation Remains Risk Confronting Financial Markets")
```
Resulting output:
```python
[[{'label': 'negative', 'score': 0.8932788372039795}]]
```
With `top_k=None`:
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert", top_k=None)
classifier("Inflation Remains Risk Confronting Financial Markets")
```
Resulting output:
```python
[[{'label': 'negative', 'score': 0.8932788372039795},
  {'label': 'neutral', 'score': 0.07486031949520111},
  {'label': 'positive', 'score': 0.03186087682843208}]]
```
This issue does not occur if top_k is set within `__call__`:
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert")
classifier("Inflation Remains Risk Confronting Financial Markets", top_k=None)
```
Resulting output:
```python
[{'label': 'negative', 'score': 0.8932788372039795},
 {'label': 'neutral', 'score': 0.07486031949520111},
 {'label': 'positive', 'score': 0.03186087682843208}]
```
### Expected behavior
Behavior should be consistent regardless of whether top_k has been set within pipeline, set within `__call__`, or not set at all.
Also, [the documentation for TextClassificationPipeline](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TextClassificationPipeline) says that top_k is a parameter under `__call__`, but does not explain that top_k is also a parameter under pipeline.
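In the meantime, a small workaround sketch (not part of the library; the helper name is my own) that flattens the nested case so downstream code sees the same shape for a single input regardless of where `top_k` was set:
```python
def scores_for_single_text(result):
    """Return a flat list of {'label': ..., 'score': ...} dicts for one input text,
    whether the pipeline returned [dict, ...] or [[dict, ...]]."""
    if result and isinstance(result[0], list):
        return result[0]
    return result

# Usage:
# scores_for_single_text(classifier("Inflation Remains Risk Confronting Financial Markets"))
```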
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25251/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25250
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25250/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25250/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25250/events
|
https://github.com/huggingface/transformers/pull/25250
| 1,832,373,593 |
PR_kwDOCUB6oc5W9592
| 25,250 |
Ko perf train gpu one
|
{
"login": "HongB1",
"id": 54663536,
"node_id": "MDQ6VXNlcjU0NjYzNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/54663536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HongB1",
"html_url": "https://github.com/HongB1",
"followers_url": "https://api.github.com/users/HongB1/followers",
"following_url": "https://api.github.com/users/HongB1/following{/other_user}",
"gists_url": "https://api.github.com/users/HongB1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HongB1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HongB1/subscriptions",
"organizations_url": "https://api.github.com/users/HongB1/orgs",
"repos_url": "https://api.github.com/users/HongB1/repos",
"events_url": "https://api.github.com/users/HongB1/events{/privacy}",
"received_events_url": "https://api.github.com/users/HongB1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"번역 문서가 새롭게 업데이트 되었습니다! 아직 번역이 완료되지 않으셨다면, 문서 업데이트 부탁드립니다 😄 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,697 | 1,697 |
NONE
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! -->
# What does this PR do?
Translated the `<your_file>.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [ ] Check for missing / redundant translations
- [ ] Grammar Check
- [ ] Review or Add new terms to glossary
- [ ] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. After all of the checks above are complete, please mention the team members you would like to request a review from below! -->
<!-- May you please review this PR? @keonju2 @harheem @junejae @wonhyeongseo ... -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the comment below asking Hugging Face staff for a review only after the review with your team members is finished! -->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25250/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25250",
"html_url": "https://github.com/huggingface/transformers/pull/25250",
"diff_url": "https://github.com/huggingface/transformers/pull/25250.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25250.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25249
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25249/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25249/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25249/events
|
https://github.com/huggingface/transformers/pull/25249
| 1,832,311,548 |
PR_kwDOCUB6oc5W9s3G
| 25,249 |
Bump cryptography from 41.0.2 to 41.0.3 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"@dependabot ignore this major version",
"OK, I won't notify you about version 41.x.x again, unless you re-open this PR."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.2 to 41.0.3.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>41.0.3 - 2023-08-01</p>
<pre><code>
* Fixed performance regression loading DH public keys.
* Fixed a memory leak when using
:class:`~cryptography.hazmat.primitives.ciphers.aead.ChaCha20Poly1305`.
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.1.2.
<p>.. _v41-0-2:
</code></pre></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/b22271cf3c3dd8dc8978f8f4b00b5c7060b6538d"><code>b22271c</code></a> bump for 41.0.3 (<a href="https://redirect.github.com/pyca/cryptography/issues/9330">#9330</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/774a4a16cbd22a89fdb4195ade9e4fcee27a7afa"><code>774a4a1</code></a> Only check DH key validity when loading a private key. (<a href="https://redirect.github.com/pyca/cryptography/issues/9071">#9071</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/9319">#9319</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/bfa4d95f0f356f2d535efd5c775e0fb3efe90ef2"><code>bfa4d95</code></a> changelog for 41.0.3 (<a href="https://redirect.github.com/pyca/cryptography/issues/9320">#9320</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/0da7165aa73c0a4865b0a4d9e019db3c16eea55a"><code>0da7165</code></a> backport fix the memory leak in fixedpool (<a href="https://redirect.github.com/pyca/cryptography/issues/9272">#9272</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/9309">#9309</a>)</li>
<li>See full diff in <a href="https://github.com/pyca/cryptography/compare/41.0.2...41.0.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25249/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25249",
"html_url": "https://github.com/huggingface/transformers/pull/25249",
"diff_url": "https://github.com/huggingface/transformers/pull/25249.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25249.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25248
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25248/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25248/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25248/events
|
https://github.com/huggingface/transformers/pull/25248
| 1,831,987,116 |
PR_kwDOCUB6oc5W8m5n
| 25,248 |
Allow `trust_remote_code` in example scripts
|
{
"login": "Jackmin801",
"id": 56836461,
"node_id": "MDQ6VXNlcjU2ODM2NDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/56836461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jackmin801",
"html_url": "https://github.com/Jackmin801",
"followers_url": "https://api.github.com/users/Jackmin801/followers",
"following_url": "https://api.github.com/users/Jackmin801/following{/other_user}",
"gists_url": "https://api.github.com/users/Jackmin801/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jackmin801/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jackmin801/subscriptions",
"organizations_url": "https://api.github.com/users/Jackmin801/orgs",
"repos_url": "https://api.github.com/users/Jackmin801/repos",
"events_url": "https://api.github.com/users/Jackmin801/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jackmin801/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Will do flax and tf tomorrow. I have a few questions though:\r\n1. @ydshieh, this script is still using `use_auth_token`. Is this intended? \r\nhttps://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim_no_trainer.py#L450\r\n2. This script doesnt use `token` or `use_auth_token` for the tokenizer\r\nhttps://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py#L333-L340\r\n3. The Permutation Language Modeling [script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_plm.py) only uses Auto for config and tokenizer, the model is hardcoded to XLNet. So there are 2 options:\r\n a. Not put `trust_remote_code` in this script -- only the transformers XLNet will be supported.\r\n b. Change the XLNet lines to use Auto, though Im not sure which Auto to use here.\r\n",
"\r\n> 1. @ydshieh, this script is still using `use_auth_token`. Is this intended?\r\nNo, it's a miss from my side. Nice catch and thanks!\r\n\r\n> 2. This script doesnt use `token` or `use_auth_token` for the tokenizer\r\n> https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py#L333-L340\r\nIt's probably already been this even before my `token` PRs. I will update them too :-)\r\n\r\n\r\n> 3. The Permutation Language Modeling [script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_plm.py) only uses Auto for config and tokenizer, the model is hardcoded to XLNet. So there are 2 options:\r\n> a. Not put `trust_remote_code` in this script -- only the transformers XLNet will be supported.\r\n\r\nLet's just keep `a` .\r\n\r\nLooking forward your PR completed 🚀 \r\n\r\n",
"Couple more places not using `token` or `use_auth_token`\r\n- Tensorflow examples\r\n - run_clip: Tokenizer\r\n - run_clm: Config, Tokenizer, Model\r\n - run_mlm: Config, Tokenizer, Model\r\n - run_ner: Config, Tokenizer, Model\r\n \r\nMost of the no_trainer scripts don't have `token` or `use_auth_token` in the args.\r\nDo we want to add them?",
"That's all the places where `trust_remote_code` can be used in the examples that I am aware of.\r\n@ydshieh, would appreciate a review. Thanks!",
"Thanks for the information! I will take care of `token` this week.\r\n\r\n> Most of the no_trainer scripts don't have token or use_auth_token in the args. Do we want to add them?\r\nLet's not adding things to them in this PR. We can take a look once this PR being merged.\r\n\r\n> @ydshieh, would appreciate a review. Thanks!\r\n\r\nSure!\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25248). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Update example scripts to use `trust_remote_code`.
This PR is similar to https://github.com/huggingface/transformers/pull/25167 but for adding the `trust_remote_code` arg instead of updating the `token` arg.
I am not sure if this feature is welcome so I have only modified pytorch `run_glue.py` for now.
I will modify the other files (every file that was modified in https://github.com/huggingface/transformers/pull/25167) if the change is welcome and after you all are happy with the help string
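For illustration, a rough sketch of the kind of argument being added to each script's `ModelArguments` dataclass (the exact field name and help text may differ from what is eventually merged):
```python
from dataclasses import dataclass, field

@dataclass
class ModelArguments:
    # ... existing fields such as model_name_or_path, cache_dir, token ...
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to allow custom models defined on the Hub in their own modeling files. "
                "Only set this to True for repositories you trust, since it will execute code "
                "from the Hub on your local machine."
            )
        },
    )
```
The value would then be forwarded to the `from_pretrained` calls in the script, e.g. `AutoConfig.from_pretrained(..., trust_remote_code=model_args.trust_remote_code)`.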
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25248/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25248",
"html_url": "https://github.com/huggingface/transformers/pull/25248",
"diff_url": "https://github.com/huggingface/transformers/pull/25248.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25248.patch",
"merged_at": 1691418745000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25247
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25247/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25247/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25247/events
|
https://github.com/huggingface/transformers/issues/25247
| 1,831,903,410 |
I_kwDOCUB6oc5tMKCy
| 25,247 |
Enable use of best epoch in Trial, with early stopping, during hyperparameter search
|
{
"login": "antonioalegria",
"id": 49322,
"node_id": "MDQ6VXNlcjQ5MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/49322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonioalegria",
"html_url": "https://github.com/antonioalegria",
"followers_url": "https://api.github.com/users/antonioalegria/followers",
"following_url": "https://api.github.com/users/antonioalegria/following{/other_user}",
"gists_url": "https://api.github.com/users/antonioalegria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antonioalegria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antonioalegria/subscriptions",
"organizations_url": "https://api.github.com/users/antonioalegria/orgs",
"repos_url": "https://api.github.com/users/antonioalegria/repos",
"events_url": "https://api.github.com/users/antonioalegria/events{/privacy}",
"received_events_url": "https://api.github.com/users/antonioalegria/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"cc @sgugger ",
"Yes this is not currently supported. Could be nice to add, but this is not high-priority on our side, so it would have to be a contribution :-) Happy to review a PR!",
"Hi, I was looking to contribute here so I dug into the code. @antonioalegria , what hyperparameter search backend are you searching? Based on my reading of the code, it seems this is already supported for the `RAY` backend. Passing in `ray_scope=all` to `TrainingArguments` should work. \r\n * Code pointer on [huggingface](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/integration_utils.py#L360) end.\r\n * https://docs.ray.io/en/latest/tune/api/doc/ray.tune.ExperimentAnalysis.get_best_trial.html#ray-tune-experimentanalysis-get-best-trial\r\n\r\nI can look to contribute this for the wandb and optuna backends. However, the sigopt backend seems rather opaque."
] | 1,690 | 1,693 | null |
NONE
| null |
### Feature request
When running a `Trainer.hyperparameter_search`, each trial's value is calculated from the last epoch's chosen metric. However, especially when using early stopping and `load_best_model_at_end`, it would be useful to use the best model instead.
This could be a parameter of `Trainer.hyperparameter_search`, an overridable function returning the best value, or some callback.
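For reference, a minimal sketch of what appears to already be possible with the Ray Tune backend via `ray_scope="all"`, which scores each trial by its best reported checkpoint instead of the last one (the metric, `model_init`, and datasets below are placeholders):
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="hp_search_out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    ray_scope="all",  # Ray backend only: pick the best result over all checkpoints of a trial
)

trainer = Trainer(
    model_init=model_init,        # placeholder: returns a fresh model for each trial
    args=training_args,
    train_dataset=train_dataset,  # placeholder datasets
    eval_dataset=eval_dataset,
)

best_trial = trainer.hyperparameter_search(direction="minimize", backend="ray", n_trials=10)
```
An equivalent option for the optuna, wandb, and sigopt backends is effectively what this request is asking for.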
### Motivation
Often, we use early stopping and take the best model from a particular run because it's possible for models to start overfitting and dropping off after a certain number of epochs. This phenomenon can also appear during hyperparameter search and, as such, we'd like to be able to use the best epoch's value to compare trials.
Without this we may get results that are not fully representative.
### Your contribution
Happy to help with testing or in other ways I can. Not sure where to start, but if there is a clear place to do it I'd be open to helping.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25247/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/25246
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25246/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25246/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25246/events
|
https://github.com/huggingface/transformers/pull/25246
| 1,831,809,902 |
PR_kwDOCUB6oc5W8BK9
| 25,246 |
Fix return_dict_in_generate bug in InstructBlip generate function
|
{
"login": "euanong",
"id": 8283298,
"node_id": "MDQ6VXNlcjgyODMyOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8283298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/euanong",
"html_url": "https://github.com/euanong",
"followers_url": "https://api.github.com/users/euanong/followers",
"following_url": "https://api.github.com/users/euanong/following{/other_user}",
"gists_url": "https://api.github.com/users/euanong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/euanong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/euanong/subscriptions",
"organizations_url": "https://api.github.com/users/euanong/orgs",
"repos_url": "https://api.github.com/users/euanong/repos",
"events_url": "https://api.github.com/users/euanong/events{/privacy}",
"received_events_url": "https://api.github.com/users/euanong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Previously, the postprocessing conducted on generated sequences in InstructBlip's generate function assumed these sequences were tensors (i.e. that `return_dict_in_generate == False`).
This PR updates the InstructBlip generate function to check whether the result of the call to the wrapped language model `generate()` is a tensor: if it's not, we attempt to postprocess the sequence attribute of the returned results object rather than the object itself.
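For context, a minimal usage sketch of the scenario the fix targets, i.e. calling `generate` with `return_dict_in_generate=True` and reading the `sequences` attribute (the checkpoint and prompt here are only examples):
```python
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="What is in this picture?", return_tensors="pt")

# Before this fix, postprocessing assumed a plain tensor and this call failed;
# with the fix, the `sequences` attribute of the returned object is postprocessed instead.
outputs = model.generate(**inputs, return_dict_in_generate=True, max_new_tokens=20)
caption = processor.batch_decode(outputs.sequences, skip_special_tokens=True)[0]
print(caption)
```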
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- (Not quite a typo, but a very small bugfix...)
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- Vision model bug: @amyeroberts
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25246/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25246",
"html_url": "https://github.com/huggingface/transformers/pull/25246",
"diff_url": "https://github.com/huggingface/transformers/pull/25246.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25246.patch",
"merged_at": 1690980235000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25245
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25245/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25245/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25245/events
|
https://github.com/huggingface/transformers/issues/25245
| 1,831,801,078 |
I_kwDOCUB6oc5tLxD2
| 25,245 |
BLIP-2 request: If it's even possible, can you please provide an official example script of how to get the text(caption) features and image features into the same vector space (e.g. for cross-modal retrieval/search using BLIP-2 models, similar to what we can already do with CLIP.) Thanks in advance.
|
{
"login": "wingz1",
"id": 30269996,
"node_id": "MDQ6VXNlcjMwMjY5OTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/30269996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wingz1",
"html_url": "https://github.com/wingz1",
"followers_url": "https://api.github.com/users/wingz1/followers",
"following_url": "https://api.github.com/users/wingz1/following{/other_user}",
"gists_url": "https://api.github.com/users/wingz1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wingz1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wingz1/subscriptions",
"organizations_url": "https://api.github.com/users/wingz1/orgs",
"repos_url": "https://api.github.com/users/wingz1/repos",
"events_url": "https://api.github.com/users/wingz1/events{/privacy}",
"received_events_url": "https://api.github.com/users/wingz1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @wingz1, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nThere are code examples of how to use [BLIP](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/blip#transformers.BlipModel.forward.example) and [BLIP-2](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/blip-2#transformers.Blip2Model) in the docs. Both have a similar API to CLIP and have the same methods e.g. `get_text_features`, `get_image_features` implemented and return similar outputs. ",
"Thanks, I figured that -- I will check the forums! Indeed those methods do exist in BLIP-2, but those outputs don't share the same dimensionality or mean the same thing as the equivalent commands in CLIP due to the how the model is set up.",
"Not really a useful answer, but from the following lines in the modeling file, you can go `language_projection` to get the same dimension. But it's super questionable regarding if this is `the same space` with the meaningful text/image features.\r\n\r\n(and yes, further question on this topic should be on the forum)\r\n\r\n> self.language_projection = nn.Linear(config.qformer_config.hidden_size, config.text_config.hidden_size)\r\n\r\n> ilanguage_model_inputs = self.language_projection(query_output)\r\n\r\n> inputs_embeds = self.language_model.get_input_embeddings()(input_ids)\r\n> inputs_embeds = torch.cat([language_model_inputs, inputs_embeds], dim=1)",
"Hi I think multimodal embeddings is something lacking in the current implementation, where we can't extract embeddings obtained by passing both text and image to the QFormer, infact the Qformer in HF doesn't even take text `input_ids` as input [here](https://github.com/huggingface/transformers/blob/66c240f3c950612fa05b2e14c85d4b86c88e473e/src/transformers/models/blip_2/modeling_blip_2.py#L1081 ) \r\n\r\nWhereas the original Qformer implementation did take text inputs as input_id [here](https://github.com/salesforce/LAVIS/blob/91c8e6863b4b02d7d75167e7d18037ef3a96c54b/lavis/models/blip2_models/Qformer.py#L804) , along with the image and this can be used to extract multimodal embeddings as done in the `extract_features` fn [here](https://github.com/salesforce/LAVIS/blob/f982acc73288408bceda2d35471a8fcf55aa04ca/lavis/models/blip2_models/blip2_qformer.py#L387)",
"@ayushtues Indeed, it seems that wasn't included when the model was first added to the library. @NielsRogge - was there a reason for not including this? \r\n\r\nIf there wasn't a specific reason - it seems like a useful addition :) @ayushtues would you be interested in opening a PR to add this? This would mean you get the github contribution for adding the feature.\r\n",
"A similar request for it is here: https://github.com/huggingface/transformers/issues/25300",
"I was working on integrating BlipDiffusion into diffusers https://github.com/huggingface/diffusers/pull/4388/, which also needs multimodal features. Made a local copy of Blip2Qformer and was modifying in this PR, but having the change integrated into HF will make it much cleaner\r\n\r\n",
"Great - let's add it into transformers then :) !",
"@youssefadr is picking this up as discussed in https://github.com/huggingface/transformers/issues/25300, happy to help him if needed ",
"@ayushtues Yes, I'll open a PR this week asap!",
"Hi @youssefadr \r\n\r\nI hope it is fine that I opened a draft PR #25612 to share some progress about multimodal features. I started to try to contribute to huggingface this week :)\r\n\r\nThe weights of the original blip2 itm model are converted into Blip2ForImageTextRetrieval.\r\nThe idea of adding Blip2ForImageTextRetrieval has not been discussed at all. wdyt?\r\n\r\nFeel free to use what I did, if it makes sense. \r\nPlease let me know if it makes sense for me to continue trying to implement Blip2ForImageTextRetrieval, maybe you are already working in this part, or maybe it is not really necessary to try to implement Blip2ForImageTextRetrieval.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,695 | 1,695 |
NONE
| null |
### System Info
linux, python 3.8+, pytorch '1.13.0+cu116'
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
N/A
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25245/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25244
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25244/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25244/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25244/events
|
https://github.com/huggingface/transformers/pull/25244
| 1,831,770,731 |
PR_kwDOCUB6oc5W744_
| 25,244 |
VQA task guide
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Addressed @NielsRogge's and @Steven's feedback. @amyeroberts please take a look :) "
] | 1,690 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
This PR adds a new Visual Question Answering task guide to the transformers docs:
fine-tuning ViLT, based on @NielsRogge 's [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViLT/Fine_tuning_ViLT_for_VQA.ipynb)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25244/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25244",
"html_url": "https://github.com/huggingface/transformers/pull/25244",
"diff_url": "https://github.com/huggingface/transformers/pull/25244.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25244.patch",
"merged_at": 1691584147000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25243
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25243/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25243/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25243/events
|
https://github.com/huggingface/transformers/issues/25243
| 1,831,740,024 |
I_kwDOCUB6oc5tLiJ4
| 25,243 |
RetNet model support
|
{
"login": "yoinked-h",
"id": 63889420,
"node_id": "MDQ6VXNlcjYzODg5NDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/63889420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoinked-h",
"html_url": "https://github.com/yoinked-h",
"followers_url": "https://api.github.com/users/yoinked-h/followers",
"following_url": "https://api.github.com/users/yoinked-h/following{/other_user}",
"gists_url": "https://api.github.com/users/yoinked-h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoinked-h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoinked-h/subscriptions",
"organizations_url": "https://api.github.com/users/yoinked-h/orgs",
"repos_url": "https://api.github.com/users/yoinked-h/repos",
"events_url": "https://api.github.com/users/yoinked-h/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoinked-h/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 5724035499,
"node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub",
"name": "Model on the Hub",
"color": "9CA0E9",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"cc @ArthurZucker @younesbelkada ",
"p.s. if google offered any bigger TPU's for TRC; i could train retnet-3b (the point at which retnet is better than regular transformers), but as of now; theres retnet_base (small) and retnet_medium (ill upload it when it gets good)",
"I am wondering if the original authors released the trained models?",
"as far as i know, no official pretrained models were released by microsoft; but the training code is on the torchscale repo, so thats how i am training the models",
"Cool model! But as long as we don't have official/ very good pretraining checkpoints, not really anything we can do! ",
"ah, understood, i'll try to get a good checkpoint; but for now, i assume i can close this and reopen when it finishes training",
"oops",
"https://huggingface.co/parsee-mizuhashi/retnet/tree/main\r\ntrained it on 1m steps, loss is around `4.2`, hope this is good enough for some inference code",
"My recommendation would be to put the model on the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models), which will help having a working code without going trough the hassle of all the review process! Then if the models is highly requested/has as lot of usage or has official released checkpoints then we'll add it in transformers! \r\nDoes that make sens for you @yoinked-h ? 🤗 ",
"If you implement it or link some useful code for training we could provide some computing power ",
"> My recommendation would be to put the model on the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models), which will help having a working code without going trough the hassle of all the review process! Then if the models is highly requested/has as lot of usage or has official released checkpoints then we'll add it in transformers! Does that make sens for you @yoinked-h ? 🤗\r\n\r\nyeah, i'll try to make the custom model scripts and push them to the hub\r\n\r\n> If you implement it or link some useful code for training we could provide some computing power\r\n\r\nthe training code is kind of buggy (doesnt work with TPU accelerate) but [here](https://github.com/microsoft/torchscale/tree/main/examples/fairseq), i also have a shell script which does most of the work for setup->training",
"I started an training of small (around 300m params) model with german data.\r\nIts HF compatible and should push the code to the hub too.",
"300m and 1300m models are training\r\nAfter finding a bug in learning rate scheduling the loss is decreasing again.\r\nThe text is grammatical okay but doesn't make sense right now.\r\nLooking forward to the new run 😁\r\nWill push the weights and code to the hub on Friday I think.",
"https://huggingface.co/flozi00/RetNet-300m-German\r\n\r\nMaybe I find some time to train larger models, for example 7b, when i am not ill anymore",
"https://huggingface.co/papers/2307.08621#64bff688661694889faecdb2\r\n\r\nWill be waiting for the release from Microsoft ",
"Hello everyone, Is there any better pre-trained model available now?",
"hey @yoinked-h , can you further assist me about how you manage to train a retnet model? I cant seem to manage it ? If possible can you share a python file or notebook ? Thank you so much in advance"
] | 1,690 | 1,695 | null |
CONTRIBUTOR
| null |
### Model description
RetNet / Retentive Networks is a new model *archetype* released by Microsoft; the research paper is [here](https://arxiv.org/pdf/2307.08621.pdf). As of now, there is *one* model for RetNet, [made by me](https://huggingface.co/parsee-mizuhashi/retnet-tiny-wikitext-undertrained), which is undertrained (`loss=8`!), and I am trying to make a second model on a larger arch.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[commit that has retnet training](https://github.com/microsoft/torchscale/commit/bf65397b26469ac9c24d83a9b779b285c1ec640b)
@donglixp was the main author of the commit and is cited on the paper
All code, including the model weights, is licensed under MIT
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25243/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25242
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25242/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25242/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25242/events
|
https://github.com/huggingface/transformers/pull/25242
| 1,831,600,184 |
PR_kwDOCUB6oc5W7UVu
| 25,242 |
In assisted decoding, pass model_kwargs to model's forward call (fix prepare_input_for_generation in all models)
|
{
"login": "sinking-point",
"id": 17532243,
"node_id": "MDQ6VXNlcjE3NTMyMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/17532243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinking-point",
"html_url": "https://github.com/sinking-point",
"followers_url": "https://api.github.com/users/sinking-point/followers",
"following_url": "https://api.github.com/users/sinking-point/following{/other_user}",
"gists_url": "https://api.github.com/users/sinking-point/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinking-point/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinking-point/subscriptions",
"organizations_url": "https://api.github.com/users/sinking-point/orgs",
"repos_url": "https://api.github.com/users/sinking-point/repos",
"events_url": "https://api.github.com/users/sinking-point/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinking-point/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@sinking-point the PR has \"WIP\" in the title -- is it still under development, or is it ready to review?",
"Not ready yet. Still have to fix more models and see what's breaking the other test. I've deprioritised this somewhat as it's quite time consuming, but I'll keep chipping away at it whenever I can.\n\nIf you need this done quickly, you're welcome to help - lmk and I'll add you as a collaborator on my branch.",
"Not urgent -- simply double-checking whether it was in need of a review or not :)",
"@gante This should be ready for review now. Thanks in advance.",
"@ArthurZucker @LysandreJik This PR is a big one and touches in a core piece of logic for all generative models, so I'm tagging 2 core maintainers.\r\n\r\n\r\n### Context\r\nAdvanced generation techniques (like assisted generation or [medusa](https://github.com/FasterDecoding/Medusa)) may generate more than one token per model forward pass. The original implementation of assisted generation had a lot of custom code, as it breaks one of the assumptions in the models' `prepare_inputs_for_generation` -- that only one token is generated per `forward` pass.\r\n\r\n### Solution\r\n@sinking-point has kindly put forward a proposal to get rid of the custom code in assisted generation. After iterating with me, the plan was to remove the assumption of one token per `forward` in `prepare_inputs_for_generation` -- the slicing therein should be done based on how many tokens do not have corresponding past KV values, and not simply taking the last set of inputs. This is the change this PR implements, as well as the removal of some custom assisted generation code. Needless to say, it is fully backwards compatible 😉 \r\n\r\n### Postface\r\nTo reiterate: this PR gets the green light from me in terms of logic 🟢, and it is a big contribution by @sinking-point. This PR is also important to future-proof our generative techniques -- we will be ready for new types of multiple-token-per-forward-pass strategies as a result of this PR.\r\n\r\nI'll be off the next few weeks, but I'm sure this PR will get a quick resolution 🤗 ",
"Thanks @gante , I'll take a look at your comments tomorrow 👍",
"Hi @sinking-point! Sorry for the delay - I'm taking over this PR from @gante because he's out on a well-deserved rest right now. Is everything ready for review, or are there any other issues you want to discuss with the team before we take a final look at it?",
"No worries @Rocketknight1 . Thanks for taking this on.\n\nThere's one discussion gante opened that I haven't resolved. Could you give your input on this? https://github.com/huggingface/transformers/pull/25242#discussion_r1326154797",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25242). All of your documentation changes will be reflected on that endpoint.",
"@sinking-point Replied to the last remaining issue up above!",
"Thanks @Rocketknight1 , I'll take a look on Monday",
"Some random tests started failing so I rebased onto main where they're fixed, but it looks like I have some more work to do now.",
"Ick, yeah. I'm not sure what's causing those test failures, but if you can't figure it out, let me know and I'll dive in!",
"@Rocketknight1 I think I've sorted it out. The hub test fails seem unrelated to my changes, and might have been a network failure or something. The failing Musicgen pipeline is broken on main as well.",
"Could you block new generative model merges until this PR is merged? Otherwise I will have to keep applying the changes to any new models that are added.",
"That's a big ask, unfortunately, but they shouldn't be too frequent! Let's just try to get this PR over the line quickly, so you don't have to keep rebasing and updating. I agree with you on the failing tests, btw, they look like they're unrelated.\r\n\r\nAre you happy for me to go ahead and merge this once the doc tests pass, or are there other things left to be cleaned up?",
"Should be ready to merge if you're happy with it. Thanks!",
"Looks like doc tests passed @Rocketknight1 , so as you said let's make this a priority before any more models are added.",
"Understood! It's quite a big PR since it touches so many models, but I'll try to get an internal review in the next few days.",
"> Could you block new generative model merges until this PR is merged?\r\n\r\nAlternatively, could you require that new generative models' `prepare_inputs_for_generation` method follows this PR? That is, instead assuming that if `past_key_values` is provided it covers all but the last position, you should calculate how many positions are remaining after `past_key_values` and keep those.",
"Hi @Rocketknight1 , any update on this?",
"Hey @sinking-point 👋 \r\n\r\nI'm back from holidays and I'll be doing a quick final check now. Assuming the check comes out positive, we'll tag a core maintainer to greenlight the merge. \r\n\r\nOur apologies for the slow process, it should be quick now 🤗 ",
"@sinking-point regarding the failing test: rebasing the PR should fix it, the bug was fixed last week :)",
"ping @LysandreJik -- this PR should be ready to be merged after it is rebased. Please read [this comment](https://github.com/huggingface/transformers/pull/25242#issuecomment-1718079624) for context :)",
"Thanks @gante :)",
"This seems ok to me but I'd like to ask @patrickvonplaten for his opinion and eventual approval given the experience maintaining this part of the code",
"Amazing, thank you @LysandreJik and @patrickvonplaten ",
"And thank you @sinking-point for this big contribution 🔥 💛 "
] | 1,690 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
# What does this PR do?
Previously, assisted decoding would ignore any additional kwargs that it didn't explicitly handle. This was inconsistent with other generation methods, which pass the model_kwargs through prepare_inputs_for_generation and forward the returned dict to the model's forward call.
The prepare_inputs_for_generation method needs to be amended in all models, as previously it only kept the last input ID when past_key_values was passed.
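For illustration, here is a minimal sketch of the slicing idea applied per model (a simplified stand-in, not the literal diff; the standalone function below is hypothetical): keep every token that does not yet have cached key/values, rather than always keeping only the last one.
```python
# Hedged sketch of the generalized slicing logic -- not the exact per-model implementation.
def prepare_inputs_for_generation(input_ids, past_key_values=None, **kwargs):
    if past_key_values is not None:
        past_length = past_key_values[0][0].shape[2]  # sequence length already covered by the cache
        if input_ids.shape[1] > past_length:
            # keep all tokens without cached key/values (may be more than one,
            # e.g. candidate tokens in assisted decoding)
            input_ids = input_ids[:, past_length:]
        else:
            # old behaviour as a fallback: only the last token is new
            input_ids = input_ids[:, -1:]
    return {"input_ids": input_ids, "past_key_values": past_key_values, **kwargs}
```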
Fixes #25020
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25242/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25242/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25242",
"html_url": "https://github.com/huggingface/transformers/pull/25242",
"diff_url": "https://github.com/huggingface/transformers/pull/25242.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25242.patch",
"merged_at": 1697023122000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25241
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25241/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25241/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25241/events
|
https://github.com/huggingface/transformers/issues/25241
| 1,831,599,250 |
I_kwDOCUB6oc5tK_yS
| 25,241 |
Bug in `PreTrainedModel.resize_token_embeddings` When Using DeepSpeed Zero Stage 3
|
{
"login": "sinamoeini",
"id": 4393595,
"node_id": "MDQ6VXNlcjQzOTM1OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4393595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinamoeini",
"html_url": "https://github.com/sinamoeini",
"followers_url": "https://api.github.com/users/sinamoeini/followers",
"following_url": "https://api.github.com/users/sinamoeini/following{/other_user}",
"gists_url": "https://api.github.com/users/sinamoeini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinamoeini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinamoeini/subscriptions",
"organizations_url": "https://api.github.com/users/sinamoeini/orgs",
"repos_url": "https://api.github.com/users/sinamoeini/repos",
"events_url": "https://api.github.com/users/sinamoeini/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinamoeini/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi! Would it possible for you to do `resize_token_embeddings` without DeepSpeed, save the model, and load the new model in the script where you use DeepSpeed.\r\n\r\nThis might be easier and quicker in terms of solution/workaround (if it works).",
"Hi, thanks for the suggestion. I have RCed this and have a nonhacky solution that works nicely. I will create a PR in the next two days to resolve this.",
"Thank you @sinamoeini for fixing this issue 🤗",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,690 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
### System Info
transformers version: 4.31.0
Platform: Linux 5.4.238-148.346.amzn2.x86_64
Python version: 3.8.10
Huggingface_hub version: 0.14.1
Safetensors version: 0.3.1
PyTorch version (GPU?): 2.0.1+cu117 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: yes
Using distributed or parallel set-up in script?: yes
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is a simple test to highlight this inconsistency. Here is a brief description of what the test script does:
* Starts deepspeed
* Loads a pretrained model
* Using gather, gets the weights of the first 50 embeddings on each device and stores them in a local tensor
* Reduce the number of embeddings to 50 by using `PreTrainedModel.resize_token_embeddings`
* Gets the embedding weights again (note that at this point they are not DeepSpeed parameters anymore)
* Checks the result on each device to see if it matches what we recorded earlier
The script is executed on a multi gpu node as follows
```
deepspeed test.py
```
Where the contents of `test.py` are
```
from transformers import (
TrainingArguments,
AutoModelForCausalLM,
set_seed,
)
import os
import deepspeed
def main() -> None:
set_seed(0)
# enable deepspeed stage 3
training_args = TrainingArguments(output_dir="dummy", remove_unused_columns=False, deepspeed="zero3.json")
# load pretrained model
model_path = "openlm-research/open_llama_3b"
model = AutoModelForCausalLM.from_pretrained(model_path)
# store first 50 embeddings locally in ref
with deepspeed.zero.GatheredParameters(list(model.lm_head.parameters())):
ref = model.lm_head.weight.data[:50, :].clone()
    # reduce embeddings to 50 using resize_token_embeddings
model.resize_token_embeddings(50)
# check if the embeddings match what we recorded earlier on each device
    # note that after resizing, resize_token_embeddings does not convert the embedding layers to ds parameters
rank = int(os.environ["RANK"])
sanity = all((ref == model.lm_head.weight.data).reshape(-1).tolist())
print(f"{rank}: sanity pass: {sanity}")
if __name__ == "__main__":
main()
```
And contents of `zero3.json` are
```
{
"train_micro_batch_size_per_gpu": "auto",
"train_batch_size": "auto",
"zero_allow_untested_optimizer": true,
"gradient_clipping": "auto",
"gradient_accumulation_steps": "auto",
"bfloat16": {
"enabled": true
},
"zero_optimization": {
"stage": 3,
"contiguous_gradients": false,
"overlap_comm": true,
"allgather_bucket_size": 1e8,
"reduce_bucket_size": 2e8,
"stage3_max_live_parameters": 0.7e8,
"stage3_param_persistence_threshold": 5e6,
"stage3_gather_fp16_weights_on_model_save": true
},
"activation_checkpointing": {
"partition_activations": false,
"contiguous_memory_optimization": false,
"number_checkpoints": 100,
"cpu_checkpointing": false
},
"optimizer": {
"type": "Adam",
"params": {
"weight_decay": "auto",
"betas": [
0.9,
0.999
],
"eps": "auto",
"lr": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
}
}
```
### Expected behavior
It is expected that each device returns True, but as of now only the 0th device has the correct value.
Note that in the majority of cases `deepspeed.initialize` is called by the Trainer right after `resize_token_embeddings`, where the rank 0 values are scattered and we get consistency. However, if an operation happens in between, there will be no consistency.
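A sketch of the workaround suggested in the discussion (resize the embeddings without DeepSpeed, save, then load the already-resized checkpoint in the DeepSpeed script); the output directory name is a placeholder:
```python
from transformers import AutoModelForCausalLM

# resize in a plain PyTorch run so no ZeRO-3 partitioning is involved
model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
model.resize_token_embeddings(50)
model.save_pretrained("open_llama_3b_resized")  # placeholder path

# later, in the DeepSpeed ZeRO stage 3 script, load the resized checkpoint instead:
# model = AutoModelForCausalLM.from_pretrained("open_llama_3b_resized")
```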
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25241/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25240
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25240/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25240/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25240/events
|
https://github.com/huggingface/transformers/pull/25240
| 1,831,586,487 |
PR_kwDOCUB6oc5W7RSu
| 25,240 |
Docs: introduction to generation with LLMs
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Do we really want to include non-text parts so prominently here? I think 99% of the users clicking on \"Generation\" expect to see only text generation and not anything multi-modal.",
"I would actually just call it \"text-generation\" and not \"autoregressive generation\"",
"@patrickvonplaten the non-text parts correspond to a tiny portion of the docs -- given than a significant number of issues in `transformers` come from models like Whisper or BLIP, the benefits may be huge. Pointers to things like quantization or generate classes also apply to them.\r\n\r\nThe decision to have a separate generate section is somewhat tied to including other modalities. If we include them, then it should be separate. If we don't, I still think generate deserves its own section.\r\n\r\nNote that this would be the only guide that is planned to include the non-LLM case :) ",
"> @patrickvonplaten the non-text parts correspond to a tiny portion of the docs -- given than a significant number of issues in `transformers` come from models like Whisper or BLIP, the benefits may be huge. Pointers to things like quantization or generate classes also apply to them.\r\n> \r\n> The decision to have a separate generate section is somewhat tied to including other modalities. If we include them, then it should be separate. If we don't, I still think generate deserves its own section.\r\n> \r\n> Note that this would be the only guide that is planned to include the non-LLM case :)\r\n\r\n\r\nSorry this might not be super in-line with what we discussed in our call earlier, but I think since we're in the task guide here we should stay in a \"task\"-format that the user expects, no? So more generally speaking: I'm not really looking for a \"auto-regressive generation\" task - I'm looking for \"Text generation\" or \"Speech recognition\" task. Auto-regressive generation is just the underlying method of different tasks but for someone that just looks at how to do a certain task they don't need to know about auto-regressive generation right away no? I think when explaining text-generation on the main page it's good to mention auto-regressive generation, but it shouldn't be the title IMO.\r\n\r\nTaking a step back here, I don't fully understand is what is the different between \"natural language processing\" and \"text-generation\"?\r\nTo me we should either:\r\n- a) Change NLP to NLU and move all text-generation based tasks like \"summarization\", \"translation\" and potentially copy \"question answering\" to \"Text Generation\"\r\n- b) Or text generation should just live under NLP\r\n\r\nI think a) is better to make text generation more prominent and then we can also add more sub sections like \"chat\", \"code generation\", maybe below.\r\n\r\nThe other things we talked about such as k/v cache, speeding up inference / prompting etc... could maybe have sections under \"Tutorials\" and we link from the different \"sub-generation\" tasks since they are related to all of them no? ",
"An update from an offline discussion with @patrickvonplaten:\r\n\r\nThe plan forward is to:\r\n1. Rework this PR into an LLM-only tutorial\r\n2. Sections: NLP will be reworded into NLU, and text generation tasks will go under the new section. At a later stage, we may include new tasks there, like code generation and chatbots. The new docs will fall in the following sections\r\n a. prompting -> tutorial section\r\n b. LLM performance -> performance and scalability section \r\n3. Since the use of `generate` with other modalities is very similar to LLMs, there will be a `<Tip>` block at the beginning of the other tasks pointing to this LLM tutorial (which in turn points to the whole generate ecosystem)\r\n\r\nThese changes will be built over a few PRs, focusing on speed, so we can then share with the rest of the team (and community) and use the feedback for further improvements 🚀 ",
"Sounds good to me!",
"@stevhliu suggestions included 👍 I've also fixed a few visual issues that we could see in the preview. If you agree, I'll merge this PR :)",
"Very cool doc & gifs!"
] | 1,690 | 1,693 | 1,691 |
MEMBER
| null |
# What does this PR do?
This PR adds a sort of landing page on `generate`, which was missing in our docs. This page is useful for beginners and experienced users alike -- it goes through the basic generate API for both LLMs and non-text tasks, common caveats, and ends with pointers for advanced exploration.
I expect that the consolidation of pointers for advanced exploration in a single page will massively improve the discoverability of our various generate-related efforts!
👉 best viewed in the doc preview, since there are gifs :)
Related issue: #24575
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25240/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25240/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25240",
"html_url": "https://github.com/huggingface/transformers/pull/25240",
"diff_url": "https://github.com/huggingface/transformers/pull/25240.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25240.patch",
"merged_at": 1691575760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25239
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25239/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25239/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25239/events
|
https://github.com/huggingface/transformers/pull/25239
| 1,831,467,512 |
PR_kwDOCUB6oc5W63ZA
| 25,239 |
Fix set of model parallel in the Trainer when no GPUs are available
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
Fixes how `self.is_model_parallel` is set in the Trainer when no GPUs are available.
Fixes #25236
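For context, a hedged sketch of the kind of guard this implies (the helper below is hypothetical and not the literal diff): when every weight is offloaded to CPU/disk, the device list derived from `hf_device_map` is empty, so `devices[0]` must not be indexed unconditionally.
```python
import torch

def infer_model_parallel(hf_device_map: dict, training_device: torch.device) -> bool:
    """Sketch: decide whether a device_map-dispatched model should count as model-parallel,
    guarding the case where every weight is offloaded to CPU/disk."""
    devices = [d for d in set(hf_device_map.values()) if d not in ("cpu", "disk")]
    if len(devices) > 1:
        return True
    if len(devices) == 1:
        return training_device != torch.device(devices[0])
    # empty device list (full CPU/disk offload): nothing to compare against
    return False
```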
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25239/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25239",
"html_url": "https://github.com/huggingface/transformers/pull/25239",
"diff_url": "https://github.com/huggingface/transformers/pull/25239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25239.patch",
"merged_at": 1690961340000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25238
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25238/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25238/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25238/events
|
https://github.com/huggingface/transformers/pull/25238
| 1,831,456,260 |
PR_kwDOCUB6oc5W608w
| 25,238 |
TF-OPT attention mask fixes
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"No response, but we should probably merge anyway. Pinging @amyeroberts for core maintainer review!",
"@amyeroberts Sorry for the delay, I lost track of this one!"
] | 1,690 | 1,694 | 1,694 |
MEMBER
| null |
With apologies for the delay, this PR should hopefully resolve the issues in #24637. @abb128 can you please try installing from this PR and verify if it resolves your issues? You can install from this PR with:
`pip install --upgrade git+https://github.com/huggingface/transformers.git@tf_opt_fixes`
Fixes #24637
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25238/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25238",
"html_url": "https://github.com/huggingface/transformers/pull/25238",
"diff_url": "https://github.com/huggingface/transformers/pull/25238.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25238.patch",
"merged_at": 1694003848000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25237
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25237/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25237/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25237/events
|
https://github.com/huggingface/transformers/pull/25237
| 1,831,440,926 |
PR_kwDOCUB6oc5W6xmX
| 25,237 |
Deal with nested configs better in base class
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker the `is_composition=True` is not necessary anymore except for configs which have no default for their subconfigs. And it should only be set to `True` in that case, otherwise in `to_diff_dict` we put too much stuff. I adapted the common test to check for that, will also adapt the doc.\r\n\r\nI'll also add a test for the instantiation of a subconfig with a dict.",
"This broke the yet unmerged IDEFICS https://github.com/huggingface/transformers/pull/24796\r\n\r\nI created a new issue to track https://github.com/huggingface/transformers/issues/25597",
"Isn't the `use_diff=True` in [`self.to_json_file(output_config_file, use_diff=True)`](https://github.com/huggingface/transformers/blob/c2123626aa3cd6c1ae4869ec9bc8869d1a408166/src/transformers/configuration_utils.py#L459) by default dangerous? This means that a saved config.json will break in case the default of a config in transformers is modified. This adds a lot of trust in transformers, while being greedy allows to purely rely on the json without having to rely on transformers defaults.\r\n\r\nFor example, this PR changes the way subconfigs are saved, being greedy before, to not being greedy now.\r\n\r\nSimilarly, `print(config)` is now much less informative than before:\r\n\r\nBefore:\r\n\r\n```\r\nconfig SamConfig {\r\n \"_commit_hash\": \"96a685daa603136baf75b975e0c854e199c07928\",\r\n \"_name_or_path\": \"fxmarty/sam-vit-tiny-random\",\r\n \"architectures\": [\r\n \"SamModel\"\r\n ],\r\n \"initializer_range\": 0.02,\r\n \"mask_decoder_config\": {\r\n \"_name_or_path\": \"\",\r\n \"add_cross_attention\": false,\r\n \"architectures\": null,\r\n \"attention_downsample_rate\": 2,\r\n \"bad_words_ids\": null,\r\n \"begin_suppress_tokens\": null,\r\n \"bos_token_id\": null,\r\n \"chunk_size_feed_forward\": 0,\r\n \"cross_attention_hidden_size\": null,\r\n \"decoder_start_token_id\": null,\r\n \"diversity_penalty\": 0.0,\r\n \"do_sample\": false,\r\n \"early_stopping\": false,\r\n \"encoder_no_repeat_ngram_size\": 0,\r\n \"eos_token_id\": null,\r\n \"exponential_decay_length_penalty\": null,\r\n \"finetuning_task\": null,\r\n \"forced_bos_token_id\": null,\r\n \"forced_eos_token_id\": null,\r\n \"hidden_act\": \"relu\",\r\n \"hidden_size\": 32,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"iou_head_depth\": 3,\r\n \"iou_head_hidden_dim\": 256,\r\n \"is_decoder\": false,\r\n \"is_encoder_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-06,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"min_length\": 0,\r\n \"mlp_dim\": 2048,\r\n \"model_type\": \"\",\r\n \"no_repeat_ngram_size\": 0,\r\n \"num_attention_heads\": 8,\r\n \"num_beam_groups\": 1,\r\n \"num_beams\": 1,\r\n \"num_hidden_layers\": 2,\r\n \"num_multimask_outputs\": 3,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_scores\": false,\r\n \"pad_token_id\": null,\r\n \"prefix\": null,\r\n \"problem_type\": null,\r\n \"pruned_heads\": {},\r\n \"remove_invalid_values\": false,\r\n \"repetition_penalty\": 1.0,\r\n \"return_dict\": true,\r\n \"return_dict_in_generate\": false,\r\n \"sep_token_id\": null,\r\n \"suppress_tokens\": null,\r\n \"task_specific_params\": null,\r\n \"temperature\": 1.0,\r\n \"tf_legacy_loss\": false,\r\n \"tie_encoder_decoder\": false,\r\n \"tie_word_embeddings\": true,\r\n \"tokenizer_class\": null,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torch_dtype\": null,\r\n \"torchscript\": false,\r\n \"transformers_version\": \"4.32.0.dev0\",\r\n \"typical_p\": 1.0,\r\n \"use_bfloat16\": false\r\n },\r\n \"model_type\": \"sam\",\r\n \"prompt_encoder_config\": {\r\n \"_name_or_path\": \"\",\r\n \"add_cross_attention\": false,\r\n \"architectures\": null,\r\n \"bad_words_ids\": null,\r\n \"begin_suppress_tokens\": null,\r\n \"bos_token_id\": null,\r\n \"chunk_size_feed_forward\": 0,\r\n \"cross_attention_hidden_size\": null,\r\n \"decoder_start_token_id\": null,\r\n \"diversity_penalty\": 0.0,\r\n \"do_sample\": false,\r\n \"early_stopping\": 
false,\r\n \"encoder_no_repeat_ngram_size\": 0,\r\n \"eos_token_id\": null,\r\n \"exponential_decay_length_penalty\": null,\r\n \"finetuning_task\": null,\r\n \"forced_bos_token_id\": null,\r\n \"forced_eos_token_id\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_size\": 32,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"image_embedding_size\": 64,\r\n \"image_size\": 1024,\r\n \"is_decoder\": false,\r\n \"is_encoder_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-06,\r\n \"length_penalty\": 1.0,\r\n \"mask_input_channels\": 16,\r\n \"max_length\": 20,\r\n \"min_length\": 0,\r\n \"model_type\": \"\",\r\n \"no_repeat_ngram_size\": 0,\r\n \"num_beam_groups\": 1,\r\n \"num_beams\": 1,\r\n \"num_point_embeddings\": 4,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_scores\": false,\r\n \"pad_token_id\": null,\r\n \"patch_size\": 16,\r\n \"prefix\": null,\r\n \"problem_type\": null,\r\n \"pruned_heads\": {},\r\n \"remove_invalid_values\": false,\r\n \"repetition_penalty\": 1.0,\r\n \"return_dict\": true,\r\n \"return_dict_in_generate\": false,\r\n \"sep_token_id\": null,\r\n \"suppress_tokens\": null,\r\n \"task_specific_params\": null,\r\n \"temperature\": 1.0,\r\n \"tf_legacy_loss\": false,\r\n \"tie_encoder_decoder\": false,\r\n \"tie_word_embeddings\": true,\r\n \"tokenizer_class\": null,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torch_dtype\": null,\r\n \"torchscript\": false,\r\n \"transformers_version\": \"4.32.0.dev0\",\r\n \"typical_p\": 1.0,\r\n \"use_bfloat16\": false\r\n },\r\n \"torch_dtype\": \"float32\",\r\n \"transformers_version\": null,\r\n \"vision_config\": {\r\n \"_name_or_path\": \"\",\r\n \"add_cross_attention\": false,\r\n \"architectures\": null,\r\n \"attention_dropout\": 0.0,\r\n \"bad_words_ids\": null,\r\n \"begin_suppress_tokens\": null,\r\n \"bos_token_id\": null,\r\n \"chunk_size_feed_forward\": 0,\r\n \"cross_attention_hidden_size\": null,\r\n \"decoder_start_token_id\": null,\r\n \"diversity_penalty\": 0.0,\r\n \"do_sample\": false,\r\n \"dropout\": 0.0,\r\n \"early_stopping\": false,\r\n \"encoder_no_repeat_ngram_size\": 0,\r\n \"eos_token_id\": null,\r\n \"exponential_decay_length_penalty\": null,\r\n \"finetuning_task\": null,\r\n \"forced_bos_token_id\": null,\r\n \"forced_eos_token_id\": null,\r\n \"global_attn_indexes\": [\r\n 2,\r\n 5,\r\n 8,\r\n 11\r\n ],\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_size\": 96,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"image_size\": 1024,\r\n \"initializer_factor\": 1.0,\r\n \"initializer_range\": 1e-10,\r\n \"intermediate_size\": 768,\r\n \"is_decoder\": false,\r\n \"is_encoder_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-06,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"min_length\": 0,\r\n \"mlp_dim\": 384,\r\n \"mlp_ratio\": 4.0,\r\n \"model_type\": \"\",\r\n \"no_repeat_ngram_size\": 0,\r\n \"num_attention_heads\": 1,\r\n \"num_beam_groups\": 1,\r\n \"num_beams\": 1,\r\n \"num_channels\": 3,\r\n \"num_hidden_layers\": 12,\r\n \"num_pos_feats\": 16,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_channels\": 32,\r\n \"output_hidden_states\": false,\r\n \"output_scores\": false,\r\n \"pad_token_id\": null,\r\n \"patch_size\": 16,\r\n \"prefix\": null,\r\n \"problem_type\": null,\r\n \"projection_dim\": 
64,\r\n \"pruned_heads\": {},\r\n \"qkv_bias\": true,\r\n \"remove_invalid_values\": false,\r\n \"repetition_penalty\": 1.0,\r\n \"return_dict\": true,\r\n \"return_dict_in_generate\": false,\r\n \"sep_token_id\": null,\r\n \"suppress_tokens\": null,\r\n \"task_specific_params\": null,\r\n \"temperature\": 1.0,\r\n \"tf_legacy_loss\": false,\r\n \"tie_encoder_decoder\": false,\r\n \"tie_word_embeddings\": true,\r\n \"tokenizer_class\": null,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torch_dtype\": null,\r\n \"torchscript\": false,\r\n \"transformers_version\": \"4.32.0.dev0\",\r\n \"typical_p\": 1.0,\r\n \"use_abs_pos\": true,\r\n \"use_bfloat16\": false,\r\n \"use_rel_pos\": true,\r\n \"window_size\": 14\r\n }\r\n}\r\n```\r\n\r\nNow:\r\n\r\n```\r\nSamConfig {\r\n \"_name_or_path\": \"fxmarty/sam-vit-tiny-random\",\r\n \"architectures\": [\r\n \"SamModel\"\r\n ],\r\n \"initializer_range\": 0.02,\r\n \"mask_decoder_config\": {\r\n \"hidden_size\": 32,\r\n \"model_type\": \"\"\r\n },\r\n \"model_type\": \"sam\",\r\n \"prompt_encoder_config\": {\r\n \"hidden_size\": 32,\r\n \"model_type\": \"\"\r\n },\r\n \"torch_dtype\": \"float32\",\r\n \"transformers_version\": \"4.33.0.dev0\",\r\n \"vision_config\": {\r\n \"dropout\": 0.0,\r\n \"hidden_size\": 96,\r\n \"initializer_factor\": 1.0,\r\n \"intermediate_size\": 768,\r\n \"mlp_dim\": 384,\r\n \"model_type\": \"\",\r\n \"num_attention_heads\": 1,\r\n \"num_pos_feats\": 16,\r\n \"output_channels\": 32,\r\n \"projection_dim\": 64\r\n }\r\n}\r\n```"
] | 1,690 | 1,692 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
This PR removes the need to override `to_dict` in model configs by implementing the whole logic in the base class. It also deals better with `to_diff_dict` for those configs, by analyzing the dict of sub-configs key by key and not as a whole. This also removes the `is_composition` flag from configs that do not need it: this flag is used to see if the config can be instantiated without any args (like `EncoderDecoderConfig`) but a CLIP config can be instantiated with `CLIPConfig()`.
Lastly this adds an option to set a custom subconfig using a dict instead of the config class, e.g. if someone wants to do:
```py
from transformers import AutoConfig
config = AutoConfig.from_pretrained("openai/clip-vit-base-patch16", text_config = dict(num_hidden_layers = 2))
```
this will now result in `config.text_config` being a proper `CLIPTextConfig` instead of a dict so loading a model like this:
```py
from transformers import CLIPModel
CLIPModel.from_pretrained("openai/clip-vit-base-patch16", text_config = dict(num_hidden_layers = 2))
```
will now work (well assuming shapes match so probably another text config to pass 😅 )
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25237/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25237/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25237",
"html_url": "https://github.com/huggingface/transformers/pull/25237",
"diff_url": "https://github.com/huggingface/transformers/pull/25237.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25237.patch",
"merged_at": 1691153770000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25236
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25236/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25236/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25236/events
|
https://github.com/huggingface/transformers/issues/25236
| 1,831,429,780 |
I_kwDOCUB6oc5tKWaU
| 25,236 |
Fails to create Trainer object. IndexError: list index out of range at --> torch.device(devices[0]);
|
{
"login": "nkgrush",
"id": 25302233,
"node_id": "MDQ6VXNlcjI1MzAyMjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25302233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nkgrush",
"html_url": "https://github.com/nkgrush",
"followers_url": "https://api.github.com/users/nkgrush/followers",
"following_url": "https://api.github.com/users/nkgrush/following{/other_user}",
"gists_url": "https://api.github.com/users/nkgrush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nkgrush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nkgrush/subscriptions",
"organizations_url": "https://api.github.com/users/nkgrush/orgs",
"repos_url": "https://api.github.com/users/nkgrush/repos",
"events_url": "https://api.github.com/users/nkgrush/events{/privacy}",
"received_events_url": "https://api.github.com/users/nkgrush/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Same issue as: https://discuss.huggingface.co/t/indexerror-on-devices-0-when-initializing-a-trainer/46410",
"I can fix that particular issue but you won't be able to actually train a model with CPU/disk offload, only do evaluation.",
"I figured out in my case removing\r\n`os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"`\r\nseem to fix the issue. \r\nBut it is still stange as an original tutorial I followed had it set and worked on colab https://colab.research.google.com/drive/1Jt9Rpd9J1mEnf5NXREYqM5hSj-UqL24M#scrollTo=o0BZjNgEBvXH\r\n",
"[Edit: it was caused by device_map=\"auto\" and is probably what you have meant in your reply. I managed to train by not using device_map=\"auto\". Thank you for your fast reply.]\r\n\r\nAlso then I instantly run into \r\n```\r\n\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n[<ipython-input-25-c52c20b5cf4b>](https://localhost:8080/#) in <cell line: 14>()\r\n 12 )\r\n 13 \r\n---> 14 trainer = transformers.Trainer(\r\n 15 model=model,\r\n 16 train_dataset=mapped_qa_dataset[\"train\"],\r\n\r\n13 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)\r\n 496 # Quantized models doesn't support `.to` operation.\r\n 497 if self.place_model_on_device and not getattr(model, \"is_quantized\", False):\r\n--> 498 self._move_model_to_device(model, args.device)\r\n 499 \r\n 500 # Force n_gpu to 1 to avoid DataParallel as MP will manage the GPUs\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _move_model_to_device(self, model, device)\r\n 725 \r\n 726 def _move_model_to_device(self, model, device):\r\n--> 727 model = model.to(device)\r\n 728 # Moving a model to an XLA device disconnects the tied weights, so we have to retie them.\r\n 729 if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, \"tie_weights\"):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in to(self, *args, **kwargs)\r\n 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\n 1144 \r\n-> 1145 return self._apply(convert)\r\n 1146 \r\n 1147 def register_full_backward_pre_hook(\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 
796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 795 def _apply(self, fn):\r\n 796 for module in self.children():\r\n--> 797 module._apply(fn)\r\n 798 \r\n 799 def compute_should_use_set_data(tensor, tensor_applied):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _apply(self, fn)\r\n 818 # `with torch.no_grad():`\r\n 819 with torch.no_grad():\r\n--> 820 param_applied = fn(param)\r\n 821 should_use_set_data = compute_should_use_set_data(param, param_applied)\r\n 822 if should_use_set_data:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in convert(t)\r\n 1141 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,\r\n 1142 non_blocking, memory_format=convert_to_format)\r\n-> 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\n 1144 \r\n 1145 return self._apply(convert)\r\n\r\nNotImplementedError: Cannot copy out of meta tensor; no data!\r\n\r\n```"
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
The system is Google Colab; transformers-related packages are installed from git.
```
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: using one GPU
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
!pip install -q datasets
!pip install git+https://github.com/microsoft/LoRA
!pip install git+https://github.com/huggingface/accelerate.git
!pip install -q git+https://github.com/huggingface/peft.git
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -i https://test.pypi.org/simple/ bitsandbytes
!pip install -q sentencepiece
import torch
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM
from peft import AutoPeftModelForCausalLM
MODEL_NAME = <some lora llama2 checkpoint>
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)  # missing from the original snippet; the map() call and data collator below need it
model = AutoPeftModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map='auto',
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
is_trainable=True
)
class CastOutputToFloat(nn.Sequential):
def forward(self, x): return super().forward(x).to(torch.float32)
model.lm_head = CastOutputToFloat(model.lm_head)
for param in model.parameters():
if param.ndim == 1:
# cast the small parameters (e.g. layernorm) to fp32 for stability
param.data = param.data.to(torch.float32)
model.gradient_checkpointing_enable()
model.enable_input_require_grads()
from datasets import load_dataset
qa_dataset = load_dataset("squad_v2")
def create_prompt(context, question, answer):
if len(answer["text"]) < 1:
answer = "Cannot Find Answer"
else:
answer = answer["text"][0]
prompt_template = f"### CONTEXT\n{context}\n\n### QUESTION\n{question}\n\n### ANSWER\n{answer}</s>"
return prompt_template
mapped_qa_dataset = qa_dataset.map(lambda samples: tokenizer(create_prompt(samples['context'], samples['question'], samples['answers'])))
import transformers
train_args = transformers.TrainingArguments(
per_device_train_batch_size=1,
gradient_accumulation_steps=1,
warmup_steps=100,
max_steps=100,
learning_rate=1e-3,
fp16=True,
logging_steps=1,
output_dir='outputs',
)
trainer = transformers.Trainer(
model=model,
train_dataset=mapped_qa_dataset["train"],
args=train_args,
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
```
Trainer init crashes here:
```
IndexError Traceback (most recent call last)
[<ipython-input-114-29de745c4455>](https://localhost:8080/#) in <cell line: 14>()
12 )
13
---> 14 trainer = transformers.Trainer(
15 model=model,
16 train_dataset=mapped_qa_dataset["train"],
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
380 self.is_model_parallel = True
381 else:
--> 382 self.is_model_parallel = self.args.device != torch.device(devices[0])
383
384 # warn users
IndexError: list index out of range
```
### Expected behavior
Trainer object should be constructed correctly.
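As a hedged workaround sketch based on the follow-up comment above (loading without `device_map="auto"` when the model is meant to be trained), something along these lines is expected to avoid the crash; `MODEL_NAME` is the same placeholder checkpoint as in the reproduction:
```python
import torch
from peft import AutoPeftModelForCausalLM

# No device_map="auto": Trainer will move the fully materialized model itself.
model = AutoPeftModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,
    is_trainable=True,
)
```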
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25236/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25235
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25235/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25235/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25235/events
|
https://github.com/huggingface/transformers/pull/25235
| 1,831,427,657 |
PR_kwDOCUB6oc5W6ur6
| 25,235 |
Docs: separate generate section
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,691 | 1,691 |
MEMBER
| null |
# What does this PR do?
A conclusion of the latest doc brainstorming session with @patrickvonplaten was that generate-related doc discoverability will become harder as we add more guides. The plan envisions a tutorial page and a few new developer guides -- in addition to the existing task pages, developer guide, and API reference.
As such, we converged on the need for a new doc section, under which most new docs will reside (see #24575 for the plan), with a focus on the first L of LLMs.
There is no section that would fit perfectly; this is (IMO) the best compromise: it contains a bit of "task", "developer guide", and "performance and scalability", but "task" is the most obvious place to look for this information 🤗
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25235/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25235/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25235",
"html_url": "https://github.com/huggingface/transformers/pull/25235",
"diff_url": "https://github.com/huggingface/transformers/pull/25235.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25235.patch",
"merged_at": 1691067117000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25234
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25234/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25234/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25234/events
|
https://github.com/huggingface/transformers/pull/25234
| 1,831,221,063 |
PR_kwDOCUB6oc5W6CQ4
| 25,234 |
Update bark doc
|
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @MKhalusova and @sanchit-gandhi , I've updated the docs according to your comments!\r\nThanks for the review!",
"Thanks @ylacombe for the recent round of changes!"
] | 1,690 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
Bark can be greatly optimized with a few lines of code, which is discussed and explained in more detail in this [blog post](https://github.com/huggingface/blog/pull/1353). To encourage adoption and promote the use of optimization, I've added a few lines to the Bark documentation to reflect this.
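For context, a hedged sketch of the kind of optimizations the blog post covers (the small checkpoint id and the `optimum` dependency are assumptions here, not part of this PR):
```python
import torch
from transformers import BarkModel

# Half precision on GPU.
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to("cuda")
# BetterTransformer attention kernels (requires the `optimum` package).
model = model.to_bettertransformer()
# Offload idle sub-models to CPU between generation stages.
model.enable_cpu_offload()
```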
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs
## Who can review?
@sanchit-gandhi , @sgugger, @MKhalusova, feel free to comment on what can be improved or made clearer!
many thanks!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25234/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25234",
"html_url": "https://github.com/huggingface/transformers/pull/25234",
"diff_url": "https://github.com/huggingface/transformers/pull/25234.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25234.patch",
"merged_at": 1691068119000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25233
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25233/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25233/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25233/events
|
https://github.com/huggingface/transformers/pull/25233
| 1,831,079,305 |
PR_kwDOCUB6oc5W5jRL
| 25,233 |
add generate method to SpeechT5ForTextToSpeech
|
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante as well",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sanchit-gandhi and @sgugger , thanks for the review! \r\n\r\nI would like to add `SpeechT5ForTextToSpeechWithHiFiGAN` in another PR if that's ok with you, since it requires additional tests, and since the changes made in the current PR are enough to use `SpeechT5ForTextToSpeech` with the incoming TTS pipeline!\r\n\r\nI can open an issue to talk about `SpeechT5ForTextToSpeechWithHiFiGAN` in the meantime if you want,\r\n\r\nthanks ",
"Yep good with me to add in a follow-up PR!"
] | 1,690 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
This simple PR aims at adding a `generate` method to `SpeechT5ForTextToSpeech`, which does exactly the same as `generate_speech`.
`generate_speech` is kept for backward compatibility.
The goal is to make `SpeechT5ForTextToSpeech` compatible with the [upcoming TTS pipeline](https://github.com/huggingface/transformers/pull/24952), which should not implement any special cases for older models. More on the matter in [this comment](https://github.com/huggingface/transformers/pull/24952#pullrequestreview-1556507240).
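A minimal usage sketch of what this enables (the checkpoint id and the zero speaker embedding are placeholders, not part of this PR):
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; a real x-vector is normally used

# With this PR, `generate` behaves like `generate_speech`, so a pipeline can call it uniformly.
spectrogram = model.generate(inputs["input_ids"], speaker_embeddings=speaker_embeddings)
```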
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- [x] Did you make sure to update the documentation with your changes?
- [x] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi and @sgugger , WDYT?
Thanks for your help!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25233/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25233",
"html_url": "https://github.com/huggingface/transformers/pull/25233",
"diff_url": "https://github.com/huggingface/transformers/pull/25233.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25233.patch",
"merged_at": 1691068328000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25232
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25232/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25232/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25232/events
|
https://github.com/huggingface/transformers/issues/25232
| 1,831,027,195 |
I_kwDOCUB6oc5tI0H7
| 25,232 |
AddedToken problems in LlamaTokenizer
|
{
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This is part of the `stripping` issue mentionned on the PR. As you can see the following works as expected:\r\n```python \r\n\r\n>>> dd = {\"additional_special_tokens\": [AddedToken(\"<bot>\", rstrip = False)]}\r\n\r\n>>> tokenizer2.add_special_tokens(dd)\r\n>>> t1 = tokenizer1.tokenize(txt)\r\n>>> t2 = tokenizer2.tokenize(txt)\r\n>>> print(t1)\r\n>>> print(t2)\r\n['▁hello', '<0x0A>', '<', 'bot', '>', 'How', '▁are', '▁you']\r\n['▁hello', '<0x0A>', '<bot>', '▁How', '▁are', '▁you']\r\n```\r\nThe call to `strip` also removed the `\\n`:\r\n```python \r\n>>> 'hello\\n'.strip()\r\n'hello'\r\n```\r\n",
"@ArthurZucker \r\nAfter reviewing the documentation on `tokenizers`, I noticed there appear to be two additional parameters concerning `AddedToken`: `single_word` and `normalized`. I attempted a few basic tests to better understand their behavior:\r\n```python\r\n tokenizer = LlamaTokenizer.from_pretrained(\r\n \"./resources/models/llama-2-7b-hf\", legacy=True\r\n )\r\n dd = {\"additional_special_tokens\": [AddedToken(\"<bot>\", single_word=True)]}\r\n tokenizer.add_special_tokens(dd)\r\n t1 = tokenizer.tokenize(\"How are you<bot>\")\r\n t2 = tokenizer.tokenize(\"How are you <bot>\")\r\n print(\"t1:\", t1)\r\n print(\"t2:\", t2)\r\n```\r\nThe output:\r\n```\r\nt1: ['▁How', '▁are', '▁you', '<bot>']\r\nt2: ['▁How', '▁are', '▁you', '▁', '<bot>']\r\n```\r\nIf I set `single_word` to False, shouldn't `<bot>` in `t1` fail to match? I couldn't find any code snippets or documentation that clearly define this parameter. Could you perhaps point me to some resources that elaborate on these parameters?\"\r\n\r\n\r\n\r\n",
"Again, this is also reported, `single_word` is not supported yet (in slow tokenizers) which is why you have no documentation 😉 this is also going to be adressed"
] | 1,690 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: macOS-13.5-x86_64-i386-64bit
- Python version: 3.9.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker This is a bug reported by my colleague, and I'm not sure whether it's in the list of #23909
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code:
```python
from transformers import LlamaTokenizer
txt = "hello\n" + "<bot>" + "How are you"
dd = {"additional_special_tokens": ["<bot>"]}
tokenizer1 = LlamaTokenizer.from_pretrained(
"./resources/models/llama-2-7b-hf", legacy=True, use_fast=False
)
tokenizer2 = LlamaTokenizer.from_pretrained(
"./resources/models/llama-2-7b-hf", legacy=True, use_fast=False
)
tokenizer2.add_special_tokens(dd)
t1 = tokenizer1.tokenize(txt)
t2 = tokenizer2.tokenize(txt)
print(t1)
print(t2)
```
Output:
```
t1: ['▁hello', '<0x0A>', '<', 'bot', '>', 'How', '▁are', '▁you']
t2: ['▁hello', '<bot>', '▁How', '▁are', '▁you']
```
### Expected behavior
Output:
```
t1: ['▁hello', '<0x0A>', '<', 'bot', '>', 'How', '▁are', '▁you']
t2: ['▁hello', '<0x0A>', '<bot>', '▁How', '▁are', '▁you']
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25232/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25231
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25231/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25231/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25231/events
|
https://github.com/huggingface/transformers/issues/25231
| 1,830,926,003 |
I_kwDOCUB6oc5tIbaz
| 25,231 |
Seq2SeqTrainer.evaluate and predict don't yield the right number of predictions when num_return_sequences > 1
|
{
"login": "antonioalegria",
"id": 49322,
"node_id": "MDQ6VXNlcjQ5MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/49322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonioalegria",
"html_url": "https://github.com/antonioalegria",
"followers_url": "https://api.github.com/users/antonioalegria/followers",
"following_url": "https://api.github.com/users/antonioalegria/following{/other_user}",
"gists_url": "https://api.github.com/users/antonioalegria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antonioalegria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antonioalegria/subscriptions",
"organizations_url": "https://api.github.com/users/antonioalegria/orgs",
"repos_url": "https://api.github.com/users/antonioalegria/repos",
"events_url": "https://api.github.com/users/antonioalegria/events{/privacy}",
"received_events_url": "https://api.github.com/users/antonioalegria/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"It looks more like something in `accelerate`, so cc @muellerzr .\r\n\r\nBut @antonioalegria \r\n\r\n> . It drops num_return_sequences - 1 sequences in the last batch\r\n\r\nCould you explain a bit more about this number? It doesn't seem corresponding to what you showed in the code snippet ..?",
"Apologies for not being clear.\r\n\r\nLet's say you are generating from 100 input samples, `num_return_sequences` = 2 and eval batch size is 16.\r\n\r\nYou will have 6 full batches of 16, each generating 32 sequences, and a final batch of size 4. This final batch comes out of `model.generate` with 8 generated sequences but 4 of them are discarded in `Accelerator.gather_for_metrics`.\r\n\r\nIf you had `num_return_sequences` = 3, then the final batch would have originally 12 generated sequences, with 8 of them discarded in the end.\r\n\r\nSo final batch will always have the number of generated sequences equal to the last batch size.",
"Thanks! This should be solved via #27025 "
] | 1,690 | 1,699 | 1,699 |
NONE
| null |
### System Info
transformers: 4.31.0
accelerate: 0.21.0
python: 2.11.3
env: macOS 13.4.1
### Who can help?
@gante, I think, because this is related to generation
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When calling evaluate or predict with `predict_with_generate` and `num_return_sequences` > 1, it does not pass the right number of sequences to the `compute_metrics` function. It drops `num_return_sequences - 1` sequences in the last batch, in `Accelerator.gather_for_metrics`.
This does not happen when calling `model.generate`, which behaves as expected.
To reproduce run the following script:
```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
DataCollatorForSeq2Seq, GenerationConfig,
Seq2SeqTrainer, Seq2SeqTrainingArguments,
T5Tokenizer,BatchEncoding, PreTrainedTokenizer)
from transformers.utils import ModelOutput
from transformers.generation.utils import BeamSearchEncoderDecoderOutput
from datasets import Dataset, load_dataset
INPUT_COLUMN = "question"
TARGET_COLUMN = "answer"
MAX_INPUT_LENGTH = 256
MAX_TARGET_LENGTH = 256
dataset = load_dataset("gsm8k", "main", split="train[:38]")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer=T5Tokenizer.from_pretrained("t5-small")
data_collator=DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors="pt", padding="longest")
gen_config = GenerationConfig.from_pretrained("t5-small")
gen_config._from_model_config = None
gen_config.max_length = None
gen_config.min_length = None
gen_config.max_new_tokens = 256
gen_config.min_new_tokens = 1
gen_config.num_beams = 5
training_args=Seq2SeqTrainingArguments('.', predict_with_generate=True)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=lambda x: {"samples": x[0].shape[0]},
)
def prepare_data(examples: Dataset) -> BatchEncoding:
# Remove pairs where at least one record is none
inputs = examples[INPUT_COLUMN]
targets = examples[TARGET_COLUMN]
model_inputs = tokenizer(inputs, max_length=MAX_INPUT_LENGTH, truncation=True)
labels = tokenizer(text_target=targets, max_length=MAX_TARGET_LENGTH, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
prepared_dataset = dataset.map(prepare_data, batched=True, remove_columns=[INPUT_COLUMN, TARGET_COLUMN])
dataset_len = len(prepared_dataset) # 38
gen_config.num_return_sequences = 1
metrics = trainer.evaluate(eval_dataset=prepared_dataset, num_beams = 5, generation_config=gen_config)
assert metrics["eval_samples"] == dataset_len
# THESE WILL FAIL -- THE NUMBER OF GENERATED SAMPLES WILL BE 70: 2*16 + 2*16 + 6 (last batch will discard the remaining 6 sequences)
gen_config.num_return_sequences = 2
metrics = trainer.evaluate(eval_dataset=prepared_dataset, num_beams = 5, generation_config=gen_config)
assert metrics["eval_samples"] == 2 * dataset_len # should be 76
# THESE WILL FAIL -- THE NUMBER OF GENERATED SAMPLES WILL BE 102: 3*16 + 3*16 + 6 (last batch will discard the remaining 32 sequences)
gen_config.num_return_sequences = 3
metrics = trainer.evaluate(eval_dataset=prepared_dataset, num_beams = 5, generation_config=gen_config)
assert metrics["eval_samples"] == 3 * dataset_len # should be 114
```
### Expected behavior
I would expect the `compute_metrics` function to receive a tensor of shape (samples * num_return_sequences, max_len). Currently it receives fewer rows, because part of the last batch's generated sequences are dropped in `Accelerator.gather_for_metrics`.
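To make the arithmetic concrete, a small illustration (the eval batch size of 8 is the `Seq2SeqTrainingArguments` default and is an assumption here):
```python
num_samples = 38
batch_size = 8                      # assumed default per_device_eval_batch_size
num_return_sequences = 2

full_batches, last_batch = divmod(num_samples, batch_size)
observed = full_batches * batch_size * num_return_sequences + last_batch  # last batch truncated to its input size
expected = num_samples * num_return_sequences

print(observed, expected)  # 70 76
```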
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25231/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25230
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25230/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25230/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25230/events
|
https://github.com/huggingface/transformers/pull/25230
| 1,830,887,645 |
PR_kwDOCUB6oc5W46qX
| 25,230 |
[`Detr`] Fix detr BatchNorm replacement issue
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25230). All of your documentation changes will be reflected on that endpoint."
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the current failing CI on #25077 / related failing jobs: https://app.circleci.com/pipelines/github/huggingface/transformers/69452/workflows/999f3686-2d9a-4324-bed6-1c858f4d8246/jobs/871127
In #25077 I decided to [add a property method `current_adapter`](https://github.com/younesbelkada/transformers/blob/peft-integration-attempt-2/src/transformers/adapters/peft_mixin.py#L156) to easily switch between adapters. This leads to a failing CI because `PreTrainedModel` will inherit from `AdapterMixin` (which will contain that attribute) and `replace_batch_norm` loops over `dir(model)` and calls `getattr(model, attr_str)`, therefore checking all available attributes, including `current_adapter`.
I could also change the property method to an instance method to avoid this issue, but I find it cleaner to do the module replacement in a pure PyTorch manner rather than using `dir(model)`, which can cause weird behaviours in the future.
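For illustration, a simplified sketch of the pure PyTorch style of replacement (an assumption of the approach, not the exact code of this PR; `nn.Identity` stands in for the frozen batch norm class):
```python
import torch.nn as nn

def replace_batch_norm(module: nn.Module) -> None:
    # Recurse over child modules only, so properties such as `current_adapter` are never triggered.
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.Identity())  # stand-in for DetrFrozenBatchNorm2d
        else:
            replace_batch_norm(child)
```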
Can confirm slow DETR / DETA integration tests pass with this change
cc @sgugger @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25230/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25230",
"html_url": "https://github.com/huggingface/transformers/pull/25230",
"diff_url": "https://github.com/huggingface/transformers/pull/25230.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25230.patch",
"merged_at": 1690885310000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25229
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25229/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25229/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25229/events
|
https://github.com/huggingface/transformers/pull/25229
| 1,830,859,164 |
PR_kwDOCUB6oc5W40mY
| 25,229 |
Move rescale dtype recasting to match torchvision ToTensor
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much, Amy!"
] | 1,690 | 1,690 | 1,690 |
COLLABORATOR
| null |
# What does this PR do?
The dtype casting of the input image when rescaling was moved in #25174 so that precision was kept when rescaling if desired. However, this broke equivalence tests with torchvision's `ToTensor` transform; cf. [this comment](https://github.com/huggingface/transformers/pull/24796#issuecomment-1657275333).
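As a rough sketch of the equivalence being restored (my assumption of the intent, not the library implementation): torchvision's `ToTensor` always yields float32 in [0, 1], so the recast has to happen after the multiplication:
```python
import numpy as np

def rescale(image: np.ndarray, scale: float = 1 / 255, dtype=np.float32) -> np.ndarray:
    rescaled = image * scale       # keep full precision during the multiplication
    return rescaled.astype(dtype)  # recast afterwards, matching ToTensor's float32 output

image = np.random.randint(0, 256, size=(3, 4, 4), dtype=np.uint8)
assert rescale(image).dtype == np.float32
```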
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25229/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25229",
"html_url": "https://github.com/huggingface/transformers/pull/25229",
"diff_url": "https://github.com/huggingface/transformers/pull/25229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25229.patch",
"merged_at": 1690889593000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25228
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25228/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25228/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25228/events
|
https://github.com/huggingface/transformers/issues/25228
| 1,830,856,427 |
I_kwDOCUB6oc5tIKbr
| 25,228 |
chatglm2 load_in_8bit=true can't reduce gpu memory when using transformer==4.31.0
|
{
"login": "zhaotyer",
"id": 89376832,
"node_id": "MDQ6VXNlcjg5Mzc2ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/89376832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaotyer",
"html_url": "https://github.com/zhaotyer",
"followers_url": "https://api.github.com/users/zhaotyer/followers",
"following_url": "https://api.github.com/users/zhaotyer/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaotyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaotyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaotyer/subscriptions",
"organizations_url": "https://api.github.com/users/zhaotyer/orgs",
"repos_url": "https://api.github.com/users/zhaotyer/repos",
"events_url": "https://api.github.com/users/zhaotyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaotyer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"ref:https://github.com/THUDM/ChatGLM2-6B/issues/163",
"cc @younesbelkada ",
"+1",
"Thanks, my feeling is that it is related with the issue described in https://github.com/huggingface/transformers/pull/25105 \r\nCan you try that version of transformers meanwhile and let me know if this fixes your issue?\r\n\r\n```bash\r\npip install -U git+https://github.com/ranchlai/transformers.git@fix_get_keys_to_not_convert\r\n```",
"> \r\n\r\nit's work,but i get other problem when i use git+https://github.com/ranchlai/transformers.git@fix_get_keys_to_not_convert , please ref: https://github.com/huggingface/transformers/issues/25197",
"Now #250105 is on main, you can install it with:\r\n\r\n```bash\r\npip install -U git+https://github.com/huggingface/transformers.git\r\n```\r\n\r\nI will close this issue as this issue is solved with the above PR. Feel free to re-open if you think that's not the case",
"> Now #250105 is on main, you can install it with:\r\n> \r\n> ```shell\r\n> pip install -U git+https://github.com/huggingface/transformers.git\r\n> ```\r\n> \r\n> I will close this issue as this issue is solved with the above PR. Feel free to re-open if you think that's not the case\r\n\r\nwhen i use pip install -U git+https://github.com/huggingface/transformers.git for https://github.com/huggingface/transformers/issues/25228 , this multiplied problem still exists,use transformer==4.31.0 https://github.com/huggingface/transformers/issues/25228 problem exists,So is there a version that solves both problems at the same time",
"@zhaotyer do you still face the same issue?"
] | 1,690 | 1,692 | 1,691 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM, TextIteratorStreamer
import transformers
from peft import PeftModel
import bitsandbytes as bnb
import torch
from threading import Thread, currentThread
import time
model = "/workspace/model-files/chatglm2"
model = AutoModelForCausalLM.from_pretrained(model, device_map='auto', trust_remote_code=True, load_in_8bit=True)
cls = bnb.nn.Linear8bitLt
print(model.get_memory_footprint())
for name, module in model.named_modules():
# print(name)
if isinstance(module, cls):
names = name.split('.')
print(names)
```
Regardless of whether `load_in_8bit` is set or not, the GPU memory usage is always 12487168064,
but with transformers==4.29.2 and `load_in_8bit=True` the GPU memory usage is 6776623168.
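A rough diagnostic, added as an assumption on my side rather than part of the report: if very few `Linear` layers were actually swapped for bitsandbytes' 8-bit layers, the footprint would stay at the fp16 value seen above. Reusing the `model` loaded in the snippet:
```python
import bitsandbytes as bnb
import torch.nn as nn

# Linear8bitLt subclasses nn.Linear, so compare exact types to count unconverted layers.
n_int8 = sum(isinstance(m, bnb.nn.Linear8bitLt) for m in model.modules())
n_fp = sum(type(m) is nn.Linear for m in model.modules())
print(f"8-bit linear layers: {n_int8}, unconverted linear layers: {n_fp}")
```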
### Expected behavior
The latest transformers version should work correctly, i.e. `load_in_8bit=True` should reduce GPU memory usage as it does in 4.29.2.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25228/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25227
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25227/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25227/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25227/events
|
https://github.com/huggingface/transformers/pull/25227
| 1,830,784,393 |
PR_kwDOCUB6oc5W4k0x
| 25,227 |
resolving zero3 init when using accelerate config with Trainer
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,690 | 1,690 | 1,690 |
CONTRIBUTOR
| null |
# What does this PR do?
1. Fixes https://github.com/huggingface/accelerate/issues/1801
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25227/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25227/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25227",
"html_url": "https://github.com/huggingface/transformers/pull/25227",
"diff_url": "https://github.com/huggingface/transformers/pull/25227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25227.patch",
"merged_at": 1690969048000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25226
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25226/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25226/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25226/events
|
https://github.com/huggingface/transformers/pull/25226
| 1,830,767,603 |
PR_kwDOCUB6oc5W4hSN
| 25,226 |
Add offline mode for agents
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm getting an error:\r\n\r\n```\r\nValueError: image-transformation is not implemented on the Hub.\r\n```\r\n\r\nIt's coming from ```_setup_default_tools``` called from the ```__init__```.\r\n\r\nIt's because of the for loop that check ```HUGGINGFACE_DEFAULT_TOOLS_FROM_HUB```.",
"Thanks for the check! Could you try again with the updated branch?",
"It's working great!\r\nThank you!"
] | 1,690 | 1,691 | 1,691 |
COLLABORATOR
| null |
# What does this PR do?
This PR adds a check in the remote tools setup to bypass it when Transformers is in offline mode.
Fixes #25223
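A minimal usage sketch of what this enables (the endpoint URL is an example taken from the agents docs; the exact check added by this PR is not reproduced here):
```python
import os

os.environ["TRANSFORMERS_OFFLINE"] = "1"  # set before importing transformers

from transformers import HfAgent

# With offline mode on, the default tools setup no longer tries to fetch remote tools from the Hub.
agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/bigcode/starcoder")
```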
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25226/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25226",
"html_url": "https://github.com/huggingface/transformers/pull/25226",
"diff_url": "https://github.com/huggingface/transformers/pull/25226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25226.patch",
"merged_at": 1691153758000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25225
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25225/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25225/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25225/events
|
https://github.com/huggingface/transformers/issues/25225
| 1,830,737,244 |
I_kwDOCUB6oc5tHtVc
| 25,225 |
[Bis] Adding new tokens while preserving tokenization of adjacent tokens
|
{
"login": "Madjakul",
"id": 37739377,
"node_id": "MDQ6VXNlcjM3NzM5Mzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/37739377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Madjakul",
"html_url": "https://github.com/Madjakul",
"followers_url": "https://api.github.com/users/Madjakul/followers",
"following_url": "https://api.github.com/users/Madjakul/following{/other_user}",
"gists_url": "https://api.github.com/users/Madjakul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Madjakul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Madjakul/subscriptions",
"organizations_url": "https://api.github.com/users/Madjakul/orgs",
"repos_url": "https://api.github.com/users/Madjakul/repos",
"events_url": "https://api.github.com/users/Madjakul/events{/privacy}",
"received_events_url": "https://api.github.com/users/Madjakul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! This has already been answered, and is a duplicate of #14770. Will be fixed by #23909. \r\n"
] | 1,690 | 1,690 | 1,690 |
NONE
| null |
### System Info
* `transformers` version: 4.31
* Platform: Linux [...] 5.19.0-50-generic 50-Ubuntu x86_64 GNU/Linux
* Python version: 3.10.12
* Huggingface_hub version: 0.16.4
* PyTorch version (GPU?): 2.0.1+cu118 (True)
* Using GPU in script?: No
* Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This issue is related to [this HuggingFace post on the official forum](https://discuss.huggingface.co/t/adding-new-tokens-while-preserving-tokenization-of-adjacent-tokens/12604), hence the similar title, and to my knowledge, no answer was given as to whether this is the normal tokenizer behavior. I ran into the same problem as the original poster while trying to tokenize a sentence after adding new tokens: the tokens adjacent to the newly added ones are not produced with their preceding escape symbol.
```py
>>> import transformers
>>> tok = transformers.RobertaTokenizer.from_pretrained("roberta-base")
>>> lotr_sent = 'Aragorn told Frodo to mind Lothlorien'
>>> tok.convert_ids_to_tokens(tok(lotr_sent)['input_ids'])
['<s>', 'Ar', 'ag', 'orn', 'Ġtold', 'ĠFro', 'do', 'Ġto', 'Ġmind', 'ĠL', 'oth', 'lor', 'ien', '</s>']
>>> tok.add_tokens(['Aragorn', 'Frodo', 'Lothlorien'])
3
>>> tok.convert_ids_to_tokens(tok(lotr_sent)['input_ids'])
['<s>', 'Aragorn', 'told', 'Frodo', 'to', 'Ġmind', 'Lothlorien', '</s>']
```
### Expected behavior
The tokens `told`, `Frodo`, `to` and `Lothlorien` should be preceded by a `Ġ` character if I am not mistaken; e.g.:
```py
>>> import transformers
>>> tok = transformers.RobertaTokenizer.from_pretrained("roberta-base")
>>> lotr_sent = 'Aragorn told Frodo to mind Lothlorien'
>>> tok.convert_ids_to_tokens(tok(lotr_sent)['input_ids'])
['<s>', 'Ar', 'ag', 'orn', 'Ġtold', 'ĠFro', 'do', 'Ġto', 'Ġmind', 'ĠL', 'oth', 'lor', 'ien', '</s>']
>>> tok.add_tokens(['Aragorn', 'Frodo', 'Lothlorien'])
3
>>> tok.convert_ids_to_tokens(tok(lotr_sent)['input_ids'])
['<s>', 'Aragorn', 'Ġtold', 'ĠFrodo', 'Ġto', 'Ġmind', 'ĠLothlorien', '</s>']
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25225/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25224
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25224/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25224/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25224/events
|
https://github.com/huggingface/transformers/pull/25224
| 1,830,639,403 |
PR_kwDOCUB6oc5W4Fx4
| 25,224 |
🚨🚨🚨 [`SPM`] Finish fix spm models 🚨🚨🚨
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Will fix the prefixing of special tokens! ",
"@ArthurZucker any update to this PR? ",
"Hey @faaany, I am updating it right now! ",
"Reverted the changes as adding proper support for `add_prefix_space` is actully questionable. The usecase is already wrong as you should be reverse looking for ids not strings. See #24846 (adding prefix space was almost never done properly as the decoders were not updated as well)",
"pinging @sgugger for a final review !",
"Will do so in a follow up PR! ",
"I have added the legacy=True as \"enc = AutoTokenizer.from_pretrained(model_path, legacy = True, use_fast=False)\" but I have still gotten an error, which is a \"Not a string\" error. Anyone can give a hint what is going on here?\r\n\r\n<img width=\"1257\" alt=\"截屏2023-08-20 21 40 35\" src=\"https://github.com/huggingface/transformers/assets/32901895/840e31b4-0b56-4615-ba5a-beb31fe6c376\">\r\n\r\n",
"@zhacmsra the issue is in loading the vocabulary file, not 100% sure it's related to this. Can you open a new Issue with a reproducer please? ",
"> \r\n\r\nHi Arthur, you are correct. I figured out that it is not related to this PR. The networking problem broke the input model and result in error inputs. Sorry for this trouble. Thank you for the kind and timely response."
] | 1,690 | 1,695 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Modifies `Llama` and `T5`; other sentencepiece-based tokenizers will follow.
The previous behaviour is still available with `tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", legacy=True)`.
## The goal of `transformers`'s wrapping around `sentencepiece`
To clarify, we want to:
- be able to choose the behaviour of the special/added tokens. This means handling the `stripping`, encoding and decoding of such tokens
- allow users to easily add new tokens, with `tokenizer.add_tokens(...)`, instead of having to load the protobuf file, modify the vocab, save it and reload the sentencepiece processor.
## The current and past problems with our wrappers
Let's use both T5 and Llama as reference models. Currently, we do not mimic the behaviour of adding words to the actual `sentencepiece` vocabulary. This is an issue for anyone expecting (and rightfully so) that adding tokens does not modify the behaviour of the model.
### Adding a word to sentencepiece's vocab
This can be done using: ([source](https://github.com/google/sentencepiece/issues/121#issuecomment-400362011))
```python
>>> # wget https://huggingface.co/huggyllama/llama-7b/resolve/main/tokenizer.model
>>> from sentencepiece import sentencepiece_model_pb2 as model
>>> import sentencepiece as spm
>>> sp_model = model.ModelProto()
>>> sp_model.ParseFromString(open('tokenizer.model', 'rb').read())
>>> token = "your_token"
>>> sp_model.pieces.add(piece=f"{token}",score=0.0,type=model.ModelProto.SentencePiece.USER_DEFINED,)
>>> with open('new.model', 'wb') as f:
... f.write(sp_model.SerializeToString())
```
then load the `sp_model`:
```python
>>> sp_model = spm.SentencePieceProcessor()
>>> sp_model.Load('new.model')
```
Then, try the following :
```python
>>> sp_model.encode("your_tokenHello", out_type=str)
["_", "your_token", "Hello"]
```
### Adding a word to a `PreTrainedTokenizer`
This can be done using `tokenizer.add_tokens(["your_token"])`. It is a lot simpler indeed.
But the output you will get is:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", legacy = True, use_fast = False)
>>> tokenizer.add_tokens(["your_token"])
>>> tokenizer.tokenize("your_tokenHello")
["your_token", "_Hello"]
>>> tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", legacy = False, use_fast = False)
>>> tokenizer.add_tokens(["your_token"])
>>> tokenizer.tokenize("your_tokenHello")
["your_token", "Hello"]
```
This is because we always split the text on the added tokens and give the text on the left and right to the `sentencepiece` model. But most sentencepiece models add a prefix space `_` (the `SPIECE_UNDERLINE` character). Thus, when the `transformers` tokenizer splits `"your_tokenHello"`, it encodes `your_token` with the `tokenizer.added_tokens_encoder` (so no prefix space is added), and then encodes `Hello` with the sentencepiece model, which adds a prefix space and thus outputs `_Hello`.
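To make that split-then-encode behaviour concrete, here is a rough standalone illustration (not the actual `transformers` code path), assuming the llama `tokenizer.model` downloaded above and a recent `sentencepiece`; the printed pieces are indicative and depend on the vocabulary:
```python
# Simplified illustration of the splitting described above: the added token is
# handled outside of sentencepiece, and only the remaining text goes through
# the sentencepiece model, which re-adds its automatic prefix space.
import sentencepiece as spm

sp_model = spm.SentencePieceProcessor()
sp_model.Load("tokenizer.model")  # the llama tokenizer.model downloaded earlier

text = "your_tokenHello"
added_token = "your_token"
left, right = text.split(added_token)  # "" and "Hello"; the empty left part is ignored here
print([added_token] + sp_model.encode(right, out_type=str))
# typically: ['your_token', '▁Hello'] -> the spurious '▁' comes from sentencepiece
```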
Other mismatches:
```python
# t5-base tokenizer
>>> tokenizer.encode("<extra_id_0>. Hello", add_special_tokens = False)
[32099, 3, 5, 8774] # ['<extra_id_0>', ' ▁', '.', '▁Hello']
# seqio.SentencePieceVocabulary(vocab_path, extra_ids = 300)
>>> processor.encode("<extra_id_0>. Hello")
[32099, 5, 8774] # ['<extra_id_0>', '.', '▁Hello']
```
TL;DR: this shows the only way we can actually and properly handle added tokens with sentencepiece. We have to disable the automatic prefix addition and always encode with a token that is part of the vocab at the beginning, so that the first token is encoded properly whether it has a prefix space or not. Yes, this is dirty and sad, but the previous fix (removing the extra space) was cleaner yet had corner cases (#25176).
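As a minimal sketch of that idea only (not the exact code merged in this PR; which token gets prepended and when the trick is applied differ in the real implementation):
```python
# Sketch: prepend a string already known to the sentencepiece model, encode,
# then drop the pieces belonging to that dummy prefix, so that `text` is
# encoded without sentencepiece's automatic leading '▁'.
import sentencepiece as spm

sp_model = spm.SentencePieceProcessor()
sp_model.Load("tokenizer.model")

def tokenize_without_auto_prefix(text, dummy="<unk>"):
    prefix_len = len(sp_model.encode(dummy, out_type=str))
    pieces = sp_model.encode(dummy + text, out_type=str)
    # Assumes the dummy's pieces are unchanged when `text` is appended, which
    # holds for typical vocabularies but is part of why this approach is "dirty".
    return pieces[prefix_len:]

print(tokenize_without_auto_prefix("Hello"))   # pieces for "Hello" with no leading '▁'
print(tokenize_without_auto_prefix(" Hello"))  # the explicit space is kept as '▁'
```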
### The same issue happens with fast tokenizers:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast = True)
>>> tokenizer.add_tokens(["your_token"])
>>> tokenizer.tokenize("your_tokenHello")
["_your_token", "Hello"]
>>> tokenizer.add_tokens(["your_token_special"], True)
>>> tokenizer.tokenize("your_token_specialHello")
['your_token_special', '▁Hello']
```
### Another issue 😈
So, here, the issue is that before the special tokens, even though neither `rstrip` nor `lstrip` applies (both are set to `False`), we have very strange behaviours:
```python
>>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast = True)
>>> tokenizer.tokenize("<s>inform<s>")
# prefix space is eaten
['<s>', '▁inform', '<s>']
>>> tokenizer.tokenize("<s>inform <s>")
# prefix space is not eaten for the second <s>
['<s>', '▁inform', '▁', '<s>']
>>> tokenizer.tokenize(" <s>inform <s>")
# prefix space is not eaten for the second <s>
['▁▁', '<s>', '▁inform', '▁', '<s>']
>>> tokenizer.tokenize(" <s>inform <s> ")
# prefix space is not eaten for the first <s>, extra space added (known)
['▁▁', '<s>', '▁inform', '▁', '<s>', '▁▁']
>>> tokenizer.tokenize("inform <s> ")
# prefix space is added to inform
['▁inform', '▁', '<s>', '▁▁']
```
Note that `tokenizer.convert_tokens_to_ids("▁▁") = 259` while `tokenizer.convert_tokens_to_ids("▁") = 29871`
Also, if we add a prefix space to special tokens at the beginning, we are probably going to break a lot of things.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25224/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25224/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25224",
"html_url": "https://github.com/huggingface/transformers/pull/25224",
"diff_url": "https://github.com/huggingface/transformers/pull/25224.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25224.patch",
"merged_at": 1692284886000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25223
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25223/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25223/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25223/events
|
https://github.com/huggingface/transformers/issues/25223
| 1,830,634,404 |
I_kwDOCUB6oc5tHUOk
| 25,223 |
Agent trying to load remote tools when being offline
|
{
"login": "Romainlg29",
"id": 31577471,
"node_id": "MDQ6VXNlcjMxNTc3NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/31577471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Romainlg29",
"html_url": "https://github.com/Romainlg29",
"followers_url": "https://api.github.com/users/Romainlg29/followers",
"following_url": "https://api.github.com/users/Romainlg29/following{/other_user}",
"gists_url": "https://api.github.com/users/Romainlg29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Romainlg29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Romainlg29/subscriptions",
"organizations_url": "https://api.github.com/users/Romainlg29/orgs",
"repos_url": "https://api.github.com/users/Romainlg29/repos",
"events_url": "https://api.github.com/users/Romainlg29/events{/privacy}",
"received_events_url": "https://api.github.com/users/Romainlg29/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Romainlg29 \r\n\r\nCould you provide a complete code snippet instead of definitions like `model = ...`. Thanks in advance!",
"> Hi @Romainlg29\r\n> \r\n> Could you provide a complete code snippet instead of definitions like `model = ...`. Thanks in advance!\r\n\r\nHi,\r\n\r\nIt's the following.\r\n\r\n```\r\nimport os\r\nos.environ['TRANSFORMERS_OFFLINE'] = '1'\r\n\r\nfrom transformers import LocalAgent, AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"tiiuae/falcon-7b-instruct\", trust_remote_code=True)\r\ntokenizer = AutoTokenizer.from_pretrained(\"tiiuae/falcon-7b-instruct\")\r\n\r\nagent = LocalAgent(model=model, tokenizer=tokenizer) # Error here\r\n\r\nagent.run(\"my query\");\r\n```",
"cc our agent @sgugger 😆 ",
"Agents do not work in offline mode since the prompts are fetched online and we have some tools defined on the Hub only.",
"If not too much work, probably not to try to connect if `os.environ['TRANSFORMERS_OFFLINE'] = '1'` and raise an error directly with a more specific message?",
"> Agents do not work in offline mode since the prompts are fetched online and we have some tools defined on the Hub only.\r\n\r\nCan't we have an offline mode for the agent, where we only load our tools through additional_tools and using a custom prompt ?",
"@Romainlg29 You can load your tools via `additional_tools`, but the default tools are still loaded. We could add some guards around that in the future to not try to load tools from the Hub in offline mode, but it is not supported now.",
"Drafted a PR to add this, could you try the PR linked above? I believe it should work in offline mode as long as you have all the necessary models in the cache, and either pass custom prompts or also have the prompts in the cache. It will ignore remote tools.",
"> Drafted a PR to add this, could you try the PR linked above? I believe it should work in offline mode as long as you have all the necessary models in the cache, and either pass custom prompts or also have the prompts in the cache. It will ignore remote tools.\r\n\r\nOk, I'm going on that."
] | 1,690 | 1,691 | 1,691 |
NONE
| null |
### System Info
Transformers 4.31
Python 3.11.4
Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code:
```python
import os
os.environ['TRANSFORMERS_OFFLINE'] = '1'
from transformers import LocalAgent, AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
agent = LocalAgent(model=model, tokenizer=tokenizer) # Error here
agent.run("my query");
```
Error:
```
Max retries exceeded with url: /api/spaces?author=huggingface-tools
```
### Expected behavior
To not access the remote tools.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25223/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25222
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25222/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25222/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25222/events
|
https://github.com/huggingface/transformers/issues/25222
| 1,830,611,419 |
I_kwDOCUB6oc5tHOnb
| 25,222 |
config.json file not available
|
{
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The error on the shared colab is \r\n```python \r\nOSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or \r\nlog in with `huggingface-cli login` and pass `use_auth_token=True`.\r\n```\r\nwhen you call \r\n```python \r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n config.base_model_name_or_path,\r\n return_dict=True,\r\n quantization_config=bnb_config,\r\n device_map=\"auto\",\r\n trust_remote_code=True,\r\n)\r\n```\r\nAs you can see [here](https://huggingface.co/Andyrasika/qlora-2-7b-andy/blob/main/adapter_config.json#L2) the `config.base_model_name_or_path` is not properly set. \r\nIf the script was provided in the PEFT library , pinging @younesbelkada to transfer the issue there and update if needed. Otherwise you should make sure the base model path is defined / use a correct path to a checkpoint",
"> The error on the shared colab is\r\n> \r\n> ```python\r\n> OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\n> If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or \r\n> log in with `huggingface-cli login` and pass `use_auth_token=True`.\r\n> ```\r\n> \r\n> when you call\r\n> \r\n> ```python\r\n> model = AutoModelForCausalLM.from_pretrained(\r\n> config.base_model_name_or_path,\r\n> return_dict=True,\r\n> quantization_config=bnb_config,\r\n> device_map=\"auto\",\r\n> trust_remote_code=True,\r\n> )\r\n> ```\r\n> \r\n> As you can see [here](https://huggingface.co/Andyrasika/qlora-2-7b-andy/blob/main/adapter_config.json#L2) the `config.base_model_name_or_path` is not properly set. If the script was provided in the PEFT library , pinging @younesbelkada to transfer the issue there and update if needed. Otherwise you should make sure the base model path is defined / use a correct path to a checkpoint\r\n\r\nThank you for your instant response(i have already authenticated huggingface token initially while loading the libraries). Any advice on how to address the issue in the notebook shared? @ArthurZucker @younesbelkada ",
"Closing as it is an exact duplicate of #25215. \r\nFeel free to ask your question on the [forum](https://discuss.huggingface.co/), there are no problem on our side, see @younesbelkada's answers.",
"Hi, in my case this problem occurred when I fine tunned already fine tunned model then in `adapter_config.json` I've got `base_model_name_or_path` null instead of path to the base model."
] | 1,690 | 1,691 | 1,691 |
NONE
| null |
### System Info
colab
notebook: https://colab.research.google.com/drive/118RTcKAQFIICDsgTcabIF-_XKmOgM-cc?usp=sharing
### Who can help?
@ArthurZucker @youn
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
While running the notebook with Andyrasika/qlora-2-7b-andy, I get the following error (note: adapter_config.json is already there):
```
Andyrasika/qlora-2-7b-andy does not appear to have a file named config.json. Checkout 'https://huggingface.co/Andyrasika/qlora-2-7b-andy/7a0facc5b1f630824ac5b38853dec5e988a5569e' for available files.
```
### Expected behavior
same as above
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25222/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25221
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25221/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25221/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25221/events
|
https://github.com/huggingface/transformers/issues/25221
| 1,830,471,138 |
I_kwDOCUB6oc5tGsXi
| 25,221 |
[BUG REPORT] inconsistent inference results between batch of samples and a single sample in BLIP / BLIP2
|
{
"login": "xk-huang",
"id": 33593707,
"node_id": "MDQ6VXNlcjMzNTkzNzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33593707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xk-huang",
"html_url": "https://github.com/xk-huang",
"followers_url": "https://api.github.com/users/xk-huang/followers",
"following_url": "https://api.github.com/users/xk-huang/following{/other_user}",
"gists_url": "https://api.github.com/users/xk-huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xk-huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xk-huang/subscriptions",
"organizations_url": "https://api.github.com/users/xk-huang/orgs",
"repos_url": "https://api.github.com/users/xk-huang/repos",
"events_url": "https://api.github.com/users/xk-huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/xk-huang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada , but @xk-huang Could you first try all the suggestions in [Reproducibility](https://pytorch.org/docs/stable/notes/randomness.html) 🙏 Thanks a lot.\r\n\r\nAlso\r\n\r\n```\r\n# `False` is already the default\r\ntorch.backends.cuda.matmul.allow_tf32 = False\r\n\r\n# The flag below controls whether to allow TF32 on cuDNN. This flag defaults to True.\r\ntorch.backends.cudnn.allow_tf32 = False\r\n```",
"Thanks for your kind advice! @ydshieh \r\n\r\nI have already adopted the reproducibility suggestions in Torch documents by setting `transformers.enable_full_determinism(SEED)`. After I turn off `torch.backends.cudnn.allow_tf32`, the differences are largely reduced. Here is the comparison:\r\n\r\n```\r\nModel: Salesforce/blip-image-captioning-base with BlipForConditionalGeneration, using cuda device\r\nMapping None (type <class 'transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput'>):\r\n diff of loss (shape=torch.Size([])): 1.9073486328125e-06\r\n diff of decoder_logits (shape=torch.Size([1, 17, 30524])): 8.58306884765625e-06\r\n diff of image_embeds (shape=torch.Size([1, 577, 768])): 0.0\r\n diff of last_hidden_state (shape=torch.Size([1, 577, 768])): 0.0\r\n```\r\n\r\nI am wondering whether this level of error is acceptable. ",
"Glad it works 🚀 !\r\n\r\nI would say with strong confidence it's very acceptable :-).\r\n(Welcome to the whole numeric world 😅 )\r\n\r\n",
"Thank you so much for your reply! I'm ready to explore the numeric rabbit hole!"
] | 1,690 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1041-azure-x86_64-with-glibc2.31
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES (a single A100, 80GB)
- Using distributed or parallel set-up in script?: NO
### Who can help?
@ArthurZucker and @younesbelkada @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Inconsistent inference results between batch of samples and a single sample in BLIP / BLIP2.
Here is the script. We can change `DEVICE`, `CAPTION_PRETRAIN_MODEL`, and `pixel_values_shape` to test different models on different accelerators.
```python
import transformers
from transformers import AutoModel, AutoProcessor, AutoConfig
import torch
import numpy as np
from typing import Mapping, Sequence
SEED = 42
transformers.enable_full_determinism(SEED)
CAPTION_PRETRAIN_MODELS_NAMES = [
"Salesforce/blip-image-captioning-base",
"Salesforce/blip-image-captioning-large",
"Salesforce/blip2-opt-2.7b",
]
CAPTION_PRETRAIN_MODEL = CAPTION_PRETRAIN_MODELS_NAMES[1]
# NOTE: If you use BLIP2 model, you need to change the `pixel_values_shape` below accordingly.
CACHE_DIR = ".model.cache/"
DEVICE = "cpu"
# DEVICE = "cuda"
# MODEL
config = AutoConfig.from_pretrained(CAPTION_PRETRAIN_MODEL, cache_dir=CACHE_DIR)
caption_architectures = config.architectures
if len(caption_architectures) != 1:
print(f"captioner_architectures: {caption_architectures} has to be of length 1")
caption_architecture = caption_architectures[0]
module = getattr(transformers, caption_architecture)
model = module.from_pretrained(CAPTION_PRETRAIN_MODEL, cache_dir=CACHE_DIR)
processor = AutoProcessor.from_pretrained(CAPTION_PRETRAIN_MODEL, cache_dir=CACHE_DIR)
model.to(DEVICE)
# Data
pixel_values_shape = [1, 3, 384, 384] # shape for BLIP
# pixel_values_shape = [1, 3, 224, 224] # shape for BLIP2
input_ids_shape = [1, 17]
attention_mask_shape = [1, 17]
labels_shape = [1, 17]
single_sample_inputs = {
"pixel_values": torch.ones(pixel_values_shape),
"input_ids": torch.ones(input_ids_shape, dtype=torch.long),
"attention_mask": torch.ones(attention_mask_shape, dtype=torch.long),
"labels": torch.ones(labels_shape, dtype=torch.long),
}
batch_size = 2
batch_sample_inputs = {
"pixel_values": single_sample_inputs["pixel_values"].repeat(batch_size, 1, 1, 1),
"input_ids": single_sample_inputs["input_ids"].repeat(batch_size, 1),
"attention_mask": single_sample_inputs["attention_mask"].repeat(batch_size, 1),
"labels": single_sample_inputs["labels"].repeat(batch_size, 1),
}
for k in single_sample_inputs:
single_sample_inputs[k] = single_sample_inputs[k].to(DEVICE)
for k in batch_sample_inputs:
batch_sample_inputs[k] = batch_sample_inputs[k].to(DEVICE)
with torch.no_grad():
single_sample_outputs = model(**single_sample_inputs)
batch_sample_outputs = model(**batch_sample_inputs)
print(f"Model: {CAPTION_PRETRAIN_MODEL} with {caption_architecture}, using {DEVICE} device")
def recursive_compare_print(outputs_1, outputs_2, tensor_slice=None, key=None, depth=0):
if type(outputs_1) != type(outputs_2):
raise ValueError(f"outputs_1: {type(outputs_1)} vs outputs_2: {type(outputs_2)}")
elif isinstance(outputs_1, torch.Tensor):
if tensor_slice is None:
tensor_slice = slice(None)
if len(outputs_1.shape) == 0:
print(
"\t" * depth
+ f"diff of {key} (shape={outputs_1.shape}): {torch.max(torch.abs(outputs_1 - outputs_2))}"
)
else:
print(
"\t" * depth
+ f"diff of {key} (shape={outputs_1.shape}): {torch.max(torch.abs(outputs_1[tensor_slice] - outputs_2[tensor_slice]))}"
)
elif isinstance(outputs_1, Mapping):
print("\t" * depth + f"Mapping {key} (type {type(outputs_1)}):")
for k in outputs_1:
recursive_compare_print(outputs_1[k], outputs_2[k], tensor_slice=tensor_slice, key=k, depth=depth + 1)
elif isinstance(outputs_1, Sequence):
print("\t" * depth + f"Sequence {key} (type {type(outputs_1)}):")
for output_1, output_2 in zip(outputs_1, outputs_2):
recursive_compare_print(output_1, output_2, tensor_slice=tensor_slice, depth=depth + 1)
else:
print("\t" * depth + f"Unexpected type with {k}: {type(outputs_1)}")
recursive_compare_print(single_sample_outputs, batch_sample_outputs, slice(0, 1))
```
- When `DEVICE="cpu"`, the results are OK except for the logits, which show a small difference of about 1e-5
```
Model: Salesforce/blip-image-captioning-base with BlipForConditionalGeneration, using cpu device
Mapping: (type <class 'transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput'>)
diff of loss (shape=torch.Size([])): 0.0
diff of decoder_logits (shape=torch.Size([1, 17, 30524])): 1.049041748046875e-05
diff of image_embeds (shape=torch.Size([1, 577, 768])): 0.0
diff of last_hidden_state (shape=torch.Size([1, 577, 768])): 0.0
```
- When `DEVICE="cuda"`, the results are having a large difference.
```
Model: Salesforce/blip-image-captioning-base with BlipForConditionalGeneration, using cuda device
Mapping: (type <class 'transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput'>)
diff of loss (shape=torch.Size([])): 7.62939453125e-06
diff of decoder_logits (shape=torch.Size([1, 17, 30524])): 0.0015845298767089844
diff of image_embeds (shape=torch.Size([1, 577, 768])): 0.19360780715942383
diff of last_hidden_state (shape=torch.Size([1, 577, 768])): 0.19360780715942383
```
### Expected behavior
The result of GPU inference should be at least the same as those of CPU.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25221/timeline
|
completed
| null | null |