url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k, nullable) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k, nullable) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/26130
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26130/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26130/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26130/events
|
https://github.com/huggingface/transformers/issues/26130
| 1,893,894,632 |
I_kwDOCUB6oc5w4ono
| 26,130 |
the NoRepeatNGramLogitsProcessor is slow
|
{
"login": "nevakrien",
"id": 101988414,
"node_id": "U_kgDOBhQ4Pg",
"avatar_url": "https://avatars.githubusercontent.com/u/101988414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nevakrien",
"html_url": "https://github.com/nevakrien",
"followers_url": "https://api.github.com/users/nevakrien/followers",
"following_url": "https://api.github.com/users/nevakrien/following{/other_user}",
"gists_url": "https://api.github.com/users/nevakrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nevakrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nevakrien/subscriptions",
"organizations_url": "https://api.github.com/users/nevakrien/orgs",
"repos_url": "https://api.github.com/users/nevakrien/repos",
"events_url": "https://api.github.com/users/nevakrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/nevakrien/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"Hey @nevakrien π \r\n\r\nOur main focus is the breadth and availability of techniques, which means performance sometimes gets relegated to a secondary position. Your contribution (with a few benchmarks before/after the changes) would definitely be appreciated π€ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### Feature request
It's relatively easy to just cache the values ahead of time and then use those; right now it's built in a way that requires looking through the whole text every single time. You could have simply kept the dict and added one value to it.
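A minimal sketch of the idea (illustrative only, not the transformers implementation; `NGramCache` is a made-up helper name):
```
from collections import defaultdict

class NGramCache:
    def __init__(self, n):
        self.n = n
        # (n-1)-token prefix -> tokens that would complete an already-seen n-gram
        self.banned = defaultdict(list)

    def update(self, tokens):
        # Register only the newest n-gram: O(1) per generated token,
        # instead of re-scanning the whole sequence at every step.
        if len(tokens) >= self.n:
            prefix = tuple(tokens[-self.n:-1])
            self.banned[prefix].append(tokens[-1])

    def get_banned(self, tokens):
        if len(tokens) < self.n - 1:
            return []
        return self.banned.get(tuple(tokens[-(self.n - 1):]), [])

cache = NGramCache(n=3)
seq = []
for tok in [1, 2, 3, 1, 2]:
    seq.append(tok)
    cache.update(seq)
print(cache.get_banned(seq))  # [3] -> emitting 3 next would repeat the 3-gram (1, 2, 3)
```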
### Motivation
I am trying to make generation faster
### Your contribution
I could probably help with the pull request if you guys are interested.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26130/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26129
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26129/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26129/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26129/events
|
https://github.com/huggingface/transformers/pull/26129
| 1,893,869,289 |
PR_kwDOCUB6oc5aMz6X
| 26,129 |
Update logits_process.py docstrings to clarify penalty and reward cases
|
{
"login": "larekrow",
"id": 127832774,
"node_id": "U_kgDOB56Sxg",
"avatar_url": "https://avatars.githubusercontent.com/u/127832774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larekrow",
"html_url": "https://github.com/larekrow",
"followers_url": "https://api.github.com/users/larekrow/followers",
"following_url": "https://api.github.com/users/larekrow/following{/other_user}",
"gists_url": "https://api.github.com/users/larekrow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larekrow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larekrow/subscriptions",
"organizations_url": "https://api.github.com/users/larekrow/orgs",
"repos_url": "https://api.github.com/users/larekrow/repos",
"events_url": "https://api.github.com/users/larekrow/events{/privacy}",
"received_events_url": "https://api.github.com/users/larekrow/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @gante @ArthurZucker! This is ready for review π "
] | 1,694 | 1,696 | 1,696 |
CONTRIBUTOR
| null |
This PR fixes point 3 of #25970 by clarifying the penalty and reward cases for `RepetitionPenaltyLogitsProcessor` and `EncoderRepetitionPenaltyLogitsProcessor` within the docstrings.
`EncoderRepetitionPenaltyLogitsProcessor` may require further iterations as the class name does not accurately reflect what the class is doing.
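For reference, a minimal sketch of the penalty/reward behaviour the docstrings describe, assuming the usual formulation (scores of previously seen tokens are divided by the penalty when positive and multiplied by it when negative); this is illustrative code, not the library source:
```
import torch

def apply_repetition_penalty(scores, prev_token_ids, penalty):
    # penalty > 1.0 penalizes repeating prev_token_ids; penalty < 1.0 rewards it
    seen = scores[prev_token_ids]
    seen = torch.where(seen > 0, seen / penalty, seen * penalty)
    scores[prev_token_ids] = seen
    return scores

logits = torch.tensor([2.0, -1.0, 0.5])
print(apply_repetition_penalty(logits.clone(), torch.tensor([0, 1]), penalty=1.2))
# tensor([ 1.6667, -1.2000,  0.5000])
```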
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26129/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26129",
"html_url": "https://github.com/huggingface/transformers/pull/26129",
"diff_url": "https://github.com/huggingface/transformers/pull/26129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26129.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26128
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26128/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26128/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26128/events
|
https://github.com/huggingface/transformers/issues/26128
| 1,893,723,406 |
I_kwDOCUB6oc5w3-0O
| 26,128 |
AttributeError: 'LlamaForCausalLM' object has no attribute 'save_checkpoint'
|
{
"login": "dittops",
"id": 12937285,
"node_id": "MDQ6VXNlcjEyOTM3Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/12937285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dittops",
"html_url": "https://github.com/dittops",
"followers_url": "https://api.github.com/users/dittops/followers",
"following_url": "https://api.github.com/users/dittops/following{/other_user}",
"gists_url": "https://api.github.com/users/dittops/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dittops/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dittops/subscriptions",
"organizations_url": "https://api.github.com/users/dittops/orgs",
"repos_url": "https://api.github.com/users/dittops/repos",
"events_url": "https://api.github.com/users/dittops/events{/privacy}",
"received_events_url": "https://api.github.com/users/dittops/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"The model is saving fine while changing to stage 3",
"> CUDA_VISIBLE_DEVICES=0,3 python train.py --data_path /path/to/data.json --output_dir output --per_device_train_batch_size 4 --num_train_epochs 2 --learning_rate 2e-5 --fp16 True --logging_steps 10 --save_strategy steps --save_steps 20 --save_total_limit 5 --adam_beta2 0.95 --deepspeed ds_config.json\r\n\r\n\r\nThe command used for launching is incorrect. Please use `torchrun`, `deepspeed` or `accelerate launch` when running in distributed setup. Please go through the docs here: https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-multiple-gpus",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,700 | 1,700 |
NONE
| null |
### System Info
I'm training a llama model. When I try using Deepspeed along with the trainer, I get the below error
<img width="695" alt="image" src="https://github.com/huggingface/transformers/assets/12937285/3cd5a3c1-34c9-46b3-93e0-e5dd454b8a3b">
Training script
```
tokenizer = LlamaTokenizer.from_pretrained(model_args.base_model)
tokenizer.pad_token_id = 0

model = LlamaForCausalLM.from_pretrained(
    model_args.base_model,
    device_map="auto"
)

total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("total params: ", total_params)

if data_args.data_path.endswith(".json") or data_args.data_path.endswith(".jsonl"):
    data = load_dataset("json", data_files=data_args.data_path)
else:
    data = load_dataset(data_args.data_path)

dataset = load_data(tokenizer, data['train'], data_args.max_seq_length)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    data_collator=data_collator,
    **dataset
)

model.config.use_cache = False
trainer.train()
trainer.save_model()
```
I'm using deepspeed stage 2
```
{
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "zero_allow_untested_optimizer": true,
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "initial_scale_power": 16,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": true,
        "allgather_bucket_size": 5e8,
        "reduce_scatter": true,
        "reduce_bucket_size": 5e8,
        "overlap_comm": false,
        "contiguous_gradients": true
    }
}
```
If I use FSDP instead of deepspeed, the checkpoints are getting saved.
transformers==4.34.0.dev0 (I tried with transformers==4.33.0 as well)
deepspeed==0.10.3
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Command used for launching the script
`CUDA_VISIBLE_DEVICES=0,3 python train.py --data_path /path/to/data.json --output_dir output --per_device_train_batch_size 4 --num_train_epochs 2 --learning_rate 2e-5 --fp16 True --logging_steps 10 --save_strategy steps --save_steps 20 --save_total_limit 5 --adam_beta2 0.95 --deepspeed ds_config.json`
### Expected behavior
The model gets saved without any errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26128/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26126
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26126/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26126/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26126/events
|
https://github.com/huggingface/transformers/issues/26126
| 1,893,502,036 |
I_kwDOCUB6oc5w3IxU
| 26,126 |
[Bug + workaround] Keypoints 0.0 are confusing ../transformers/models/detr/image_processing_detr.py
|
{
"login": "duckheada",
"id": 89459321,
"node_id": "MDQ6VXNlcjg5NDU5MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/89459321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duckheada",
"html_url": "https://github.com/duckheada",
"followers_url": "https://api.github.com/users/duckheada/followers",
"following_url": "https://api.github.com/users/duckheada/following{/other_user}",
"gists_url": "https://api.github.com/users/duckheada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duckheada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duckheada/subscriptions",
"organizations_url": "https://api.github.com/users/duckheada/orgs",
"repos_url": "https://api.github.com/users/duckheada/repos",
"events_url": "https://api.github.com/users/duckheada/events{/privacy}",
"received_events_url": "https://api.github.com/users/duckheada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hi @duckheada, thanks for raising this issue! \r\n\r\nWould you like to open a PR with this fix?\r\n\r\ncc @rafaelpadilla ",
"@amyeroberts not really. I don't know enough about the library",
"OK, I'll open it up to the community if someone wants to add this π ",
"@amyeroberts can I work on this?",
"@amyeroberts @ArthurZucker Can you please review this PR?",
"I will! Sorry for the delay ",
"Hi @duckheada and @hackpk ,\r\n\r\nThank you for opening this issue. :) \r\n\r\nI was following the referenced notebook ([here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb)), but I could not replicate the issue.\r\n\r\nAre you working with a dataset other than the \"balloon\" dataset? If possible, could you share that specific dataset with me? This would help me better understand and duplicate the error you're experiencing.",
"Hi @rafaelpadilla, yes I was working a different dataset. No I cannot share it.",
"Hi @duckheada, \r\nNo problem, I totally understand if you can't share it. :)\r\n\r\nThis error appears if we use they annotations of the keypoints (e.g. `person_keypoints_val2017.json`), which is not in the example of the referenced notebook. That's why I couldn't replicate it initially. \r\n\r\nThank you for reporting this issue! Great work! π The solution in PR #26250"
] | 1,694 | 1,701 | 1,701 |
NONE
| null |
### System Info
- transformers: 4.33.1
- Python: 3.9.18
- Macbook pro M2 chip
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Trying to fine-tune Detr on a dataset in coco-format following [this](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) hugging-face suggested notebook.
At cell [16]:
```
from torch.utils.data import DataLoader

def collate_fn(batch):
    pixel_values = [item[0] for item in batch]
    encoding = processor.pad(pixel_values, return_tensors="pt")
    labels = [item[1] for item in batch]
    batch = {}
    batch['pixel_values'] = encoding['pixel_values']
    batch['pixel_mask'] = encoding['pixel_mask']
    batch['labels'] = labels
    return batch

train_dataloader = DataLoader(train_dataset, collate_fn=collate_fn, batch_size=4, shuffle=True)
val_dataloader = DataLoader(val_dataset, collate_fn=collate_fn, batch_size=2)
batch = next(iter(train_dataloader))
```
I get this error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/Users/me/Documents/dev/ai_sb/detection/obj_det_train.ipynb Cell 10 line 1
13 train_dataloader = DataLoader(train_dataset, collate_fn=collate_fn, batch_size=4, shuffle=True)
14 # val_dataloader = DataLoader(val_dataset, collate_fn=collate_fn, batch_size=2)
---> 16 batch = next(iter(train_dataloader))
File .../.conda/lib/python3.9/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO(https://github.com/pytorch/pytorch/issues/76750)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
632 self._num_yielded += 1
633 if self._dataset_kind == _DatasetKind.Iterable and \
634 self._IterableDataset_len_called is not None and \
635 self._num_yielded > self._IterableDataset_len_called:
File .../.conda/lib/python3.9/site-packages/torch/utils/data/dataloader.py:675, in _SingleProcessDataLoaderIter._next_data(self)
673 def _next_data(self):
674 index = self._next_index() # may raise StopIteration
--> 675 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
676 if self._pin_memory:
677 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
File .../.conda/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py:51, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File .../.conda/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py:51, in <listcomp>(.0)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
/Users/me/Documents/dev/ai_sb/detection/obj_det_train.ipynb Cell 10 line 2
20 image_id = self.ids[idx]
21 annotations = {'image_id': image_id, 'annotations': annotations}
---> 22 encoding = self.image_processor(images=images, annotations=annotations, return_tensors="pt")
23 pixel_values = encoding["pixel_values"].squeeze()
24 target = encoding["labels"][0]
File .../.conda/lib/python3.9/site-packages/transformers/image_processing_utils.py:546, in BaseImageProcessor.__call__(self, images, **kwargs)
544 def __call__(self, images, **kwargs) -> BatchFeature:
545 """Preprocess an image or a batch of images."""
--> 546 return self.preprocess(images, **kwargs)
File .../.conda/lib/python3.9/site-packages/transformers/models/detr/image_processing_detr.py:1253, in DetrImageProcessor.preprocess(self, images, annotations, return_segmentation_masks, masks_path, do_resize, size, resample, do_rescale, rescale_factor, do_normalize, image_mean, image_std, do_pad, format, return_tensors, data_format, input_data_format, **kwargs)
1251 prepared_annotations = []
1252 for image, target in zip(images, annotations):
-> 1253 target = self.prepare_annotation(
1254 image,
1255 target,
1256 format,
1257 return_segmentation_masks=return_segmentation_masks,
1258 masks_path=masks_path,
1259 input_data_format=input_data_format,
1260 )
1261 prepared_images.append(image)
1262 prepared_annotations.append(target)
File .../.conda/lib/python3.9/site-packages/transformers/models/detr/image_processing_detr.py:858, in DetrImageProcessor.prepare_annotation(self, image, target, format, return_segmentation_masks, masks_path, input_data_format)
856 if format == AnnotionFormat.COCO_DETECTION:
857 return_segmentation_masks = False if return_segmentation_masks is None else return_segmentation_masks
--> 858 target = prepare_coco_detection_annotation(
859 image, target, return_segmentation_masks, input_data_format=input_data_format
860 )
861 elif format == AnnotionFormat.COCO_PANOPTIC:
862 return_segmentation_masks = True if return_segmentation_masks is None else return_segmentation_masks
File .../.conda/lib/python3.9/site-packages/transformers/models/detr/image_processing_detr.py:333, in prepare_coco_detection_annotation(image, target, return_segmentation_masks, input_data_format)
331 num_keypoints = keypoints.shape[0]
332 keypoints = keypoints.reshape((-1, 3)) if num_keypoints else keypoints
--> 333 new_target["keypoints"] = keypoints[keep]
338 if return_segmentation_masks:
339 segmentation_masks = [obj["segmentation"] for obj in annotations]
IndexError: boolean index did not match indexed array along dimension 0; dimension is 294 but corresponding boolean dimension is 1
```
### Expected behavior
Nothing.
**My workaround**
To fix this, I had to edit this file in transformers library:
**.conda/lib/python3.9/site-packages/transformers/models/detr/image_processing_detr.py**
I changed this:
```
# if annotations and "keypoints" in annotations[0]:
# keypoints = [obj["keypoints"] for obj in annotations]
# print("keypoints", keypoints) #TODO: remove
# keypoints = np.asarray(keypoints, dtype=np.float32)
# num_keypoints = keypoints.shape[0]
# keypoints = keypoints.reshape((-1, 3)) if num_keypoints else keypoints
# new_target["keypoints"] = keypoints[keep]
```
To this:
```
if annotations and "keypoints" in annotations[0]:
    keypoints = [obj["keypoints"] for obj in annotations]
    # Apply the keep mask here to filter the relevant annotations
    keypoints = [keypoints[i] for i in range(len(keypoints)) if keep[i]]
    # converting the filtered keypoints list to a numpy array and reshape it
    keypoints = np.asarray(keypoints, dtype=np.float32)
    num_keypoints = keypoints.shape[0]
    keypoints = keypoints.reshape((-1, 3)) if num_keypoints else keypoints
    new_target["keypoints"] = keypoints  # We no longer apply the keep mask here
```
**Why?**
To ensure that the filtering applied to the keypoints respects its original structure (number of keypoints per annotation). When you reshape keypoints with keypoints.reshape((-1, 3)), it loses the information about which keypoints belong to which annotation.
Here is what needed to be done (at least in my little hack-ish workaround):
1. Before reshaping the keypoints array, I had to apply the keep mask to retain only the annotations I'm interested in. Only after this could I reshape the keypoints array and apply further operations.
2. Then, I applied the keep mask on the keypoints list before converting it into a numpy array and reshaping it. This ensured that I only kept the keypoints corresponding to the bounding boxes that satisfy the condition in the keep mask (see the toy sketch below).
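A toy numpy sketch of the mismatch, with made-up numbers (two annotations with three COCO keypoints each):
```
import numpy as np

# 2 annotations, 3 keypoints each -> 9 COCO values (x, y, visibility) per annotation
keypoints = [[10, 20, 2, 30, 40, 2, 50, 60, 2],
             [11, 21, 2, 31, 41, 2, 51, 61, 2]]
keep = np.array([True, False])  # one flag per annotation (e.g. boxes with positive area)

# Filtering per annotation *before* the (-1, 3) reshape keeps the mask aligned:
kept = np.asarray([kp for kp, k in zip(keypoints, keep) if k], dtype=np.float32)
print(kept.reshape((-1, 3)).shape)  # (3, 3)

# Reshaping first would give 6 rows of (x, y, visibility); indexing them with the
# length-2 `keep` mask is exactly the boolean-dimension IndexError shown above.
```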
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26126/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26125
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26125/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26125/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26125/events
|
https://github.com/huggingface/transformers/pull/26125
| 1,893,408,967 |
PR_kwDOCUB6oc5aLT0B
| 26,125 |
Overload pipeline to return the appropriate type for a task
|
{
"login": "aliabid94",
"id": 7870876,
"node_id": "MDQ6VXNlcjc4NzA4NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7870876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aliabid94",
"html_url": "https://github.com/aliabid94",
"followers_url": "https://api.github.com/users/aliabid94/followers",
"following_url": "https://api.github.com/users/aliabid94/following{/other_user}",
"gists_url": "https://api.github.com/users/aliabid94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aliabid94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aliabid94/subscriptions",
"organizations_url": "https://api.github.com/users/aliabid94/orgs",
"repos_url": "https://api.github.com/users/aliabid94/repos",
"events_url": "https://api.github.com/users/aliabid94/events{/privacy}",
"received_events_url": "https://api.github.com/users/aliabid94/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26125). All of your documentation changes will be reflected on that endpoint.",
"cc @Narsil ",
"I do understand where this is coming from, and I do agree the pipeline object is quite complex, having some helps from LSP would be super nice ! \r\n\r\nI am a big hater of overload.\r\nHow do you know which call you are actually making ?\r\nIt causes many bugs where you're not calling what you're supposed to and you don't realize.\r\n\r\nI would much more strongly advocate having independant functions, or much better yet IMO, simple low level code that would make everything obvious.\r\n\r\n```\r\npipeline_kwargs = resolve_pipeline(....)\r\npipeline = AutomaticSpeechRecognition(**pipeline_kwargs)\r\n```\r\n\r\n\r\nWouldn't `transcriber: AutomaticSpeechRecognition = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base.en\")`\r\nalready solve most of your issues actually ?",
"> Wouldn't transcriber: AutomaticSpeechRecognition = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base.en\")\r\nalready solve most of your issues actually ?\r\n\r\nI had no idea beforehand that pipeline would return an AutomaticSpeechRecognition object though. Our users would not know this either. If pipeline is the core object that transformers runs all of its predictions through, I really would like to know through my IDE what arguments I can pass to it, in what format. Right now, it's `Any`, which means I **need** to use the internet to find out what I can pass to this function. I really think we should improve this developer experience!\r\n\r\n> I am a big hater of overload. How do you know which call you are actually making ?\r\n\r\nHere we are only changing the signature, there is still only one function body. We can even add a test to make sure none of the overloads diverge from the main implementation. And the overloads can be put in another file / cleaned up, made this PR to start the discussion.",
"> I would much more strongly advocate having independant functions, or much better yet IMO, simple low level code that would make everything obvious.\r\n\r\nThe thing is, throughout our docs and existing code, the format that is used and encouraged is: `transcriber = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base.en\")`. This provides a fix for this specific format that our users are already familiar with.\r\n\r\ne.g. \r\n\r\n",
"> I really think we should improve this developer experience!\r\n\r\nβ on this, unleashing the full potential of pipelines would be great and not requiring to go only is good for devs. But yes let's find a great solution that is easy to maintain! ",
"I could add CI tests that ensure that no overloaded `pipeline` signature ever diverges from the main pipeline signature, other than the task name argument, and that there is only one function body for this method. Happy to hear other ideas though!",
"are there any other comments/suggestions @ArthurZucker @Narsil ?",
"I think I am fine with adding a check for this yes! ",
"Commenting here as there's a related open PR regarding #27275 and the use of overloads for types. Previously a similar proposal was very strongly rejected ([comment](https://github.com/huggingface/transformers/issues/23980#issuecomment-1576991446) and [PR](https://github.com/huggingface/transformers/pull/24035)) and I agree with @Narsil's comments. I'm generally against introducing overloads. \r\n\r\nHowever, this does seem to be an issue which is raised by many independent community members and finding an easy to maintain alternative we can agree on would be good. Perhaps we can open an issue to discuss addressing this more generally?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,704 | 1,704 |
NONE
| null |
Previously, if I use the pipeline method, e.g. `transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")`, the type hinting assumes `transcriber` is a generic `Pipeline` object. This means I get no help from the editor with the documentation for the specific arguments I can pass to this type of pipeline, the AutomaticSpeechRecognitionPipeline. I have to open my browser and search the internet to see what inputs the pipeline accepts, in what format, etc.
Fixed this by overloading the return type based on the task passed to the constructor. Makes everything much easier to use for our end users!
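A minimal sketch of the approach with `typing.overload` and `Literal` task names (abridged and illustrative; the actual PR covers all tasks with a single function body):
```
from typing import Literal, overload

class Pipeline: ...
class AutomaticSpeechRecognitionPipeline(Pipeline): ...
class TextGenerationPipeline(Pipeline): ...

@overload
def pipeline(task: Literal["automatic-speech-recognition"], **kwargs) -> AutomaticSpeechRecognitionPipeline: ...
@overload
def pipeline(task: Literal["text-generation"], **kwargs) -> TextGenerationPipeline: ...
def pipeline(task: str, **kwargs) -> Pipeline:
    # single implementation body; only the overload signatures above differ
    ...

# A type checker now infers AutomaticSpeechRecognitionPipeline for:
# transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
```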
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26125/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/26125/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26125",
"html_url": "https://github.com/huggingface/transformers/pull/26125",
"diff_url": "https://github.com/huggingface/transformers/pull/26125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26125.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26124
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26124/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26124/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26124/events
|
https://github.com/huggingface/transformers/pull/26124
| 1,893,317,081 |
PR_kwDOCUB6oc5aLDee
| 26,124 |
Fixed inconsistency in BertTokenizerFast
|
{
"login": "Towdo",
"id": 13337196,
"node_id": "MDQ6VXNlcjEzMzM3MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/13337196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Towdo",
"html_url": "https://github.com/Towdo",
"followers_url": "https://api.github.com/users/Towdo/followers",
"following_url": "https://api.github.com/users/Towdo/following{/other_user}",
"gists_url": "https://api.github.com/users/Towdo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Towdo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Towdo/subscriptions",
"organizations_url": "https://api.github.com/users/Towdo/orgs",
"repos_url": "https://api.github.com/users/Towdo/repos",
"events_url": "https://api.github.com/users/Towdo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Towdo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,696 | 1,696 |
CONTRIBUTOR
| null |
`if token_ids_1:` will fail if `token_ids_1` is an empty list.
In `BertTokenizerFast.create_token_type_ids_from_sequences` and `BertTokenizer` we check `if token_ids_1 is None:`.
An empty list is not considered None, but it returns False when converted to bool.
Which makes this an inconsistency.
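To make the difference concrete, a minimal sketch in plain Python (not the tokenizer code itself):
```
token_ids_1 = []

print(bool(token_ids_1))    # False -> `if token_ids_1:` treats [] as "no second sequence"
print(token_ids_1 is None)  # False -> `if token_ids_1 is None:` treats [] as a present (empty) second sequence

# The `is None` path appends a second [SEP] for the empty list while the truthiness
# path does not, which is exactly the mismatch reported in #26123.
```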
Fixes #26123
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
( I don't know what to tick here. It's just a small fix :L)
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26124/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26124",
"html_url": "https://github.com/huggingface/transformers/pull/26124",
"diff_url": "https://github.com/huggingface/transformers/pull/26124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26124.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26123
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26123/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26123/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26123/events
|
https://github.com/huggingface/transformers/issues/26123
| 1,893,311,268 |
I_kwDOCUB6oc5w2aMk
| 26,123 |
Inconsistency in BertFastTokenizer
|
{
"login": "Towdo",
"id": 13337196,
"node_id": "MDQ6VXNlcjEzMzM3MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/13337196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Towdo",
"html_url": "https://github.com/Towdo",
"followers_url": "https://api.github.com/users/Towdo/followers",
"following_url": "https://api.github.com/users/Towdo/following{/other_user}",
"gists_url": "https://api.github.com/users/Towdo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Towdo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Towdo/subscriptions",
"organizations_url": "https://api.github.com/users/Towdo/orgs",
"repos_url": "https://api.github.com/users/Towdo/repos",
"events_url": "https://api.github.com/users/Towdo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Towdo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,696 | 1,696 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Program to reproduce:
```
from transformers import BertTokenizer, BertTokenizerFast

A = [1, 2, 3]
B = []

tokenizer = BertTokenizer.from_pretrained("prajjwal1/bert-tiny")
fast_tokenizer = BertTokenizerFast.from_pretrained("prajjwal1/bert-tiny")

X = tokenizer.build_inputs_with_special_tokens(A, B)
Y = fast_tokenizer.build_inputs_with_special_tokens(A, B)

print(X)  # --> [101, 1, 2, 3, 102, 102]
print(Y)  # --> [101, 1, 2, 3, 102]
```
### Expected behavior
X == Y
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26123/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26122
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26122/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26122/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26122/events
|
https://github.com/huggingface/transformers/issues/26122
| 1,893,225,016 |
I_kwDOCUB6oc5w2FI4
| 26,122 |
Unclear kwargs options in docstrings
|
{
"login": "matsuobasho",
"id": 13874772,
"node_id": "MDQ6VXNlcjEzODc0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matsuobasho",
"html_url": "https://github.com/matsuobasho",
"followers_url": "https://api.github.com/users/matsuobasho/followers",
"following_url": "https://api.github.com/users/matsuobasho/following{/other_user}",
"gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions",
"organizations_url": "https://api.github.com/users/matsuobasho/orgs",
"repos_url": "https://api.github.com/users/matsuobasho/repos",
"events_url": "https://api.github.com/users/matsuobasho/events{/privacy}",
"received_events_url": "https://api.github.com/users/matsuobasho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @matsuobasho, thanks for raising this issue! \r\n\r\nThe API you linked to is for the tokenizer classes. When calling AutoModelForCausalLM, the `from_pretrained` API is [this one](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForCausalLM.from_pretrained). In the docstring, you'll see that for kwargs it says: \r\n\r\n```\r\nIf a configuration is not provided, kwargs will be first passed to the configuration class initialization function ([from_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/configuration#transformers.PretrainedConfig.from_pretrained)). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying modelβs __init__ function.\r\n``` \r\n\r\nThe `load_in_4bit` argument is not a config argument, and so is passed to the [PretrainedModel class](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained.load_in_4bit). \r\n\r\nThis is admittedly a bit difficult to track. The AutoXxx API allows us to seamlessly loaded many different architectures which perform the same task. This ultimately means that there's lots of arguments which get passed forward through the use of kwargs and handled in our Pretrained classes. If looking for an argument's use, I'd suggest searching in the docs for the argument, it (should) normally point you in the correct spot. ",
"@amyeroberts, thank you for the explanation, this makes sense."
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
<This issue is about the docstrings; I saw no more relevant category when creating the issue.>
### Reproduction
The problem is that the kwargs documentation is too general to understand all of the argument options a particular function accepts. Here is an example.
The [tutorial ](https://huggingface.co/docs/transformers/llm_tutorial)on generation has this example:
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True)
```
I wanted to understand the `load_in_4bit` argument, so I examined the `from_pretrained` [API](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer.from_pretrained). It isn't listed as a named argument, which makes it seem like it must be part of kwargs:
```
kwargs (additional keyword arguments, optional) β Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.
```
So I then examined the __init__ API:
```
| __init__(self, vocab_file=None, merges_file=None, tokenizer_file=None, unk_token='<|endoftext|>', bos_token='<|endoftext|>', eos_token='<|endoftext|>', add_prefix_space=False, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
```
So it has its own kwargs with no reference to what those may be. There's no way for me to find out what `load_in_4bit` or the other kwargs options for `from_pretrained` are.
### Expected behavior
A clear explanation of what the arguments for kwargs can be or a reference to another specific class/function documentation for more information.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26122/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26121
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26121/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26121/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26121/events
|
https://github.com/huggingface/transformers/pull/26121
| 1,893,071,393 |
PR_kwDOCUB6oc5aKNJW
| 26,121 |
Add missing space in generation/utils.py
|
{
"login": "jbochi",
"id": 292712,
"node_id": "MDQ6VXNlcjI5MjcxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/292712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbochi",
"html_url": "https://github.com/jbochi",
"followers_url": "https://api.github.com/users/jbochi/followers",
"following_url": "https://api.github.com/users/jbochi/following{/other_user}",
"gists_url": "https://api.github.com/users/jbochi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbochi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbochi/subscriptions",
"organizations_url": "https://api.github.com/users/jbochi/orgs",
"repos_url": "https://api.github.com/users/jbochi/repos",
"events_url": "https://api.github.com/users/jbochi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbochi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26121). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Without this fix, the warning reads as "... to control thegeneration length. We ..."
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
generate: @gante
Documentation: @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26121/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26121/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26121",
"html_url": "https://github.com/huggingface/transformers/pull/26121",
"diff_url": "https://github.com/huggingface/transformers/pull/26121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26121.patch",
"merged_at": 1694609155000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26120/events
|
https://github.com/huggingface/transformers/issues/26120
| 1,892,991,338 |
I_kwDOCUB6oc5w1MFq
| 26,120 |
Issue with reloading model, please help me what I should I change
|
{
"login": "Shruthipriya-BS",
"id": 48410689,
"node_id": "MDQ6VXNlcjQ4NDEwNjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/48410689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shruthipriya-BS",
"html_url": "https://github.com/Shruthipriya-BS",
"followers_url": "https://api.github.com/users/Shruthipriya-BS/followers",
"following_url": "https://api.github.com/users/Shruthipriya-BS/following{/other_user}",
"gists_url": "https://api.github.com/users/Shruthipriya-BS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shruthipriya-BS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shruthipriya-BS/subscriptions",
"organizations_url": "https://api.github.com/users/Shruthipriya-BS/orgs",
"repos_url": "https://api.github.com/users/Shruthipriya-BS/repos",
"events_url": "https://api.github.com/users/Shruthipriya-BS/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shruthipriya-BS/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Shruthipriya-BS , please submit minimal reproducer with your config. It's hard to help you with the current description. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
My code:
```
import torch
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
logging,
)
import torch
import pandas as pd
from datasets import load_dataset, concatenate_datasets
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from peft import get_peft_model
import gc
import timeit
import pandas as pd
from transformers import AutoTokenizer
import nltk
nltk.download('punkt')
nltk.download('stopwords')
import numpy as np
from transformers import BartTokenizer, BartForConditionalGeneration
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import torchaudio
from IPython.display import Audio
import re
dataset1_name = "knkarthick/dialogsum"
dataset1 = load_dataset(dataset1_name, split='train')
import pandas as pd
train_df = pd.DataFrame(dataset)
test_df = pd.DataFrame(test_dataset)
# instruction finetuning data preparation function
def prepare_dataset(df, split='train'):
    text_col = []
    instruction = """Write a concise summary of the below input text.Return your response in bullet points which covers the key points of the text. """  # change instuction according to the task
    if split == 'train':
        for _, row in df.iterrows():
            input_q = row["dialogue"]
            output = row["summary"]
            text = ("### Instruction: \n" + instruction +
                    "\n### Input: \n" + input_q +
                    "\n### Response :\n" + output)  # keeping output column in training dataset
            text_col.append(text)
        df.loc[:, 'text'] = text_col
    else:
        for _, row in df.iterrows():
            input_q = row["dialogue"]
            text = ("### Instruction: \n" + instruction +
                    "\n### Input: \n" + input_q +
                    "\n### Response :\n")  # not keeping output column in test dataset
            text_col.append(text)
        df.loc[:, 'text'] = text_col
    return df
train_df = prepare_dataset(train_df,'train')
test_df = prepare_dataset(test_df,'test')
dataset = Dataset.from_pandas(train_df)
fp16 = False
# Number of training epochs
num_train_epochs = 2
# Enable bf16 training
bf16 = True
model_name = "NousResearch/Llama-2-7b-chat-hf"
bnb_4bit_compute_dtype = "bfloat16"
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
# Quantization config
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="bfloat16",
bnb_4bit_use_double_quant=False
)
if compute_dtype == torch.float16 and True:
    major, _ = torch.cuda.get_device_capability()
    if major >= 8:
        print("=" * 80)
        print("Your GPU supports bfloat16, you can accelerate training with the argument --bf16")
        print("=" * 80)
# loading the model with quantization config
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map = 'auto',
)
model.config.use_cache = False
model.config.pretraining_tp = 1
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True )
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
from peft import LoraConfig, get_peft_model
lora_alpha = 16
lora_dropout = 0.1
lora_r = 64 # rank
# Parameter efficient finetuning for LoRA configuration
peft_config = LoraConfig(
lora_alpha=lora_alpha,
lora_dropout=lora_dropout,
#target_modules= ["q_proj","v_proj"], # we will only create adopters for q, v metrices of attention module
r=lora_r,
bias="none",
task_type="CAUSAL_LM"
)
# arguments are self explanatory
import transformers
# Tensorboard logs
tb_log_dir = "./results/logs"
training_arguments = transformers.TrainingArguments(
output_dir="llama2_qlora_finetuned",
per_device_train_batch_size=1,
gradient_accumulation_steps=1,
optim="paged_adamw_32bit",
save_steps=10,
logging_steps = 1,
learning_rate=2e-4,
fp16=False,
bf16=True,
max_grad_norm=0.3,
max_steps = 250,
warmup_ratio = 0.03,
group_by_length=True,
lr_scheduler_type="cosine",
report_to="tensorboard"
)
from trl import SFTTrainer
trainer = SFTTrainer(
model=model,
train_dataset=dataset,
peft_config=peft_config, # passing peft config
dataset_text_field="text", # mentioned the required column
args=training_arguments, # training agruments
tokenizer=tokenizer, # tokenizer
packing=False,
max_seq_length=512
)
trainer.train()
trainer.model.save_pretrained('modeldir')
```
Steps to follow:
Runtime -> Restart runtime
Run the below:
```
import os
import gc
import torch
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
)
from peft import LoraConfig, PeftModel, get_peft_model
gc.collect()
# Set the environment variable before importing PyTorch
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "0"
model_name = "NousResearch/Llama-2-7b-chat-hf"
#device_map = {"": 0}
device_map='auto'
output_dir='modeldir'
base_model = AutoModelForCausalLM.from_pretrained(
model_name,
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.bfloat16,
device_map=device_map,
offload_folder='offload/',
offload_state_dict = True
)
model = PeftModel.from_pretrained(base_model, output_dir, offload_folder = "offload/")
model = model.merge_and_unload()
gc.collect()
# Reload tokenizer to save it
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
model.push_to_hub("modified_llama2", use_auth_token = True)
tokenizer.push_to_hub("modified_llama2", use_auth_token = True)
```
Restart runtime to clear VRAM to load in 4bit for inference.
Run the below for inference:
```
new_model="modified_llama2"
huggingface_profile = "Shruthipriya"
full_path = huggingface_profile + "/" + new_model
# Activate 4-bit precision base model loading
use_4bit = True
# Activate nested quantization for 4-bit base models
use_nested_quant = False
# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "bfloat16"
# Quantization type (fp4 or nf4)
bnb_4bit_quant_type = "nf4"
def load_model(model_name):
    # Load tokenizer and model with QLoRA configuration
    compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=use_4bit,
        bnb_4bit_quant_type=bnb_4bit_quant_type,
        bnb_4bit_compute_dtype=compute_dtype,
        bnb_4bit_use_double_quant=use_nested_quant,
    )
    if compute_dtype == torch.float16 and use_4bit:
        major, _ = torch.cuda.get_device_capability()
        if major >= 8:
            print("=" * 80)
            print("Your GPU supports bfloat16, you can accelerate training with the argument --bf16")
            print("=" * 80)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        device_map='auto',
        quantization_config=bnb_config,
    )
    model.config.use_cache = False
    model.config.pretraining_tp = 1
    # Load LoRA configuration
    peft_config = LoraConfig(
        lora_alpha=lora_alpha,
        lora_dropout=lora_dropout,
        r=lora_r,
        bias="none",
        task_type="CAUSAL_LM",
    )
    # Load Tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token
    #tokenizer.pad_token = "<PAD>"
    tokenizer.padding_side = "right"
    return model, tokenizer, peft_config
model, tokenizer = load_model(full_path)
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[/home/shruthipriya/Documents/sp/sp-env/summary.ipynb](https://file+.vscode-resource.vscode-cdn.net/home/shruthipriya/Documents/sp/sp-env/summary.ipynb) Cell 21 line 6
[57](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=56) tokenizer.padding_side = "right"
[59](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=58) return model, tokenizer, peft_config
---> [61](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=60) model, tokenizer = load_model(full_path)
[/home/shruthipriya/Documents/sp/sp-env/summary.ipynb](https://file+.vscode-resource.vscode-cdn.net/home/shruthipriya/Documents/sp/sp-env/summary.ipynb) Cell 21 line 3
[32](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=31) print("Your GPU supports bfloat16, you can accelerate training with the argument --bf16")
[33](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=32) print("=" * 80)
---> [35](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=34) model = AutoModelForCausalLM.from_pretrained(
[36](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=35) model_name,
[37](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=36) device_map='auto',
[38](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=37) quantization_config=bnb_config,
[39](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=38) )
[41](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=40) model.config.use_cache = False
[42](vscode-notebook-cell:/home/shruthipriya/Documents/sp/sp-env/summary.ipynb#X25sZmlsZQ%3D%3D?line=41) model.config.pretraining_tp = 1
File [~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:493](https://file+.vscode-resource.vscode-cdn.net/home/shruthipriya/Documents/sp/sp-env/~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:493), in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
491 elif type(config) in cls._model_mapping.keys():
492 model_class = _get_model_class(config, cls._model_mapping)
--> 493 return model_class.from_pretrained(
494 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
495 )
496 raise ValueError(
497 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
498 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
499 )
File [~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/modeling_utils.py:2903](https://file+.vscode-resource.vscode-cdn.net/home/shruthipriya/Documents/sp/sp-env/~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/modeling_utils.py:2903), in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
2893 if dtype_orig is not None:
2894 torch.set_default_dtype(dtype_orig)
2896 (
2897 model,
2898 missing_keys,
2899 unexpected_keys,
2900 mismatched_keys,
2901 offload_index,
2902 error_msgs,
-> 2903 ) = cls._load_pretrained_model(
2904 model,
2905 state_dict,
2906 loaded_state_dict_keys, # XXX: rename?
2907 resolved_archive_file,
2908 pretrained_model_name_or_path,
2909 ignore_mismatched_sizes=ignore_mismatched_sizes,
2910 sharded_metadata=sharded_metadata,
2911 _fast_init=_fast_init,
2912 low_cpu_mem_usage=low_cpu_mem_usage,
2913 device_map=device_map,
2914 offload_folder=offload_folder,
2915 offload_state_dict=offload_state_dict,
2916 dtype=torch_dtype,
2917 is_quantized=(load_in_8bit or load_in_4bit),
2918 keep_in_fp32_modules=keep_in_fp32_modules,
2919 )
2921 model.is_loaded_in_4bit = load_in_4bit
2922 model.is_loaded_in_8bit = load_in_8bit
File [~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/modeling_utils.py:3260](https://file+.vscode-resource.vscode-cdn.net/home/shruthipriya/Documents/sp/sp-env/~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/modeling_utils.py:3260), in PreTrainedModel._load_pretrained_model(cls, model, state_dict, loaded_keys, resolved_archive_file, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, _fast_init, low_cpu_mem_usage, device_map, offload_folder, offload_state_dict, dtype, is_quantized, keep_in_fp32_modules)
3250 mismatched_keys += _find_mismatched_keys(
3251 state_dict,
3252 model_state_dict,
(...)
3256 ignore_mismatched_sizes,
3257 )
3259 if low_cpu_mem_usage:
-> 3260 new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
3261 model_to_load,
3262 state_dict,
3263 loaded_keys,
3264 start_prefix,
3265 expected_keys,
3266 device_map=device_map,
3267 offload_folder=offload_folder,
3268 offload_index=offload_index,
3269 state_dict_folder=state_dict_folder,
3270 state_dict_index=state_dict_index,
3271 dtype=dtype,
3272 is_quantized=is_quantized,
3273 is_safetensors=is_safetensors,
3274 keep_in_fp32_modules=keep_in_fp32_modules,
3275 )
3276 error_msgs += new_error_msgs
3277 else:
File [~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/modeling_utils.py:725](https://file+.vscode-resource.vscode-cdn.net/home/shruthipriya/Documents/sp/sp-env/~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/modeling_utils.py:725), in _load_state_dict_into_meta_model(model, state_dict, loaded_state_dict_keys, start_prefix, expected_keys, device_map, offload_folder, offload_index, state_dict_folder, state_dict_index, dtype, is_quantized, is_safetensors, keep_in_fp32_modules)
722 fp16_statistics = None
724 if "SCB" not in param_name:
--> 725 set_module_quantized_tensor_to_device(
726 model, param_name, param_device, value=param, fp16_statistics=fp16_statistics
727 )
729 return error_msgs, offload_index, state_dict_index
File [~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/utils/bitsandbytes.py:77](https://file+.vscode-resource.vscode-cdn.net/home/shruthipriya/Documents/sp/sp-env/~/Documents/sp/sp-env/lib/python3.10/site-packages/transformers/utils/bitsandbytes.py:77), in set_module_quantized_tensor_to_device(module, tensor_name, device, value, fp16_statistics)
75 new_value = old_value.to(device)
76 elif isinstance(value, torch.Tensor):
---> 77 new_value = value.to("cpu")
78 if value.dtype == torch.int8:
79 is_8bit_serializable = version.parse(importlib.metadata.version("bitsandbytes")) > version.parse(
80 "0.37.2"
81 )
NotImplementedError: Cannot copy out of meta tensor; no data!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26120/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26119/events
|
https://github.com/huggingface/transformers/pull/26119
| 1,892,916,324 |
PR_kwDOCUB6oc5aJrIO
| 26,119 |
[Whisper] Use torch for stft if available
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26119). All of your documentation changes will be reflected on that endpoint.",
"cc @Vaibhavs10 you'll probably get a speed-up again using the torch backend for STFT if you install from `main`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Given the increasing usage of the Whisper model as the 'de-facto' speech recognition model in the library, I propose merging this PR to improve the pre-processing time for Whisper, and then generalising the changes for all other ASR models in the library in a follow-up PR (since this will be more involved and take some more time). WDYT @ArthurZucker @ylacombe?",
"I also ran some long-form transcription algorithms for Flax Whisper to demonstrate the WER equivalence with using the torch implementation, and the huge speed-up we get to the overall transcription time: https://wandb.ai/sanchit-gandhi/distil-whisper-long-form-test/reports/Whisper-FE-numpy-vs-torch--Vmlldzo2MzAwNzYy",
"Yep for sure! Already approved, make sure to rebase on main! ",
"I think it's a great idea given the visibility of the model and the speedup you found!"
] | 1,694 | 1,703 | 1,703 |
CONTRIBUTOR
| null |
# What does this PR do?
PoC PR to highlight what using the `torch` backend for STFT would look like. In a toy benchmark over 2500 samples, the torch backend is approx 4x faster than the native `numpy` one: https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_preprocess.ipynb
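For readers skimming the diff, here is a minimal sketch of the idea (the function name, frame size, hop length and windowing below are illustrative assumptions, not the actual feature-extractor code): the numpy framing + FFT loop collapses into a single `torch.stft` call.
```python
# Illustrative sketch, not the actual Whisper feature extractor: compute a power
# spectrogram with torch.stft instead of a numpy framing + FFT loop.
import numpy as np
import torch

def power_spectrogram(waveform: np.ndarray, n_fft: int = 400, hop_length: int = 160) -> np.ndarray:
    window = torch.hann_window(n_fft)
    stft = torch.stft(
        torch.from_numpy(waveform).float(),
        n_fft,
        hop_length=hop_length,
        window=window,
        return_complex=True,
    )
    return (stft.abs() ** 2).numpy()
```
The speed-up comes from the vectorised FFT path; as the PR title says, the numpy implementation stays as a fallback when torch is not available.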
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26119/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26119",
"html_url": "https://github.com/huggingface/transformers/pull/26119",
"diff_url": "https://github.com/huggingface/transformers/pull/26119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26119.patch",
"merged_at": 1703156645000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26118/events
|
https://github.com/huggingface/transformers/pull/26118
| 1,892,907,383 |
PR_kwDOCUB6oc5aJpLB
| 26,118 |
Text2text pipeline: don't parameterize from the config
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts To answer your top-level question: the main change in the PR is that we do not set `max_length` when the user does not pass it. The warning was raised when the user passed `max_new_tokens`, which would clash with the (removed) `max_length` set from the default values. If the user passed `max_length`, no warning was raised, as it should.\r\n\r\nNote: in `generate()`, because we have the old default value of `max_length=20`, we rely on detecting whether `max_length` was passed to throw relevant length-related warnings (like `max_length` and `max_new_tokens` being mutually exclusive). "
] | 1,694 | 1,694 | 1,694 |
MEMBER
| null |
# What does this PR do?
Fixes the warning raised in [this comment](https://github.com/huggingface/transformers/pull/23139#issuecomment-1674143603).
Similar to other recent PRs, it removes a few lines that attempt to parameterize `.generate()` with values from the model config when the user does not pass them explicitly. The `GenerationConfig` class already handles these default values, so the lines are redundant. Moreover, when an unused kwarg is explicitly passed to `.generate()`, warnings are triggered (which was the issue in the comment).
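A hedged illustration of the user-facing effect (the checkpoint, prompt and generation length are arbitrary choices, not taken from the original report):
```python
# After this change, passing only max_new_tokens through the pipeline should not
# clash with a max_length silently injected from the model config.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")
text = "The tower is 324 metres tall, about the same height as an 81-storey building, and the tallest structure in Paris."
print(summarizer(text, max_new_tokens=60))
```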
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26118/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26118",
"html_url": "https://github.com/huggingface/transformers/pull/26118",
"diff_url": "https://github.com/huggingface/transformers/pull/26118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26118.patch",
"merged_at": 1694540446000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26117/events
|
https://github.com/huggingface/transformers/pull/26117
| 1,892,901,396 |
PR_kwDOCUB6oc5aJn3Z
| 26,117 |
Fix AutoTokenizer docstring typo
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #26108
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26117/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26117",
"html_url": "https://github.com/huggingface/transformers/pull/26117",
"diff_url": "https://github.com/huggingface/transformers/pull/26117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26117.patch",
"merged_at": 1694599947000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26116/events
|
https://github.com/huggingface/transformers/issues/26116
| 1,892,847,371 |
I_kwDOCUB6oc5w0o8L
| 26,116 |
can't use pretrained weights when working with a tpu
|
{
"login": "level14taken",
"id": 60062160,
"node_id": "MDQ6VXNlcjYwMDYyMTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/60062160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/level14taken",
"html_url": "https://github.com/level14taken",
"followers_url": "https://api.github.com/users/level14taken/followers",
"following_url": "https://api.github.com/users/level14taken/following{/other_user}",
"gists_url": "https://api.github.com/users/level14taken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/level14taken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/level14taken/subscriptions",
"organizations_url": "https://api.github.com/users/level14taken/orgs",
"repos_url": "https://api.github.com/users/level14taken/repos",
"events_url": "https://api.github.com/users/level14taken/events{/privacy}",
"received_events_url": "https://api.github.com/users/level14taken/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hm, if it works on Colab but not Kaggle, the issue is probably not in `transformers` itself. It's either in the versions of TF / transformers you're using on both platforms, or the way you're connecting to TPU on Kaggle. Maybe try `tpu = tf.distribute.cluster_resolver.TPUClusterResolver() ` instead of using `tpu='local'`?",
"Hi @Rocketknight1,\r\nThanks for the reply.\r\n\r\nI've checked the versions(tranformers,tf) in colab and kaggle and they were exactly the same.\r\nI've also checked with ``` tpu = tf.distribute.cluster_resolver.TPUClusterResolver() ``` but same issue came up. \r\nBy the way kaggle uses TPU v3 and colab is using v2 \r\nDoes the error(registered transfer manager....) really mean something in huggingface or is it just pointing to a tensorflow op?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
transformers-4.32
python-3.8
### Who can help?
@Rocketknight1 @sanchit-gandhi @gante
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. The error happens only on the Kaggle TPU; it works fine on Colab.
2. The script just follows the TPU tutorial and is very basic.
```python
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
from transformers import TFWav2Vec2Model,AutoModel
with strategy.scope():
model = TFWav2Vec2Model.from_pretrained(model_checkpoint,from_pt=True) #####error
model.compile(optimizer="adam")
```
### Expected behavior
Full stack trace of the error.
```python
TFWav2Vec2Model has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tune this model, you need a GPU or a TPU
---------------------------------------------------------------------------
NotFoundError Traceback (most recent call last)
Cell In[7], line 3
1 from transformers import TFWav2Vec2Model,AutoModel
2 with strategy.scope():
----> 3 model = TFWav2Vec2Model.from_pretrained(model_checkpoint,from_pt=True)
4 # You can compile with jit_compile=True when debugging on CPU or GPU to check
5 # that XLA compilation works. Remember to take it out when actually running
6 # on TPU, though - XLA compilation will be handled for you when running with a
7 # TPUStrategy!
8 model.compile(optimizer="adamw")
File /usr/local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:2894, in TFPreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, *model_args, **kwargs)
2891 from .modeling_tf_pytorch_utils import load_pytorch_checkpoint_in_tf2_model
2893 # Load from a PyTorch checkpoint
-> 2894 return load_pytorch_checkpoint_in_tf2_model(
2895 model,
2896 resolved_archive_file,
2897 allow_missing_keys=True,
2898 output_loading_info=output_loading_info,
2899 _prefix=load_weight_prefix,
2900 tf_to_pt_weight_rename=tf_to_pt_weight_rename,
2901 )
2903 # we might need to extend the variable scope for composite models
2904 if load_weight_prefix is not None:
File /usr/local/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py:189, in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys, output_loading_info, _prefix, tf_to_pt_weight_rename)
185 pt_state_dict.update(torch.load(pt_path, map_location="cpu"))
187 logger.info(f"PyTorch checkpoint contains {sum(t.numel() for t in pt_state_dict.values()):,} parameters")
--> 189 return load_pytorch_weights_in_tf2_model(
190 tf_model,
191 pt_state_dict,
192 tf_inputs=tf_inputs,
193 allow_missing_keys=allow_missing_keys,
194 output_loading_info=output_loading_info,
195 _prefix=_prefix,
196 tf_to_pt_weight_rename=tf_to_pt_weight_rename,
197 )
File /usr/local/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py:230, in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys, output_loading_info, _prefix, tf_to_pt_weight_rename)
227 raise
229 pt_state_dict = {k: v.numpy() for k, v in pt_state_dict.items()}
--> 230 return load_pytorch_state_dict_in_tf2_model(
231 tf_model,
232 pt_state_dict,
233 tf_inputs=tf_inputs,
234 allow_missing_keys=allow_missing_keys,
235 output_loading_info=output_loading_info,
236 _prefix=_prefix,
237 tf_to_pt_weight_rename=tf_to_pt_weight_rename,
238 )
File /usr/local/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py:263, in load_pytorch_state_dict_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys, output_loading_info, _prefix, tf_to_pt_weight_rename, ignore_mismatched_sizes)
261 if tf_inputs:
262 with tf.name_scope(_prefix):
--> 263 tf_model(tf_inputs, training=False) # Make sure model is built
264 # Convert old format to new format if needed from a PyTorch state_dict
265 tf_keys_to_pt_keys = {}
File /usr/local/lib/python3.8/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File /usr/local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:426, in unpack_inputs.<locals>.run_call_with_unpacked_inputs(self, *args, **kwargs)
423 config = self.config
425 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 426 return func(self, **unpacked_inputs)
File /usr/local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py:1407, in TFWav2Vec2Model.call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training)
1404 output_attentions = output_attentions if output_attentions else self.config.output_attentions
1405 return_dict = return_dict if return_dict else self.config.return_dict
-> 1407 outputs = self.wav2vec2(
1408 input_values=input_values,
1409 attention_mask=attention_mask,
1410 token_type_ids=token_type_ids,
1411 position_ids=position_ids,
1412 head_mask=head_mask,
1413 inputs_embeds=inputs_embeds,
1414 output_attentions=output_attentions,
1415 output_hidden_states=output_hidden_states,
1416 return_dict=return_dict,
1417 training=training,
1418 )
1420 return outputs
File /usr/local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:426, in unpack_inputs.<locals>.run_call_with_unpacked_inputs(self, *args, **kwargs)
423 config = self.config
425 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 426 return func(self, **unpacked_inputs)
File /usr/local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py:1158, in TFWav2Vec2MainLayer.call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1155 if training:
1156 hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices)
-> 1158 encoder_outputs = self.encoder(
1159 hidden_states,
1160 attention_mask=attention_mask,
1161 output_attentions=output_attentions,
1162 output_hidden_states=output_hidden_states,
1163 return_dict=return_dict,
1164 training=training,
1165 )
1166 hidden_states = encoder_outputs[0]
1168 if not return_dict:
File /usr/local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py:1007, in TFWav2Vec2EncoderStableLayerNorm.call(self, hidden_states, attention_mask, output_attentions, output_hidden_states, return_dict, training)
1004 else:
1005 attention_mask = None
-> 1007 position_embeddings = self.pos_conv_embed(hidden_states)
1008 hidden_states = hidden_states + position_embeddings
1009 hidden_states = self.dropout(hidden_states, training=training)
File /usr/local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py:568, in TFWav2Vec2PositionalConvEmbedding.call(self, hidden_states)
567 def call(self, hidden_states: tf.Tensor) -> tf.Tensor:
--> 568 hidden_states = self.conv(hidden_states)
569 hidden_states = self.padding(hidden_states)
570 hidden_states = self.activation(hidden_states)
File /usr/local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py:480, in TFWav2Vec2WeightNormConv1D.call(self, inputs)
477 self._normalize_kernel()
479 padded_inputs = tf.pad(inputs, ((0, 0), (self.explicit_padding, self.explicit_padding), (0, 0)))
--> 480 output = super().call(padded_inputs)
482 return output
NotFoundError: Exception encountered when calling layer 'conv' (type TFWav2Vec2WeightNormConv1D).
could not find registered transfer manager for platform Host -- check target linkage [Op:__inference__jit_compiled_convolution_op_1397]
Call arguments received by layer 'conv' (type TFWav2Vec2WeightNormConv1D):
β’ inputs=tf.Tensor(shape=(1, 1, 1024), dtype=float32)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26116/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/26116/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26115/events
|
https://github.com/huggingface/transformers/pull/26115
| 1,892,783,269 |
PR_kwDOCUB6oc5aJOG3
| 26,115 |
[VITS] Fix speaker_embed device mismatch
|
{
"login": "fakhirali",
"id": 32309516,
"node_id": "MDQ6VXNlcjMyMzA5NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/32309516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakhirali",
"html_url": "https://github.com/fakhirali",
"followers_url": "https://api.github.com/users/fakhirali/followers",
"following_url": "https://api.github.com/users/fakhirali/following{/other_user}",
"gists_url": "https://api.github.com/users/fakhirali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakhirali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakhirali/subscriptions",
"organizations_url": "https://api.github.com/users/fakhirali/orgs",
"repos_url": "https://api.github.com/users/fakhirali/repos",
"events_url": "https://api.github.com/users/fakhirali/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakhirali/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26115). All of your documentation changes will be reflected on that endpoint.",
"Requesting final review from @ArthurZucker. I can merge this once Arthur gives the thumbs-up and the final suggestion is added!"
] | 1,694 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
- Pass the `device` argument when creating the `speaker_id` tensor
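A rough sketch of the change (paraphrased from memory of `modeling_vits.py`, so the helper below is illustrative rather than the exact diff):
```python
# Illustrative only: create the speaker-id tensor on the model's device so the
# speaker-embedding lookup does not mix CPU and CUDA tensors.
import torch

def speaker_embeddings_for(model, speaker_id: int) -> torch.Tensor:
    ids = torch.full(size=(1,), fill_value=speaker_id, device=model.device)
    return model.embed_speaker(ids)
```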
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26055
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #26055
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi @hollance
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26115/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26115/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26115",
"html_url": "https://github.com/huggingface/transformers/pull/26115",
"diff_url": "https://github.com/huggingface/transformers/pull/26115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26115.patch",
"merged_at": 1695891397000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26114/events
|
https://github.com/huggingface/transformers/pull/26114
| 1,892,658,836 |
PR_kwDOCUB6oc5aIyyq
| 26,114 |
Update spectrogram and waveform model mapping for TTS/A pipeline
|
{
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sanchit-gandhi I just tested, and I essentially get no output!\r\nDo you think that's good?",
"Requesting final review from Arthur! (who I believe is now on watch π)"
] | 1,694 | 1,694 | 1,694 |
MEMBER
| null |
# What does this PR do?
Updates the mapping names for the models used for TTS & TTA pipelines.
Fixes # (issue)
`MODEL_FOR_TEXT_TO_SPECTROGRAM_NAMES` -> `MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES`
`MODEL_FOR_TEXT_TO_WAVEFORM_NAMES` -> `MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING_NAMES`
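As a hedged sanity check (the checkpoint is just an example, not part of this PR), these mappings are what let the pipeline resolve a model class:
```python
# Illustrative check: with the corrected mapping names, the TTS/TTA pipeline can
# look up the model class and synthesize audio.
from transformers import pipeline

synthesiser = pipeline("text-to-speech", model="suno/bark-small")
speech = synthesiser("Hello, my dog is cooler than you!")
print(speech["sampling_rate"], len(speech["audio"]))
```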
## Who can review?
@ylacombe @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26114/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26114",
"html_url": "https://github.com/huggingface/transformers/pull/26114",
"diff_url": "https://github.com/huggingface/transformers/pull/26114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26114.patch",
"merged_at": 1694610311000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26113
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26113/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26113/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26113/events
|
https://github.com/huggingface/transformers/issues/26113
| 1,892,640,552 |
I_kwDOCUB6oc5wz2co
| 26,113 |
NLLB-CLIP model implementation
|
{
"login": "visheratin",
"id": 3251552,
"node_id": "MDQ6VXNlcjMyNTE1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3251552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/visheratin",
"html_url": "https://github.com/visheratin",
"followers_url": "https://api.github.com/users/visheratin/followers",
"following_url": "https://api.github.com/users/visheratin/following{/other_user}",
"gists_url": "https://api.github.com/users/visheratin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/visheratin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/visheratin/subscriptions",
"organizations_url": "https://api.github.com/users/visheratin/orgs",
"repos_url": "https://api.github.com/users/visheratin/repos",
"events_url": "https://api.github.com/users/visheratin/events{/privacy}",
"received_events_url": "https://api.github.com/users/visheratin/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 5724035499,
"node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub",
"name": "Model on the Hub",
"color": "9CA0E9",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @visheratin, awesome work! \r\n\r\nWe have recently been trying to push forΒ `model on the hub`Β and have as much support as we can there. It will also be easier to integrate it. Here is aΒ [tutorial](https://huggingface.co/docs/transformers/custom_models)Β if that sound good to you!",
"Thank you, @amyeroberts!\r\n\r\n[Here](https://github.com/visheratin/nllb-clip/tree/main/hf_model) is my current implementation that works with the hub. And I just uploaded the [base](https://huggingface.co/visheratin/nllb-clip-base) and [large](https://huggingface.co/visheratin/nllb-clip-large) variants of NLLB-CLIP to the hub.\r\n\r\nThe test code that I used as a sanity check is below:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, CLIPProcessor\r\nimport requests\r\nfrom PIL import Image\r\n\r\nfrom modeling_nllb_clip import NLLBCLIPModel # for now, this is a local file from the repo\r\n\r\nprocessor = CLIPProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nprocessor = processor.image_processor\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n \"facebook/nllb-200-distilled-600M\"\r\n)\r\nimage_path = \"https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg\"\r\nimage = Image.open(requests.get(image_path, stream=True).raw)\r\nimage_inputs = processor(images=image, return_tensors=\"pt\")\r\ntext_inputs = tokenizer(\r\n [\"cat\", \"dog\", \"butterfly\"],\r\n padding=\"longest\",\r\n return_tensors=\"pt\",\r\n)\r\n\r\nhf_model = NLLBCLIPModel.from_pretrained(\"visheratin/nllb-clip-base\")\r\n\r\noutputs = hf_model(input_ids = text_inputs.input_ids, attention_mask = text_inputs.attention_mask, pixel_values=image_inputs.pixel_values)\r\n\r\n```\r\n\r\nLet me know if I can do anything else!",
"@visheratin Awesome! The only thing to do would be adding the modeling files directly onto the hub alongside the checkpoint e.g. [like here for phi-1_5](https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py)",
"Added modeling files and a code sample to the README file. Over the weekend, I will add a proper description to the model card."
] | 1,694 | 1,695 | null |
NONE
| null |
### Model description
Hi!
I recently trained a CLIP model with an NLLB text encoder to extend CLIP capabilities to 201 languages of the Flores-200 dataset. As far as the implementation goes, it is HF CLIP implementation with an M2M100 encoder from NLLB models. I'm wondering if you'd be interested in having NLLB-CLIP in the library? If yes, I can bring my implementation in accordance with other CLIP models and create a PR.
The link to the paper with description and results - https://arxiv.org/abs/2309.01859
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26113/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26113/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26112/events
|
https://github.com/huggingface/transformers/issues/26112
| 1,892,617,810 |
I_kwDOCUB6oc5wzw5S
| 26,112 |
`model.get_input_embeddings()` returns an embedding size of 0 with DS-ZeRO3
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, yes, that is the issue. Does the below code changes solve the issue?\r\n\r\n```\r\nembeddings = model.get_input_embeddings()\r\nwith deepspeed.zero.GatheredParameters(embeddings.weight, modifier_rank=None):\r\n embedding_size = embeddings.weight.shape[0]\r\n```\r\n\r\n",
"That works, thanks!\r\nShall I open a PR to add this code snippet if DS ZeRO-3 is enabled in the summarization example?",
"Does something like `embedding_size = model.get_input_embeddings().num_embeddings` also work? Taken from #26264",
"Hmm where would that `num_embeddings` attribute come from? Would it be computed from the embedding shapes at some point?",
"<img width=\"525\" alt=\"Screenshot 2023-10-18 at 3 01 56β―PM\" src=\"https://github.com/huggingface/transformers/assets/13534540/4f89d92f-cf33-461d-9fd7-803824f91677\">\r\n\r\n@ArthurZucker, `num_embeddings` only keeps the number when creating the embedding object but it would be incorrect if someone wants the vocab size post reshaping the embedding layers to add the special tokens. Example of this is given above ",
"Arf did not know this, thanks π "
] | 1,694 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.15.0-1015-aws-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.17.1
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.12.0a0+bd13bc6 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: ?
- Using distributed or parallel set-up in script?: DeepSpeed ZeRO-3
### Who can help?
@pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the command suggested in the [summarization example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) with DeepSpeed:
```
deepspeed --num_gpus 8 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--deepspeed ds_config.json
```
with `ds_config.json` being:
```json
{
"fp16": {
"enabled": true
},
"optimizer": {
"type": "adam",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto",
"torch_adam": "torch_impl",
"adam_w_mode": false
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1666777,
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": false
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
### Expected behavior
The embedding size [here](https://github.com/huggingface/transformers/blob/6acc27eea853885270dba5313181443d43e31f2c/examples/pytorch/summarization/run_summarization.py#L466) should not be 0. I guess this happens because ZeRO-3 partitions the model parameters across devices; it doesn't happen with ZeRO-2.
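A hedged sketch of the kind of workaround this points at (mirroring the snippet in the comments above; the checkpoint is arbitrary, and the gather is a no-op when the weight is not partitioned):
```python
# Gather the ZeRO-3-partitioned embedding weight before reading its first dimension.
import deepspeed
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
embeddings = model.get_input_embeddings()
with deepspeed.zero.GatheredParameters(embeddings.weight, modifier_rank=None):
    embedding_size = embeddings.weight.shape[0]
print(embedding_size)
```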
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26112/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26111/events
|
https://github.com/huggingface/transformers/pull/26111
| 1,892,445,242 |
PR_kwDOCUB6oc5aIENA
| 26,111 |
Step0
|
{
"login": "yunbolyu",
"id": 101203664,
"node_id": "U_kgDOBgg-0A",
"avatar_url": "https://avatars.githubusercontent.com/u/101203664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yunbolyu",
"html_url": "https://github.com/yunbolyu",
"followers_url": "https://api.github.com/users/yunbolyu/followers",
"following_url": "https://api.github.com/users/yunbolyu/following{/other_user}",
"gists_url": "https://api.github.com/users/yunbolyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yunbolyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yunbolyu/subscriptions",
"organizations_url": "https://api.github.com/users/yunbolyu/orgs",
"repos_url": "https://api.github.com/users/yunbolyu/repos",
"events_url": "https://api.github.com/users/yunbolyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yunbolyu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,694 | 1,694 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26111/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26111",
"html_url": "https://github.com/huggingface/transformers/pull/26111",
"diff_url": "https://github.com/huggingface/transformers/pull/26111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26111.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26110/events
|
https://github.com/huggingface/transformers/issues/26110
| 1,892,366,539 |
I_kwDOCUB6oc5wyzjL
| 26,110 |
[New model] Phi-1 and Phi-1_5
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hi @xenova , are you contributing this model(s)? If not, can I please take this issue?",
"@susnato I just made the request, so feel free to take it! π€"
] | 1,694 | 1,699 | 1,699 |
CONTRIBUTOR
| null |
### Model description
From their respective model cards,
phi-1:
> The language model phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
phi-1_5:
> The language model phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
>
> We did not fine-tune phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
Benchmarks for the 1.5 model (from their [paper](https://arxiv.org/pdf/2309.05463.pdf)) indicate better (or on-par) performance than the LLaMA and Llama 2 models, especially on multi-step reasoning.

### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
### Phi-1
Paper: https://arxiv.org/abs/2306.11644
Model + code: https://huggingface.co/microsoft/phi-1
### Phi-1_5
Paper: https://arxiv.org/abs/2309.05463
Model + code: https://huggingface.co/microsoft/phi-1_5
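Until the architecture is ported, the checkpoints can already be run through the custom code hosted on the Hub; a minimal sketch (repo id taken from the links above; `trust_remote_code` is assumed to be required until native support lands):
```
# Minimal sketch of loading the Hub checkpoint via its custom modeling code
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```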
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26110/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 7,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26110/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26109/events
|
https://github.com/huggingface/transformers/issues/26109
| 1,892,334,235 |
I_kwDOCUB6oc5wyrqb
| 26,109 |
A potential bug in convert_llama_weights_to_hf.py
|
{
"login": "lumosity4tpj",
"id": 46916880,
"node_id": "MDQ6VXNlcjQ2OTE2ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/46916880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lumosity4tpj",
"html_url": "https://github.com/lumosity4tpj",
"followers_url": "https://api.github.com/users/lumosity4tpj/followers",
"following_url": "https://api.github.com/users/lumosity4tpj/following{/other_user}",
"gists_url": "https://api.github.com/users/lumosity4tpj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lumosity4tpj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lumosity4tpj/subscriptions",
"organizations_url": "https://api.github.com/users/lumosity4tpj/orgs",
"repos_url": "https://api.github.com/users/lumosity4tpj/repos",
"events_url": "https://api.github.com/users/lumosity4tpj/events{/privacy}",
"received_events_url": "https://api.github.com/users/lumosity4tpj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @lumosity4tpj \r\nThanks a lot for your message. Your explanation seems to makes sense, note we used this script internally to convert Llama weights into HF format and it worked, we probably did not supported all possible edge cases but the conversion script would work for the models that are currently on the Hub. Maybe @ArthurZucker has more context for this specific issue",
"Feel free to open a PR with a fix if you want (mathematically yes, we would have a fail) but no checkpoints were released with `num_key_value_heads != 8`. This is also related to performance optimisation, as you would have more copies to make locally if `num_key_value_heads != num_key_value_groups`.\r\n\r\n> just correct\r\n\r\nit's already correct with released checkpoints",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
I did not run the hf code, just found a potential bug while referring to the code
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I found a potential bug in [convert_llama_weights_to_hf_L114](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py#L114): `num_key_value_heads` is used as if it were `num_key_value_groups`. The Llama 2 conversion only succeeds because `num_key_value_heads = num_key_value_groups = 8` for the released checkpoints.
A simple example: with `num_key_value_heads=32, num_heads=64, num_shards=8`, we get `n_heads_per_shard = n_heads // num_shards = 8` and `num_local_key_value_heads = n_heads_per_shard // num_key_value_heads = 0`.
As I understand it, `num_key_value_heads` should refer to the total number of KV heads; see [this](https://github.com/facebookresearch/llama/blob/main/llama/model.py#L186).
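To make the arithmetic concrete, here is a small standalone illustration (plain Python, not the actual conversion script):
```
# Illustration of the sharding arithmetic described above (values from the example)
num_heads = 64
num_key_value_heads = 32   # total number of KV heads across all shards
num_shards = 8

n_heads_per_shard = num_heads // num_shards                        # 8
buggy_local_kv_heads = n_heads_per_shard // num_key_value_heads    # 8 // 32 == 0 -> breaks the reshape
# One way the per-shard count could be derived instead (as in Meta's reference code):
local_kv_heads = num_key_value_heads // num_shards                 # 32 // 8 == 4
```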
### Expected behavior
just correct
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26109/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26107/events
|
https://github.com/huggingface/transformers/issues/26107
| 1,892,146,620 |
I_kwDOCUB6oc5wx928
| 26,107 |
Too strange translation result in NLLB-200-3.3B
|
{
"login": "molokanov50",
"id": 85157008,
"node_id": "MDQ6VXNlcjg1MTU3MDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/85157008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molokanov50",
"html_url": "https://github.com/molokanov50",
"followers_url": "https://api.github.com/users/molokanov50/followers",
"following_url": "https://api.github.com/users/molokanov50/following{/other_user}",
"gists_url": "https://api.github.com/users/molokanov50/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molokanov50/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molokanov50/subscriptions",
"organizations_url": "https://api.github.com/users/molokanov50/orgs",
"repos_url": "https://api.github.com/users/molokanov50/repos",
"events_url": "https://api.github.com/users/molokanov50/events{/privacy}",
"received_events_url": "https://api.github.com/users/molokanov50/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey π€ thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nYou should ask this kind of question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nOtherwise I think it's expected as the distilled model was trained for longer. If you think the issue is / might be with the tokenizer you could try just using the distilled tokenizer here, but I don't think you'll get much better results.\r\n\r\nThe pair, russian / spanish might just be undertained compared to other pair ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In `NLLB-200-3.3B`, a strange translation result occurs for certain source texts and certain language pairs. It looks like a trivial bug.
For example:
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
modelPath = "facebook/nllb-200-3.3B"
tokenizer = AutoTokenizer.from_pretrained(modelPath, src_lang="rus_Cyrl")
model = AutoModelForSeq2SeqLM.from_pretrained(modelPath)
article = "Приказ Министерства финансов Российской Федерации от 10.08.2023 № 129н \"О признании утратившими силу приказов Министерства финансов Российской Федерации"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["spa_Latn"]
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0])
```
The translation is as follows:
`ObligaciΓ³n de la ComisiΓ³n de la UniΓ³n Europea de establecer normas de aplicaciΓ³n para la aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de las normas de aplicaciΓ³n de aplicaciΓ³n de las normas de aplicaciΓ³n de aplicaciΓ³n de las normas de aplicaciΓ³n de`
which is absolutely incorrect and ugly-looking due to multiple repetitions.
This is not observed if I translate to `eng_Latn` or other languages; for those target languages, the translations are correct.
Meanwhile, if I use a simpler NLLB-200 model, e.g., if I change my `modelPath` to `facebook/nllb-200-distilled-600M`, then I get the following translation:
`El 10 de agosto de 2023 n 129n " Sobre la pΓ©rdida de la fuerza de los pedidos del Ministerio de Finanzas de la FederaciΓ³n de Rusia "`
which is appropriate in meaning.
Also, if I leave my `modelPath` unmodified (i.e., `facebook/nllb-200-3.3B`) and slightly change my source text (i.e., `article`), for example by deleting the date or registration number, the translation becomes correct.
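For reference, a possible mitigation sketch (not a fix for the root cause; the parameter values below are illustrative, not tuned):
```
# Damping the repetition loop with generation-time constraints (illustrative values)
translated_tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["spa_Latn"],
    no_repeat_ngram_size=3,
    repetition_penalty=1.2,
    max_new_tokens=128,
)
```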
### Expected behavior
At least, a translation that captures the meaning of the source text.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26107/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26106/events
|
https://github.com/huggingface/transformers/pull/26106
| 1,892,001,974 |
PR_kwDOCUB6oc5aGiWc
| 26,106 |
[`core`] Import tensorflow inside relevant methods in `trainer_utils`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Imports tensorflow inside relevant methods in `trainer_utils`.
When DeepSpeed and Accelerate are available in your env, importing transformers ends up going all the way down to https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_utils.py#L49, leading to considerable slowdowns in import time.

This PR imports tensorflow inside the relevant methods so that it will never get globally imported by transformers anymore.
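The pattern is roughly the following (an illustrative helper, not the exact diff):
```
# Lazy-import sketch: TensorFlow is imported inside the function instead of at
# module import time, so `import transformers` no longer pays for it.
def tf_set_seed(seed: int) -> None:
    import tensorflow as tf  # local import, only evaluated when the helper is called

    tf.random.set_seed(seed)
```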
If this patch gets applied on its own, there is no speedup yet; however, this can be fixed by lazy loading TF in accelerate itself:

cc @amyeroberts @patrickvonplaten @pacman100 @muellerzr @SunMarc
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26106/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26106",
"html_url": "https://github.com/huggingface/transformers/pull/26106",
"diff_url": "https://github.com/huggingface/transformers/pull/26106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26106.patch",
"merged_at": 1694512147000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26105/events
|
https://github.com/huggingface/transformers/pull/26105
| 1,891,923,480 |
PR_kwDOCUB6oc5aGRUR
| 26,105 |
Place AMD GPU tests in a separate workflow (correct branch)
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM waiting from transformers folks feedback βπ»",
"cc @ydshieh "
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
As suggested https://github.com/huggingface/transformers/pull/26007#discussion_r1318684244 and discussed offline cc @mfuntowicz
Note: this PR merges into a branch, not main.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26105/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26105",
"html_url": "https://github.com/huggingface/transformers/pull/26105",
"diff_url": "https://github.com/huggingface/transformers/pull/26105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26105.patch",
"merged_at": 1694530488000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26104/events
|
https://github.com/huggingface/transformers/pull/26104
| 1,891,921,620 |
PR_kwDOCUB6oc5aGQ6a
| 26,104 |
Place AMD GPU tests in a separate workflow
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
As suggested https://github.com/huggingface/transformers/pull/26007#discussion_r1318684244 and discussed offline cc @mfuntowicz
Note: this PR merges into a branch, not main.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26104/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26104",
"html_url": "https://github.com/huggingface/transformers/pull/26104",
"diff_url": "https://github.com/huggingface/transformers/pull/26104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26104.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26103/events
|
https://github.com/huggingface/transformers/issues/26103
| 1,891,841,061 |
I_kwDOCUB6oc5wwzQl
| 26,103 |
Add gradient_checkpointing_segment_size
|
{
"login": "zfang",
"id": 780646,
"node_id": "MDQ6VXNlcjc4MDY0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/780646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zfang",
"html_url": "https://github.com/zfang",
"followers_url": "https://api.github.com/users/zfang/followers",
"following_url": "https://api.github.com/users/zfang/following{/other_user}",
"gists_url": "https://api.github.com/users/zfang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zfang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zfang/subscriptions",
"organizations_url": "https://api.github.com/users/zfang/orgs",
"repos_url": "https://api.github.com/users/zfang/repos",
"events_url": "https://api.github.com/users/zfang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zfang/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"Hi @zfang, thanks for opening this feature request and for the detailed explanation and diff! \r\n\r\nConfiguring the gradient checkpointing is indeed an interesting feature. In general, we try to keep our forward passes as simple as possible. As gradient checkpointing is used by most of our models, it is a change which would have to be applied across the library, and so requires careful consideration of how it's configured e.g. would this lead to many additional future arguments? And backwards compatibility. \r\n\r\nWhat I suggest is leaving this issue open. If we see more interest in the community (measured by π on this issue) in adding this, then we can consider its integration. \r\n\r\ncc @ArthurZucker ",
"Just wanted to indicate interest in this issue. There is a big gap in speed & VRAM between gradient checkpointing + naive MP vs more sophisticated methods such as deepspeed, FSDP, etc., for training very large models (bigger than can fit in a single GPU).\r\n\r\nBeing able to control the checkpointing strategy will let us incrementally take advantage of any additional VRAM, and increase speed with naive MP + checkpointing, without having to resort to those other methods (which typically need a lot more VRAM to pay off and don't support the same features).\r\n\r\nEDIT: Looking closer at the diff I'm not sure the code will work as intended (except for the case of 2 segments maybe). I think we'd need to wrap a chunk of decoder layers into a single forward and pass that to the checkpoint function (in the new transformers implementation). In the above diff, instead, the intermediate layers inside each segment (when `idx % self.gradient_checkpointing_segment_size != 0`) will be run with the usual `forward()`, which will not be with `no_grad()` or using the correct non-reentrant logic.\r\n\r\nEDIT2: Given the description of `gradient_checkpointing_segment_size` it _does_ seem to do what the author intended (checkpoints are enabled every these many layers, and the rest are processed normally without checkpointing). However, this is not the same meaning of a segment as defined in `checkpoint_sequential` which moves in the opposite direction (a segment is a set of layers where only the input to the first layer in that set is saved/checkpointed, and everything in the interior of the set is processed egs., with `no_grad()` in reentrant mode)."
] | 1,694 | 1,708 | null |
NONE
| null |
### Feature request
Currently when we enable gradient checkpointing, e.g. in `LlamaModel`, we call `torch.utils.checkpoint.checkpoint` on every `LlamaDecoderLayer`. As per [Training Deep Nets with Sublinear Memory Cost](https://arxiv.org/pdf/1604.06174.pdf), https://github.com/cybertronai/gradient-checkpointing and hinted by [torch.utils.checkpoint.checkpoint_sequential](https://pytorch.org/docs/stable/checkpoint.html#torch.utils.checkpoint.checkpoint_sequential), we can strike a balance between memory and compute by supporting `gradient_checkpointing_segment_size`.
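For reference, PyTorch's built-in helper already exposes this memory/compute knob, although its `segments` argument counts the number of segments rather than the layers per segment; a minimal, self-contained sketch (sizes are made up):
```
import torch
from torch.utils.checkpoint import checkpoint_sequential

layers = torch.nn.Sequential(*[torch.nn.Linear(128, 128) for _ in range(32)])
x = torch.randn(4, 128, requires_grad=True)

# 8 segments over 32 layers ~= a segment size of 4: only the segment inputs are
# stored, and everything inside a segment is recomputed during backward.
out = checkpoint_sequential(layers, 8, x)
out.sum().backward()
```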
### Motivation
Gradient checkpointing makes training slow and people enable it mainly to avoid VRAM OOM. It would be great if we can leverage `gradient_checkpointing_segment_size` and make gradient checkpointing configurable.
### Your contribution
Here is a diff to start off with:
```diff
diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
index bbf229b58..685400ffd 100644
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -1745,6 +1745,16 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
"""
return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules())
+ def set_gradient_checkpointing_segment_size(self, value: int):
+ """
+ Set gradient checkpointing segment size for the current model.
+ """
+ if not self.supports_gradient_checkpointing:
+ raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.")
+ if not hasattr(self.model, "_set_gradient_checkpointing_segment_size"):
+ raise ValueError(f"{self.__class__.__name__} does not support configuring gradient checkpointing segment size.")
+ self.apply(partial(self._set_gradient_checkpointing_segment_size, value=value))
+
def save_pretrained(
self,
save_directory: Union[str, os.PathLike],
diff --git a/src/transformers/models/llama/modeling_llama.py b/src/transformers/models/llama/modeling_llama.py
index 5e7a879c0..682e49b21 100644
--- a/src/transformers/models/llama/modeling_llama.py
+++ b/src/transformers/models/llama/modeling_llama.py
@@ -491,6 +491,10 @@ class LlamaPreTrainedModel(PreTrainedModel):
if isinstance(module, LlamaModel):
module.gradient_checkpointing = value
+ def _set_gradient_checkpointing_segment_size(self, module, value=1):
+ if isinstance(module, LlamaModel):
+ module.gradient_checkpointing_segment_size = value
+
LLAMA_INPUTS_DOCSTRING = r"""
Args:
@@ -578,6 +582,7 @@ class LlamaModel(LlamaPreTrainedModel):
self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.gradient_checkpointing = False
+ self.gradient_checkpointing_segment_size = 1
# Initialize weights and apply final processing
self.post_init()
@@ -689,7 +694,7 @@ class LlamaModel(LlamaPreTrainedModel):
past_key_value = past_key_values[idx] if past_key_values is not None else None
- if self.gradient_checkpointing and self.training:
+ if self.gradient_checkpointing and self.training and (idx % self.gradient_checkpointing_segment_size == 0):
def create_custom_forward(module):
def custom_forward(*inputs):
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 2b8cb013f..f7d5e87ea 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1687,6 +1687,9 @@ class Trainer:
if args.gradient_checkpointing:
self.model.gradient_checkpointing_enable()
+ if args.gradient_checkpointing_segment_size > 1:
+ self.model.set_gradient_checkpointing_segment_size(args.gradient_checkpointing_segment_size)
+
model = self._wrap_model(self.model_wrapped)
if (is_sagemaker_mp_enabled() or self.is_fsdp_enabled) and resume_from_checkpoint is not None:
diff --git a/src/transformers/training_args.py b/src/transformers/training_args.py
index 2f72622d9..ecffc5660 100644
--- a/src/transformers/training_args.py
+++ b/src/transformers/training_args.py
@@ -586,6 +586,8 @@ class TrainingArguments:
Unless this is `True`, the `Trainer` will skip pushing a checkpoint when the previous push is not finished.
gradient_checkpointing (`bool`, *optional*, defaults to `False`):
If True, use gradient checkpointing to save memory at the expense of slower backward pass.
+ gradient_checkpointing_segment_size (`int`, *optional*, defaults to `1`):
+ If gradient_checkpointing is True, use gradient checkpointing for every segment size
include_inputs_for_metrics (`bool`, *optional*, defaults to `False`):
Whether or not the inputs will be passed to the `compute_metrics` function. This is intended for metrics
that need inputs, predictions and references for scoring calculation in Metric class.
@@ -1144,6 +1146,12 @@ class TrainingArguments:
"help": "If True, use gradient checkpointing to save memory at the expense of slower backward pass."
},
)
+ gradient_checkpointing_segment_size: int = field(
+ default=1,
+ metadata={
+ "help": "If gradient_checkpointing is True, use gradient checkpointing for every segment size."
+ },
+ )
include_inputs_for_metrics: bool = field(
default=False, metadata={"help": "Whether or not the inputs will be passed to the `compute_metrics` function."}
)
@@ -2137,6 +2145,7 @@ class TrainingArguments:
gradient_accumulation_steps: int = 1,
seed: int = 42,
gradient_checkpointing: bool = False,
+ gradient_checkpointing_segment_size: int = 1,
):
"""
A method that regroups all basic arguments linked to the training.
@@ -2179,6 +2188,8 @@ class TrainingArguments:
parameters.
gradient_checkpointing (`bool`, *optional*, defaults to `False`):
If True, use gradient checkpointing to save memory at the expense of slower backward pass.
+ gradient_checkpointing_segment_size (`int`, *optional*, defaults to `1`):
+ If gradient_checkpointing is True, use gradient checkpointing for every segment size
Example:
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26103/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26103/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26102/events
|
https://github.com/huggingface/transformers/pull/26102
| 1,891,824,327 |
PR_kwDOCUB6oc5aF772
| 26,102 |
[FIX] resize_token_embeddings
|
{
"login": "passaglia",
"id": 8333102,
"node_id": "MDQ6VXNlcjgzMzMxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8333102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/passaglia",
"html_url": "https://github.com/passaglia",
"followers_url": "https://api.github.com/users/passaglia/followers",
"following_url": "https://api.github.com/users/passaglia/following{/other_user}",
"gists_url": "https://api.github.com/users/passaglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/passaglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/passaglia/subscriptions",
"organizations_url": "https://api.github.com/users/passaglia/orgs",
"repos_url": "https://api.github.com/users/passaglia/repos",
"events_url": "https://api.github.com/users/passaglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/passaglia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26102). All of your documentation changes will be reflected on that endpoint.",
"I'm not so familiar with the transformers repo -- where should a test for this code go?",
"SHould be part of [this tests](https://github.com/huggingface/transformers/blob/main/tests/test_modeling_common.py#L1364)",
"@ArthurZucker please confirm this test works, I couldn't run it myself since `pytest tests/test_modeling_common.py` returned 0 collected tests.",
"Sure, I'll test this, can you run ` make style` as well? To have green cis",
"@ArthurZucker Ready for merge"
] | 1,694 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
When `resize_token_embeddings(new_num_tokens, pad_to_multiple)` is called with `new_num_tokens` a multiple of `pad_to_multiple`, the model should be resized to `new_num_tokens`. Due to a math error, it is instead resized to `new_num_tokens+pad_to_multiple`. This PR fixes that bug.
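For reference, the intended rounding can be written as follows (a sketch of the math, not the library code):
```
def padded_size(new_num_tokens: int, pad_to_multiple: int) -> int:
    # Round up to the nearest multiple; a value that is already a multiple stays unchanged
    return ((new_num_tokens + pad_to_multiple - 1) // pad_to_multiple) * pad_to_multiple

assert padded_size(32000, 64) == 32000   # previously came out as 32064
assert padded_size(32001, 64) == 32064
```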
@ArthurZucker https://github.com/huggingface/transformers/pull/25088
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26102/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26102",
"html_url": "https://github.com/huggingface/transformers/pull/26102",
"diff_url": "https://github.com/huggingface/transformers/pull/26102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26102.patch",
"merged_at": 1695152681000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26101
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26101/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26101/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26101/events
|
https://github.com/huggingface/transformers/issues/26101
| 1,891,552,474 |
I_kwDOCUB6oc5wvsza
| 26,101 |
Adding new non-latin tokens to the T5 tokenizer creates unnecessary whitespaces
|
{
"login": "sl5035",
"id": 86555996,
"node_id": "MDQ6VXNlcjg2NTU1OTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/86555996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sl5035",
"html_url": "https://github.com/sl5035",
"followers_url": "https://api.github.com/users/sl5035/followers",
"following_url": "https://api.github.com/users/sl5035/following{/other_user}",
"gists_url": "https://api.github.com/users/sl5035/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sl5035/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sl5035/subscriptions",
"organizations_url": "https://api.github.com/users/sl5035/orgs",
"repos_url": "https://api.github.com/users/sl5035/repos",
"events_url": "https://api.github.com/users/sl5035/events{/privacy}",
"received_events_url": "https://api.github.com/users/sl5035/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @sl5035 π I see you're using quite an outdated version of transformers (from a ~6 months ago). Could you try upgrade to see if that solves your problem?\n\nEdit: It may have been fixed already at https://github.com/huggingface/transformers/commit/b52a03cd3bec92d0ee84f0b1f7edee0d5117200a (2 months ago)",
"Hey, @xenova \r\n\r\nEven after upgrading transformers to 4.33.1 unfortunately that didn't solve my problem",
"Solved this problem by directly adding the tokens from the tokenizer.json file. It seems like it's an issue from the `add_tokens()` method.",
"Hey! I'll see what I can do! I would recommend trying on `main` as well",
"Thanks, @ArthurZucker !",
"Did it work ? ",
"Hey, @ArthurZucker \r\nJust tried using the transformers 4.34.0.dev0 version, but it still outputs the same.",
"Ok! Thanks for testing. I suspect this has to do with the non legacy behaviour. \r\n```python\r\n>>> processor = Pix2StructProcessor.from_pretrained(\"deplot_models/deplot_base_model/\", is_vqa=True, legacy = False, use_fast = False)\r\n>>> processor.tokenizer.add_tokens(['ν¬','λ','μ΄','μ '])\r\n>>> processor.tokenizer.tokenize('ν ν¬λμ΄μ ν
μ€νΈ μ€μ
λλ€')\r\n['β', 'ν ', 'ν¬', 'λ', 'μ΄', 'μ ', 'ν
', 'μ€νΈ', 'βμ€', 'μ
λλ€']\r\n```\r\nThe tokens are not added as already part of the vocab: \r\n```\r\n 2558: AddedToken(\"μ΄\", rstrip=True, lstrip=True, single_word=False, normalized=True, special=False),\r\n 9772: AddedToken(\"λ\", rstrip=True, lstrip=True, single_word=False, normalized=True, special=False),\r\n 24470: AddedToken(\"ν¬\", rstrip=True, lstrip=True, single_word=False, normalized=True, special=False),\r\n 26199: AddedToken(\"μ \", rstrip=True, lstrip=True, single_word=False, normalized=True, special=False),\r\n```\r\nbut marked as addedTokens.\r\nIn that case since you don't specify the behaviour, it will always strip left and right. \r\nThe extra space is related to #25224 ",
"Thanks, @ArthurZucker !\r\nRan a few tests, seems like setting `legacy=False, use_fast=False` did the trick but it removed most of the whitespaces after fine-tuning and decoding using `model.generate()`. I've decided to stick with modifying the pre-tokenizer by setting `add_prefix_spaces=False`. It seems like this method tokenizes well and produces no error when decoding.",
"Cool! Could you elaborate on the spaces that were removed? If it's not expected then it's a bug on our end",
"Sorry for the late reply @ArthurZucker ,\r\nAlmost all of the whitespaces were removed, regardless of whether it is a separate token or not.\r\n\r\nPrediction:\r\nTITLE | 36μΈ λ¨μκ°μ½§λ±μ ν΅μ¦μλλ°ν νΌλΆλ°μ§μμ£Όμλ‘<0x0A> | λμ¬μ½λΌμ΄λ¦¬ |νμ°¨μ <0x0A> 0 | 0.02 | 0.1 <0x0A> 8 | 0.02 | 0.03 <0x0A> 16 | 0.03 | 0.32 <0x0A> 24 | 0.15 | 0.02 <0x0A> 32 | 0.01 | 0.12 <0x0A> 40 | 0.02 | 0.03 <0x0A> 48 | 0.38 | 0.12 <0x0A> CAPTION |μ λΆλ μ§λ 4μ 29μΌκ΅μ κ°λ°νλ ₯μμνμν΅ν©μ‘°μ κΈ°λ₯κ°νλ±μλ΄μ©μΌλ‘νλκ΅μ κ°λ°νλ ₯κΈ°λ³Έλ² μ λΆκ°μ μμ΄κ΅νμμ ν΅κ³Όλ¨μλ°λΌμ λ΅μ립μ¬μ
κΈ°νλ°κ΅΄μ¬μ
μ¬μ¬μ‘°μ μ κ²νκ°λ±ODAμ£ΌκΈ°λ₯Όνμ νλμμ
μμΆμ§νκ³ μλ€\r\n\r\nTarget:\r\nTITLE | 36μΈ λ¨μκ° μ½§λ±μ ν΅μ¦μ λλ°ν νΌλΆλ°μ μ μ£Όμλ‘ <0x0A> | λμ¬μ½λΌμ΄λ¦¬ | νμ°¨μ <0x0A> 0 | 0.02 | 0.1 <0x0A> 8 | 0.02 | 0.03 <0x0A> 16 | 0.03 | 0.32 <0x0A> 24 | 0.15 | 0.02 <0x0A> 32 | 0.01 | 0.12 <0x0A> 40 | 0.02 | 0.03 <0x0A> 48 | 0.38 | 0.12 <0x0A> CAPTION | μ λΆλ μ§λ 4μ 29μΌ κ΅μ κ°λ°νλ ₯μμνμ ν΅ν©μ‘°μ κΈ°λ₯ κ°ν λ±μ λ΄μ©μΌλ‘ νλ κ΅μ κ°λ°νλ ₯κΈ°λ³Έλ² μ λΆκ°μ μμ΄ κ΅νμμ ν΅κ³Όλ¨μ λ°λΌ μ λ΅μ립μ¬μ
κΈ°νλ°κ΅΄μ¬μ
μ¬μ¬μ‘°μ μ κ²νκ° λ± ODA μ£ΌκΈ°λ₯Ό νμ νλ μμ
μ μΆμ§νκ³ μλ€\r\n\r\nMaybe it's a bug on my side of the code, but this is what I got from the model!",
"When decoding you might want to use `spaces_between_special_tokens` but otherwise I think it is expected. T5 will encode `'β'` as `''` and remove it. That's why you need to add the `''βλ'` token as well making sure it does not strip",
"@ArthurZucker ,\r\nAh, that makes sense. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,699 | 1,699 |
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Arthur
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, I am trying to add tokens to the `Pix2StructProcessor`, which uses the T5 tokenizer. I have added non-Latin characters (Korean) to the tokenizer using the script below, so `len(processor.tokenizer.get_vocab())` becomes 65536 (from its original 50344).
```
processor = Pix2StructProcessor.from_pretrained("deplot_models/deplot_base_model/", is_vqa=True)
model = Pix2StructForConditionalGeneration.from_pretrained(
"deplot_models/deplot_base_model/", is_vqa=True
)
with open("data/full_vocab.txt", "r+") as f:
full_v = [v.strip("\n") for v in f.readlines()]
new_t = full_v[50345:]
processor.tokenizer.add_tokens(new_t)
print("Processor loaded!")
```
The problem arises when I try to tokenize some Korean sentences using the extended tokenizer:
`processor.tokenizer.tokenize('토크나이저 테스트 중입니다')` outputs `['▁', '토', '크', '나이', '▁저', '▁', '테', '스트', '▁중', '입니다']`.
The fifth token should be '저' instead of '▁저' (with the underscore) since I've added both of them to the vocab, but instead the tokenizer outputs the wrong version.
TL;DR
1. Extend the T5 tokenizer using non-latin characters.
2. Tokenize a sentence.
3. Error happens.
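One workaround discussed in the comments is to control the stripping behaviour of the added tokens explicitly; a hedged sketch building on the setup above (whether it fully resolves the spacing depends on the tokenizer version):
```
# Sketch: add the new pieces as AddedToken objects that do not strip the
# surrounding whitespace (tokens added as plain strings may default to
# lstrip/rstrip=True, which produces the unexpected "▁" handling).
from transformers import AddedToken

new_tokens = [AddedToken(t, lstrip=False, rstrip=False, normalized=False) for t in new_t]
processor.tokenizer.add_tokens(new_tokens)
```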
### Expected behavior
The original T5 tokenizer (without extending vocabs) outputs correctly:
`['▁', '토', '크', '나', '이', '저', '▁', '테', '스트', '▁중', '입니다']`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26101/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26100/events
|
https://github.com/huggingface/transformers/pull/26100
| 1,891,265,991 |
PR_kwDOCUB6oc5aEDrC
| 26,100 |
Just testing improvements for the CI
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,695 | 1,695 |
COLLABORATOR
| null |
# What does this PR do?
Nothing for now!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26100/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/26100/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26100",
"html_url": "https://github.com/huggingface/transformers/pull/26100",
"diff_url": "https://github.com/huggingface/transformers/pull/26100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26100.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26099/events
|
https://github.com/huggingface/transformers/issues/26099
| 1,891,256,044 |
I_kwDOCUB6oc5wukbs
| 26,099 |
OWL-ViT box_predictor is wildly inefficient since box bias is not precomputed
|
{
"login": "5hadytru",
"id": 46176503,
"node_id": "MDQ6VXNlcjQ2MTc2NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/46176503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/5hadytru",
"html_url": "https://github.com/5hadytru",
"followers_url": "https://api.github.com/users/5hadytru/followers",
"following_url": "https://api.github.com/users/5hadytru/following{/other_user}",
"gists_url": "https://api.github.com/users/5hadytru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/5hadytru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/5hadytru/subscriptions",
"organizations_url": "https://api.github.com/users/5hadytru/orgs",
"repos_url": "https://api.github.com/users/5hadytru/repos",
"events_url": "https://api.github.com/users/5hadytru/events{/privacy}",
"received_events_url": "https://api.github.com/users/5hadytru/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @5hadytru, thanks for raising this issue! \r\n\r\nIndeed - this isn't ideal. Would you like to open a PR to make the bias calculation a bit more sane and efficient? \r\n\r\ncc @rafaelpadilla ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
The `box_predictor` function computes the box bias from scratch each time it is called, but the box bias does not actually depend on anything besides the shape of the feature map (the batch size, which can be broadcast, the number of ViT patches, and the ViT token dim) and therefore should be precomputed. This is a super simple fix and will result in a >>10x inference speedup. (https://github.com/huggingface/transformers/blob/ce2e7ef3d96afaf592faf3337b7dd997c7ad4928/src/transformers/models/owlvit/modeling_owlvit.py#L1389C35-L1389C35)
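A minimal sketch of the caching idea (names are illustrative; the real fix would reuse the existing `compute_box_bias` maths and likely store the result as a buffer):
```
import torch

_box_bias_cache: dict = {}

def cached_box_bias(num_patches: int, compute_fn) -> torch.Tensor:
    """Return the box bias, computing it at most once per feature-map size."""
    if num_patches not in _box_bias_cache:
        _box_bias_cache[num_patches] = compute_fn(num_patches)
    return _box_bias_cache[num_patches]
```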
### Who can help?
@amyeroberts @pasqualedem @alaradirik
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Add some lines of code to box_predictor which time its overall execution + the execution of each individual line
2. Run one of the inference examples
### Expected behavior
compute_box_bias will take a substantial amount of time (due to the normalize_grid_corner_coordinates call) while the other lines will be negligible
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26099/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26098/events
|
https://github.com/huggingface/transformers/issues/26098
| 1,891,107,616 |
I_kwDOCUB6oc5wuAMg
| 26,098 |
LoRA A and B param.requires_grad showing as false during training
|
{
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @RonanKMcGovern, this happens because you used `prepare_model_for_kbit_training` after loading the adapters. `prepare_model_for_kbit_training` should be used before adding the adapters to the quantized model. Another solution would be to fix `prepare_model_for_kbit_training` so that we only freeze the base model layers. ",
"ah thanks @SunMarc , I re-ran and now am just seeing the base layer frozen and True for the LoRA A and B"
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
I'm running the latest version of transformers in a Colab notebook.
I'm simply printing out `param.requires_grad` for the model's named parameters to see whether each is True or False, and I'm finding that all of them are set to False. If there is some other list showing the LoRA params with required gradients, how can I access that?
```
base_model.model.model.embed_tokens.weight: False
base_model.model.model.layers.0.self_attn.q_proj.weight: False
base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight: False
base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight: False
base_model.model.model.layers.0.self_attn.k_proj.weight: False
base_model.model.model.layers.0.self_attn.k_proj.lora_A.default.weight: False
base_model.model.model.layers.0.self_attn.k_proj.lora_B.default.weight: False`
...etc.
```
### Who can help?
@SunMarc @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model = PeftModel.from_pretrained(base_model, path)
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
model.to(input_ids.device)
# Print trainable parameters here
for name, param in model.named_parameters():
print(f"{name}: {param.requires_grad}")
```
### Expected behavior
I would have expected that all of the LoRA A and B matrices would show that they require gradients...
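As it turned out in the discussion, `prepare_model_for_kbit_training` freezes every parameter it sees, so it has to run on the quantized base model before the adapters are attached; a hedged sketch of the intended order (loading arguments omitted):
```
# Sketch of the recommended call order (illustrative; exact kwargs omitted)
base_model = prepare_model_for_kbit_training(base_model)   # freeze/cast the base model only
model = PeftModel.from_pretrained(base_model, path)         # LoRA A/B keep requires_grad=True
model.gradient_checkpointing_enable()
```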
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26098/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26097/events
|
https://github.com/huggingface/transformers/issues/26097
| 1,891,035,386 |
I_kwDOCUB6oc5wtuj6
| 26,097 |
Simplify cache interface for Falcon model
|
{
"login": "RezaYazdaniAminabadi",
"id": 44502768,
"node_id": "MDQ6VXNlcjQ0NTAyNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/44502768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RezaYazdaniAminabadi",
"html_url": "https://github.com/RezaYazdaniAminabadi",
"followers_url": "https://api.github.com/users/RezaYazdaniAminabadi/followers",
"following_url": "https://api.github.com/users/RezaYazdaniAminabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/RezaYazdaniAminabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RezaYazdaniAminabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RezaYazdaniAminabadi/subscriptions",
"organizations_url": "https://api.github.com/users/RezaYazdaniAminabadi/orgs",
"repos_url": "https://api.github.com/users/RezaYazdaniAminabadi/repos",
"events_url": "https://api.github.com/users/RezaYazdaniAminabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/RezaYazdaniAminabadi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Rocketknight1 ",
"Hi @RezaYazdaniAminabadi, yes, the cache is still a bit of a mess there! Falcon started as custom code, and I wrote the Hugging Face port of that custom code. The custom code copied the strange cache shapes from BLOOM, but there were issues in the implementation that severely hurt generation performance. In the port, I focused on getting generation performance up while ensuring correctness, but I was aware that the cache could probably do with a more fundamental rewrite.\r\n\r\nIf you think you can clean up the cache code, I'd be happy to review a PR! I suspect you could probably delete quite a lot of the cache handling code, but you might have to make changes in other modules to compensate for that.",
"Cool, then I add some changes and then add you as the reviewer to check this.\r\nThanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
### Feature request
Hi,
I have been looking at the Falcon modeling file, and it seems the cache interface has become a bit more complicated than it needs to be; it could be simplified.
There are two functions, `_convert_cache_to_standard_format` and `_convert_to_rw_cache`, which are called after and before the network's forward pass; if I understand correctly, this is mostly for finding the current generation context length (as mentioned in the comment of [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/falcon/modeling_falcon.py#L885)).
By looking at the code, I realized that we can always get the context length by just taking `shape[-2]` from the `layer_past` tuple. By doing this, we can simply remove these two function calls that change the format of the kv-cache.
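In code, the round-trip could then be replaced by something like this (a sketch, assuming, as described above, that the cached tensors keep the sequence length at dimension -2):
```
# Sketch: derive the past context length directly from the per-layer cache tuple
past_length = past_key_values[0][0].shape[-2] if past_key_values is not None else 0
```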
If this makes sense, I can make a PR and fix this, otherwise, please feel free to close this!
Thanks,
Reza
### Motivation
Simplifying the cache interface for Falcon model.
### Your contribution
Yes, I can make a PR if needed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26097/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26096/events
|
https://github.com/huggingface/transformers/issues/26096
| 1,890,919,153 |
I_kwDOCUB6oc5wtSLx
| 26,096 |
Llama2 layer outputs NaN when using dual GPU
|
{
"login": "ekim322",
"id": 69325621,
"node_id": "MDQ6VXNlcjY5MzI1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/69325621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekim322",
"html_url": "https://github.com/ekim322",
"followers_url": "https://api.github.com/users/ekim322/followers",
"following_url": "https://api.github.com/users/ekim322/following{/other_user}",
"gists_url": "https://api.github.com/users/ekim322/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekim322/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekim322/subscriptions",
"organizations_url": "https://api.github.com/users/ekim322/orgs",
"repos_url": "https://api.github.com/users/ekim322/repos",
"events_url": "https://api.github.com/users/ekim322/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekim322/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey, is the `self.model` in `fp16` or `bfloat16`?",
"I was loading in `fp16`. Here are my configs\r\n\r\n```\r\nbnb_cfg = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_quant_type='nf4',\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_compute_dtype=torch.float16 # torch.bfloat16\r\n)\r\n\r\ntrain_args = TrainingArguments(\r\n output_dir=os.path.join(\"runs\", cfg.wandb.project, cfg.wandb.group, cfg.exp),\r\n num_train_epochs=cfg.hparams.epochs,\r\n per_device_train_batch_size=cfg.hparams.batch_size,\r\n gradient_accumulation_steps=cfg.hparams.grad_accum,\r\n gradient_checkpointing=True,\r\n # logging_steps=self.cfg.hparams.grad_accum, \r\n logging_steps=1, \r\n save_strategy=\"steps\",\r\n save_steps=self.cfg.save_steps,\r\n report_to=\"wandb\",\r\n optim=cfg.hparams.optimizer,\r\n learning_rate=cfg.hparams.learning_rate,\r\n lr_scheduler_type=cfg.hparams.lr_scheduler_type,\r\n max_grad_norm=cfg.hparams.max_grad_norm,\r\n warmup_ratio=cfg.hparams.warmup_ratio,\r\n evaluation_strategy='steps',\r\n eval_steps=self.cfg.eval_steps, \r\n eval_accumulation_steps=1, \r\n fp16=True, # bf16=True,\r\n ddp_find_unused_parameters=False,\r\n)\r\n```\r\n\r\nWhen I switch to `bf16`, none of the 40 layers output NaN values. But I still get errors in the cross entropy call. \r\n```\r\n../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n...\r\n...\r\n...\r\n../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nTraceback (most recent call last):\r\n File \"/home/ekim/test/trainer.py\", line 178, in train\r\n trainer.train()\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py\", line 1553, in train\r\n return inner_training_loop(\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py\", line 1835, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py\", line 2679, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py\", line 2704, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/accelerate/utils/operations.py\", line 632, in forward\r\n return model_forward(*args, **kwargs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/accelerate/utils/operations.py\", line 620, in __call__\r\n return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/amp/autocast_mode.py\", line 14, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/peft/peft_model.py\", line 918, in forward\r\n return self.base_model(\r\n File 
\"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/peft/tuners/tuners_utils.py\", line 94, in forward\r\n return self.model.forward(*args, **kwargs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py\", line 861, in forward\r\n loss = loss_fct(shift_logits, shift_labels)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/loss.py\", line 1174, in forward\r\n return F.cross_entropy(input, target, weight=self.weight,\r\n File \"/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/functional.py\", line 3029, in cross_entropy\r\n return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\n[This](https://github.com/pytorch/pytorch/blob/e7bd9c53158fd6a08b0f93462ee883646c7451f9/torch/nn/functional.py#L3053) is the specific line that causes `CUDA error: device-side assert triggered`. \r\n\r\n`torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)`\r\n\r\n```\r\ninput\r\ntensor([[ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n ...,\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906]],\r\n device='cuda:1', grad_fn=<ViewBackward0>)\r\n\r\ntarget\r\ntensor([3577293384251152737, 3577293384251152737, 3577293384251152737,\r\n 3577293384251152737, 3577293384251152737, 3577293384251152737,\r\n 3577293384251152737, 3577293384251152737, 3577293384251152737,\r\n 3577293384251152737, 3577293384251152737, 3577293384251152737,\r\n 3577293384251152737 ...... 3577293384251152736], device='cuda:1')\r\n\r\nweight\r\nNone\r\n\r\nreduction\r\n'mean'\r\n\r\nignore_index\r\n-100\r\n\r\nlabel_smoothing\r\n0.0\r\n```\r\n\r\nWhen I run my code with a single GPU `target` is different than when I use two GPUs (I was expecting it to be the same). \r\n```\r\ntarget # with single GPU\r\ntensor([ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, ..., ..., ..., ... -100, -100, -100, -100,\r\n 13, 450, 17097, 8818, 29769, 1353, 310, 278, 8010, 402,\r\n 6935, 2552, 5449, 653, 9000, 10606, 338, 29871, 29953, 29946,\r\n 29899, 29953, 29906, 29906, 29896, 29945, 29946, 29896, 29889, -100],\r\n device='cuda:0')\r\n```\r\n\r\nThe unexpected `target` values for dual GPU seem to originate from [here](https://github.com/huggingface/transformers/blob/ce2e7ef3d96afaf592faf3337b7dd997c7ad4928/src/transformers/models/llama/modeling_llama.py#L851). 
`shift_labels = shift_labels.to(shift_logits.device)` \r\n\r\n\r\n```\r\nif labels is not None:\r\n # Shift so that tokens < n predict n\r\n shift_logits = logits[..., :-1, :].contiguous() # slogitA\r\n shift_labels = labels[..., 1:].contiguous() # slabelA\r\n # Flatten the tokens\r\n loss_fct = CrossEntropyLoss()\r\n shift_logits = shift_logits.view(-1, self.config.vocab_size) # slogitB\r\n shift_labels = shift_labels.view(-1) # slabelB\r\n # Enable model parallelism\r\n shift_labels = shift_labels.to(shift_logits.device) # slabelC\r\n loss = loss_fct(shift_logits, shift_labels)\r\n```\r\n\r\n```\r\nlogits\r\ntensor([[[ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n ...,\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906]]],\r\n device='cuda:1', grad_fn=<ToCopyBackward0>)\r\n\r\nlabels\r\ntensor([[ -100, -100, -100, -100, -100, -100, -100, -100, -100, \r\n -100, -100, -100, -100, -100, -100, ...., ...., ...., -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, 13, 450, 17097, 8818, 29769, 1353, 310, 278, 8010,\r\n 402, 6935, 2552, 5449, 653, 9000, 10606, 338, 29871, 29953,\r\n 29946, 29899, 29953, 29906, 29906, 29896, 29945, 29946, 29896, 29889,\r\n -100]], device='cuda:0')\r\n\r\nshift_logits # slogitA\r\ntensor([[[ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n ...,\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906]]],\r\n device='cuda:1', grad_fn=<SliceBackward0>)\r\n\r\nshift_labels # slabelA\r\ntensor([[ -100, -100, -100, -100, -100, -100, -100, -100, -100, \r\n -100, -100, -100, -100, -100, -100, ..., ..., ..., -100, -100,\r\n 13, 450, 17097, 8818, 29769, 1353, 310, 278, 8010, 402,\r\n 6935, 2552, 5449, 653, 9000, 10606, 338, 29871, 29953, 29946,\r\n 29899, 29953, 29906, 29906, 29896, 29945, 29946, 29896, 29889, -100]],\r\n device='cuda:0')\r\n\r\nshift_logits # slogitB\r\ntensor([[ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n ...,\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906],\r\n [ 2.1562, 9.9375, -0.0903, ..., 1.7656, -0.4590, 1.3906]],\r\n device='cuda:1', grad_fn=<ViewBackward0>)\r\n\r\n\r\nshift_labels # slabelB\r\ntensor([ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, ..., ..., ..., ... 
, -100, -100,\r\n 13, 450, 17097, 8818, 29769, 1353, 310, 278, 8010, 402,\r\n 6935, 2552, 5449, 653, 9000, 10606, 338, 29871, 29953, 29946,\r\n 29899, 29953, 29906, 29906, 29896, 29945, 29946, 29896, 29889, -100],\r\n device='cuda:0')\r\n\r\n\r\nshift_labels # slabelC\r\ntensor([3577293384251152737, 3577293384251152737, 3577293384251152737,\r\n 3577293384251152737, ..., ..., ..., 3974362539628152236], device='cuda:1')\r\n```\r\n\r\n\r\n\r\nAlso \r\n\r\nSometimes `CUDA error: device-side assert triggered` happens [here](https://github.com/huggingface/transformers/blob/ce2e7ef3d96afaf592faf3337b7dd997c7ad4928/src/transformers/models/llama/modeling_llama.py#L184) `cos = cos[position_ids].unsqueeze(1)`. I am using vscode debugger and the error is not consistent and seem to occur randomly. \r\n\r\n\r\n",
"You should train the model in bfloat16. It's expected to not work if you train in float16 you need to monkey patch the code !",
"bfloat16 is not working either.\r\nWhen I transfer data from GPU to GPU the values are getting modified randomly. \r\n\r\nGPU to CPU and CPU to GPU is working fine.\r\nFor example if I send tensor from CPU -> GPU0 -> CPU -> GPU1, all the values are preserved.\r\n\r\n```\r\nlabels = torch.tensor([[ 0.3316, -2.1440, -0.6160, -0.0321, 0.0256]])\r\nlabels = labels.to('cuda:0') # tensor([[ 0.3316, -2.1440, -0.6160, -0.0321, 0.0256]], device='cuda:0')\r\nlabels = labels.to('cpu') # tensor([[ 0.3316, -2.1440, -0.6160, -0.0321, 0.0256]])\r\nlabels = labels.to('cuda:1') # tensor([[ 0.3316, -2.1440, -0.6160, -0.0321, 0.0256]], device='cuda:1')\r\n```\r\n\r\nHowever when I transfer from GPU0 -> GPU1, the tensor values change. \r\n```\r\nlabels = torch.tensor([[ 0.3316, -2.1440, -0.6160, -0.0321, 0.0256]])\r\nlabels = labels.to('cuda:0') # tensor([[ 0.3316, -2.1440, -0.6160, -0.0321, 0.0256]], device='cuda:0')\r\nlabels = labels.to('cuda:1') # tensor([[0., 0., 0., 0., 0.]], device='cuda:1')\r\n```\r\n\r\nI've tried [this](https://discuss.pytorch.org/t/tensor-totally-changes-when-allocating-moving-from-gpu-to-gpu/73930/16) `export NCCL_P2P_DISABLE=1` but this is not working either. \r\n\r\n\r\n\r\n\r\nMy temporary fix is to move the data to CPU first than to GPU rather than direct GPU to GPU\r\n```\r\nshift_labels = shift_labels.to('cpu')\r\nshift_labels = shift_labels.to(shift_logits.device)\r\n```\r\n\r\nHowever, there are numerous other variables that have this issue as well and I don't think editing the source code one by one is the best approach since the library gets updated quite often. \r\n\r\nAny thoughts on how to fix this?",
"Okay, I'll let @pacman100 handle this! But we need you to share a full reproducer, as minimal as possible ( which model you are using from which repo etc)",
"Hello, this seems to Hardware issue, referring to https://discuss.pytorch.org/t/problem-transfering-tensor-between-gpus/60263 and https://discuss.pytorch.org/t/tensor-totally-changes-when-allocating-moving-from-gpu-to-gpu/73930/16",
"This was a communication issue between the two GPUs (2x4090) I had. \r\n\r\n\r\n@pacman100 The solutions in the links you copied did not work for me. \r\n\r\n[Here](https://forums.developer.nvidia.com/t/standard-nvidia-cuda-tests-fail-with-dual-rtx-4090-linux-box/233202/22?page=2) is how I found out my solution.\r\n\r\nI faced errors when using Nvidia Driver 520 and 530. \r\nUsing Nvidia Driver 525.105.17 with Cuda 11.8 solved the issue",
"> This was a communication issue between the two GPUs (2x4090) I had.\r\n> \r\n> @pacman100 The solutions in the links you copied did not work for me.\r\n> \r\n> [Here](https://forums.developer.nvidia.com/t/standard-nvidia-cuda-tests-fail-with-dual-rtx-4090-linux-box/233202/22?page=2) is how I found out my solution.\r\n> \r\n> I faced errors when using Nvidia Driver 520 and 530. Using Nvidia Driver 525.105.17 with Cuda 11.8 solved the issue\r\n\r\nWas the version you were using before 11.7? ",
"Just to note here that I had the same problem with nvidia drivers 545 and cuda 12.1. Switch to 525 and 11.8 and it's all working again with a dual 4090 setup.",
"I finally solved this by disabling ACS in bios, ref https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#pci-access-control-services-acs. Changing nvidia driver and cuda version doesn't help.\r\n\r\nThis test is very helpful. https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#gpu-to-gpu-communication"
] | 1,694 | 1,703 | 1,696 |
NONE
| null |
### System Info
- `transformers` version: 4.33.1
- Platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.31
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @SunMarc @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm facing errors when attempting QLoRA fine-tuning of LLAMA2 using a dual-GPU setup.
The code runs properly if I use only one of the two GPUs I have (RTX 4090 Founders Edition and RTX 4090 Liquid Suprim). The exact same code also runs fine on a GCP VM instance with two Nvidia T4s.
It's just not working when I run it on my local dual-GPU setup.
I use trl.SFTTrainer to start training, but the errors seem to be occurring in the Hugging Face library, so I am opening the issue here.
```
from trl import SFTTrainer
trainer = SFTTrainer(
model = self.model,
tokenizer=self.tokenizer,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
peft_config=peft_config,
args=train_args,
max_seq_length=self.cfg.hparams.max_token_len,
formatting_func=data_format_fn,
data_collator=data_collator,
callbacks=callbacks
)
trainer.train()
```
Here is the error I am facing
```
../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
...
...
...
../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:240: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "/home/ekim/test/trainer.py", line 178, in train
trainer.train()
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1553, in train
return inner_training_loop(
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1835, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2679, in training_step
loss = self.compute_loss(model, inputs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2704, in compute_loss
outputs = model(**inputs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/accelerate/utils/operations.py", line 632, in forward
return model_forward(*args, **kwargs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/accelerate/utils/operations.py", line 620, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
return func(*args, **kwargs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/peft/peft_model.py", line 918, in forward
return self.base_model(
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/peft/tuners/tuners_utils.py", line 94, in forward
return self.model.forward(*args, **kwargs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 861, in forward
loss = loss_fct(shift_logits, shift_labels)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1174, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/home/ekim/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/functional.py", line 3029, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
The error seems to be occurring because NaN values are passed into `torch.nn.functional.cross_entropy`.
### Expected behavior
When checking the forward pass in LlamaModel ([link](https://github.com/huggingface/transformers/blob/ce2e7ef3d96afaf592faf3337b7dd997c7ad4928/src/transformers/models/llama/modeling_llama.py#L615C13-L615C13)), there seem to be 40 layers. The first 17 layers are allocated to GPU:0 and the remaining layers (18-40) are on GPU:1.
Observations:
- The outputs from first 17 layers (on GPU:0) are as expected
- Layers 18 to 20 (on GPU:1) also produce expected outputs
However, starting from layer 21 (on GPU:1), all the layer_outputs ([link](https://github.com/huggingface/transformers/blob/ce2e7ef3d96afaf592faf3337b7dd997c7ad4928/src/transformers/models/llama/modeling_llama.py#L701)) yield NaNs. This is not expected behavior and is the source of the CUDA error.
The exact same code runs without problems on a single GPU or on another dual-GPU GCP instance. I'm thinking it might be some communication issue between the two GPUs I have, but I am not sure how to fix this.
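A minimal sketch of how this per-layer check could be automated with forward hooks (a hypothetical helper, assuming direct access to the underlying `LlamaForCausalLM`; the attribute path will differ for a PEFT-wrapped model):
```python
# Hypothetical debugging helper, not part of the original run: flag the first
# decoder layer whose output contains NaN values and report which GPU it sits on.
import torch

def attach_nan_probes(model):
    handles = []
    for idx, layer in enumerate(model.model.layers):  # LlamaForCausalLM -> LlamaModel -> decoder layers
        def hook(module, args, output, idx=idx):
            hidden = output[0] if isinstance(output, tuple) else output
            if torch.isnan(hidden).any():
                print(f"NaN in output of layer {idx} on device {hidden.device}")
        handles.append(layer.register_forward_hook(hook))
    return handles  # call handle.remove() on each one when finished
```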
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26096/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26095
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26095/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26095/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26095/events
|
https://github.com/huggingface/transformers/pull/26095
| 1,890,655,617 |
PR_kwDOCUB6oc5aB_Q-
| 26,095 |
[docs] Updates to TTS task guide with regards to the new TTS pipeline
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
This PR adds:
- An example of inference with the newly added text-to-speech pipeline
- A necessary training code update to make sure the fine-tuned checkpoint is usable via the pipeline (saving the processor)
- An introduction example of inference with Bark and a link to the course for more examples
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26095/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26095",
"html_url": "https://github.com/huggingface/transformers/pull/26095",
"diff_url": "https://github.com/huggingface/transformers/pull/26095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26095.patch",
"merged_at": 1694532547000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26094/events
|
https://github.com/huggingface/transformers/pull/26094
| 1,890,580,400 |
PR_kwDOCUB6oc5aBvBJ
| 26,094 |
[AutoBackbone] Add test
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts feel free to take over, I see there's an issue regarding `TimmBackboneTester`",
"(She will be of for a while now, do you need this to be taken over?) ",
"I've fixed it, feel free to merge"
] | 1,694 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a test which verifies that out_indices and out_features get saved properly.
The test fails if a Backbone class first inherits from `PretrainedConfig`, then from `BackboneConfigMixin`.
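A rough sketch of the kind of round-trip check involved (illustrative only; the backbone class and stage names here are assumptions, not the exact test added in this PR):
```python
import tempfile

from transformers import ResNetBackbone

def test_out_features_are_saved():
    backbone = ResNetBackbone.from_pretrained(
        "microsoft/resnet-50", out_features=["stage2", "stage4"]
    )
    with tempfile.TemporaryDirectory() as tmp_dir:
        backbone.save_pretrained(tmp_dir)
        reloaded = ResNetBackbone.from_pretrained(tmp_dir)
    # out_indices should be derived and persisted consistently with out_features
    assert reloaded.config.out_features == backbone.config.out_features
    assert reloaded.config.out_indices == backbone.config.out_indices
```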
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26094/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26094",
"html_url": "https://github.com/huggingface/transformers/pull/26094",
"diff_url": "https://github.com/huggingface/transformers/pull/26094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26094.patch",
"merged_at": 1695073674000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26093/events
|
https://github.com/huggingface/transformers/issues/26093
| 1,890,579,291 |
I_kwDOCUB6oc5wr_Nb
| 26,093 |
ODD BUG=latest version sentencepiece + latest version torch + old glibc
|
{
"login": "DragonMengLong",
"id": 65579910,
"node_id": "MDQ6VXNlcjY1NTc5OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/65579910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DragonMengLong",
"html_url": "https://github.com/DragonMengLong",
"followers_url": "https://api.github.com/users/DragonMengLong/followers",
"following_url": "https://api.github.com/users/DragonMengLong/following{/other_user}",
"gists_url": "https://api.github.com/users/DragonMengLong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DragonMengLong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DragonMengLong/subscriptions",
"organizations_url": "https://api.github.com/users/DragonMengLong/orgs",
"repos_url": "https://api.github.com/users/DragonMengLong/repos",
"events_url": "https://api.github.com/users/DragonMengLong/events{/privacy}",
"received_events_url": "https://api.github.com/users/DragonMengLong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-3.10.0-229.el7.x86_64-x86_64-with-glibc2.18
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- A very odd bug happened after I pip installed the requirements in `examples/pytorch/translation` and tried to `from transformers import T5ForConditionalGeneration`
- The error message is `dlopen: cannot load any more object with static TLS`
- I found that it can be reproduced every time when using the latest version of sentencepiece (0.1.99) and the latest version of PyTorch (2.0.1) on an old glibc (2.18); a minimal repro is sketched below
- Because it is sometimes very difficult to update glibc, the issue can be temporarily worked around by lowering the PyTorch version.
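Minimal repro sketch under the versions listed above (my summary; the failure happens at import time, before any model code runs):
```python
# With sentencepiece 0.1.99 and torch 2.0.1 installed on a glibc 2.18 system,
# this import alone raises: "dlopen: cannot load any more object with static TLS"
from transformers import T5ForConditionalGeneration
```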
### Expected behavior
I guess it would be better if this could be called out as a warning in the README.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26093/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26092/events
|
https://github.com/huggingface/transformers/pull/26092
| 1,890,377,039 |
PR_kwDOCUB6oc5aBCax
| 26,092 |
Add DINOv2 depth estimation
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts thanks for your review, I've addressed all comments. Feel free to merge :)",
"I've addressed all your comments, feel free to merge. To do:\r\n\r\n- [ ] update all checkpoints on the hub to use `size_divisor` rather than `size_divisbility` in the preprocessor_config.json",
"@NielsRogge Thanks again for this addition - merged! I'll let you handle the updates to the configs on the hub. ",
"Hello guys. Thank you for adding this. I have a silly question that is there any plan that facebook will push this to the huggingface repo? Like `facebook/dpt-dinov2-large-nyu`? And could you can kindly inform me who is actually operating the companies accounts... Seems it is not all done by their side based on my guess.",
"Hi @Starlento,\r\n\r\nAll models are on the hub: https://huggingface.co/models?pipeline_tag=depth-estimation&other=dinov2&sort=trending. I'll open a PR to make this more explicit in the docs",
"Hey, guys, I want to know if it is possible to release the training part, and if it is possible, when will it be released?"
] | 1,694 | 1,705 | 1,699 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #26057
PR that implements a part of #25799. It extends the DPT framework to use the `AutoBackbone` class. Next it uses `Dinov2Backbone` to convert the DINOv2+DPT checkpoints released by the authors [here](https://github.com/facebookresearch/dinov2/tree/main#pretrained-heads---depth-estimation).
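Once merged, usage should look roughly like this (a sketch; the checkpoint name is an assumption based on the converted hub models mentioned in the discussion, treat it as a placeholder):
```python
from transformers import pipeline

# Hypothetical checkpoint name for one of the converted DINOv2+DPT heads
depth_estimator = pipeline("depth-estimation", model="facebook/dpt-dinov2-small-nyu")
outputs = depth_estimator("path/to/image.jpg")
outputs["depth"].save("depth.png")  # PIL image with the predicted depth map
```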
To do:
- [x] add test in common backbone tests file to verify `out_indices` are saved properly => done in #26094
- [x] add support for transforms in `DPTImageProcessor`
- [x] add tests which test DPT with a backbone
- [x] add integration test and convert remaining checkpoints
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26092/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26092",
"html_url": "https://github.com/huggingface/transformers/pull/26092",
"diff_url": "https://github.com/huggingface/transformers/pull/26092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26092.patch",
"merged_at": 1699892442000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26091/events
|
https://github.com/huggingface/transformers/pull/26091
| 1,890,207,975 |
PR_kwDOCUB6oc5aAdRR
| 26,091 |
Fix `MarianTokenizer` to remove metaspace character in `decode`
|
{
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"> make sure the fast tokenizers also works!\r\n\r\nUnfortunately, there's no `MarianTokenizerFast` π
",
"Thanks for the review, @ArthurZucker!\r\nI added the following test (it checks with starting special characters in a string like an underscore).\r\n\r\n```python\r\ndef test_tokenizer_decode(self):\r\n tokenizer = MarianTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-es\")\r\n source_text = \"_This is 1 text string that starts with an _ and ends with one too _\"\r\n ids = tokenizer(source_text)[\"input_ids\"]\r\n output_text = tokenizer.decode(ids, skip_special_tokens=True)\r\n self.assertEqual(source_text, output_text)\r\n```\r\n\r\nDoes it look good to you?",
"> > make sure the fast tokenizers also works!\r\n> \r\n> Unfortunately, there's no `MarianTokenizerFast` π
\r\n\r\n@xenova @ArthurZucker I would love to add the `MarianTokenizerFast` if that is considered a worthwhile addition to π€ transformers!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26091). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the reviews and merging!\r\nI was wondering @ArthurZucker if `MarianTokenizerFast` would be a worthwhile addition to HF transformers and if I can contribute to adding it.",
"It does not seem to have been requested a lot π but feel free to add it if you want some good experience with `tokenizers`",
"I think this would be a good addition, simply because of the number of monthly downloads the >1400 models get π. The top 5 models alone total ~2.75M downloads in the past month.\n\nhttps://huggingface.co/models?sort=downloads&search=Helsinki-NLP%2Fopus-mt"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the `MarianTokenizer` so that it removes the metaspace character during decode (`β`).
Fixes #26018
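An illustration of the intended before/after behaviour (the decoded strings are my assumption based on the linked issue, not verified output):
```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
ids = tokenizer("This is a test")["input_ids"]
print(tokenizer.decode(ids, skip_special_tokens=True))
# before the fix: "▁This is a test"  (SentencePiece metaspace "▁" left in the text)
# after the fix:  "This is a test"
```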
## Who can review?
@ArthurZucker, @xenova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26091/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26091",
"html_url": "https://github.com/huggingface/transformers/pull/26091",
"diff_url": "https://github.com/huggingface/transformers/pull/26091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26091.patch",
"merged_at": 1694548411000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26090/events
|
https://github.com/huggingface/transformers/pull/26090
| 1,890,022,930 |
PR_kwDOCUB6oc5Z_0pO
| 26,090 |
[Core] Add lazy import structure to imports
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Also cc @DN6 - this should then also improve importing from `diffusers`:\r\n\r\n`time python -c \"from diffusers import StableDiffusionPipeline\"` is pretty slow at the moment because of accelerate and this\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing tests seem unrelated - ready for a review."
] | 1,694 | 1,694 | 1,694 |
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Importing model classes from `transformers` has become quite slow. This is mainly because we link more and more directories and files to `modeling_utils.py` and `generation_utils.py`, both of which are imported by almost every model (e.g. as soon as `PreTrainedModel` is imported).
At the moment we're always importing all third party libraries:
```
from .integration_utils import (
INTEGRATION_TO_CALLBACK,
AzureMLCallback,
ClearMLCallback,
CodeCarbonCallback,
CometCallback,
DagsHubCallback,
FlyteCallback,
MLflowCallback,
NeptuneCallback,
NeptuneMissingConfiguration,
TensorBoardCallback,
WandbCallback,
get_available_reporting_integrations,
get_reporting_integration_callbacks,
hp_params,
is_azureml_available,
is_clearml_available,
is_codecarbon_available,
is_comet_available,
is_dagshub_available,
is_fairscale_available,
is_flyte_deck_standard_available,
is_flytekit_available,
is_mlflow_available,
is_neptune_available,
is_optuna_available,
is_ray_available,
is_ray_tune_available,
is_sigopt_available,
is_tensorboard_available,
is_wandb_available,
rewrite_logs,
run_hp_search_optuna,
run_hp_search_ray,
run_hp_search_sigopt,
run_hp_search_wandb,
)
```
which makes importing very slow. This PR adds the lazy import structure to the integrations init file so that third-party libraries are only imported when actually needed.
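The PR itself reuses the library's existing lazy-init machinery; a stripped-down illustration of the general idea (a module-level `__getattr__` per PEP 562, with illustrative names only) looks like this:
```python
# integrations/__init__.py -- sketch: resolve heavy attributes on first access only
import importlib
from typing import Any

_LAZY_ATTRS = {
    "WandbCallback": ".integration_utils",
    "MLflowCallback": ".integration_utils",
    "run_hp_search_optuna": ".integration_utils",
}

def __getattr__(name: str) -> Any:
    if name in _LAZY_ATTRS:
        module = importlib.import_module(_LAZY_ATTRS[name], __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```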
For environments where e.g. tensorflow is installed we now see a significant speed-up (provided `accelerate` is removed beforehand, as `accelerate` is really slow to import).
To reproduce:
1. Make sure `accelerate` is not installed, but `tensorflow` is installed.
2. Run `time python -c "from transformers import CLIPTextModel"` # or any other model import
**Before PR**:
```
2023-09-11 11:07:52.010179: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
python3 -c "from transformers import CLIPTextModel" 3.31s user 3.06s system 220% cpu 2.893 total
```
**After PR**:
```
python3 -c "from transformers import CLIPTextModel" 1.70s user 1.49s system 220% cpu 1.447 total
```
=> We're getting a 2x speed-up.
We should also make sure that importing accelerate is sped up, as it has become a utility library for more and more libraries (@muellerzr @SunMarc)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26090/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26090/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26090",
"html_url": "https://github.com/huggingface/transformers/pull/26090",
"diff_url": "https://github.com/huggingface/transformers/pull/26090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26090.patch",
"merged_at": 1694445629000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26089/events
|
https://github.com/huggingface/transformers/issues/26089
| 1,889,987,649 |
I_kwDOCUB6oc5wpuxB
| 26,089 |
CFG for RWKV models
|
{
"login": "KnutJaegersberg",
"id": 17965169,
"node_id": "MDQ6VXNlcjE3OTY1MTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/17965169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KnutJaegersberg",
"html_url": "https://github.com/KnutJaegersberg",
"followers_url": "https://api.github.com/users/KnutJaegersberg/followers",
"following_url": "https://api.github.com/users/KnutJaegersberg/following{/other_user}",
"gists_url": "https://api.github.com/users/KnutJaegersberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KnutJaegersberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KnutJaegersberg/subscriptions",
"organizations_url": "https://api.github.com/users/KnutJaegersberg/orgs",
"repos_url": "https://api.github.com/users/KnutJaegersberg/repos",
"events_url": "https://api.github.com/users/KnutJaegersberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/KnutJaegersberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker @gante ",
"Hey! Could you provide a full reproducer? `past_key_values` should be supported as its required for fast generation using `use_cache=True`! ",
"Hey @KnutJaegersberg π \r\n\r\nThe root issue is that RWKV, being an RNN at its core, does not have a growing key-value cache (`past_key_values`) that can be sliced. Alternatively, it has the state of the recurrent neural net, which is updated at each iteration of generation.\r\n\r\nSince the implementation of CFG and contrastive search (and some other methods) rely on the ability to slice the cache to remove old data, there is no immediate solution for RWKV. \r\n\r\nYou probably can implement equivalent versions of these techniques for models that have a state (as opposed to a growing cache), by recomputing the RWKV state as needed :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,699 | 1,699 |
NONE
| null |
### System Info
- `transformers` version: 4.33.1
- Platform: Linux-5.10.192-1-MANJARO-x86_64-with-glibc2.38
- Python version: 3.10.9
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I got this error in text-generation-webui using the HF transformers-converted RWKV models, e.g. RWKV/rwkv-4-1b5-pile, but I think it is a HF transformers issue, i.e. "TypeError: RwkvForCausalLM.forward() got an unexpected keyword argument 'past_key_values'". Perhaps it's not implemented yet?
I used the options for CFG (and also contrastive search, but not in the same go), and for CFG, I got this error message:
CFG just didn't work without error message.
```python
Output generated in 0.65 seconds (0.00 tokens/s, 0 tokens, context 49, seed 499979764)
Traceback (most recent call last):
File "/run/media/knut/HD/text-generation-webui/modules/callbacks.py", line 56, in gentask
ret = self.mfunc(callback=_callback, *args, **self.kwargs)
File "/run/media/knut/HD/text-generation-webui/modules/text_generation.py", line 321, in generate_with_callback
shared.model.generate(**kwargs)
File "/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py", line 1648, in generate
return self.sample(
File "/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py", line 2743, in sample
next_token_scores = logits_processor(input_ids, next_token_logits)
File "/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 97, in __call__
scores = processor(input_ids, scores)
File "/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 1655, in __call__
logits = self.get_unconditional_logits(input_ids)
File "/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 1640, in get_unconditional_logits
out = self.model(
File "/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
TypeError: RwkvForCausalLM.forward() got an unexpected keyword argument 'past_key_values'
Output generated in 0.63 seconds (0.00 tokens/s, 0 tokens, context 49, seed 1428846449)
```
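For reference, a reduced reproduction outside of the webui (my assumption of the equivalent plain `transformers` call) hits the same error, since the CFG logits processor shown in the traceback re-invokes the model with `past_key_values`:
```python
# Reduced reproduction sketch (not the webui code): guidance_scale > 1 enables CFG,
# whose logits processor calls the model with past_key_values, which RWKV lacks.
from transformers import AutoTokenizer, RwkvForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-1b5-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-1b5-pile")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
model.generate(**inputs, do_sample=True, guidance_scale=1.5, max_new_tokens=20)
# TypeError: RwkvForCausalLM.forward() got an unexpected keyword argument 'past_key_values'
```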
### Expected behavior
Generate some nice CFGded tokens. Also contrastive search tokens.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26089/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26088/events
|
https://github.com/huggingface/transformers/issues/26088
| 1,889,663,696 |
I_kwDOCUB6oc5wofrQ
| 26,088 |
BART Implementation in JAX vs PyTorch
|
{
"login": "DavidAkinpelu",
"id": 43585081,
"node_id": "MDQ6VXNlcjQzNTg1MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/43585081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidAkinpelu",
"html_url": "https://github.com/DavidAkinpelu",
"followers_url": "https://api.github.com/users/DavidAkinpelu/followers",
"following_url": "https://api.github.com/users/DavidAkinpelu/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidAkinpelu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidAkinpelu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidAkinpelu/subscriptions",
"organizations_url": "https://api.github.com/users/DavidAkinpelu/orgs",
"repos_url": "https://api.github.com/users/DavidAkinpelu/repos",
"events_url": "https://api.github.com/users/DavidAkinpelu/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidAkinpelu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @DavidAkinpelu, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"Okay."
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
It is possible to instantiate the BART decoder as a non-causal decoder in the JAX implementation, but it doesn't seem straightforward in PyTorch.
Is there any reason why it is designed this way?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26088/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26087/events
|
https://github.com/huggingface/transformers/pull/26087
| 1,889,611,853 |
PR_kwDOCUB6oc5Z-bhu
| 26,087 |
Adding grounding dino
|
{
"login": "EduardoPach",
"id": 69953243,
"node_id": "MDQ6VXNlcjY5OTUzMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/69953243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EduardoPach",
"html_url": "https://github.com/EduardoPach",
"followers_url": "https://api.github.com/users/EduardoPach/followers",
"following_url": "https://api.github.com/users/EduardoPach/following{/other_user}",
"gists_url": "https://api.github.com/users/EduardoPach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EduardoPach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EduardoPach/subscriptions",
"organizations_url": "https://api.github.com/users/EduardoPach/orgs",
"repos_url": "https://api.github.com/users/EduardoPach/repos",
"events_url": "https://api.github.com/users/EduardoPach/events{/privacy}",
"received_events_url": "https://api.github.com/users/EduardoPach/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"@EduardoPach Thanks for opening this model PR! From next week, I'll be away for a few weeks. If you need a review in that time please ping @rafaelpadilla. ",
"@rafaelpadilla hey, so I've finished implementing the model and validated with the original implementation. Still have to clean up some things and make sure the documentation is correct. \n\nMy main question is about pushing the model to the hub, because the authors uploaded already the checkpoints (two checkpoints in the same repo) they made available to the model, but it's under an user instead of their org (IDEA-Research), what is usually done in this case?",
"Hi @EduardoPach ,\r\n\r\nAre you referring to `groundingdino_swinb_cogcoor.pth` and `groundingdino_swint_ogc.pth`, placed [here](https://huggingface.co/ShilongLiu/GroundingDINO/tree/main), right?\r\n\r\nIn this case, let's consult @ArthurZucker and @younesbelkada. ",
"> Hi @EduardoPach ,\n> \n> \n> \n> Are you referring to `groundingdino_swinb_cogcoor.pth` and `groundingdino_swint_ogc.pth`, placed [here](https://huggingface.co/ShilongLiu/GroundingDINO/tree/main), right?\n> \n> \n> \n> In this case, let's consult @ArthurZucker and @younesbelkada. \n\nPrecisely, I'm asking this because I had the impression that model repos contain only one checkpoint and also the IDEA-Research group has other models that we've could add to the transformers library later on so it might be helpful if there was an account for this org.",
"> > Hi @EduardoPach ,\r\n> > Are you referring to `groundingdino_swinb_cogcoor.pth` and `groundingdino_swint_ogc.pth`, placed [here](https://huggingface.co/ShilongLiu/GroundingDINO/tree/main), right?\r\n> > In this case, let's consult @ArthurZucker and @younesbelkada.\r\n> \r\n> Precisely, I'm asking this because I had the impression that model repos contain only one checkpoint and also the IDEA-Research group has other models that we've could add to the transformers library later on so it might be helpful if there was an account for this org.\r\n\r\nFor now, you can upload weights to the hub under your own personal profile and use them until this PR is ready to merge. \r\nAfterwards, we'll move the weights under the organization on the hub, and update all the paths to point to those.",
"Hi @EduardoPach \r\nI second what @rafaelpadilla said, for the `groundingdino_swinb_cogcoor.pth` and `groundingdino_swint_ogc.pth` you can create two different repositories under your personal name space with a suffix that is distinguishable (e.g. `yournamespace/groundingdino-swinb-cogcoor` and `yournamespace/groundingdino-swint-ogc`, and make sure the files has been renamed to `pytorch_model.bin`",
"@rafaelpadilla Hey! Could you help me out with these questions?\r\n\r\n1. The `ImageProcessor` from the original implementation is exactly the same as we have in `DeformableDetr`. Should I copy the `ImageProcessor` and just remove the segmentation-related things? (Since `GroundingDINO` is used only for Object Detection)\r\n2. Their tokenizer is the same as `Bert` with a few extra steps after tokenizing so I copied and added this step, but I'm unsure how to push the pre-trained tokenizer to the hub\r\n3. My implementation of `GroundingDINOConfig` has an attribute called `text_backbone_config` which is a `GroundingDINOTextPrenetConfig` which is just a copy of Bert config. However, after pushing the model to the hub when I try to instantiate the model with `.from_pretrained` I get an error saying:\r\n\r\n```\r\nValueError: Parameter config in `GroundingDINOTextPrenet(config)` should be an instance of class `PretrainedConfig`. To create a model from a pretrained model use `model = GroundingDINOTextPrenet.from_pretrained(PRETRAINED_MODEL_NAME)`\r\n```\r\nand when I do `AutoConfig.from_pretrained(\"EduardoPacheco/grounding-dino-base\").text_backbone_config` I get `{'model_type': 'grounding-dino-text-prenet'}` is there anything different that I need to do to have a config as an attribute? I've tried to look at `CLIP`'s configuration to get some idea of how to do it, but I'm unsure why I am not getting the full `GroundingDINOTextPrenetConfig` after pushing the model to the hub",
"Hi @EduardoPach ,\r\n\r\nIf your `ImageProcessor` is an exact copy from another model you must include the `#Copied from`. If somehow your `ImageProcessor` uses parts of other code, it would be good to have a `#Modified from` comment.\r\n\r\nIf I understood correctly, you have already generated the tokens using the newly extra steps, right? For pushing your tokens to the hub you could could use the hub api. See an example [here](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-file)\r\n\r\nI'm not sure if the problem is regarding `AutoConfig`, as it could not load correctly your `GroundingDINOConfig`. Have you tried loading it directly with `GroundingDINOTextPrenet.from_pretrained(\"EduardoPacheco/grounding-dino-base\")`?\r\n",
"@rafaelpadilla the `ImageProcessor` is precisely the same, but the `DeformableDetr` one works for object detection and segmentation. Right now I've copied the processor and just removed the segmentation stuff, is that okay?\r\n\r\nAlso, about the config, sorry I had forgotten to push the modifications I've done to the `configuration_grounding_dino.py` file\r\n\r\n**EDIT**\r\n\r\nI figured out what the issue was haha it was somewhat dumb. Either way, I wasn't aware that when we push the config to the hub the config class is then converted to a config.json, and any Nested configuration is also modified to a dictionary so I only had to change my `GroundingDINOConfig` implementation a bit when creating the attribute `text_backbone_config`",
"Hi @EduardoPach do you need any help in finishing this PR? Really great to see you're leveraging `Copied from` for the text encoder and all parts taken from Deformable DETR. Also, if the image processor is exactly the same as Deformable DETR, then we typically don't add a new image processor to the library, but rather just add a line in [image_processing_auto](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/image_processing_auto.py#L55), which will allow people to do:\r\n```\r\nfrom transformers import AutoImageProcessor\r\n\r\nimage_processor = AutoImageProcessor.from_pretrained(\"sensetime/grounding-dino-base\")\r\n```\r\n\r\nthis will then automatically create a `DeformableDetrImageProcessor`.",
"> Hi @EduardoPach do you need any help in finishing this PR? Really great to see you're leveraging `Copied from` for the text encoder and all parts taken from Deformable DETR. Also, if the image processor is exactly the same as Deformable DETR, then we typically don't add a new image processor to the library, but rather just add a line in [image_processing_auto](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/image_processing_auto.py#L55), which will allow people to do:\r\n> \r\n> ```\r\n> from transformers import AutoImageProcessor\r\n> \r\n> image_processor = AutoImageProcessor.from_pretrained(\"sensetime/grounding-dino-base\")\r\n> ```\r\n> \r\n> this will then automatically create a `DeformableDetrImageProcessor`.\r\n\r\nWriting here just for record\r\n\r\nAs we discussed through Discord I'll make that and will do the same for the `Tokenizer` part and will just create a `GroundingDINOProcessor`. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26087). All of your documentation changes will be reflected on that endpoint.",
"Hey @rafaelpadilla or @NielsRogge right now I'm missing only the `test_tied_weights_keys` would love some help on this one. \r\n\r\nAlso, for some reason when I try to run `make fixup` it throws an error so code quality tests are falling as well not sure if you could run this command @NielsRogge, since you already have the write permission to the branch. \r\n\r\nDefinitely, after the test is fixed I'll have to review the modeling documentation to make sure everything is correct",
"@NielsRogge it looks good now! All tests are fine and I improved the docstrings. I think you're good to go to start the review process",
"@NielsRogge or @rafaelpadilla or even @amyeroberts Do I need to add anything else?",
"@rafaelpadilla @NielsRogge could you give another round of review? ",
"Hey @NielsRogge just tagging you here as a reminder ",
"@NielsRogge @EduardoPach Thanks for adding this model! As a core maintainer I get a lot of PR review requests so will only do a final review once another person has approved (@NielsRogge I'm assuming your approval here because you're committing) and all tests are passing. Some failing tests look unrelated, but some model specific tests one are still failing. Let me know if you need help getting them to pass! ",
"> @NielsRogge @EduardoPach Thanks for adding this model! As a core maintainer I get a lot of PR review requests so will only do a final review once another person has approved (@NielsRogge I'm assuming your approval here because you're committing) and all tests are passing. Some failing tests look unrelated, but some model specific tests one are still failing. Let me know if you need help getting them to pass!\r\n\r\nThe model-specific tests that are failing in the `test_pr_documentation` is due to the missing repo_id `idea-research/grounding-dino-tiny`. Right now the model is under my account at `EduardoPacheco/grounding-dino-tiny` and the idea was to change ownership of the checkpoint either to one account created for the research lab or maybe to one of the authors since I believe [this account](https://huggingface.co/ShilongLiu/GroundingDINO) belongs to one of them.\r\n\r\nc.c. @amyeroberts ",
"Hey @ArthurZucker, fixed the main points you brought up ",
"@amyeroberts or @ArthurZucker not sure which one of you has the bandwidth to review this so pinging both of you here",
"@EduardoPach I'm on watch this week so will review :) ",
"@amyeroberts didn't have the time yet to make the changes. I will probably have some time in the next couple of days otherwise next week I can work on it",
"@EduardoPach Happy new year! OK, thanks for the update. Let us know if you have any Qs in the meantime. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Getting back to work on this one",
"Thanks @amyeroberts for the review! Together with @EduardoPach, we've addressed all comments.\r\n\r\nCI is green, one unrelated test is failing. See also my comments above regarding why we wouldn't use nested outputs, and keep the same names as in Deformable DETR."
] | 1,694 | 1,708 | null |
NONE
| null |
# What does this PR do?
This PR adds Grounding DINO
Fixes https://github.com/huggingface/transformers/issues/25423
To-Do's:
- [x] Port vision backbone
- [x] Port Text Backbone
- [x] Port Encoder
- [x] Port Decoder
- [x] Port tokenizer
- [x] Port Image processing
- [x] Validate results
- [x] Check documentation
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26087/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/26087/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26087",
"html_url": "https://github.com/huggingface/transformers/pull/26087",
"diff_url": "https://github.com/huggingface/transformers/pull/26087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26087.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26086/events
|
https://github.com/huggingface/transformers/pull/26086
| 1,889,452,162 |
PR_kwDOCUB6oc5Z94LM
| 26,086 |
Mask2former hungarian matcher Add normalization before matmul
|
{
"login": "pjh4993",
"id": 12472082,
"node_id": "MDQ6VXNlcjEyNDcyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/12472082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjh4993",
"html_url": "https://github.com/pjh4993",
"followers_url": "https://api.github.com/users/pjh4993/followers",
"following_url": "https://api.github.com/users/pjh4993/following{/other_user}",
"gists_url": "https://api.github.com/users/pjh4993/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjh4993/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjh4993/subscriptions",
"organizations_url": "https://api.github.com/users/pjh4993/orgs",
"repos_url": "https://api.github.com/users/pjh4993/repos",
"events_url": "https://api.github.com/users/pjh4993/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjh4993/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26086). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25938
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
#25938
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26086/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26086",
"html_url": "https://github.com/huggingface/transformers/pull/26086",
"diff_url": "https://github.com/huggingface/transformers/pull/26086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26086.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26085/events
|
https://github.com/huggingface/transformers/issues/26085
| 1,889,387,725 |
I_kwDOCUB6oc5wncTN
| 26,085 |
AutoModel.from_pretrained is messing with stack traces in jupyter
|
{
"login": "qrdlgit",
"id": 129564070,
"node_id": "U_kgDOB7j9pg",
"avatar_url": "https://avatars.githubusercontent.com/u/129564070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qrdlgit",
"html_url": "https://github.com/qrdlgit",
"followers_url": "https://api.github.com/users/qrdlgit/followers",
"following_url": "https://api.github.com/users/qrdlgit/following{/other_user}",
"gists_url": "https://api.github.com/users/qrdlgit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qrdlgit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qrdlgit/subscriptions",
"organizations_url": "https://api.github.com/users/qrdlgit/orgs",
"repos_url": "https://api.github.com/users/qrdlgit/repos",
"events_url": "https://api.github.com/users/qrdlgit/events{/privacy}",
"received_events_url": "https://api.github.com/users/qrdlgit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @qrdlgit, thanks for opening an issue!\r\n\r\nWould it be possible to share an example of the trackback itself and what it looks like? Just from this description it's not possible to know what the issue is. For example, what relevant frames are missing or what is considered to be 'wacky'.",
"This has been fixed in latest issue by making rich opt-in https://github.com/huggingface/accelerate/issues/1630"
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
AutoModel.from_pretrained is messing with stack traces
I get these wacky stack traces displayed for unhandled exceptions after running AutoModel.from_pretrained("BAAI/bge-small-en"). Colored text, etc. Do. Not. Want.
Worse, these stack traces are missing relevant frames, which makes it impossible to debug.
How do I get rid of this / stop this from happening?
I tried resetting the sys.excepthook, but it didn't help
Took me way too long to find the culprit. At first I thought it was sentence-transformers, then I went through every dependency and the code line by line. These sorts of insane bundled side effects, which permanently mess with your environment beyond the lifetime of the process, have to stop.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
AutoModel.from_pretrained("BAAI/bge-smal-en").
### Expected behavior
Certainly not that.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26085/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26084/events
|
https://github.com/huggingface/transformers/pull/26084
| 1,889,278,603 |
PR_kwDOCUB6oc5Z9Umy
| 26,084 |
add zh translation for installation
|
{
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26084). All of your documentation changes will be reflected on that endpoint.",
"> Thanks for your translation! π€\r\n> \r\n> For some reason, the second-level headings aren't being recognized so we're going to look into that. Once it gets resolved, we'll be free to merge!\r\n\r\nNoted with thanks! @stevhliu ",
"The issue should be resolved now, thanks again for your contribution and patience!"
] | 1,694 | 1,696 | 1,696 |
CONTRIBUTOR
| null |
# What does this PR do?
- Add zh (Chinese) translation for installation section #20095
- Add installation to _toctree.yml
## Who can review?
Documentation: @sgugger @stevhliu and @MKhalusova
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26084/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26084",
"html_url": "https://github.com/huggingface/transformers/pull/26084",
"diff_url": "https://github.com/huggingface/transformers/pull/26084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26084.patch",
"merged_at": 1696437542000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26081/events
|
https://github.com/huggingface/transformers/issues/26081
| 1,889,185,964 |
I_kwDOCUB6oc5wmrCs
| 26,081 |
batching in pipeline gives single output
|
{
"login": "znb899",
"id": 83007857,
"node_id": "MDQ6VXNlcjgzMDA3ODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/83007857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/znb899",
"html_url": "https://github.com/znb899",
"followers_url": "https://api.github.com/users/znb899/followers",
"following_url": "https://api.github.com/users/znb899/following{/other_user}",
"gists_url": "https://api.github.com/users/znb899/gists{/gist_id}",
"starred_url": "https://api.github.com/users/znb899/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/znb899/subscriptions",
"organizations_url": "https://api.github.com/users/znb899/orgs",
"repos_url": "https://api.github.com/users/znb899/repos",
"events_url": "https://api.github.com/users/znb899/events{/privacy}",
"received_events_url": "https://api.github.com/users/znb899/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You are looping on the outputs of `pipe(KeyDataset(dataset, \"text\"), batch_size=8, truncation=\"only_first\")`.\r\nIf you just print `pipe(KeyDataset(dataset, \"text\"), batch_size=8, truncation=\"only_first\")` you'll see what you want. The example in the doc shows how you can create an iterator over these outputs, which sometimes is faster than a direct call on the batch."
] | 1,694 | 1,695 | 1,694 |
NONE
| null |
### System Info
```
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
```
Why is batching not giving a list (of 8 here) of outputs? Isn't that the point of batching, or am I missing something? Either way, it's counterintuitive.
This code is from [the pipeline doc page](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching). I haven't tried this exact snippet, but I hit the same issue with another pipeline task.
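For illustration, a minimal sketch (not taken from the doc page; it assumes the call above returns a generator with one entry per dataset example) of collecting every per-example output into a list; `batch_size` only controls how many examples are forwarded through the model at once:
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets

dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)

# The pipeline call yields one result per dataset example; list() collects them
# all, independently of the internal batch_size used for the forward passes.
outputs = list(pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"))
print(len(outputs))  # one entry per example in the dataset
print(outputs[:3])
```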
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run the above code
### Expected behavior
[{'label': 'POSITIVE', 'score': 0.9998743534088135}, {'label': x, 'score': x}, {'label': x, 'score': x} ....]
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26081/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26080/events
|
https://github.com/huggingface/transformers/issues/26080
| 1,889,071,391 |
I_kwDOCUB6oc5wmPEf
| 26,080 |
Tensor size issue
|
{
"login": "J-JEMINA",
"id": 141778914,
"node_id": "U_kgDOCHNf4g",
"avatar_url": "https://avatars.githubusercontent.com/u/141778914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/J-JEMINA",
"html_url": "https://github.com/J-JEMINA",
"followers_url": "https://api.github.com/users/J-JEMINA/followers",
"following_url": "https://api.github.com/users/J-JEMINA/following{/other_user}",
"gists_url": "https://api.github.com/users/J-JEMINA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/J-JEMINA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/J-JEMINA/subscriptions",
"organizations_url": "https://api.github.com/users/J-JEMINA/orgs",
"repos_url": "https://api.github.com/users/J-JEMINA/repos",
"events_url": "https://api.github.com/users/J-JEMINA/events{/privacy}",
"received_events_url": "https://api.github.com/users/J-JEMINA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @J-JEMINA, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"Thank you so much for the information!\r\nBut when I post there the bot classifies it as spam and hides it. Can anyone help me to get an answer on this?",
"Hi @J-JEMINA, \r\n\r\nSorry to hear that. If there's a link to the post, I can see if I can clear it as a real human's question. Another alternative is asking on our discord: https://discord.com/invite/hugging-face-879548962464493619",
"https://discuss.huggingface.co/t/facing-tensor-size-issue-in-tranformer-tool/54510\r\n\r\nThis is the link to the query I raised in the forum! I would be happy if you can help",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
Sorry! I am new here and didn't know where to post my query.
I am using PhageRBP detection, which uses the bioembeddings tool. In it, I am trying to create embeddings for an E. coli phage genome of size 108485kb, but I get this error immediately. I am working in Google Colab since I am a complete beginner and only comfortable working there.
The expanded size of the tensor (108485) must match the existing size (40000) at non-singleton dimension 1. Target sizes: [1, 108485]. Tensor sizes: [1, 40000]. This most likely means that you don't have enough GPU RAM to embed a protein this long.
As I tried to dig into it, I realized the error originates from the transformers library that the tool uses. I tried seeing if ChatGPT could be of any use, but it just says the expand function cannot expand beyond 40000. I am willing to pay for Colab Pro or Pro+ if that would solve this issue. I have hundreds of genomes of similar size to run as well. I do not know how to solve this.
Can someone help me with how to resolve this, please?
Any help would be appreciated.
Thank you!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26080/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26079/events
|
https://github.com/huggingface/transformers/pull/26079
| 1,889,069,039 |
PR_kwDOCUB6oc5Z8tP1
| 26,079 |
reformat: change the eval_dataset * num_device code
|
{
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Hi,
In this code, there appears to be some confusion around assigning per_device_eval_batch_size to another variable and then multiplying it by jax.device_count(). There is also a discrepancy between the code styles used for training and evaluation, which makes the code less clear. To improve clarity and align with the style used for train_batch_size, I have simplified eval_batch_size to a single line.
I would like to cc @sanchit-gandhi to review my PR.
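To illustrate the change (a sketch only, not the exact diff; the `TrainingArguments` stand-in below is hypothetical and just mimics the example script's parsed arguments):
```python
import jax
from dataclasses import dataclass

@dataclass
class TrainingArguments:  # hypothetical stand-in for the script's parsed arguments
    per_device_train_batch_size: int = 8
    per_device_eval_batch_size: int = 8

training_args = TrainingArguments()

# Before: eval batch size computed in two steps, unlike the train batch size.
eval_batch_size = int(training_args.per_device_eval_batch_size)
eval_batch_size = eval_batch_size * jax.device_count()

# After: a single line, matching the style already used for train_batch_size.
train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count()
eval_batch_size = int(training_args.per_device_eval_batch_size) * jax.device_count()
```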
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26079/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26079",
"html_url": "https://github.com/huggingface/transformers/pull/26079",
"diff_url": "https://github.com/huggingface/transformers/pull/26079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26079.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26078/events
|
https://github.com/huggingface/transformers/issues/26078
| 1,889,053,696 |
I_kwDOCUB6oc5wmKwA
| 26,078 |
broken FLAVA example code
|
{
"login": "morrisalp",
"id": 8263996,
"node_id": "MDQ6VXNlcjgyNjM5OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morrisalp",
"html_url": "https://github.com/morrisalp",
"followers_url": "https://api.github.com/users/morrisalp/followers",
"following_url": "https://api.github.com/users/morrisalp/following{/other_user}",
"gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions",
"organizations_url": "https://api.github.com/users/morrisalp/orgs",
"repos_url": "https://api.github.com/users/morrisalp/repos",
"events_url": "https://api.github.com/users/morrisalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/morrisalp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @morrisalp \r\nThanks for the issue, in fact, the documentation snippets looks wrong indeed. To retrieve `contrastive_logits_per_image `. one needs to use `FlavaForPreTraining` as the `global_contrastive_head` that is used to compute the contrastive logits seems to be available only for that class. \r\n\r\nThis snippet seems to work fine for me:\r\n\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import AutoProcessor, FlavaForPreTraining\r\n\r\nmodel = FlavaForPreTraining.from_pretrained(\"facebook/flava-full\")\r\nprocessor = AutoProcessor.from_pretrained(\"facebook/flava-full\")\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\ninputs = processor(text=[\"a photo of a cat\"], images=image, return_tensors=\"pt\", padding=True, return_codebook_pixels=True, return_image_mask=True)\r\n\r\noutputs = model(**inputs)\r\nlogits_per_image = outputs.contrastive_logits_per_image # this is the image-text similarity score\r\nprobs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities\r\n```\r\n\r\nWould you be happy to open a PR to update the docs? Otherwise happy to do it!"
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
`transformers==4.33.1`
### Who can help?
@ArthurZucker @younesbelkada @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Error is thrown when running the following [sample code from the FLAVA documentation](https://huggingface.co/docs/transformers/model_doc/flava#transformers.FlavaModel):
```
from PIL import Image
import requests
from transformers import AutoProcessor, FlavaModel
model = FlavaModel.from_pretrained("facebook/flava-full")
processor = AutoProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.contrastive_logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[9], line 1
----> 1 logits_per_image = outputs.contrastive_logits_per_image # this is the image-text similarity score
2 probs = logits_per_image.softmax(dim=1)
AttributeError: 'FlavaModelOutput' object has no attribute 'contrastive_logits_per_image'
```
### Expected behavior
Add `contrastive_logits_per_image` to output or fix example in documentation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26078/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26077/events
|
https://github.com/huggingface/transformers/pull/26077
| 1,889,051,590 |
PR_kwDOCUB6oc5Z8poq
| 26,077 |
docs: update link huggingface map
|
{
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26077). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Hi,
I have discovered that the link to the dataset mapping documentation is no longer up to date. Therefore, I have updated the link to the current version, which can be found here: https://huggingface.co/docs/datasets/process#map
I would like to cc @stevhliu to review my change.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26077/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26077",
"html_url": "https://github.com/huggingface/transformers/pull/26077",
"diff_url": "https://github.com/huggingface/transformers/pull/26077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26077.patch",
"merged_at": 1694433424000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26076/events
|
https://github.com/huggingface/transformers/pull/26076
| 1,889,047,788 |
PR_kwDOCUB6oc5Z8o3i
| 26,076 |
docs: feat: add llama2 notebook resources from OSSCA community
|
{
"login": "junejae",
"id": 55151385,
"node_id": "MDQ6VXNlcjU1MTUxMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55151385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junejae",
"html_url": "https://github.com/junejae",
"followers_url": "https://api.github.com/users/junejae/followers",
"following_url": "https://api.github.com/users/junejae/following{/other_user}",
"gists_url": "https://api.github.com/users/junejae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junejae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junejae/subscriptions",
"organizations_url": "https://api.github.com/users/junejae/orgs",
"repos_url": "https://api.github.com/users/junejae/repos",
"events_url": "https://api.github.com/users/junejae/events{/privacy}",
"received_events_url": "https://api.github.com/users/junejae/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26076). All of your documentation changes will be reflected on that endpoint.",
"> Nice, thanks for creating a notebook! π€\r\n> \r\n> I wasn't able to get the code in it to run successfully though. I get the following error because I think in your implementation of the `TrainingArgument` class you have `batch_size` instead of `per_device_train_batch_size`?\r\n> \r\n> ```python\r\n> TypeError: TrainingArguments.__init__() got an unexpected keyword argument 'per_device_train_batch_size'\r\n> ```\r\n\r\n@stevhliu \r\nIt's fixed! Could you try it again?\r\n\r\nI messed up my code while polishing some variable names.\r\nI rolled back the change and tested. It works fine now!",
"Cool, it works now; feel free to mark this as ready, and then we can merge!"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds a LLaMA2 notebook resource showing how to fine-tune LLaMA2 as a Korean text classifier.
The notebook was created by our OSSCA community.
Part of https://github.com/huggingface/transformers/issues/20055
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
May you please review this PR?
@bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26076/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26076/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26076",
"html_url": "https://github.com/huggingface/transformers/pull/26076",
"diff_url": "https://github.com/huggingface/transformers/pull/26076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26076.patch",
"merged_at": 1694618862000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26075/events
|
https://github.com/huggingface/transformers/issues/26075
| 1,889,013,226 |
I_kwDOCUB6oc5wmA3q
| 26,075 |
what audio input files are supported by ASR model "openai/whisper-large-v2" on huggingface?
|
{
"login": "pythonvijay",
"id": 144582559,
"node_id": "U_kgDOCJ4nnw",
"avatar_url": "https://avatars.githubusercontent.com/u/144582559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pythonvijay",
"html_url": "https://github.com/pythonvijay",
"followers_url": "https://api.github.com/users/pythonvijay/followers",
"following_url": "https://api.github.com/users/pythonvijay/following{/other_user}",
"gists_url": "https://api.github.com/users/pythonvijay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pythonvijay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pythonvijay/subscriptions",
"organizations_url": "https://api.github.com/users/pythonvijay/orgs",
"repos_url": "https://api.github.com/users/pythonvijay/repos",
"events_url": "https://api.github.com/users/pythonvijay/events{/privacy}",
"received_events_url": "https://api.github.com/users/pythonvijay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello @pythonvijay !\r\nI had a look at the hub & docs , in this case, the model [Usage section](https://huggingface.co/openai/whisper-large-v2#usage) to backtrack your challenge. \r\n\r\n1. As I'm understanding from [code snippets](https://huggingface.co/openai/whisper-large-v2#english-to-english), you donΒ΄t actually use HF `pipeline` to load the model, and you need the `Whisperprocessor` beforehand. The WhisperProcessor is used to:\r\n\r\nPre-process the audio inputs (converting them to log-Mel spectrograms for the model)\r\nPost-process the model outputs (converting them from tokens to text) \r\n\r\nHave you tried something like this?\r\n\r\n```\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\n\r\n\r\n# load model and processor\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-large-v2\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large-v2\")\r\nmodel.config.forced_decoder_ids = None\r\n```\r\n\r\n2. Regarding the audio files, what I have done to try to clarify this is to search for the dataset the model was [evaluated with](https://huggingface.co/openai/whisper-large-v2#evaluation) , and ended up [finding](https://huggingface.co/datasets/librispeech_asr#data-fields) that apparently is a `.flac` format [file](https://en.wikipedia.org/wiki/FLAC). On the other hand, there are [spaces](https://huggingface.co/spaces/innev/whisper-Base) built that accept `.wav` and `.mp3` file formats so Im assuming that the input for inference could be diverse. \r\n\r\nHope this helps! π€π€",
"cc @sanchit-gandhi ",
"Hey @pythonvijay and @SoyGema!\r\n\r\nThere are two ways of using the Whisper model with HF Transformers:\r\n\r\n### 1. model + processor\r\n\r\nHere, the **user** is responsible for loading the input audio as a numpy array. Hence, any file format you are able to read as a numpy array is valid. \r\n\r\nIn the following code-snippet, we show how to do this with the [`soundfile`](https://pysoundfile.readthedocs.io/en/latest/#read-write-functions) library, which supports a number of input file formats. Overall, `model` + `processor` requires you to specify the 3 distinct stages of inference (pre-processing, model forward, post-processing)\r\n\r\n```python\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nimport soundfile as sf\r\n\r\n# load model and processor from pre-trained\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-large-v2\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large-v2\")\r\n\r\n# load audio file: user is responsible for loading the audio files themselves\r\narray, sampling_rate = sf.read(\"audio.mp3\", samplerate=processor.feature_extractor.sampling_rate\r\n\r\n# pre-process to get the input features\r\ninput_features = processor(array, sampling_rate=sampling_rate, return_tensors=\"pt\").input_features \r\n\r\n# generate token ids by running model forward sequentially\r\npredicted_ids = model.generate(input_features, max_new_tokens=256)\r\n\r\n# post-process token ids to text\r\ntranscription = processor.batch_decode(predicted_ids, skip_special_tokens=True)\r\nprint(transcription)\r\n```\r\n\r\n### 2. pipeline class\r\n\r\nTakes care of loading the audio file for you with [`ffmpeg`](https://www.ffmpeg.org/general.html#File-Formats), which also supports a number of different file formats. Overall, [`pipeline`](https://huggingface.co/docs/transformers/v4.33.0/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) is a higher-level helper function that allows you to run the models with minimal lines of code:\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-large-v2\")\r\ntranscription = pipe(\"audio.mp3\")\r\n```",
"Hi Sanchit,\r\n\r\nThanks. The code looks intuitive. However, I get the following error, which should be self explanatory;\r\n\r\n`AttributeError: 'WhisperProcessor' object has no attribute 'sampling_rate`\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"++\r\n\r\n```\r\n# load audio file: user is responsible for loading the audio files themselves\r\narray, sampling_rate = sf.read(\"audio.mp3\", samplerate=processor.sampling_rate)\r\n```\r\n\r\nWhen I try the using pipeline(), I get a file not found error (which I am sure will happen in the aforementioned code aswell, which got stuck at 'sampling-rate' attribute error . I defined the path of my audio file pretty much the way we define the path of a text file and assigned it to a variable 'audio'.\r\n\r\nIs there a different way to read the audio file?\r\n",
"Thanks for flagging @pythonvijay - fixed the `model` + `processor` code snippet as required.\r\n\r\nNope, it should work specifying either the absolute or relative path to your audio file. Can you try first downloading a dummy audio file with the bash command:\r\n```bash\r\nwget https://huggingface.co/spaces/speechbox/whisper-restore-punctuation/resolve/main/sample1.flac\r\n```\r\n\r\nAnd then run the pipeline as:\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-large-v2\")\r\ntranscription = pipe(\"./sample1.flac\")\r\n```\r\n\r\n",
"Hi Sanchit,\r\n\r\nWhen you fixed, did you change something in the pipeline code as I dont see any change in the visible code?\r\n\r\nhere is the error I get:\r\n\r\n`File [\"<ipython-input-7-b38d5899e442>\"](https://localhost:8080/#), line 4\r\n wget https://huggingface.co/spaces/speechbox/whisper-restore-punctuation/resolve/main/sample1.flac\r\n ^\r\nSyntaxError: invalid syntax`",
"No change to the pipeline code. Note that you need to execute the `wget` command in **bash** (as specified previously), or pre-pend it with a `!` to work in a colab cell.\r\n\r\nBash:\r\n```\r\nwget https://huggingface.co/spaces/speechbox/whisper-restore-punctuation/resolve/main/sample1.flac\r\n```\r\n\r\nipython/jupyter/colab:\r\n```\r\n!wget https://huggingface.co/spaces/speechbox/whisper-restore-punctuation/resolve/main/sample1.flac\r\n```",
"Thanks Sanchit,\r\n\r\nGot that.\r\n\r\nCan you please advise what am I doing wrong while passing my own .flac file?\r\n\r\nIt gives en error\r\n\r\n```\r\naudio = '/Users/vijay/Downloads/audio.flac'\r\ntranscription = pipe(\"audio.flac\")\r\n```\r\n`FileNotFoundError: [Errno 2] No such file or directory: 'audio.flac'`\r\n",
"did you try: \r\n```python\r\naudio = '/Users/vijay/Downloads/audio.flac'\r\ntranscription = pipe(audio)\r\n```",
"Sorry, that was my bad as I overlooked that audio was a variable.\r\n\r\nSomehow, I still get the same error when I pass audio as a variable in pipe;\r\n\r\n```\r\nfrom transformers import pipeline\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-large-v2\")\r\naudio = '/Users/vijay/Downloads/audio.flac'\r\ntranscription = pipe(audio)\r\n```\r\n`FileNotFoundError: [Errno 2] No such file or directory: '/Users/vijay/Downloads/audio.flac'`",
"Also, if I process \r\n`!wget https://huggingface.co/spaces/speechbox/whisper-restore-punctuation/resolve/main/sample1.flac`\r\nusing model + processor (which somehow seems more intuitive to me as I can see what is happening at each step), I get a type error as follows:\r\n\r\n```\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nimport soundfile as sf\r\n\r\n# load model and processor from pre-trained\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-large-v2\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large-v2\")\r\n\r\n!wget https://huggingface.co/spaces/speechbox/whisper-restore-punctuation/resolve/main/sample1.flac\r\n\r\n# load audio file: user is responsible for loading the audio files themselves\r\narray, sampling_rate = sf.read(\"./sample1.flac\", samplerate=processor.feature_extractor.sampling_rate)\r\n\r\n# pre-process to get the input features\r\ninput_features = processor(array, sampling_rate=sampling_rate, return_tensors=\"pt\").input_features\r\n\r\n# generate token ids by running model forward sequentially\r\npredicted_ids = model.generate(input_features, max_new_tokens=256)\r\n\r\n# post-process token ids to text\r\ntranscription = processor.batch_decode(predicted_ids, skip_special_tokens=True)\r\nprint(transcription)\r\n```\r\n\r\nerror:\r\n\r\n```\r\n---> 11 array, sampling_rate = sf.read(\"./sample1.flac\", samplerate=processor.feature_extractor.sampling_rate)\r\n 12 \r\n 13 # pre-process to get the input features\r\n\r\n2 frames\r\n[/usr/local/lib/python3.10/dist-packages/soundfile.py](https://localhost:8080/#) in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian)\r\n 1481 if any(arg is not None for arg in (\r\n 1482 samplerate, channels, original_format, subtype, endian)):\r\n-> 1483 raise TypeError(\"Not allowed for existing files (except 'RAW'): \"\r\n 1484 \"samplerate, channels, format, subtype, endian\")\r\n 1485 return info\r\n\r\nTypeError: Not allowed for existing files (except 'RAW'): samplerate, channels, format, subtype, endian\r\n```",
"The `FileNotFoundError` means that your variable `audio` is not pointing to the correct path to your audio file. Please double check that you are setting `audio` to the correct path.\r\n\r\nThe error you've posted regarding the `soundfile` library not being able to convert the sampling rate on-the-fly should be taken up with the `soundfile` repository. As mentioned in the first comment, it is the users' responsibility to load the audio file if using the `model` + `processor` approach. If you don't want to load the audio yourself, you are advised to use the `pipeline`.\r\n\r\nIf you want to try a different approach, have a go with this code snippet. Note that you will need to pass the correct path to your audio file for this to work and ensure that you have `ffmpeg` installed on your system: https://ffmpeg.org/download.html\r\n\r\n```python\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nfrom transformers.pipelines.audio_utils import ffmpeg_read\r\n\r\n# load model and processor from pre-trained\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-large-v2\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large-v2\")\r\n\r\n# load audio file: user is responsible for loading the audio files themselves\r\naudio_path = \"./sample1.flac\"\r\nsampling_rate = processor.feature_extractor.sampling_rate\r\n\r\n# load as bytes\r\nwith open(audio_path, \"rb\") as f:\r\n inputs = f.read()\r\n\r\n# read bytes as array\r\ninputs = ffmpeg_read(inputs, sampling_rate=sampling_rate)\r\n\r\n# pre-process to get the input features\r\ninput_features = processor(inputs, sampling_rate=sampling_rate, return_tensors=\"pt\").input_features\r\n\r\n# generate token ids by running model forward sequentially\r\npredicted_ids = model.generate(input_features, max_new_tokens=256)\r\n\r\n# post-process token ids to text\r\ntranscription = processor.batch_decode(predicted_ids, skip_special_tokens=True)\r\nprint(transcription)\r\n```",
"Specifically on the path of the audio file, I am unsure what is the right way to read the file:\r\n\r\nFirst, I tried this:\r\n\r\n```\r\nfrom google.colab import files\r\nuploaded = files.upload()\r\n\r\ntranscription = pipe(uploaded)\r\n```\r\n\r\nerror:\r\n`ValueError: When passing a dictionary to AutomaticSpeechRecognitionPipeline, the dict needs to contain a \"raw\" key containing the numpy array representing the audio and a \"sampling_rate\" key, containing the sampling_rate associated with that array`\r\n\r\nNext, I tried this (defining the path):\r\n\r\n```\r\nfile_id = \"1RjF9CRj8xTQakH5jex1ppBrTw1heHwPS\"\r\nurl = 'https://drive.google.com/uc?id={}'.format(file_id)\r\n\r\ntranscription = pipe(url)\r\n```\r\nerror\r\n\r\n`ValueError: Malformed soundfile`\r\n\r\nfile reading (whether audio, pdf ) is something I am struggling with....everything else seems straight forward\r\n\r\nas an aside, if there is a good resource to sharpen skills on how to access various data file in colab such that they are in the desired format for pipeline(), please do suggest\r\n",
"Hey @SoyGema - for the audio path, you don't need to _read_ the audio file. You just need to set it to the path where it is saved.\r\n\r\nAs for uploading files to Google Colab, this is not really the best place to get such help, since it's not a `transformers` related question. One easy way I would suggest is that you upload the file to the Hugging Face Hub: https://huggingface.co/docs/hub/models-uploading\r\n\r\nAnd then clone the repository locally on your Google Cloud runtime: https://huggingface.co/docs/hub/repositories-getting-started#cloning-repositories",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @pythonvijay - did the above comments help answer your question? Feel free to close this issue if so! Otherwise, on-hand to answer any follow-ups! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,700 | 1,700 |
NONE
| null |
Hi,
I am trying to use the Hugging Face pipeline to invoke the model "openai/whisper-large-v2" to perform an ASR task. Thereafter, I wrap the model within Gradio as below, but when I submit, I get an error.
The model card says "To transcribe audio samples, the model has to be used alongside a [WhisperProcessor]".
How do I know what audio inputs are supported by this model, and how can I convert my audio sample (an .m4a file, the default produced by the MacBook Voice Memos app) to a format supported by this model?
The code is here:
```
from transformers import pipeline
import gradio as gr
#model = pipeline(model = "facebook/wav2vec2-base-960h")
model = pipeline( model= "openai/whisper-large-v2")
def transcribe_audio(mic=None, file = None):
if mic is not None:
audio = mic
elif file is not None:
audio = file
else:
return "You must either provide a mic recording or a file"
transcription = model(audio)["text"]
return transcription
gr.Interface(
fn=transcribe_audio,
inputs=[
gr.Audio(source="microphone", type="filepath", optional=True),
gr.Audio(source="upload", type="filepath", optional=True),
],
outputs="text",
).launch(share = True)
```
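One way to sidestep the format question (a sketch; the file names are placeholders and ffmpeg must be installed on the system) is to convert the Voice Memos recording to 16 kHz mono WAV before handing it to the pipeline:
```python
import subprocess

from transformers import pipeline

# Convert the .m4a recording to 16 kHz mono WAV with ffmpeg (paths are placeholders).
subprocess.run(
    ["ffmpeg", "-y", "-i", "recording.m4a", "-ar", "16000", "-ac", "1", "recording.wav"],
    check=True,
)

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")
print(pipe("recording.wav")["text"])
```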
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26075/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26108/events
|
https://github.com/huggingface/transformers/issues/26108
| 1,892,184,988 |
I_kwDOCUB6oc5wyHOc
| 26,108 |
Typo in the doc of AutoTokenizer? Two parameters for slow_tokenizer_class
|
{
"login": "xukp20",
"id": 71265541,
"node_id": "MDQ6VXNlcjcxMjY1NTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/71265541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xukp20",
"html_url": "https://github.com/xukp20",
"followers_url": "https://api.github.com/users/xukp20/followers",
"following_url": "https://api.github.com/users/xukp20/following{/other_user}",
"gists_url": "https://api.github.com/users/xukp20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xukp20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xukp20/subscriptions",
"organizations_url": "https://api.github.com/users/xukp20/orgs",
"repos_url": "https://api.github.com/users/xukp20/repos",
"events_url": "https://api.github.com/users/xukp20/events{/privacy}",
"received_events_url": "https://api.github.com/users/xukp20/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi! i think this issue should be moved to the `transformers` repo",
"@xukp20 Indeed, thanks for pointing out! Would you like to open a PR to fix the typo? This way you get the github contribution. ",
"@amyeroberts Thanks, but I've never opened a PR for transformers before and never learned about writing the doc of transformers, so it's hard to find where to fix it quickly. I believe this minor change does not worth the trouble, so it would be nice if you can fix it at your convenience or remind anyone else to do so. Never mind the contribution! :)\r\n",
"@xukp20, ok no worries. I've opened a PR to fix it. Thanks again for highlighting!\r\n\r\nIf you ever fancy opening a PR in this repo on day, here's a guide: https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#create-a-pull-request"
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
**Is your feature request related to a problem? Please describe.**
The parameters [here](https://huggingface.co/docs/transformers/v4.33.0/en/model_doc/auto#transformers.AutoTokenizer.register.slow_tokenizer_class) contain two slow_tokenizer_class which should be a typo I think.
**Additional context**
Add any other context or screenshots about the feature request here.

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26108/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26073
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26073/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26073/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26073/events
|
https://github.com/huggingface/transformers/issues/26073
| 1,888,966,239 |
I_kwDOCUB6oc5wl1Zf
| 26,073 |
Can't saved finetuned model in local machine
|
{
"login": "50516017",
"id": 23068536,
"node_id": "MDQ6VXNlcjIzMDY4NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/23068536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/50516017",
"html_url": "https://github.com/50516017",
"followers_url": "https://api.github.com/users/50516017/followers",
"following_url": "https://api.github.com/users/50516017/following{/other_user}",
"gists_url": "https://api.github.com/users/50516017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/50516017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/50516017/subscriptions",
"organizations_url": "https://api.github.com/users/50516017/orgs",
"repos_url": "https://api.github.com/users/50516017/repos",
"events_url": "https://api.github.com/users/50516017/events{/privacy}",
"received_events_url": "https://api.github.com/users/50516017/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @50516017 \r\nThanks a lot for raising this up, \r\nThere are a couple of issues in your script\r\n\r\n1- You are performing pure fine-tuning with the 8-bit model, which is not supported. If you want to train models with 8-bit weights, you need attach adapters on it using `peft` package. Please have a look at few examples here: https://github.com/huggingface/peft/tree/main/examples/int8_training\r\n2- You are using bitsandbytes compiled on windows, I am not sure how the interaction of that package + transformers will behave. In our case we only support this bitsandbytes package: https://github.com/TimDettmers/bitsandbytes so you might encounter some issues we cannot catch\r\n\r\nCan you print the model and share the result here? Thanks!",
"I set the LoRa parameters based on the link and executed the learning, and it worked! thank you very much!",
"Awesome, @50516017 , glad that it worked!"
] | 1,694 | 1,695 | 1,695 |
NONE
| null |
### System Info
Hi
I want to fine-tune "rinna/japanese-gpt-neox-3.6b-instruction-ppo" on Windows.
However, when I ran training and tried to save, the following error occurred and the model was not saved to output_dir.
How should I solve it?
I am building the environment with WSL2 and installing bitsandbytes from the repository below. Could that be the cause?
https://github.com/jllllll/bitsandbytes-windows-webui
If this repository is causing the problem, should I not be using bitsandbytes in a Windows environment at all?
## enviroment
```
OS: ubuntu22.04 on windows11 using WSL2
GPU:NVIDIA Geforce RTX 4060Ti(16GB)
CPU:AMN Rzen 5 4500 6-core (16GB)
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 537.13 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4060 Ti On | 00000000:01:00.0 On | N/A |
| 0% 43C P8 10W / 165W | 478MiB / 16380MiB | 10% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 276 G /Xwayland N/A |
```
pip list
```
Package Version
------------------------- ------------
accelerate 0.20.3
adapter-transformers 3.2.1
aiofiles 23.2.1
aiohttp 3.8.5
aiosignal 1.3.1
altair 5.1.1
anyio 4.0.0
appdirs 1.4.4
async-timeout 4.0.3
attrs 23.1.0
bitsandbytes 0.39.0
certifi 2023.7.22
charset-normalizer 3.2.0
click 8.1.7
cmake 3.27.4.1
contourpy 1.1.0
ctranslate2 3.19.0
cycler 0.11.0
datasets 2.12.0
dill 0.3.6
docker-pycreds 0.4.0
exceptiongroup 1.1.3
fastapi 0.95.2
ffmpy 0.3.1
filelock 3.12.3
fonttools 4.42.1
frozenlist 1.4.0
fsspec 2023.9.0
gitdb 4.0.10
GitPython 3.1.35
gradio 3.31.0
gradio_client 0.5.0
h11 0.14.0
httpcore 0.17.3
httpx 0.24.1
huggingface-hub 0.16.4
idna 3.4
Jinja2 3.1.2
jsonschema 4.19.0
jsonschema-specifications 2023.7.1
kiwisolver 1.4.5
linkify-it-py 2.0.2
lit 16.0.6
loralib 0.1.1
markdown-it-py 2.2.0
MarkupSafe 2.1.3
matplotlib 3.7.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.14
networkx 3.1
numpy 1.25.2
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
orjson 3.9.5
packaging 23.1
pandas 2.1.0
pathtools 0.1.2
peft 0.4.0
Pillow 10.0.0
pip 23.2.1
protobuf 3.20.0
psutil 5.9.5
pyarrow 13.0.0
pydantic 1.10.12
pydub 0.25.1
Pygments 2.16.1
pyparsing 3.0.9
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3.post1
PyYAML 6.0.1
referencing 0.30.2
regex 2023.8.8
requests 2.31.0
responses 0.18.0
rpds-py 0.10.2
safetensors 0.3.3
scipy 1.10.1
semantic-version 2.10.0
sentencepiece 0.1.99
sentry-sdk 1.30.0
setproctitle 1.3.2
setuptools 59.6.0
six 1.16.0
smmap 5.0.0
sniffio 1.3.0
starlette 0.27.0
sympy 1.12
tokenizers 0.13.3
toolz 0.12.0
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
tqdm 4.66.1
transformers 4.33.1
triton 2.0.0
typing_extensions 4.7.1
tzdata 2023.3
uc-micro-py 1.0.2
urllib3 2.0.4
uvicorn 0.23.2
wandb 0.15.10
websockets 11.0.3
wheel 0.41.2
xxhash 3.3.0
yarl 1.9.2
```
### Who can help?
@pacman100 : @muellerz
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
execute training code
```python
model_name = "rinna/japanese-gpt-neox-3.6b-instruction-ppo"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
config = AutoConfig.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    device_map="auto",
    # torch_dtype=torch.bfloat16,
    load_in_8bit=True,
)

eval_steps = 11
save_steps = 33
logging_steps = 3
MICRO_BATCH_SIZE = 2
BATCH_SIZE = 16

trainer = transformers.Trainer(
    model=model,
    data_collator=collator,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_val,
    args=transformers.TrainingArguments(
        num_train_epochs=1,
        # learning_rate=3e-5,
        evaluation_strategy="steps",
        save_strategy="steps",
        eval_steps=eval_steps,
        save_steps=save_steps,
        # warmup_ratio=0.15,
        per_device_train_batch_size=MICRO_BATCH_SIZE,
        per_device_eval_batch_size=MICRO_BATCH_SIZE,
        gradient_accumulation_steps=BATCH_SIZE // MICRO_BATCH_SIZE,
        # bf16=True,
        dataloader_num_workers=12,
        logging_steps=logging_steps,
        output_dir="./output",
        # report_to="wandb",
        save_total_limit=1,
        load_best_model_at_end=True,
        greater_is_better=False,
        metric_for_best_model="eval_loss",
        fp16=True,
        auto_find_batch_size=True,
    ),
)
```
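Following the suggestion in the comments above, a minimal sketch of attaching LoRA adapters with peft so that the 8-bit model can be trained — the LoRA hyperparameters here are illustrative placeholders, not values from the issue:
```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative LoRA setup for the 8-bit model loaded above; r/alpha/dropout are placeholders.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights should be trainable
```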
### Expected behavior
## error message
```
bin /lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('unix')}
warn(msg)
/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0'), PosixPath('/usr/local/cuda/lib64/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA exception! Error code: no CUDA-capable device is detected
CUDA exception! Error code: initialization error
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
warn(msg)
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
0
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:283: UserWarning: This sequence already has </s>. In future versions this behavior may lead to duplicated eos tokens being added.
{'loss': 218.7709, 'learning_rate': 2.357142857142857e-05, 'epoch': 0.21}
{'loss': 0.0, 'learning_rate': 1.7142857142857142e-05, 'epoch': 0.42}
{'loss': 0.0, 'learning_rate': 1.0714285714285714e-05, 'epoch': 0.64}
{'eval_loss': nan, 'eval_runtime': 2.0215, 'eval_samples_per_second': 5.442, 'eval_steps_per_second': 2.968, 'epoch': 0.78}
{'loss': 0.0, 'learning_rate': 4.2857142857142855e-06, 'epoch': 0.85}
{'train_runtime': 83.9278, 'train_samples_per_second': 2.693, 'train_steps_per_second': 0.167, 'train_loss': 46.87947736467634, 'epoch': 0.99}
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 14/14 [01:10<00:00, 5.06s/it]/lib/python3.10/site-packages/transformers/modeling_utils.py:1825: UserWarning: You are calling `save_pretrained` to a 8-bit converted model you may likely encounter unexepected behaviors. If you want to save 8-bit models, make sure to have `bitsandbytes>0.37.2` installed.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26073/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26072/events
|
https://github.com/huggingface/transformers/issues/26072
| 1,888,959,893 |
I_kwDOCUB6oc5wlz2V
| 26,072 |
Regarding padding and batched inference for LLAMA-2 and CodeLLAMA
|
{
"login": "anmolagarwal999",
"id": 54059763,
"node_id": "MDQ6VXNlcjU0MDU5NzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54059763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anmolagarwal999",
"html_url": "https://github.com/anmolagarwal999",
"followers_url": "https://api.github.com/users/anmolagarwal999/followers",
"following_url": "https://api.github.com/users/anmolagarwal999/following{/other_user}",
"gists_url": "https://api.github.com/users/anmolagarwal999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anmolagarwal999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anmolagarwal999/subscriptions",
"organizations_url": "https://api.github.com/users/anmolagarwal999/orgs",
"repos_url": "https://api.github.com/users/anmolagarwal999/repos",
"events_url": "https://api.github.com/users/anmolagarwal999/events{/privacy}",
"received_events_url": "https://api.github.com/users/anmolagarwal999/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! The warning is a general warning. `Left` padding is the usual recommendation, but the original Llama codebase (and Code-Llama is part of the Llama codebase) use a default `right` padding. Our goal is to have similar results out of the box (so right padding) but still allow users to have the best results and we thus give recommendation on the padding side. \r\nThere is a guideline: CodeLlama is the same as Llama. Would it be clearer if the tip section is copied over to CodeLlama? ",
"> Would it be clearer if the tip section is copied over to CodeLlama?\r\n\r\nYes it would help. Should I create a PR for this? ",
"Sure π ",
"Hey @anmolagarwal999 π \r\n\r\nOut of curiosity, have you passed the attention mask that came out of the tokenizer to `model.generate`? That is a common cause of performance degradation that would explain what you're seeing :)",
"https://github.com/huggingface/transformers/issues/26072#issue-1888959893",
"Hi @rafa852 π Have a look at this doc sections about padding sides: https://huggingface.co/docs/transformers/llm_tutorial#wrong-padding-side\r\n\r\nAs for the padding token, it's common to set `tokenizer.pad_token = tokenizer.eos_token` :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Is there any solution to close the warning?",
"If you read the issue you will see that you can simply do `tokenizer.pad_token = tokenizer.eos_token`. Could you read the issue? ",
"is it the same as setting `tokenizer.pad_token = 0` ?",
"No, you cannot set a `token` to an `id`. It is the same as `tokenzier.pad_token_id = 0` if `tokenizer.eos_token_id` is `0`",
"> No, you cannot set a `token` to an `id`. It is the same as `tokenzier.pad_token_id = 0` if `tokenizer.eos_token_id` is `0`\r\n\r\nsorry, I mean setting `tokenizer.pad_token_id = 0` and `tokenizer.eos_token_id != 0` 0 is actually the id for `<unk>` token in llama2 config. Would this affect the inference time result?",
"It should not really affect inference not, by default it is what is used. Feel free to use the eos token as it is common practice",
"@ArthurZucker Hi, just to make sure I understood correctly from this issue, to run batched generation with llama 2 models is this enough?\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"llama-2-7b-hf\") # padding_side will default to \"right\"\r\ntokenizer.pad_token_id = tokenized.eos_token_id\r\n```\r\nI can't be 100% sure neither reading this issue or the tip section",
"@gpucce left-padding should be used for batched inference (see [this comment](https://github.com/huggingface/transformers/issues/26072#issuecomment-1777711664))",
"@gante thank you very much, would this be the case also for T5 models? ",
"@gpucce nope, encoder-decoder models should use right-padding :)",
"> @gpucce nope, encoder-decoder models should use right-padding :)\r\n\r\n@gante can you explain a bit why this is the case?",
"@gpucce Decoder-only models continue generating from the input prompt and can't have gaps between the end of the prompt and the start of generation. They were not trained to handle these gaps.\r\n\r\nEncoder-decoder models convert the input prompt into an encoded vector, which is fed to a decoder. In this case, the decoder starts with an embedded input and `input_ids = [bos_token_id]`. The encoder was trained to handle padding on the right, but not padding on the left."
] | 1,694 | 1,705 | 1,701 |
NONE
| null |
### System Info
Platform:
* transformers_version: "4.33.0.dev0"
* Python: 3.8
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
#### Regarding LLAMA-2 CHAT
I have been using `LLAMA-2 13B chat` for batched inference. I have followed the steps in the TIPS section [here](https://huggingface.co/docs/transformers/model_doc/llama2). My question is about which padding_side to choose. I have tried setting the padding_side to both left and right, and my observations are as follows:
* The results with padding_side = left are really very bad. The results with padding_side = right seem coherent and very good. This also seems to be backed up by [this comment](https://github.com/huggingface/transformers/issues/25022#issuecomment-1647573640).
* However, when using the model with padding_side = right, I get the warning:
```
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
```
Which `padding_side` should be used?
#### Regarding CodeLLAMA
No guidance on how to deal with the absence of a padding token is present on the CodeLLAMA model page.
It would be good to have some documentation on details such as which padding token should be set, which `padding_side` should be used, etc.
Consistent behaviour, i.e. the better results should come in the configuration that does not emit a warning.
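A minimal sketch of batched inference with left padding and the EOS token reused as the pad token, following the recommendations in the comments above (checkpoint name and prompts are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; the same pattern applies to Llama-2 chat and CodeLlama models.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompts = ["def fib(n):", "The capital of France is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
# Passing the attention mask along with the input ids matters for padded batches.
outputs = model.generate(**inputs, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```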
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26072/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26072/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26071/events
|
https://github.com/huggingface/transformers/issues/26071
| 1,888,927,354 |
I_kwDOCUB6oc5wlr56
| 26,071 |
can't use load_in_8bit in rinna/japanese-gpt-neox-3.6b-instruction-ppo
|
{
"login": "50516017",
"id": 23068536,
"node_id": "MDQ6VXNlcjIzMDY4NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/23068536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/50516017",
"html_url": "https://github.com/50516017",
"followers_url": "https://api.github.com/users/50516017/followers",
"following_url": "https://api.github.com/users/50516017/following{/other_user}",
"gists_url": "https://api.github.com/users/50516017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/50516017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/50516017/subscriptions",
"organizations_url": "https://api.github.com/users/50516017/orgs",
"repos_url": "https://api.github.com/users/50516017/repos",
"events_url": "https://api.github.com/users/50516017/events{/privacy}",
"received_events_url": "https://api.github.com/users/50516017/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
Hi!
I want to use "rinna/japanese-gpt-neox-3.6b-instruction-ppo" model & want to use load_in_8bit.
but AutoModelForCausalLM.from_pretrained throw error
ValueError: The model you want to train is loaded in 8-bit precision. Training an 8-bit model is not supported yet.
when i tried training without load_in_8bit ,RuntimeError: No executable batch size found, reached zero. occured.
Is this happening due to insufficient memory?
I am a beginner who is new to deep learning and environment building.
I would like to fine tune a model on the following PC spec.
Does anyone have a solution to this?
please help me.
## enviroment
```
OS: ubuntu22.04 on windows11 using WSL2
GPU:NVIDIA Geforce RTX 4060Ti(16GB)
CPU:AMN Rzen 5 4500 6-core (16GB)
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 537.13 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4060 Ti On | 00000000:01:00.0 On | N/A |
| 0% 43C P8 10W / 165W | 478MiB / 16380MiB | 10% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 276 G /Xwayland N/A |
```
### Who can help?
@pacman100
@muellerz
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. running Training_code
## source code
```
model_name = "rinna/japanese-gpt-neox-3.6b-instruction-ppo"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
config = AutoConfig.from_pretrained(model_name,use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
model_name,
config=config,
device_map="auto",
#torch_dtype=torch.bfloat16,
load_in_8bit=True
)
eval_steps = 11
save_steps = 33
logging_steps = 3
MICRO_BATCH_SIZE = 2
BATCH_SIZE = 16
trainer = transformers.Trainer(
model = model,
data_collator=collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_val,
args=transformers.TrainingArguments(
num_train_epochs=1,
#learning_rate=3e-5,
evaluation_strategy="steps",
save_strategy="steps",
eval_steps=eval_steps,
save_steps=save_steps,
#warmup_ratio=0.15,
per_device_train_batch_size=MICRO_BATCH_SIZE,
per_device_eval_batch_size=MICRO_BATCH_SIZE,
gradient_accumulation_steps=BATCH_SIZE // MICRO_BATCH_SIZE,
#bf16=True,
dataloader_num_workers=12,
logging_steps=logging_steps,
output_dir="./output",
#report_to="wandb",
save_total_limit=1,
load_best_model_at_end=True,
greater_is_better=False,
metric_for_best_model="eval_loss",
fp16=True,
auto_find_batch_size=True
)
)
```
### Expected behavior
## error message(using load_in_8bit)
```
bin /venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/home/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('unix')}
warn(msg)
/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA exception! Error code: no CUDA-capable device is detected
CUDA exception! Error code: initialization error
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
warn(msg)
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
0
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
/venv/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:226: UserWarning: This sequence already has </s>. In future versions this behavior may lead to duplicated eos tokens being added.
warnings.warn(
Traceback (most recent call last):
File "/source.py", line 207, in <module>
trainer = transformers.Trainer(
File "/venv/lib/python3.10/site-packages/transformers/trainer.py", line 362, in __init__
raise ValueError(
ValueError: The model you want to train is loaded in 8-bit precision. Training an 8-bit model is not supported yet.
```
## error message( without load_in_8bit)
```
bin /lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('unix')}
warn(msg)
/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0'), PosixPath('/usr/local/cuda/lib64/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA exception! Error code: no CUDA-capable device is detected
CUDA exception! Error code: initialization error
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
warn(msg)
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
warnings.warn(
Using cuda_amp half precision backend
/lib/python3.10/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
***** Running training *****
Num examples = 226
Num Epochs = 1
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 8
Total optimization steps = 14
Number of trainable parameters = 3607245312
Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
0%| | 0/14 [00:00<?, ?it/s]***** Running training *****
Num examples = 226
Num Epochs = 1
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 8
Total optimization steps = 28
Number of trainable parameters = 3607245312
0%| | 0/14 [00:38<?, ?it/s]Traceback (most recent call last): | 0/28 [00:00<?, ?it/s] File "/home/shimizu/create_LLM/macin/work2/prepare_dataset.py", line 237, in <module>
trainer.train()
File "/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/lib/python3.10/site-packages/accelerate/utils/memory.py", line 130, in decorator
raise RuntimeError("No executable batch size found, reached zero.")
RuntimeError: No executable batch size found, reached zero.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26071/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26070/events
|
https://github.com/huggingface/transformers/issues/26070
| 1,888,878,797 |
I_kwDOCUB6oc5wlgDN
| 26,070 |
Tokenizer and unicode problems
|
{
"login": "bombel28",
"id": 80328658,
"node_id": "MDQ6VXNlcjgwMzI4NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/80328658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bombel28",
"html_url": "https://github.com/bombel28",
"followers_url": "https://api.github.com/users/bombel28/followers",
"following_url": "https://api.github.com/users/bombel28/following{/other_user}",
"gists_url": "https://api.github.com/users/bombel28/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bombel28/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bombel28/subscriptions",
"organizations_url": "https://api.github.com/users/bombel28/orgs",
"repos_url": "https://api.github.com/users/bombel28/repos",
"events_url": "https://api.github.com/users/bombel28/events{/privacy}",
"received_events_url": "https://api.github.com/users/bombel28/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sorry could you explain what « obabooga » is? ",
"text generation web UI, sorry for typo, should be oobabooga. For error log, I just had to change one more time....\r\n\r\n```\r\n if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE and tokens[1] in self.all_special_tokens:\r\n\r\n # Log the text to a file\r\n with open(\"error_log.txt\", \"a\", encoding='utf-8') as log_file:\r\n log_file.write(text + \"\\n\")\r\n\t\t\t\r\n tokens = tokens[1:]\r\n return tokens\r\n\r\n```\r\nBut as using UTF-8 BOM, encoding seems to work.",
"seems to be solved."
] | 1,694 | 1,695 | 1,695 |
NONE
| null |
### System Info
UnicodeEncodeError: 'charmap' codec can't encode character '\u016b' in position 13: character maps to <undefined>
This is new...
Another new issue: when using the hard cut in oobabooga to add an EOS token, the tokenizer can't handle two EOS (or BOS) tokens in a row.
All issues relate to a Llama model (llama-2-13B here).
EDIT: it seems the wrong encoding is used. The earlier version correctly used UTF-8.
EDIT again: converting the training data to UTF-8 BOM solves the Unicode problem.
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. Use a raw-text training file in UTF-8 containing different languages such as Czech and French
2. Use many line feeds at once (six or more)
3. Activate "Add EOS token" in oobabooga and set the Hard Cut String to \n\n\n (default)
Start training on an early llama-2 model with the basic / original tokenizer.
Two problems then prevent training from starting.
In the meantime, I solved the EOS-token problem for myself this way:
in tokenization_llama.py, I added/changed this in `def tokenize(self, text: "TextInput", **kwargs):`

    if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE and tokens[1] in self.all_special_tokens:
        # Log the text to a file
        with open("error_log.txt", "a") as log_file:
            log_file.write(text + "\n")
        tokens = tokens[1:]
    return tokens

So the last unusable token is removed, and the line is logged for further error handling in the training data.
### Expected behavior
As with the old version before, these problems did not occur on the same training data.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26070/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26069/events
|
https://github.com/huggingface/transformers/pull/26069
| 1,888,875,464 |
PR_kwDOCUB6oc5Z8IkE
| 26,069 |
refactor: change default block_size in block size > max position embeddings
|
{
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thank you for your helpful information. ",
"No worries @pphuc25! Would you like to open a PR to set the block size to the minimum of `(1024, config.max_position_embeddings)`? This will prevent the error when we set the block size > max position embeddings",
"That's the cool idea, I will do it, thank you @sanchit-gandhi.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26069). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
Hi,
In the original code, this appears to function correctly when the default block_size is set to 1024. However, I believe that this setting might hinder training performance. Therefore, I have adjusted the default to max_position_embeddings for the case where the two values do not match.
I would like to cc @sanchit-gandhi to review my PR, thank you so much
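For reference, a one-line sketch of the default discussed in the comments above, which caps the block size so it never exceeds the model's maximum position embeddings:
```python
# Sketch of the suggested default: never exceed the model's maximum context length.
block_size = min(1024, config.max_position_embeddings)
```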
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26069/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26069",
"html_url": "https://github.com/huggingface/transformers/pull/26069",
"diff_url": "https://github.com/huggingface/transformers/pull/26069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26069.patch",
"merged_at": 1695052078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26068/events
|
https://github.com/huggingface/transformers/pull/26068
| 1,888,872,146 |
PR_kwDOCUB6oc5Z8H9c
| 26,068 |
chore: correct update_step and correct gradient_accumulation_steps
|
{
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26068). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Hi,
As discussed in issue #25691, it seems that the update step is incorrect. I have made corrections to all the files to keep them in sync. However, during my edits, I noticed multiple instances of the incorrect variable name `args.gradient_accumulation_steps`, so I have rectified those as well.
I would like to cc @ArthurZucker to review my code.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26068/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26068",
"html_url": "https://github.com/huggingface/transformers/pull/26068",
"diff_url": "https://github.com/huggingface/transformers/pull/26068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26068.patch",
"merged_at": 1694539884000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26067/events
|
https://github.com/huggingface/transformers/pull/26067
| 1,888,868,762 |
PR_kwDOCUB6oc5Z8HVb
| 26,067 |
docs: add space to docs
|
{
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26067). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Hi,
In many places in the code, the docs seem to be missing a space, so I added spaces to make them look better.
I would like to cc @amyeroberts to review my code.
thank you so much for reading my PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26067/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26067",
"html_url": "https://github.com/huggingface/transformers/pull/26067",
"diff_url": "https://github.com/huggingface/transformers/pull/26067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26067.patch",
"merged_at": 1694466206000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26066/events
|
https://github.com/huggingface/transformers/issues/26066
| 1,888,704,524 |
I_kwDOCUB6oc5wk1gM
| 26,066 |
"RuntimeError: FlashAttention only supports fp16 and bf16 data type" when training llama-2-7b-hf on databricks-dolly-15k dataset
|
{
"login": "DrishtiShrrrma",
"id": 129742046,
"node_id": "U_kgDOB7u03g",
"avatar_url": "https://avatars.githubusercontent.com/u/129742046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrishtiShrrrma",
"html_url": "https://github.com/DrishtiShrrrma",
"followers_url": "https://api.github.com/users/DrishtiShrrrma/followers",
"following_url": "https://api.github.com/users/DrishtiShrrrma/following{/other_user}",
"gists_url": "https://api.github.com/users/DrishtiShrrrma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrishtiShrrrma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrishtiShrrrma/subscriptions",
"organizations_url": "https://api.github.com/users/DrishtiShrrrma/orgs",
"repos_url": "https://api.github.com/users/DrishtiShrrrma/repos",
"events_url": "https://api.github.com/users/DrishtiShrrrma/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrishtiShrrrma/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hello @DrishtiShrrrma\r\nYou are using a monkey-patched version of Llama modeling script so I can't really know what is going on under the hood.\r\nFlash Attention 2 support is being added https://github.com/huggingface/transformers/pull/25598 if you want to already use it please run:\r\n```bash\r\npip install git+https://github.com/younesbelkada/transformers.git@add-flash-attn-2\r\n```\r\nAnd make sure to add `use_flash_attn_2=True` when calling `from_pretrained` as stated here: https://github.com/huggingface/transformers/pull/25598#issue-1857032190\r\n ",
"I see... Thanks for the help, @younesbelkada! "
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
transformers version: 4.34.0.dev0
Platform: Linux-5.15.109+-x86_64-with-glibc2.35
Python version: 3.10.12
Huggingface_hub version: 0.16.4
Safetensors version: 0.3.3
Accelerate version: 0.23.0.dev0
Accelerate config: Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: no
PyTorch version (GPU?): 2.0.1+cu118 (True)
Tensorflow version (GPU?): 2.13.0 (NA)
Flax version (CPU?/GPU?/TPU?): 0.7.2 (NA)
Jax version: 0.4.14
JaxLib version: 0.4.14
Using GPU in script?: Yes, A100
Using distributed or parallel set-up in script?:
### Who can help?
@philschmid @ArthurZucker @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from datasets import load_dataset
from random import randrange
#### Load dataset from the hub
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
print(f"dataset size: {len(dataset)}")
print(dataset[randrange(len(dataset))])
def format_instruction(sample):
    return f"""### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.
### Input:
{sample['response']}
### Response:
{sample['instruction']}
"""
from random import randrange
print(format_instruction(dataset[randrange(len(dataset))]))
!python -c "import torch; assert torch.cuda.get_device_capability()[0] >= 8, 'Hardware not supported for Flash Attention'"
!pip install ninja packaging
!MAX_JOBS=4 pip install flash-attn --no-build-isolation
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
use_flash_attention = True
if torch.cuda.get_device_capability()[0] >= 8:
    from utils.llama_patch import replace_attn_with_flash_attn
    print("Using flash attention")
    replace_attn_with_flash_attn()
    use_flash_attention = True
### Hugging Face model id
model_id = "NousResearch/Llama-2-7b-hf" # non-gated
### model_id = "meta-llama/Llama-2-7b-hf" # gated
### BitsAndBytesConfig int-4 config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
### Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, use_cache=False, device_map="auto")
model.config.pretraining_tp = 1
### Validate that the model is using flash attention, by comparing doc strings
if use_flash_attention:
    from utils.llama_patch import forward
    assert model.model.layers[0].self_attn.forward.__doc__ == forward.__doc__, "Model is not using flash attention"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
from peft import LoraConfig, prepare_model_for_kbit_training, get_peft_model
### LoRA config based on QLoRA paper
peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)
### prepare model for training
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, peft_config)
from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="llama-7-int4-dolly",
    num_train_epochs=3,
    per_device_train_batch_size=6 if use_flash_attention else 4,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
    optim="paged_adamw_32bit",
    logging_steps=10,
    save_strategy="epoch",
    learning_rate=2e-4,
    bf16=True,
    tf32=True,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    lr_scheduler_type="constant",
    disable_tqdm=True,  # disable tqdm since with packing the reported values are incorrect
)
from trl import SFTTrainer
max_seq_length = 2048 # max sequence length for model and packing of the dataset
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=True,
    formatting_func=format_instruction,
    args=args,
)
### train
trainer.train() # there will not be a progress bar since tqdm is disabled
### save model
trainer.save_model()
### Expected behavior
Hi,
I was trying to train llama-2-7b-hf (quantized to 4-bit) on databricks/databricks-dolly-15k dataset. Essentially, I was trying to follow this amazing tutorial https://www.philschmid.de/instruction-tune-llama-2. But for some reason, I'm running into an error that states that **RuntimeError: FlashAttention only supports fp16 and bf16 data type**.
Colab Notebook: https://colab.research.google.com/drive/13UUTyjXmSSkSkBjnLz_2gy4yQHBVCEcg?usp=sharing
This error seems to occur when the trainer.train() method is called. The error backtrace seems to go all the way down to the flash_attn module.
Things I have tried:
1. I updated the package versions --installed from the branch itself.
2. I tried setting tf32=False, but it didn't help.
3. I also tried to run the script without bf16 and tf32 + enabled fp16=True
4. Lastly, I also changed the optimizer from **paged_adamw_32bit** (which is a 32-bit optimizer) to a normal Adam optimizer.
Unfortunately, none of this worked. What am I doing wrong?
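Following the maintainer's suggestion in the comments above, a minimal sketch of loading the model through the (then-experimental) Flash Attention 2 integration instead of the monkey patch — the `use_flash_attn_2` flag name comes from the linked PR, and the other values mirror the script above:
```python
# Sketch only: assumes the PR branch mentioned in the comments is installed
# (pip install git+https://github.com/younesbelkada/transformers.git@add-flash-attn-2).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,  # FlashAttention only supports fp16/bf16
    use_flash_attn_2=True,       # flag name taken from the linked PR discussion
    use_cache=False,
    device_map="auto",
)
```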
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26066/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26065/events
|
https://github.com/huggingface/transformers/issues/26065
| 1,888,684,455 |
I_kwDOCUB6oc5wkwmn
| 26,065 |
ImportError: cannot import name 'check_peft_version' from 'transformers.utils'
|
{
"login": "nrchan",
"id": 31267546,
"node_id": "MDQ6VXNlcjMxMjY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/31267546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrchan",
"html_url": "https://github.com/nrchan",
"followers_url": "https://api.github.com/users/nrchan/followers",
"following_url": "https://api.github.com/users/nrchan/following{/other_user}",
"gists_url": "https://api.github.com/users/nrchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrchan/subscriptions",
"organizations_url": "https://api.github.com/users/nrchan/orgs",
"repos_url": "https://api.github.com/users/nrchan/repos",
"events_url": "https://api.github.com/users/nrchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrchan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"CC @pacman100 ",
"Same problem here",
"Cc @pacman100 ",
"cc @younesbelkada as he has worked on the PEFT integration in Transformers and can quickly look into it",
"Hi @nrchan \r\nThanks for the issue, I tried to reproduce the issue and did not managed to. I used `transformers==4.33.0` and I tried your command with and without peft library being installed. Note that `check_peft_version` method is correctly part of `transformers.utils`: https://github.com/huggingface/transformers/blob/main/src/transformers/utils/__init__.py#L189 then: https://github.com/huggingface/transformers/blob/main/src/transformers/utils/peft_utils.py#L106\r\n\r\nI am also very surprised about this issue because it seems quite critical and it is strange that we did not flagged it either in our CI or nightly build tests.\r\n\r\nCan you try again with transformers == 4.33.1 ? \r\n\r\n```bash\r\npip install -U transformers\r\n```",
"@younesbelkada Hi, I tried to install all my packages again and the issue was gone. Not really sure what is happening... \r\nThanks for helping :)",
"Thanks! Closing the issue! Glad it worked :) ",
"I am getting the same error with `transformers==4.33.0`:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)\r\n 1116 try:\r\n-> 1117 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1118 except Exception as e:\r\n\r\n13 frames\r\nImportError: cannot import name 'check_peft_version' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nRuntimeError Traceback (most recent call last)\r\n[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)\r\n 1117 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1118 except Exception as e:\r\n-> 1119 raise RuntimeError(\r\n 1120 f\"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its\"\r\n 1121 f\" traceback):\\n{e}\"\r\n\r\nRuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\ncannot import name 'check_peft_version' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py)\r\n```",
"ImportError Traceback (most recent call last)\r\nCell In[29], line 1\r\n----> 1 from transformers import BertForCausalLM\r\n\r\nImportError: cannot import name 'BertForCausalLM' from 'transformers' (/opt/miniconda/lib/python3.11/site-packages/transformers/__init__.py)\r\nSelection deleted\r\n"
] | 1,694 | 1,706 | 1,694 |
NONE
| null |
### System Info
The error happens when I try to import Trainer
```python
from transformers import Trainer
```
Here's the traceback:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File /miniconda3/lib/python3.10/site-packages/transformers/utils/import_utils.py:1130, in _LazyModule._get_module(self, module_name)
1129 try:
-> 1130 return importlib.import_module("." + module_name, self.__name__)
1131 except Exception as e:
File /miniconda3/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /miniconda3/lib/python3.10/site-packages/transformers/trainer.py:40
38 # Integrations must be imported before ML frameworks:
39 # isort: off
---> 40 from .integrations import (
41 get_reporting_integration_callbacks,
42 hp_params,
43 is_fairscale_available,
44 )
46 # isort: on
File /miniconda3/lib/python3.10/site-packages/transformers/integrations/__init__.py:71
33 from .integration_utils import (
34 INTEGRATION_TO_CALLBACK,
35 AzureMLCallback,
(...)
69 run_hp_search_wandb,
70 )
---> 71 from .peft import PeftAdapterMixin
File /miniconda3/lib/python3.10/site-packages/transformers/integrations/peft.py:17
15 from typing import Optional
---> 17 from ..utils import (
18 check_peft_version,
19 find_adapter_config_file,
20 is_accelerate_available,
21 is_peft_available,
22 logging,
23 )
26 if is_accelerate_available():
ImportError: cannot import name 'check_peft_version' from 'transformers.utils' (/miniconda3/lib/python3.10/site-packages/transformers/utils/__init__.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[1], line 2
1 import torch
----> 2 from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration, Trainer, TrainingArguments
3 from torch.utils.data import Dataset
4 import pandas as pd
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File /miniconda3/lib/python3.10/site-packages/transformers/utils/import_utils.py:1120, in _LazyModule.__getattr__(self, name)
1118 value = self._get_module(name)
1119 elif name in self._class_to_module.keys():
-> 1120 module = self._get_module(self._class_to_module[name])
1121 value = getattr(module, name)
1122 else:
File /miniconda3/lib/python3.10/site-packages/transformers/utils/import_utils.py:1132, in _LazyModule._get_module(self, module_name)
1130 return importlib.import_module("." + module_name, self.__name__)
1131 except Exception as e:
-> 1132 raise RuntimeError(
1133 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1134 f" traceback):\n{e}"
1135 ) from e
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'check_peft_version' from 'transformers.utils' (/miniconda3/lib/python3.10/site-packages/transformers/utils/__init__.py)
```
I'm sorry, ```transformers-cli env``` doesn't work for some reason; I'm using transformers 4.33.0 and PyTorch 2.0.1.
Thanks for any help!
### Who can help?
@muellerzr @pac
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Run ```from transformers import Trainer```
2. Exception happen
### Expected behavior
This code used to run last week; I guess some packages might be conflicting.
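If it helps anyone else hitting this, a quick sanity check after reinstalling (just a sketch of what I would run, not part of the original traceback):
```python
import transformers

print(transformers.__version__)  # expect >= 4.33.0 here

# both imports below should succeed on a consistent install;
# the second one is exactly what fails in the traceback above
from transformers.utils import check_peft_version
from transformers import Trainer
```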
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26065/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26064/events
|
https://github.com/huggingface/transformers/issues/26064
| 1,888,644,715 |
I_kwDOCUB6oc5wkm5r
| 26,064 |
FLAVA returns unpooled embeddings by mistake
|
{
"login": "morrisalp",
"id": 8263996,
"node_id": "MDQ6VXNlcjgyNjM5OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morrisalp",
"html_url": "https://github.com/morrisalp",
"followers_url": "https://api.github.com/users/morrisalp/followers",
"following_url": "https://api.github.com/users/morrisalp/following{/other_user}",
"gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions",
"organizations_url": "https://api.github.com/users/morrisalp/orgs",
"repos_url": "https://api.github.com/users/morrisalp/repos",
"events_url": "https://api.github.com/users/morrisalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/morrisalp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! \r\nI don't really think this is a bug, as the comment mentions: `pooled_output = text_outputs[0] # last_hidden_state` maybe the name of the temporary variable `pooled_outputs` is wrong as it is using the last hidden states. \r\nThe documentation for the `pooled_output` states the following:\r\n\r\n> pooler_output (`torch.FloatTensor` of shape `(batch_size, hidden_size)`):\r\n Last layer hidden-state of the first token of the sequence (classification token) after further processing\r\n through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns\r\n the classification token after processing through a linear layer and a tanh activation function. The linear\r\n layer weights are trained from the next sentence prediction (classification) objective during pretraining.\r\n\r\nwhen you want the text / image embeddings you want all the embedding not just the pooled one. ",
"The documentation states e.g. for `get_text_features`: \r\n```\r\nReturns:\r\n text_features (`torch.FloatTensor` of shape `(batch_size, output_dim`):\r\n(etc)\r\n```\r\nThis is wrong since it returns a tensor of shape `(batch_size, n_tokens, hidden_size)`\r\n\r\n> when you want the text / image embeddings you want all the embedding not just the pooled one\r\n\r\nNot for my specific use case... Additionally the CLIP API functions like `get_text_features` also return pooled embeddings.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,699 | 1,699 |
NONE
| null |
### System Info
transformers v4.33.0
### Who can help?
@ArthurZucker @younesbelkada @amyeroberts
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
FLAVA models' `get_text_features`, `get_image_features`, and related functions return unpooled embeddings of shape `(batch_size, n_tokens, hidden_size)` rather than pooled `(batch_size, hidden_size)` as expected and stated in documentation.
Note the bug in the source code [HERE](https://github.com/huggingface/transformers/blob/v4.33.0/src/transformers/models/flava/modeling_flava.py#L1226):
```
pooled_output = text_outputs[0] # last_hidden_state
```
In fact, `text_outputs` exposes the keys `last_hidden_state` and `pooler_output` in that order, the former being unpooled and the latter pooled, so indexing `[0]` selects the unpooled tensor.
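A minimal sketch of the mismatch (checkpoint and prompt chosen for illustration, not taken from the report above):
```python
from transformers import FlavaModel, FlavaProcessor

model = FlavaModel.from_pretrained("facebook/flava-full")
processor = FlavaProcessor.from_pretrained("facebook/flava-full")

inputs = processor(text=["a photo of a cat"], return_tensors="pt")
text_features = model.get_text_features(**inputs)

# observed: (batch_size, n_tokens, hidden_size); expected per the docs: (batch_size, hidden_size)
print(text_features.shape)
```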
### Expected behavior
Should return pooled embeddings.
Presumably this also affects other model methods using this such as the contrastive loss calculation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26064/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26063/events
|
https://github.com/huggingface/transformers/pull/26063
| 1,888,557,235 |
PR_kwDOCUB6oc5Z7KFY
| 26,063 |
[`CITests`] skip failing tests until #26054 is merged
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"fyi @amyeroberts and @LysandreJik to have a green weekend π ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26063). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
# What does this PR do?
Simply skips the 3 failing whisper test until @sanchit-gandhi finishes up #26054
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26063/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26063",
"html_url": "https://github.com/huggingface/transformers/pull/26063",
"diff_url": "https://github.com/huggingface/transformers/pull/26063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26063.patch",
"merged_at": 1694231006000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26062/events
|
https://github.com/huggingface/transformers/issues/26062
| 1,888,437,671 |
I_kwDOCUB6oc5wj0Wn
| 26,062 |
More explicit bounding box formats
|
{
"login": "tonydavis629",
"id": 84199903,
"node_id": "MDQ6VXNlcjg0MTk5OTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/84199903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonydavis629",
"html_url": "https://github.com/tonydavis629",
"followers_url": "https://api.github.com/users/tonydavis629/followers",
"following_url": "https://api.github.com/users/tonydavis629/following{/other_user}",
"gists_url": "https://api.github.com/users/tonydavis629/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonydavis629/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonydavis629/subscriptions",
"organizations_url": "https://api.github.com/users/tonydavis629/orgs",
"repos_url": "https://api.github.com/users/tonydavis629/repos",
"events_url": "https://api.github.com/users/tonydavis629/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonydavis629/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"Hi @tonydavis629, thanks for raising this issue. \r\n\r\nYes - this is something we've been thinking about a lot recently and it's on our roadmap to address! \r\n\r\ncc @rafaelpadilla "
] | 1,694 | 1,694 | null |
NONE
| null |
### Feature request
I am having a difficult time managing the different bounding box formats in this library. For example, the DETR model preprocessor expects COCO format (xywh), but the postprocessor output is pascal_voc (xyxy). Why not keep everything in the same format? Furthermore, the documentation does not say which format is which, so you have to dig through the source to find that information.
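In the meantime I'm working around it with a small helper like the one below (a sketch for plain box tensors, not something taken from the library itself):
```python
import torch

def xywh_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    # COCO-style (x_min, y_min, width, height) -> pascal_voc (x_min, y_min, x_max, y_max)
    x, y, w, h = boxes.unbind(-1)
    return torch.stack([x, y, x + w, y + h], dim=-1)

def xyxy_to_xywh(boxes: torch.Tensor) -> torch.Tensor:
    # pascal_voc (x_min, y_min, x_max, y_max) -> COCO-style (x_min, y_min, width, height)
    x1, y1, x2, y2 = boxes.unbind(-1)
    return torch.stack([x1, y1, x2 - x1, y2 - y1], dim=-1)
```
Having the expected format spelled out next to each preprocessor/postprocessor would remove the need for this kind of guesswork.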
### Motivation
Ease of use
### Your contribution
Just a request
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26062/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26061/events
|
https://github.com/huggingface/transformers/issues/26061
| 1,888,348,928 |
I_kwDOCUB6oc5wjesA
| 26,061 |
How to perform batch inference?
|
{
"login": "ryanshrott",
"id": 13425718,
"node_id": "MDQ6VXNlcjEzNDI1NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/13425718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryanshrott",
"html_url": "https://github.com/ryanshrott",
"followers_url": "https://api.github.com/users/ryanshrott/followers",
"following_url": "https://api.github.com/users/ryanshrott/following{/other_user}",
"gists_url": "https://api.github.com/users/ryanshrott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryanshrott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryanshrott/subscriptions",
"organizations_url": "https://api.github.com/users/ryanshrott/orgs",
"repos_url": "https://api.github.com/users/ryanshrott/repos",
"events_url": "https://api.github.com/users/ryanshrott/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryanshrott/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I opened a PR for this in #24432 to illustrate it for GPT-2, but it will be incorporated in a bigger PR.\r\n\r\ncc @gante @stevhliu",
"See #24575",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey there @ryanshrott @NielsRogge π \r\n\r\nI've added a short section in our basic LLM tutorial page on how to do batched generation in [this PR](https://github.com/huggingface/transformers/pull/26937). \r\n\r\nTaken from the updated guide, here's an example:\r\n```py\r\n>>> from transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\n>>> model = AutoModelForCausalLM.from_pretrained(\r\n... \"mistralai/Mistral-7B-v0.1\", device_map=\"auto\", load_in_4bit=True\r\n... )\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-v0.1\", padding_side=\"left\")\r\n>>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default\r\n>>> model_inputs = tokenizer(\r\n... [\"A list of colors: red, blue\", \"Portugal is\"], return_tensors=\"pt\", padding=True\r\n... ).to(\"cuda\")\r\n>>> generated_ids = model.generate(**model_inputs)\r\n>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n['A list of colors: red, blue, green, yellow, orange, purple, pink,',\r\n'Portugal is a country in southwestern Europe, on the Iber']\r\n```",
"@gante Thanks. Is this faster than running them in a loop?",
"@ryanshrott yes, much faster when measured in thorughput! The caveat is that it requires slightly more memory from your hardware, and it will have a slightly higher latency"
] | 1,694 | 1,698 | 1,697 |
NONE
| null |
### Feature request
I want to pass a list of texts to model.generate.
text = "hey there"
inputs = tokenizer(text, return_tensors="pt").to(0)
out = model.generate(**inputs, max_new_tokens=184)
print(tokenizer.decode(out[0], skip_special_tokens=True))
### Motivation
I want to do batch inference.
### Your contribution
Testing
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26061/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26060/events
|
https://github.com/huggingface/transformers/pull/26060
| 1,888,272,356 |
PR_kwDOCUB6oc5Z6L_7
| 26,060 |
Fix eval accumulation when `accelerate` > 0.20.3
|
{
"login": "sam-scale",
"id": 106690182,
"node_id": "U_kgDOBlv2hg",
"avatar_url": "https://avatars.githubusercontent.com/u/106690182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-scale",
"html_url": "https://github.com/sam-scale",
"followers_url": "https://api.github.com/users/sam-scale/followers",
"following_url": "https://api.github.com/users/sam-scale/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-scale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-scale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-scale/subscriptions",
"organizations_url": "https://api.github.com/users/sam-scale/orgs",
"repos_url": "https://api.github.com/users/sam-scale/repos",
"events_url": "https://api.github.com/users/sam-scale/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-scale/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26060). All of your documentation changes will be reflected on that endpoint.",
"@muellerzr any thoughts :) ",
"Thanks! @amyeroberts @muellerzr is there any way to track when this fix will get added to a release?"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
As mentioned in: https://github.com/huggingface/transformers/issues/25641
Eval accumulation will never happen with `accelerate > 0.20.3`, so this change ensures that `sync_gradients` is ignored when the installed accelerate version is greater than 0.20.3.
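Roughly, the guard looks like the sketch below (illustrative only — the variable names are mine, not the exact diff in this PR):
```python
from packaging import version

import accelerate

# with accelerate > 0.20.3, `sync_gradients` stays False during evaluation,
# so the eval accumulation condition must not depend on it anymore
ACCELERATE_PAST_0_20_3 = version.parse(accelerate.__version__) > version.parse("0.20.3")

# inside the evaluation loop, conceptually:
# if eval_accumulation_steps is not None and (step + 1) % eval_accumulation_steps == 0:
#     if ACCELERATE_PAST_0_20_3 or accelerator.sync_gradients:
#         ...  # offload accumulated tensors to CPU as before
```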
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26060/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26060",
"html_url": "https://github.com/huggingface/transformers/pull/26060",
"diff_url": "https://github.com/huggingface/transformers/pull/26060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26060.patch",
"merged_at": 1694685467000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26059/events
|
https://github.com/huggingface/transformers/pull/26059
| 1,888,170,499 |
PR_kwDOCUB6oc5Z51fb
| 26,059 |
Remove unnecessary `view`s of `position_ids`
|
{
"login": "ramiro050",
"id": 20114526,
"node_id": "MDQ6VXNlcjIwMTE0NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/20114526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ramiro050",
"html_url": "https://github.com/ramiro050",
"followers_url": "https://api.github.com/users/ramiro050/followers",
"following_url": "https://api.github.com/users/ramiro050/following{/other_user}",
"gists_url": "https://api.github.com/users/ramiro050/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ramiro050/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ramiro050/subscriptions",
"organizations_url": "https://api.github.com/users/ramiro050/orgs",
"repos_url": "https://api.github.com/users/ramiro050/repos",
"events_url": "https://api.github.com/users/ramiro050/events{/privacy}",
"received_events_url": "https://api.github.com/users/ramiro050/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"For context, I work in the [Torch-MLIR project](https://github.com/llvm/torch-mlir), which converts PyTorch models into [MLIR](https://mlir.llvm.org/). Handling the `view` op in its most general form (especially the use of `-1`s) has been a big challenge for us for a while now (there's some work we need to do to improve the shape information propagation and dynamic shape support in our project). While the goal is to support this as is in the future, in the short term, removing this `view` from the computation would unblock us to run LLaMA in our compiler.\r\n",
"pinging @gante for a second look ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26059). All of your documentation changes will be reflected on that endpoint.",
"Running `RUN_SLOW=1 pytest tests/models/llama`, I get `172 passed, 47 skipped, 35 warnings`. The warnings are either due to deprecation or from using PyTorch's tracer; they don't seem related to the changes in this PR. The skipped tests are these:\r\n\r\n```\r\ntest_modeling_llama.py::LlamaModelTest::test_cpu_offload SKIPPED (test requires CUDA)\r\ntest_modeling_llama.py::LlamaModelTest::test_disk_offload SKIPPED (test requires CUDA)\r\ntest_modeling_llama.py::LlamaModelTest::test_equivalence_flax_to_pt SKIPPED (test is PT+FLAX test)\r\ntest_modeling_llama.py::LlamaModelTest::test_equivalence_pt_to_flax SKIPPED (test is PT+FLAX test)\r\ntest_modeling_llama.py::LlamaModelTest::test_model_parallel_beam_search SKIPPED (test requires multiple GPUs)\r\ntest_modeling_llama.py::LlamaModelTest::test_model_parallel_equal_results SKIPPED (test requires multiple GPUs)\r\ntest_modeling_llama.py::LlamaModelTest::test_model_parallelism SKIPPED (test requires multiple GPUs)\r\ntest_modeling_llama.py::LlamaModelTest::test_model_parallelization SKIPPED (test requires multiple GPUs)\r\ntest_modeling_llama.py::LlamaModelTest::test_multi_gpu_data_parallel_forward SKIPPED (test requires multiple GPUs)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_audio_classification SKIPPED (LlamaModelTest::test_pipeline_audio_classification is skipped: `audio-classification` is not in `self.pipeline_model_mapping` for `...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_automatic_speech_recognition SKIPPED (LlamaModelTest::test_pipeline_automatic_speech_recognition is skipped: `automatic-speech-recognition` is not in `self.pipel...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_conversational SKIPPED (LlamaModelTest::test_pipeline_conversational is skipped: `conversational` is not in `self.pipeline_model_mapping` for `LlamaModelTest`.)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_depth_estimation SKIPPED (LlamaModelTest::test_pipeline_depth_estimation is skipped: `depth-estimation` is not in `self.pipeline_model_mapping` for `LlamaModelTe...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_document_question_answering SKIPPED (LlamaModelTest::test_pipeline_document_question_answering is skipped: `document-question-answering` is not in `self.pipeline...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_fill_mask SKIPPED (LlamaModelTest::test_pipeline_fill_mask is skipped: `fill-mask` is not in `self.pipeline_model_mapping` for `LlamaModelTest`.)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_image_classification SKIPPED (LlamaModelTest::test_pipeline_image_classification is skipped: `image-classification` is not in `self.pipeline_model_mapping` for `...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_image_segmentation SKIPPED (LlamaModelTest::test_pipeline_image_segmentation is skipped: `image-segmentation` is not in `self.pipeline_model_mapping` for `LlamaM...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_image_to_text SKIPPED (LlamaModelTest::test_pipeline_image_to_text is skipped: `image-to-text` is not in `self.pipeline_model_mapping` for `LlamaModelTest`.)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_mask_generation SKIPPED (`run_pipeline_test` is currently not implemented.)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_object_detection SKIPPED (LlamaModelTest::test_pipeline_object_detection is skipped: `object-detection` is not in `self.pipeline_model_mapping` for 
`LlamaModelTe...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_question_answering SKIPPED (LlamaModelTest::test_pipeline_question_answering is skipped: `question-answering` is not in `self.pipeline_model_mapping` for `LlamaM...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_summarization SKIPPED (LlamaModelTest::test_pipeline_summarization is skipped: `summarization` is not in `self.pipeline_model_mapping` for `LlamaModelTest`.)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_table_question_answering SKIPPED (LlamaModelTest::test_pipeline_table_question_answering is skipped: `table-question-answering` is not in `self.pipeline_model_ma...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_text2text_generation SKIPPED (LlamaModelTest::test_pipeline_text2text_generation is skipped: `text2text-generation` is not in `self.pipeline_model_mapping` for `...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_text_to_audio SKIPPED (LlamaModelTest::test_pipeline_text_to_audio is skipped: `text-to-audio` is not in `self.pipeline_model_mapping` for `LlamaModelTest`.)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_token_classification SKIPPED (LlamaModelTest::test_pipeline_token_classification is skipped: `token-classification` is not in `self.pipeline_model_mapping` for `...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_translation SKIPPED (LlamaModelTest::test_pipeline_translation is skipped: `translation` is not in `self.pipeline_model_mapping` for `LlamaModelTest`.)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_video_classification SKIPPED (LlamaModelTest::test_pipeline_video_classification is skipped: `video-classification` is not in `self.pipeline_model_mapping` for `...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_visual_question_answering SKIPPED (LlamaModelTest::test_pipeline_visual_question_answering is skipped: `visual-question-answering` is not in `self.pipeline_model...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_zero_shot_audio_classification SKIPPED (LlamaModelTest::test_pipeline_zero_shot_audio_classification is skipped: `zero-shot-audio-classification` is not in `self...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_zero_shot_image_classification SKIPPED (LlamaModelTest::test_pipeline_zero_shot_image_classification is skipped: `zero-shot-image-classification` is not in `self...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pipeline_zero_shot_object_detection SKIPPED (LlamaModelTest::test_pipeline_zero_shot_object_detection is skipped: `zero-shot-object-detection` is not in `self.pipeline_mo...)\r\ntest_modeling_llama.py::LlamaModelTest::test_pt_tf_model_equivalence SKIPPED (test is PT+TF test)\r\ntest_modeling_llama.py::LlamaModelTest::test_save_load_fast_init_from_base SKIPPED (LLaMA buffers include complex numbers, which breaks this test)\r\ntest_modeling_llama.py::LlamaIntegrationTest::test_model_13b_greedy_generation SKIPPED (Model is curently gated)\r\ntest_modeling_llama.py::LlamaIntegrationTest::test_model_13b_logits SKIPPED (Logits are not exactly the same, once we fix the instabalities somehow, will update!)\r\ntest_modeling_llama.py::LlamaIntegrationTest::test_model_13bf_logits SKIPPED (Logits are not exactly the same, once we fix the instabalities somehow, will update!)\r\ntest_modeling_llama.py::LlamaIntegrationTest::test_model_70b_logits SKIPPED (Logits are not exactly the same, once we fix the instabalities somehow, will update! 
Also it is gonna be a `too_slow` test)\r\ntest_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits SKIPPED (Logits are not exactly the same, once we fix the instabalities somehow, will update!)\r\ntest_modeling_llama.py::CodeLlamaIntegrationTest::test_model_7b_logits SKIPPED (test requires CUDA)\r\ntest_tokenization_llama.py::LlamaTokenizationTest::test_batch_encode_plus_tensors SKIPPED (test is PT+TF test)\r\ntest_tokenization_llama.py::LlamaTokenizationTest::test_pickle_subword_regularization_tokenizer SKIPPED (worker 'gw4' crashed on CI, passing locally.)\r\ntest_tokenization_llama.py::LlamaTokenizationTest::test_save_pretrained SKIPPED (Let's wait for the fast tokenizer!)\r\ntest_tokenization_llama.py::LlamaTokenizationTest::test_save_slow_from_fast_and_reload_fast SKIPPED (Unfortunately way too slow to build a BPE with SentencePiece.)\r\ntest_tokenization_llama.py::LlamaTokenizationTest::test_subword_regularization_tokenizer SKIPPED (worker 'gw4' crashed on CI, passing locally.)\r\ntest_tokenization_llama.py::LlamaTokenizationTest::test_tf_encode_plus_sent_to_model SKIPPED (test requires TensorFlow)\r\ntest_tokenization_llama.py::LlamaIntegrationTest::test_integration_test_xnli SKIPPED (RUN_TOKENIZER_INTEGRATION=1 to run tokenizer integration tests)\r\n```",
"@gante, bumping this :)",
"He is off for a while so won't be able to review this! le me ping @fxmarty for a second look! ποΈ ",
"> It could be nice to fix it in this PR as well. Some that may be impacted: codgen, gpt2, gpt_neo, gpt_neox, gptj, gpt_bigcode, imagegpt, llama.\r\n\r\nDone! PTAL",
"@ArthurZucker @fxmarty, bumping this",
"@ArthurZucker, bumping this"
] | 1,694 | 1,696 | 1,696 |
CONTRIBUTOR
| null |
When `position_ids` is `None`, its value is generated using `torch.arange`, which creates a tensor of size `(seq_length + initial_val) - initial_val = seq_length`. The tensor is then unsqueezed, resulting in a tensor of shape `(1, seq_length)`. This means that the last `view` to a tensor of shape `(-1, seq_length)` is a no-op.
Similarly, when `position_ids` is not `None`, the documentation for the models specifies that it should already have shape `(batch_size, seq_length)`, making the `view` in this case unnecessary as well.
This commit removes the unnecessary views.
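A small shape check that illustrates why the `view` is a no-op (standalone sketch, mirroring the shapes described above):
```python
import torch

seq_length, past_key_values_length = 5, 2

position_ids = torch.arange(
    past_key_values_length, seq_length + past_key_values_length, dtype=torch.long
)
position_ids = position_ids.unsqueeze(0)  # shape: (1, seq_length)

# the subsequent view to (-1, seq_length) changes nothing
assert position_ids.view(-1, seq_length).shape == position_ids.shape
```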
@gante @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26059/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26059/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26059",
"html_url": "https://github.com/huggingface/transformers/pull/26059",
"diff_url": "https://github.com/huggingface/transformers/pull/26059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26059.patch",
"merged_at": 1696580880000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26058/events
|
https://github.com/huggingface/transformers/pull/26058
| 1,888,134,421 |
PR_kwDOCUB6oc5Z5tos
| 26,058 |
Add LLM doc
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Doc will also be released with blog post: https://github.com/huggingface/blog/pull/1473",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review @MKhalusova! \r\n\r\nAnswering the bullet points directly in line:\r\n\r\n> Given the level of detail, and the narrative style, I think the doc would fit better under the Conceptual Guides section rather than in Tutorials.\r\n\r\nHappy to move it to the conceptual guides if it fits better - @gante wdyt?\r\n\r\n> I would suggest changing the title of the doc as it is not very descriptive. Perhaps \"LLM inference optimizations\"/\"Efficient LLM deployment\"?\r\n\r\nYes, agree. Should be better now.\r\n\r\n> It would also be great to shorten/simplify the guide where possible.\r\n\r\nI'd need more precise comments here. We could maybe also do this in a second pass. As a general guide on how to optimize LLMs for memory and speed, I'm not sure how I would simplify or shorten it.\r\n\r\n> There are some really cool gems in the doc that would be great to highlight somehow (bold font or using a <Tip>), e.g. \"Therefore, inference time is often not reduced when using quantized weights, but rather increases\".\r\n\r\nGuess this is also a style issue. Maybe this can be done in a second pass to align it more with other docs.\r\n\r\n> I have left some nits regarding style, issues with formula rendering, etc.\r\n\r\nAddressed them",
"@gante could you also take a look here? "
] | 1,694 | 1,697 | 1,697 |
MEMBER
| null |
# Optimize LLM tutorial
This PR adds the doc version of https://huggingface.co/blog/optimize-llm .
Given the many code snippets, I think the tutorial is a nice addition to the Transformers docs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26058/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26058/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26058",
"html_url": "https://github.com/huggingface/transformers/pull/26058",
"diff_url": "https://github.com/huggingface/transformers/pull/26058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26058.patch",
"merged_at": 1697465391000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26057/events
|
https://github.com/huggingface/transformers/issues/26057
| 1,888,110,238 |
I_kwDOCUB6oc5wikae
| 26,057 |
Dinov2 for depth estimation
|
{
"login": "rfan-debug",
"id": 69488297,
"node_id": "MDQ6VXNlcjY5NDg4Mjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/69488297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rfan-debug",
"html_url": "https://github.com/rfan-debug",
"followers_url": "https://api.github.com/users/rfan-debug/followers",
"following_url": "https://api.github.com/users/rfan-debug/following{/other_user}",
"gists_url": "https://api.github.com/users/rfan-debug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rfan-debug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rfan-debug/subscriptions",
"organizations_url": "https://api.github.com/users/rfan-debug/orgs",
"repos_url": "https://api.github.com/users/rfan-debug/repos",
"events_url": "https://api.github.com/users/rfan-debug/events{/privacy}",
"received_events_url": "https://api.github.com/users/rfan-debug/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @rfan-debug, this would be a great contribution! \r\n\r\nIf you'd like to open a PR we'd be happy to review and answer any questions if you need help. \r\n\r\ncc @rafaelpadilla",
"Hi,\r\n\r\nSo I saw they released the DINOv2 checkpoints with a DPT head: https://github.com/facebookresearch/dinov2#pretrained-heads---depth-estimation. I do have a [PR](https://github.com/huggingface/transformers/pull/25799) which extends DPT to leverage the `AutoBackbone` API. This means that the DPT head can be used together with any backbone (like ViT, DINOv2, etc.). This way, we could just do the following:\r\n\r\n```\r\nfrom transformers import Dinov2Config, DPTConfig, DPTForDepthEstimation\r\n\r\nbackbone_config = Dinov2Config(num_hidden_layers=2, num_attention_heads=4, out_features=[\"stage1\", \"stage2\", \"stage3\", \"stage4\")\r\nconfig = DPTConfig(backbone_config=backbone_config)\r\nmodel = DPTForDepthEstimation(config)\r\n```\r\n\r\n=> so would be great to leverage this instead of adding a standalone `Dinov2ForDepthEstimation`.\r\n",
"@NielsRogge Leveraging the AutoBackbone API is a great idea. Thanks for your advice and contributions! I'll follow your code examples. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### Feature request
Dinov2's original repo has an example using a DINOv2 backbone + DPT head for depth estimation [notebook link](https://github.com/facebookresearch/dinov2/blob/main/notebooks/depth_estimation.ipynb). If we integrate it into the `transformers` repo by adding a class `Dinov2ForImageDepthEstimation` whose `forward` method returns a `DepthEstimatorOutput`, we'll have a unified output interface across all depth estimation models. This would also make it easy to chain this powerful depth estimation method with other models through the `transformers` pipelines.
### Motivation
This would be a great feature for many production use cases and research problems. One example is camera angle estimation from a 2D image, where reliable depth information is critical. In my limited test cases, depth estimation with a DINOv2 backbone + DPT head works much better than the existing [DPT model](https://huggingface.co/docs/transformers/main/model_doc/dpt) itself.
### Your contribution
I can submit a PR to add this feature if other developers don't have the bandwidth to deal with it. (I am relatively new to `transformers`'s development workflow though.)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26057/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26056/events
|
https://github.com/huggingface/transformers/pull/26056
| 1,888,064,580 |
PR_kwDOCUB6oc5Z5ebV
| 26,056 |
safeguard torch distributed check
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
1. Resolves https://github.com/huggingface/transformers/issues/26039
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26056/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26056",
"html_url": "https://github.com/huggingface/transformers/pull/26056",
"diff_url": "https://github.com/huggingface/transformers/pull/26056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26056.patch",
"merged_at": 1694580997000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26055/events
|
https://github.com/huggingface/transformers/issues/26055
| 1,887,991,935 |
I_kwDOCUB6oc5wiHh_
| 26,055 |
Device mismatch VITS speaker embedding and speaker_id
|
{
"login": "fakhirali",
"id": 32309516,
"node_id": "MDQ6VXNlcjMyMzA5NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/32309516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakhirali",
"html_url": "https://github.com/fakhirali",
"followers_url": "https://api.github.com/users/fakhirali/followers",
"following_url": "https://api.github.com/users/fakhirali/following{/other_user}",
"gists_url": "https://api.github.com/users/fakhirali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakhirali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakhirali/subscriptions",
"organizations_url": "https://api.github.com/users/fakhirali/orgs",
"repos_url": "https://api.github.com/users/fakhirali/repos",
"events_url": "https://api.github.com/users/fakhirali/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakhirali/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @fakhirali - could you try using the official checkpoint (i.e. the one on the [kakao-enterprise org](https://huggingface.co/kakao-enterprise/vits-vctk)) and the following modified script?\r\n```python\r\nfrom transformers import VitsModel, AutoTokenizer\r\nimport torch\r\n\r\nmodel = VitsModel.from_pretrained(\"kakao-enterprise/vits-vctk\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"kakao-enterprise/vits-vctk\")\r\nmodel.to(\"cuda:0\")\r\n\r\ntext = \"Hey, it's Hugging Face on the phone\"\r\ninputs = tokenizer(text, return_tensors=\"pt\")\r\ninputs[\"speaker_id\"] = torch.tensor(3)\r\ninputs = inputs.to(\"cuda:0\")\r\n\r\nwith torch.no_grad():\r\n output = model(**inputs).waveform[0]\r\n```\r\n\r\n=> I've tested on MPS and this script worked for me. The trick is overriding your `inputs` with the version placed on your accelerator device.",
"I still get the same error. It is due to the following line in [modeling_vits.py](https://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/vits/modeling_vits.py#L1438)\r\n\r\n```python\r\nspeaker_embeddings = self.embed_speaker(torch.tensor([speaker_id])).unsqueeze(-1)\r\n```\r\nI just changed it to the following and now it works.\r\n```python\r\nspeaker_embeddings = self.embed_speaker(torch.tensor([speaker_id], device=self.device)).unsqueeze(-1)\r\n```\r\nI have a [fork](https://github.com/fakhirali/transformers) with this fix should I submit a PR?",
"Yes please! Thanks @fakhirali "
] | 1,694 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.33.1
- Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi @hollance
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce the error
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("Matthijs/vits-vctk")
tokenizer = AutoTokenizer.from_pretrained("Matthijs/vits-vctk")
model.to('cuda')
text = "Hey, it's Hugging Face on the phone"
inputs = tokenizer(text, return_tensors="pt")
inputs.to('cuda')
inputs['speaker_id'] = 3
with torch.no_grad():
    output = model(**inputs).waveform[0]

# or

text = "Hey, it's Hugging Face on the phone"
inputs = tokenizer(text, return_tensors="pt")
inputs['speaker_id'] = torch.tensor(3)
inputs.to('cuda')
with torch.no_grad():
    output = model(**inputs).waveform[0]
```
### Error
```terminal
Traceback (most recent call last):
File "/home/fakhir/Code/VoiceChat/tts.py", line 27, in <module>
mouth.say(text)
File "/home/fakhir/Code/VoiceChat/tts.py", line 19, in say
output = self.model(**inputs).waveform[0].to('cpu')
File "/home/fakhir/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fakhir/.local/lib/python3.10/site-packages/transformers/models/vits/modeling_vits.py", line 1438, in forward
speaker_embeddings = self.embed_speaker(torch.tensor([speaker_id])).unsqueeze(-1)
File "/home/fakhir/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/fakhir/.local/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/home/fakhir/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
My temporary solution is to add this to [modeling_vits.py](https://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/vits/modeling_vits.py#L1438)
```python
if self.config.num_speakers > 1 and speaker_id is not None:
    if not 0 <= speaker_id < self.config.num_speakers:
        raise ValueError(f"Set `speaker_id` in the range 0-{self.config.num_speakers - 1}.")
    speaker_id = torch.tensor([speaker_id]).to(self.embed_speaker.weight.device)  # added this
    speaker_embeddings = self.embed_speaker(speaker_id).unsqueeze(-1)
else:
    speaker_embeddings = None
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26055/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26054/events
|
https://github.com/huggingface/transformers/pull/26054
| 1,887,734,836 |
PR_kwDOCUB6oc5Z4Wul
| 26,054 |
[Whisper Tokenizer] Encode timestamps
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"BTW, do we expect this changes showing up on users' code results? I think it's yes, but wondering why it is OK?",
"Alright refactored the tokenizer a bit to maintain the same results as what we had before! Essentially, we only output the timestamp tokens if the user passes `decode_with_timestamps=True`. Otherwise, we filter them out from the token ids, maintaining the behaviour we had before where the `.decode` method skipped them since they were OOV.\r\n\r\nPreviously, when the timestamps were not in the vocab:\r\n* `decode_with_timestamps=False`: timestamp tokens skipped from the `.decode` method since they're OOV\r\n* `decode_with_timestamps=True`: timestamp tokens manually added by the `._decode_with_timestamps` method\r\n\r\nNow, the timestamps are in the vocab:\r\n* `decode_with_timestamps=False`: timestamp tokens filtered out from within the `.decode` method (they're in-vocabulary now, so aren't automatically skipped)\r\n* `decode_with_timestamps=True`: timestamp tokens added automatically in the `.decode` method\r\n\r\nHow does this look to you @ArthurZucker @ydshieh?",
"Sound good to me. Arthur will know better and can provide better comments in any I believe.",
"The tests pass for me locally, but a small subset of the fast tokenizer tests timeout on the CI: [link](https://circleci-tasks-prod.s3.us-east-1.amazonaws.com/forks/storage/artifacts/d5b57382-7f67-4274-9623-7f238ef4fb6f/457033993/b3e236a7-4839-4737-93fe-06b96ad1c060/0/~/transformers/reports/tests_torch/failures_short.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQVFQINEODXMLOC25%2F20230912%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230912T165951Z&X-Amz-Expires=60&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEDEaCXVzLWVhc3QtMSJHMEUCIAiUyw6JJaKlAyhx09NvNyD%2Bim9IwhgucrLVaP6eEZXtAiEAuWI9NcS7UXgP02B8VwaXXx15XCCU9%2BKPXJ%2BSZ%2FuAD1kqqwIIGhADGgwwNDU0NjY4MDY1NTYiDDzx64NrLY3vyLDcmCqIAuc%2BpK0RmEy8pRmLia3gseKKbhKOZ%2B66iDdeDxAEjnEApGprS28lsDHSM5xYcSI6KyQk%2B3%2Bk3tCOsX%2Bz6VbWbZM5Tn7Wpc1t6R5NUjPl%2BggOj8c%2BstRFvA0hRgacjs%2FB%2Bvj0ATnulS3ZeKUhe9vh2ubO0EuuBdp8dJ5js4eIfa711UqH6bWEVeMVss5N8aFWCyAZu0am9EIlRGR5C1xiIkkuzYdLvcY1R%2FmUXxL3TNX9lBVvyFCOS4l0yJ0u9iTyoan%2FGTO4UQMvsOQulq8TdmvsyWGwcju8cz37PqP%2F59%2B8dwVnxwU2tHLzmpqyRW5hPMf6t9WSK0r08Ix330ZKuxOyBsnwk3Ll3zDMsYKoBjqdAbY5dhwaoWcPQRpCrURiJdnd2LBtRCjGbffIm0B1exweCcdh0ZiBo7vG9xUBV%2BIJDpv8APP55re0ep%2FAg21T7b2DfWLDMm7jN8rgig0jzXIFOPqdLL1pSkgz32enWfC%2FPYunTtsT%2BEVCWTLnylp1K2iYgeKnCzSFtON0%2FEPOjgZ58FFFQjL%2BDaglGc9HhFcHhKUnArLwL0ODN%2FFgqEM%3D&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=ea2518c3fc7e1bfe60b8c83a36c320fdf0c01c416e87dd1d0f272727ea7c72de)\r\n\r\n=> any reason these tests should timeout now that we have the expanded vocabulary with the new added tokens? It seems to me that it's the same stage that always gets stuck:\r\nhttps://github.com/huggingface/transformers/blob/8f609ab9e00e99b05fe2463483748c1f664295d1/src/transformers/tokenization_utils_fast.py#L281-L282",
"Yeah works pretty well: after the first cache step, tokenizing decode is on-par with what we had before. \r\n\r\nThat's correct regarding only computing the cache once: we'll likely never actually have to re-compute the cache, since in practice, everyone will used a fixed time-precision of `0.02`. However, the code can handle multiple values of time-precision, which is stay consistent with the `.decode` method where we allow `time_precision` as an arg"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
As described in #24476, we have uploaded the Whisper timestamp tokens to the tokenizers on the Hub. This requires updating the Whisper tokenizer and tokenizer tests to handle the newly added tokens.
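For illustration, the intended decoding behaviour once the timestamp tokens are in the vocabulary could be exercised roughly like this (a sketch; the exact ids and output depend on the uploaded tokenizer files):

```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

# Build ids containing a timestamp token followed by ordinary text tokens
ids = [tokenizer.convert_tokens_to_ids("<|0.00|>")] + tokenizer(" hey", add_special_tokens=False).input_ids

print(tokenizer.decode(ids))                                # timestamp token filtered out by default
print(tokenizer.decode(ids, decode_with_timestamps=True))   # timestamp token kept, e.g. "<|0.00|> hey"
```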
cc @ydshieh @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26054/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26054",
"html_url": "https://github.com/huggingface/transformers/pull/26054",
"diff_url": "https://github.com/huggingface/transformers/pull/26054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26054.patch",
"merged_at": 1694689243000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26053/events
|
https://github.com/huggingface/transformers/pull/26053
| 1,887,697,711 |
PR_kwDOCUB6oc5Z4Os8
| 26,053 |
[Whisper Tokenizer] Test timestamps
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26053). All of your documentation changes will be reflected on that endpoint.",
"Hey!\r\n> the slow tokenizer removes spaces after special token ids \r\ncan be fixed by updating the AddedTokens to have lstrip and rstrip set to False and pushing the tokenizer again. \r\n> the fast tokenizer gives the wrong ids\r\n\r\nseems fixed on main no?:\r\n```python \r\nfrom transformers import WhisperTokenizer, WhisperTokenizerFast\r\n \r\ntokenizer = WhisperTokenizer.from_pretrained(f\"openai/whisper-tiny\")\r\ntokenizer_fast = WhisperTokenizerFast.from_pretrained(f\"openai/whisper-tiny\")\r\n \r\nprint(tokenizer.encode(\"<|0.00|> hey\"))\r\nprint(tokenizer_fast.encode(\"<|0.00|> hey\"))\r\n```\r\nLoading the tokenizer from the `special_tokens_map.json` and the `added_tokens.json` will be removed in `transformers 5`, it is kept for forward compatibility, but it is recommended to update your `tokenizer_config.json` by uploading it again. You will see the new `added_tokens_decoder` attribute that will store the relevant information.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\n[50258, 50363, 50364, 17230, 50257]\r\n[50258, 50363, 50364, 17230, 50257]\r\n```\r\n",
"Hey @ArthurZucker - currently these tests fail because the Whisper tokenizer does not respect the space between added tokens and the subsequent token. See the following code snippet for an example where the space is removed:\r\n\r\n```python\r\nfrom transformers import WhisperTokenizerFast\r\n\r\ntokenizer = WhisperTokenizerFast.from_pretrained(\"openai/whisper-tiny\")\r\n\r\ninput_str = \"<|0.00|> Whisper can do timestamps?<|2.60|>\"\r\nencoding = tokenizer(input_str, add_special_tokens=False).input_ids\r\ndecoded_str = tokenizer.decode(encoding, decode_with_timestamps=True)\r\n\r\nprint(decoded_str)\r\n```\r\n**Print Output:**\r\n```\r\n<|0.00|>Whisper can do timestamps?<|2.60|>\r\n```\r\n\r\nLooking at the diff between this and the original:\r\n```diff\r\n<|0.00|> Whisper can do timestamps?<|2.60|>\r\n<|0.00|>Whisper can do timestamps?<|2.60|>\r\n```\r\nwe see that the space is contracted between the added token `<|0.00|>` and the first non-added token.\r\n\r\nThis is using the timestamp tokens added with `lstrip=rstrip=False`, as per the code snippet: https://github.com/huggingface/transformers/pull/24476#issuecomment-1711469487\r\n\r\nEven if we try with `lstrip=rstrip=True`, we don't get the expected behaviour:\r\n\r\n```python\r\nfrom transformers import WhisperTokenizerFast, AddedToken\r\n\r\n# specify revision that does not have timestamp tokens added\r\ntokenizer = WhisperTokenizerFast.from_pretrained(\"openai/whisper-tiny\", revision=\"1135fb87afc1dd37439285d93c0388041612e9a4\")\r\n\r\n# add the timestamp tokens\r\ntimestamps = [AddedToken(\"<|%.2f|>\" % (i * 0.02), lstrip=True, rstrip=True) for i in range(1500 + 1)]\r\ntokenizer.add_tokens(timestamps)\r\n\r\ninput_str = \"<|0.00|> Whisper can do timestamps?<|2.60|>\"\r\nencoding = tokenizer(input_str, add_special_tokens=False).input_ids\r\ndecoded_str = tokenizer.decode(encoding, decode_with_timestamps=True)\r\n\r\nprint(decoded_str)\r\n```\r\n\r\n**Print Output:**\r\n```\r\n<|0.00|>Whisper can do timestamps?<|2.60|>\r\n```\r\n\r\nThis is because we get different encodings with and without timestamp tokens:\r\n```python\r\nencoding_with = tokenizer(\"<|0.00|> The\", add_special_tokens=False).input_ids\r\nencoding_without = tokenizer(\" The\", add_special_tokens=False).input_ids\r\n\r\nprint(\"With timestamps: \", encoding_with)\r\nprint(\"Without timestamps: \", encoding_without)\r\n```\r\n**Print Output:**\r\n```\r\nWith timestamps: [50364, 2471, 271, 610]\r\nWithout timestamps: [41132, 610]\r\n```\r\n\r\nWe can see the missing space in these tokens:\r\n```python\r\nprint(tokenizer.decode(encoding_with))\r\nprint(tokenizer.decode(encoding_without))\r\n```\r\n**Print Output:**\r\n```\r\nThe\r\n The\r\n```",
"I am just not getting these on main π΅βπ« ",
"As discussed offline this is because we need to load the tokenizer, override the added tokens, and re-push it with the `added_tokens_decoder`. However, there's currently a bug where all special token ids are loaded with `lstrip=rstrip=True`, regardless of how they're saved in the tokenizer file, and they can't be overriden using `tokenizer.add_tokens(...)`.\r\n\r\nWill remove these special tokens and re-add them for the Whisper model. We can explore a proper fix in the lib as a long-term solution.",
"Fixed by #26538 ! ",
"Running the CI to see if we test the correct tokenizer encoding",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,704 | 1,704 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds a test to check that the Whisper tokenizer correctly encodes/decodes the newly added timestamp tokens. This is quite a strict test, since we check that the encoded ids match the expected ids, as well as the target string.
Currently, this test fails, since the slow tokenizer removes spaces after special token ids (see https://github.com/huggingface/transformers/pull/24476#issuecomment-1711469487), and the fast tokenizer gives the wrong ids (see https://github.com/huggingface/transformers/pull/24476#issuecomment-1711657385)
We can be confident the Whisper tokenizers are working as expected once this test is green
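For reference, the core of the round-trip check looks roughly like this (a sketch of the test, not the exact implementation):

```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

input_str = "<|0.00|> Whisper can do timestamps?<|2.60|>"
ids = tokenizer(input_str, add_special_tokens=False).input_ids
decoded = tokenizer.decode(ids, decode_with_timestamps=True)

# The encode/decode round trip should reproduce the original string exactly
assert decoded == input_str, (decoded, input_str)
```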
cc @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26053/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26053",
"html_url": "https://github.com/huggingface/transformers/pull/26053",
"diff_url": "https://github.com/huggingface/transformers/pull/26053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26053.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26052/events
|
https://github.com/huggingface/transformers/pull/26052
| 1,887,681,046 |
PR_kwDOCUB6oc5Z4LCE
| 26,052 |
Docstring check
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> what happens if the docstring contains argument name(s) that are not signature?\r\n\r\nThen the argument in the docstrings should be removed. The goal is to ensure consistency, and arguments not in the signature are often hidden for a reason (deprecated arguments for instance) or might even not be accepted (but end up in the docstring after a bad copy paste for instance). Will fix the behavior you flagged in a comment.",
"Feel free to ping me again for another review! ",
"Waiting for @ydshieh review when he is back from vacation.",
"Here is a second review where I found something to check:\r\n\r\n- Previous `PILImageResampling.BICUBIC` now becomes `Resampling.BICUBIC` (beit image processor etc), similar to `Resampling.BILINEAR` (convnext image processor etc). Do you know why? There are quite of them, but you can find by search **`Resampling.**\r\n- `**optional**` is changed to `**optional*`, but I think we want `*optional*` (in logit processor file)\r\n- ``[\"<s>NOTUSED\", \"</s>NOTUSED\"]`` become `['<s>NOTUSED', '</s>NOTUSED']` (list of str): I am fine with it, but want to double check with you.\r\n- There is `eps: int = 1e-6,` but I think type annotation is not the goal of this PR\r\n\r\n",
"Yes, the tool does not check for type annotations. Added something to catch the `**optional**` floating around.\r\n\r\nAs for the two other points, this falls under the last point of the description: some values are reformatted, but the tool will be consistent so I don't think it's a bad thing, it's just a different thing.\r\n\r\nLet me know if you think of other things to add.\r\n\r\nThis is rebased and updated to main, would be cool to merge soon-ish if you're happy with this :-)",
"It turns out `PILImageResampling` is not from PIL but our internal code. So good to keep `PILImageResampling`.",
"I think the discrepancy in values is due to the different versions of PIL, but I didn't manage to reproduce having 3 instead of the enum object printed.\r\n\r\nFor speed, it's a bit challenging to have the util only verify the files modified (like `make fixup`) instead of everything, since it's not based on file but on objects. Part of the slowness is also just importing everything from `transformers`.",
"Sounds good! Let's merge this PR.\r\n\r\nThanks for your efforts @sgugger!\r\n\r\nEDIT: we'll merge the PR right after tomorrow's release to make sure we can fix if anything breaks. ",
"Merged! Thanks π«‘"
] | 1,694 | 1,696 | 1,696 |
COLLABORATOR
| null |
# What does this PR do?
This PR adds a new util script that checks that the arguments documented in docstrings match the signature. This first version only checks base objects (functions and classes), not class methods. It checks:
- that the defaults documented are the same as the signature
- applies our traditional formatting (numbers not in code blocks, all other objects in code blocks, default to None skipped)
- that the order of the parameters is the same in the docstring and in the signature
- that all arguments (except `*args` or `**kwargs` or private arguments) are documented
It also performs auto-fixing for all of those problems (with templates for arguments that are not documented). As you will see from the diff (and it only covers the objects where all arguments were documented!) this was direly needed and should help a lot in the future.
The auto-fix part is added in `make fix-copies` while the check is in `make repo-consistency`. Since the auto fixing adds templates sometimes (for non-documented arguments), the check will error if those templates haven't been replaced, so the CI tells us when something is wrong.
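Conceptually, the default-value part of the check boils down to comparing each documented default against the signature, along the lines of this simplified sketch (the function name and regex here are illustrative, not the actual utility):

```python
import inspect
import re

def find_default_mismatches(obj):
    """Toy check: compare signature defaults with `defaults to ...` statements in the docstring."""
    doc = inspect.getdoc(obj) or ""
    mismatches = []
    for name, param in inspect.signature(obj).parameters.items():
        if param.default is inspect.Parameter.empty or name.startswith("_"):
            continue
        # Look for a pattern like: name (..., defaults to `value`) in the docstring
        match = re.search(rf"{re.escape(name)} \(.*?defaults to `([^`]+)`", doc)
        if match and match.group(1) != repr(param.default):
            mismatches.append((name, match.group(1), repr(param.default)))
    # The real script also checks formatting and ordering, and can auto-fix; this only flags default mismatches
    return mismatches
```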
The script has been applied to all objects it could fully auto-fix, and there is a (long) list of objects where manual input is needed (because some arguments are not documented yet). This could be fixed progressively with a sprint involving the community (hacktoberfest is coming!), but at least we can ensure newly added objects are properly documented.
A couple of additional notes:
- this is only applied to objects that are in the public init and not ignored in the check that ensures all objects are documented.
- as said before this doesn't touch the method yet, only the class inits. Adding the methods will be done in a followup PR later on
- one can ignore a docstring from this check by adding a comment `# no-format` before the docstring (see for instance `TrainerCallback`)
- it is also possible to ignore the re-ordering of the arguments documentation (if there is a special order that makes more sense for the documentation) by adding a comment `# ignore-order` before the docstring (no example of this yet)
- since we have a lot of arguments with a default `None` in the signature but that are set to a value in the call (often for mutable arguments since default mutables are dangerous in Python) the check won't replace a documented default when the signature shows None.
- we also have a lot of default arguments that are defined by mathematical formulas, which make more sense than the actual value, so the check uses a small Python interpreter for math expressions to make sure to leave those alone (if they are accurate).
- some values are reformatted by Python: it wants `1e-05` instead of `1e-5` or `Resampling.Xxx` instead of `PILResampling.Xxx`. Those are minor changes we don't really care about (I think).
cc @amyeroberts since you were interested by this
cc the doc master @stevhliu and @MKhalusova for your information
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26052/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26052/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26052",
"html_url": "https://github.com/huggingface/transformers/pull/26052",
"diff_url": "https://github.com/huggingface/transformers/pull/26052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26052.patch",
"merged_at": 1696425217000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26051/events
|
https://github.com/huggingface/transformers/issues/26051
| 1,887,592,435 |
I_kwDOCUB6oc5wgl_z
| 26,051 |
[i18n-<languageCode>] Translating docs to <languageName>
|
{
"login": "lawchingman",
"id": 57217968,
"node_id": "MDQ6VXNlcjU3MjE3OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/57217968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lawchingman",
"html_url": "https://github.com/lawchingman",
"followers_url": "https://api.github.com/users/lawchingman/followers",
"following_url": "https://api.github.com/users/lawchingman/following{/other_user}",
"gists_url": "https://api.github.com/users/lawchingman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lawchingman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lawchingman/subscriptions",
"organizations_url": "https://api.github.com/users/lawchingman/orgs",
"repos_url": "https://api.github.com/users/lawchingman/repos",
"events_url": "https://api.github.com/users/lawchingman/events{/privacy}",
"received_events_url": "https://api.github.com/users/lawchingman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,694 | 1,694 |
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community π (currently 0 out of 267 complete)
Who would want to translate? Please follow the π€ [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers π€).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* π If you'd like others to help you with the translation, you can also post in the π€ [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go π₯
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26051/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26050/events
|
https://github.com/huggingface/transformers/pull/26050
| 1,887,408,945 |
PR_kwDOCUB6oc5Z3OlJ
| 26,050 |
Add torchaudio with rocm 5.6 to AMD Dockerfile
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@fxmarty Could you link to any relevant conversation / issue relating to this fix in the PR description? ",
"@amyeroberts we are enabling transformers CI on AMD GPUs as part of our partnership with them.\r\n\r\nI originally started the `huggingface:ci-amdgpu` work under a branch on the official transformers repo but @fxmarty can't do the same.\r\n\r\nThe PR here is to merge the work Felix has been doing enabling `torchaudio` to build against AMD ROCm instead of CUDA inside my original branch so we can test the workflow end2end with all the modalities.\r\n\r\nHope it clarify a bit what we're trying to achieve here π. We can take it offline if you want some more info ππΌ.\r\n\r\ncc @LysandreJik for viz",
"@mfuntowicz No, that's OK, thanks for the explanation! I just wanted to know if there were any open issues or additional context needed before reviews happen. ",
"@ydshieh I think I don't have rights to create / push to a branch on transformers: https://huggingface.slack.com/archives/C014N4749J9/p1693906519779479",
"Thanks everyone for your feedback, I'm merging now to unlock the remaining on my branch π€"
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
As per title
e.g. `CUDA_VISIBLE_DEVICES=0 pytest tests/models/wav2vec2/test_modeling_wav2vec2.py -s -vvvvv` now passes.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26050/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26050",
"html_url": "https://github.com/huggingface/transformers/pull/26050",
"diff_url": "https://github.com/huggingface/transformers/pull/26050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26050.patch",
"merged_at": 1694187383000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26049/events
|
https://github.com/huggingface/transformers/pull/26049
| 1,887,327,943 |
PR_kwDOCUB6oc5Z29Bn
| 26,049 |
Normalize only if needed
|
{
"login": "mjamroz",
"id": 505646,
"node_id": "MDQ6VXNlcjUwNTY0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/505646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mjamroz",
"html_url": "https://github.com/mjamroz",
"followers_url": "https://api.github.com/users/mjamroz/followers",
"following_url": "https://api.github.com/users/mjamroz/following{/other_user}",
"gists_url": "https://api.github.com/users/mjamroz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mjamroz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mjamroz/subscriptions",
"organizations_url": "https://api.github.com/users/mjamroz/orgs",
"repos_url": "https://api.github.com/users/mjamroz/repos",
"events_url": "https://api.github.com/users/mjamroz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mjamroz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Could you take a look at this @rafaelpadilla ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26049). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# Normalize tensor only if `std/mean` defined.
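The idea, roughly: skip the normalization step entirely when no `image_mean`/`image_std` statistics are configured. A minimal sketch of the guard (argument names here are illustrative, not necessarily the exact ones touched by this PR):

```python
import numpy as np

def maybe_normalize(image: np.ndarray, image_mean=None, image_std=None) -> np.ndarray:
    # Only normalize if both statistics were provided
    if image_mean is None or image_std is None:
        return image
    return (image - np.asarray(image_mean)) / np.asarray(image_std)
```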
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26049/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26049",
"html_url": "https://github.com/huggingface/transformers/pull/26049",
"diff_url": "https://github.com/huggingface/transformers/pull/26049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26049.patch",
"merged_at": 1698150723000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26048/events
|
https://github.com/huggingface/transformers/issues/26048
| 1,887,299,183 |
I_kwDOCUB6oc5wfeZv
| 26,048 |
Potential bug of the Bark implementation
|
{
"login": "leo19941227",
"id": 33196053,
"node_id": "MDQ6VXNlcjMzMTk2MDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/33196053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo19941227",
"html_url": "https://github.com/leo19941227",
"followers_url": "https://api.github.com/users/leo19941227/followers",
"following_url": "https://api.github.com/users/leo19941227/following{/other_user}",
"gists_url": "https://api.github.com/users/leo19941227/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo19941227/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo19941227/subscriptions",
"organizations_url": "https://api.github.com/users/leo19941227/orgs",
"repos_url": "https://api.github.com/users/leo19941227/repos",
"events_url": "https://api.github.com/users/leo19941227/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo19941227/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sorry, now I notice the `merge_context` context option during the GPT forward. I have no concern then. Closing the issue."
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
Hello,
I am not sure whether this is a bug, but I noticed this suspicious line while tracing the whole implementation of Bark.
It seems the text tokens, prompting semantic tokens, and the continued semantic tokens should be concatenated along the time axis as shown by the official implementation:
https://github.com/suno-ai/bark/blob/cb89688307c28cbd2d8bbfc78e534b9812673a26/bark/generation.py#L439-L443
However, in the transformers package there seems to be a typo:
https://github.com/huggingface/transformers/blob/fb7d246951d5f60aa36a7958841dfea72f51fc6b/src/transformers/models/bark/modeling_bark.py#L779-L786
Ignore me if this is expected in the transformers library's context. I just thought flagging it might be helpful in case it had not been noticed before.
### Who can help?
@ylacombe @sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The example code can work fine, just a minor correctness concern.
### Expected behavior
I expected the code block to be like:
```python3
input_embeds = torch.cat(
[
self.input_embeds_layer(input_ids[:, :max_input_semantic_length]),
self.input_embeds_layer(semantic_history[:, : max_input_semantic_length]),
self.input_embeds_layer(infer_array),
],
dim=1,
)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26048/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26047/events
|
https://github.com/huggingface/transformers/pull/26047
| 1,887,169,128 |
PR_kwDOCUB6oc5Z2a9I
| 26,047 |
π [i18n-KO] Translated `llama2.md` to Korean
|
{
"login": "mjk0618",
"id": 39152134,
"node_id": "MDQ6VXNlcjM5MTUyMTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39152134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mjk0618",
"html_url": "https://github.com/mjk0618",
"followers_url": "https://api.github.com/users/mjk0618/followers",
"following_url": "https://api.github.com/users/mjk0618/following{/other_user}",
"gists_url": "https://api.github.com/users/mjk0618/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mjk0618/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mjk0618/subscriptions",
"organizations_url": "https://api.github.com/users/mjk0618/orgs",
"repos_url": "https://api.github.com/users/mjk0618/repos",
"events_url": "https://api.github.com/users/mjk0618/events{/privacy}",
"received_events_url": "https://api.github.com/users/mjk0618/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"> μ’μ λ²μ κ°μ¬ν©λλ€! π μΌλΆ 리뷰λ₯Ό ν΄λ΄€μ΅λλ€. κ²ν λΆνλλ €μ!\r\n\r\n리뷰 κ°μ¬ν©λλ€! λ§μν΄μ£Όμ λΆλΆ λ°μνμ¬ μ»€λ°νμμ΅λλ€.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26047). All of your documentation changes will be reflected on that endpoint.",
"> Uhhh hellooo? This PR should unexist.\r\n\r\nHi! I didn't understand what you meant, why shouldn't this PR exist?",
"May you please review this PR? @sgugger, @ArthurZucker, @eunseojo"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! -->
# What does this PR do?
Translated the `llama2.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Once all the checks above are done, mention the team members below to request a review! -->
<!-- May you please review this PR? @member1 @member2 ... -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with your team members is finished, reveal the comment below that requests a review from the Hugging Face staff! -->
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26047/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26047/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26047",
"html_url": "https://github.com/huggingface/transformers/pull/26047",
"diff_url": "https://github.com/huggingface/transformers/pull/26047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26047.patch",
"merged_at": 1694531067000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26046/events
|
https://github.com/huggingface/transformers/pull/26046
| 1,887,153,565 |
PR_kwDOCUB6oc5Z2XoL
| 26,046 |
π [i18n-KO] Translated `llama2.md` to Korean
|
{
"login": "mjk0618",
"id": 39152134,
"node_id": "MDQ6VXNlcjM5MTUyMTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39152134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mjk0618",
"html_url": "https://github.com/mjk0618",
"followers_url": "https://api.github.com/users/mjk0618/followers",
"following_url": "https://api.github.com/users/mjk0618/following{/other_user}",
"gists_url": "https://api.github.com/users/mjk0618/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mjk0618/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mjk0618/subscriptions",
"organizations_url": "https://api.github.com/users/mjk0618/orgs",
"repos_url": "https://api.github.com/users/mjk0618/repos",
"events_url": "https://api.github.com/users/mjk0618/events{/privacy}",
"received_events_url": "https://api.github.com/users/mjk0618/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This PR was closed because it contains incorrect commits."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! -->
# What does this PR do?
Translated the `llama2.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [ ] Check for missing / redundant translations
- [ ] Grammar Check
- [ ] Review or Add new terms to glossary
- [ ] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Once all the checks above are done, mention the team members below to request a review! -->
<!-- May you please review this PR? @member1 @member2 ... -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with your team members is finished, reveal the comment below that requests a review from the Hugging Face staff! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26046/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26046",
"html_url": "https://github.com/huggingface/transformers/pull/26046",
"diff_url": "https://github.com/huggingface/transformers/pull/26046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26046.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26045/events
|
https://github.com/huggingface/transformers/issues/26045
| 1,887,090,942 |
I_kwDOCUB6oc5werj-
| 26,045 |
ImportError: cannot import name 'GPTQQuantizer' from partially initialized module 'optimum.gptq'
|
{
"login": "jessiewiswjc",
"id": 70051089,
"node_id": "MDQ6VXNlcjcwMDUxMDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/70051089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessiewiswjc",
"html_url": "https://github.com/jessiewiswjc",
"followers_url": "https://api.github.com/users/jessiewiswjc/followers",
"following_url": "https://api.github.com/users/jessiewiswjc/following{/other_user}",
"gists_url": "https://api.github.com/users/jessiewiswjc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessiewiswjc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessiewiswjc/subscriptions",
"organizations_url": "https://api.github.com/users/jessiewiswjc/orgs",
"repos_url": "https://api.github.com/users/jessiewiswjc/repos",
"events_url": "https://api.github.com/users/jessiewiswjc/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessiewiswjc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @jessiewiswjc. This happens because you named your file auto_gptq.py. Please rename it and let me know if it works. ",
"> Hi @jessiewiswjc. This happens because you named your file auto_gptq.py. Please rename it and let me know if it works.\r\n\r\nThanks very much and it works after renaming."
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.33.1
- Platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@SunMarc @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code is:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
import torch
if torch.cuda.is_available():
print(torch.cuda.device_count())
model_id = "/mnt/data/wangjiancheng/checkpoint-38241"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
gptq_config = GPTQConfig(bits=8, dataset = dataset, tokenizer=tokenizer)
quant_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config, trust_remote_code=True).to(torch.device("cuda:1"))
```
and the error is:
```
8
Traceback (most recent call last):
File "/mnt/data/wangjiancheng/auto_gptq.py", line 13, in <module>
quant_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config, trust_remote_code=True).to(torch.device("cuda:1"))
File "/mnt/data/wangjiancheng/miniconda3/envs/autogptq/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
return model_class.from_pretrained(
File "/mnt/data/wangjiancheng/miniconda3/envs/autogptq/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2577, in from_pretrained
from optimum.gptq import GPTQQuantizer
File "/mnt/data/wangjiancheng/miniconda3/envs/autogptq/lib/python3.9/site-packages/optimum/gptq/__init__.py", line 15, in <module>
from .quantizer import GPTQQuantizer, load_quantized_model
File "/mnt/data/wangjiancheng/miniconda3/envs/autogptq/lib/python3.9/site-packages/optimum/gptq/quantizer.py", line 44, in <module>
from auto_gptq.modeling._utils import autogptq_post_init
File "/mnt/data/wangjiancheng/auto_gptq.py", line 13, in <module>
quant_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config, trust_remote_code=True).to(torch.device("cuda:1"))
File "/mnt/data/wangjiancheng/miniconda3/envs/autogptq/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
return model_class.from_pretrained(
File "/mnt/data/wangjiancheng/miniconda3/envs/autogptq/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2577, in from_pretrained
from optimum.gptq import GPTQQuantizer
ImportError: cannot import name 'GPTQQuantizer' from partially initialized module 'optimum.gptq' (most likely due to a circular import) (/mnt/data/wangjiancheng/miniconda3/envs/autogptq/lib/python3.9/site-packages/optimum/gptq/__init__.py)
```
### Expected behavior
The code should run without error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26045/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26044/events
|
https://github.com/huggingface/transformers/pull/26044
| 1,886,911,926 |
PR_kwDOCUB6oc5Z1irW
| 26,044 |
π [i18n-KO] Translated `llama.md` to Korean
|
{
"login": "harheem",
"id": 49297157,
"node_id": "MDQ6VXNlcjQ5Mjk3MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/49297157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harheem",
"html_url": "https://github.com/harheem",
"followers_url": "https://api.github.com/users/harheem/followers",
"following_url": "https://api.github.com/users/harheem/following{/other_user}",
"gists_url": "https://api.github.com/users/harheem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harheem/subscriptions",
"organizations_url": "https://api.github.com/users/harheem/orgs",
"repos_url": "https://api.github.com/users/harheem/repos",
"events_url": "https://api.github.com/users/harheem/events{/privacy}",
"received_events_url": "https://api.github.com/users/harheem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, @stevhliu π\r\nShould we translate the `autodoc` part in model_doc too?",
"> λͺ¨λΈ λ¬Έμ λ²μμ μ°Έμ¬ν΄μ£Όμ
μ κ°μ¬ν©λλ€! μ’μ λ²μμ΄μμ π\r\n> μΌλΆ λ¬Έμ₯μ μλ‘μ΄ μ μμ λλ Έμ΅λλ€. κ²ν λΆνλλ €μ!\r\n\r\n리뷰 κ°μ¬ν©λλ€! μ½λ©νΈ λ¬μμ£Όμ κ²λ€μ΄ μ λΆ κ³ λ―Όνλ λΆλΆμ΄μλλ°, μ’μ νν μ°Ύμμ£Όμ
μ κ°μ¬ν΄μ π€©",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26044). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `llama.md` file of the documentation to Korean π
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas
## Who can review? (Initial)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26044/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26044",
"html_url": "https://github.com/huggingface/transformers/pull/26044",
"diff_url": "https://github.com/huggingface/transformers/pull/26044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26044.patch",
"merged_at": 1694201922000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26043/events
|
https://github.com/huggingface/transformers/issues/26043
| 1,886,804,648 |
I_kwDOCUB6oc5wdlqo
| 26,043 |
Inconsistent tokenization across different runs (`WhisperTokenizerFast`)
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This model uses `dropout: 0.1` \r\n\r\nThis means that by default it will ignore merges 10% of the time, leading to different tokenization.\r\nYou can ignore the dropout by setting it to 0. If the model was trained with it though it's expected that sometimes it should see different tokens, therefore the generations shouldn't change much (at least in theory)",
"> This model uses dropout: 0.1\r\n\r\nAhh I somehow missed that π What's the recommended way to set it to 0? I don't immediately see any notes about this in the [docs](https://huggingface.co/docs/transformers/v4.33.0/en/model_doc/whisper#transformers.WhisperTokenizerFast). I've also tried adding `dropout=0` as a kwargs in `from_pretrained` but that didn't work.",
"After a bit of digging I found you can override it with `tokenizer.backend_tokenizer.model.dropout=0`. Closing the issue!",
"Thanks for digging! "
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running this code
```py
from transformers import AutoTokenizer
# Load first tokenizer
a=AutoTokenizer.from_pretrained('NbAiLab/nb-whisper-large-beta')
# Save and load first tokenizer again
a.save_pretrained('temp')
b=AutoTokenizer.from_pretrained('temp')
# Load second tokenizer (NOTE: this is just a `.save_pretrained()` of `NbAiLab/nb-whisper-large-beta`)
c=AutoTokenizer.from_pretrained('Xenova/nb-whisper-large-beta')
# Save and load second tokenizer again
c.save_pretrained('temp2')
d=AutoTokenizer.from_pretrained('temp2')
# All these should output the same
a("Hello World"), b("Hello World"), c("Hello World"), c("Hello World")
```
Produces many different outputs, for example:
1. Run 1
```
({'input_ids': [50258, 50288, 50359, 50363, 2425, 3937, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]},
{'input_ids': [50258, 50288, 50359, 50363, 2425, 26363, 348, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]},
{'input_ids': [50258, 50288, 50359, 50363, 2425, 3937, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]},
{'input_ids': [50258, 50288, 50359, 50363, 2425, 26363, 348, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]})
```
2. Run 2
```
({'input_ids': [50258, 50288, 50359, 50363, 2425, 3937, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]},
{'input_ids': [50258, 50288, 50359, 50363, 2425, 3937, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]},
{'input_ids': [50258, 50288, 50359, 50363, 2425, 26363, 348, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]},
{'input_ids': [50258, 50288, 50359, 50363, 2425, 3937, 50257], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]})
```
Video:
https://github.com/huggingface/transformers/assets/26504141/c85b26e3-8436-4352-8ca7-dade32c61c99
### Expected behavior
The tokenization should be consistent across runs.
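As a stopgap, the workaround surfaced in the comments above can be applied right after loading. A minimal sketch (assuming the fast tokenizer exposes the underlying BPE model's `dropout` attribute, as reported in that discussion):
```py
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('NbAiLab/nb-whisper-large-beta')

# the tokenizer was trained with BPE dropout=0.1, which randomly skips merges;
# forcing it to 0 should make repeated encodes of the same string deterministic
tok.backend_tokenizer.model.dropout = 0

print(tok("Hello World"))
```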
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26043/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26042/events
|
https://github.com/huggingface/transformers/pull/26042
| 1,886,679,199 |
PR_kwDOCUB6oc5Z0wqx
| 26,042 |
[`Persimmon`] Add support for persimmon
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Looks good so far, and the outputs are definitely better than purely random, so I suspect the port is mostly correct except for one detail somewhere.\r\n\r\nI don't want to double up on work, but let me know if you want me to take any part of the port!",
"_The documentation is not available anymore as the PR was closed or merged._",
"The Fast Tokenizer requires `tokenizers == 0.14.0` which will be added in #23909 . \r\nI left the logic for now as it can work without! ",
"Thank you for setting this up @ArthurZucker and folks! This is awesome work, we're amazed by how quickly y'all spun this up β‘ Moving over a comment from the other PR over on our repo:\r\n\r\n> @ArthurZucker thank you for putting this together!!! A really quick note: We'd recommend adding the prompt structure for the chat-finetuning to the chat inference, since performance really degrades without it! It looks something like this:\r\n```\r\nhuman: <some query>\r\n\r\nadept:\r\n```\r\n\r\nIt should hopefully be an easy add here (but happy to help do this in a subsequent PR as well). Thanks so much again!!",
"The follow up PR is #25323 which introduces chat templating! \r\nThe only thing I have left to do is make sure I have 1 to 1 outputs and will merge. Just double checking \r\n",
"That sounds great @ArthurZucker. Just to confirm, would chat templating be the default for the chat model? If not, what steps can we take to make that happen? Thanks!",
"The chat template will be uploaded on the hub as it will be part of the tokenizer. It won't be the default for the tokenizer (as it uses the `LlamaTokenizer`) neither will it be the default for normal tasks. But this seems align with your codebase, as users have the freedom to use `Conversational` pipeline vs normal text generation π€ "
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
# What does this PR do?
Add support for the `Persimmon` models by Adept.
cc @Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26042/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26042",
"html_url": "https://github.com/huggingface/transformers/pull/26042",
"diff_url": "https://github.com/huggingface/transformers/pull/26042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26042.patch",
"merged_at": 1694511208000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26041/events
|
https://github.com/huggingface/transformers/pull/26041
| 1,886,674,681 |
PR_kwDOCUB6oc5Z0vn4
| 26,041 |
[`CodeLlamaTokenizerFast`] Fix `set_infilling_processor` to properly reset
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
# What does this PR do?
Fixes #26038 where the `set_infilling_processor` was not properly resetting the template processing for `CodeLlama`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26041/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26041",
"html_url": "https://github.com/huggingface/transformers/pull/26041",
"diff_url": "https://github.com/huggingface/transformers/pull/26041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26041.patch",
"merged_at": 1694203389000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26040
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26040/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26040/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26040/events
|
https://github.com/huggingface/transformers/pull/26040
| 1,886,671,829 |
PR_kwDOCUB6oc5Z0u97
| 26,040 |
Neurostimulation
|
{
"login": "wazeerzulfikar",
"id": 15856554,
"node_id": "MDQ6VXNlcjE1ODU2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/15856554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wazeerzulfikar",
"html_url": "https://github.com/wazeerzulfikar",
"followers_url": "https://api.github.com/users/wazeerzulfikar/followers",
"following_url": "https://api.github.com/users/wazeerzulfikar/following{/other_user}",
"gists_url": "https://api.github.com/users/wazeerzulfikar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wazeerzulfikar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wazeerzulfikar/subscriptions",
"organizations_url": "https://api.github.com/users/wazeerzulfikar/orgs",
"repos_url": "https://api.github.com/users/wazeerzulfikar/repos",
"events_url": "https://api.github.com/users/wazeerzulfikar/events{/privacy}",
"received_events_url": "https://api.github.com/users/wazeerzulfikar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,694 | 1,694 | 1,694 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26040/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26040",
"html_url": "https://github.com/huggingface/transformers/pull/26040",
"diff_url": "https://github.com/huggingface/transformers/pull/26040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26040.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26039
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26039/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26039/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26039/events
|
https://github.com/huggingface/transformers/issues/26039
| 1,886,578,708 |
I_kwDOCUB6oc5wcugU
| 26,039 |
Regression: Need to guard torch.distributed.is_initialized with torch.distributed.is_available
|
{
"login": "marr75",
"id": 663276,
"node_id": "MDQ6VXNlcjY2MzI3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/663276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marr75",
"html_url": "https://github.com/marr75",
"followers_url": "https://api.github.com/users/marr75/followers",
"following_url": "https://api.github.com/users/marr75/following{/other_user}",
"gists_url": "https://api.github.com/users/marr75/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marr75/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marr75/subscriptions",
"organizations_url": "https://api.github.com/users/marr75/orgs",
"repos_url": "https://api.github.com/users/marr75/repos",
"events_url": "https://api.github.com/users/marr75/events{/privacy}",
"received_events_url": "https://api.github.com/users/marr75/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"I am using FSDP to train 70B models according to your blog post @pacman100 . I found that https://github.com/huggingface/transformers/commit/73b13ac099443cf8297a4b729f00c907fa00f1b5 you add a `torch.distributed.is_initialized()` check. I found it is `False` when I use `accelerate launch`.\r\n\r\nCould you please give me some suggestions?",
"```python\r\ndef is_fsdp_enabled():\r\n return (\r\n torch.distributed.is_available()\r\n # and torch.distributed.is_initialized()\r\n and strtobool(os.environ.get(\"ACCELERATE_USE_FSDP\", \"False\")) == 1\r\n )\r\n```\r\n\r\nI comment this line to make FSDP load large model works."
] | 1,694 | 1,696 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.33.1
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
Accessing via sentence-transformers so text models? @ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My environment.yaml:
```yaml
name: prompt_gym
channels:
  - conda-forge
  - pytorch
dependencies:
  - faiss
  - python=3.10
  - sentence-transformers
```
1. `mamba create` from the same directory as the environment.yaml
2. `mamba activate prompt_gym`
3. From a python interpreter:
```python
import sentence_transformers
encoder = sentence_transformers.SentenceTransformer('intfloat/e5-small-v2', device='mps')
```
4. Receive error:
```text
File "~/Documents/dev/gpt-experiments/common/src/msw_gpt_common/embedding.py", line 47, in encoder
encoder = sentence_transformers.SentenceTransformer("intfloat/e5-base-v2", device="mps")
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 95, in __init__
modules = self._load_sbert_model(model_path)
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 840, in _load_sbert_model
module = module_class.load(os.path.join(model_path, module_config['path']))
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 137, in load
return Transformer(model_name_or_path=input_path, **config)
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 29, in __init__
self._load_model(model_name_or_path, config, cache_dir)
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 49, in _load_model
self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
return model_class.from_pretrained(
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2347, in from_pretrained
if is_fsdp_enabled():
File "~/mambaforge/envs/prompt_gym/lib/python3.10/site-packages/transformers/modeling_utils.py", line 118, in is_fsdp_enabled
return torch.distributed.is_initialized() and strtobool(os.environ.get("ACCELERATE_USE_FSDP", "False")) == 1
AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
```
### Expected behavior
Transformers should not throw an error when loading an AutoModel from a pretrained checkpoint. It should check that distributed processing is available before checking whether it is initialized. See #17590 for an earlier version of the same bug. The regression was likely introduced by #25686.
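For reference, a guarded version along the lines of the snippet quoted in the comments above (a sketch, not the exact library code):
```python
import os
from distutils.util import strtobool

import torch


def is_fsdp_enabled():
    # check availability first: not every torch build ships a usable
    # torch.distributed, so is_initialized() must not be reached unguarded
    return (
        torch.distributed.is_available()
        and torch.distributed.is_initialized()
        and strtobool(os.environ.get("ACCELERATE_USE_FSDP", "False")) == 1
    )
```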
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26039/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26038
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26038/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26038/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26038/events
|
https://github.com/huggingface/transformers/issues/26038
| 1,886,481,787 |
I_kwDOCUB6oc5wcW17
| 26,038 |
`CodeLlamaTokenizerFast` behavior changes permanently after encoding a string containing `"<FILL_ME>"`
|
{
"login": "rfriel",
"id": 20493507,
"node_id": "MDQ6VXNlcjIwNDkzNTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20493507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rfriel",
"html_url": "https://github.com/rfriel",
"followers_url": "https://api.github.com/users/rfriel/followers",
"following_url": "https://api.github.com/users/rfriel/following{/other_user}",
"gists_url": "https://api.github.com/users/rfriel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rfriel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rfriel/subscriptions",
"organizations_url": "https://api.github.com/users/rfriel/orgs",
"repos_url": "https://api.github.com/users/rfriel/repos",
"events_url": "https://api.github.com/users/rfriel/events{/privacy}",
"received_events_url": "https://api.github.com/users/rfriel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Good catch! The `set_infilling_processor` is missing it's early return statement! "
] | 1,694 | 1,694 | 1,694 |
NONE
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The `CodeLlamaTokenizerFast` tokenizer behaves differently after calling `.encode()` on a string containing `'<FILL_ME>'`.
Here's a very brief example showing the gist:
```python
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
>>>
>>> a = tokenizer.encode("foo")
>>> tokenizer.encode("first <FILL_ME> second")
>>> b = tokenizer.encode("foo")
>>>
>>> a == b
False
```
The specific effects I've noticed are:
1. The tokenizer no longer includes a prefix space
2. The tokenizer no longer includes the BOS token, even with `add_special_tokens=True`
It seems like maybe the tokenizer is going into a state where it behaves more like [`encode_infilling` from the Facebook repo](https://github.com/facebookresearch/codellama/blob/e064c1c24c377cc0875711440ef4c0a6eaf0147b/llama/tokenizer.py#L50-L52), and not properly exiting that state afterward?
The following script demonstrates the issue in more detail.
```python
from transformers import AutoTokenizer
model_name = "codellama/CodeLlama-7b-hf"
def show_tokens(token_ids):
print()
print(f"\ttokens IDs: {token_ids}")
print(f"\tstring representations: {test_tokenizer.convert_ids_to_tokens(token_ids)}")
print()
def demo(use_fast: bool):
for add_special_tokens in [False, True]:
test_tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=use_fast)
TEST_STR = 'foo'
TEST_STR_FILL = "first <FILL_ME> second"
token_lists, descriptions = [], []
token_ids = test_tokenizer.encode(TEST_STR, add_special_tokens=add_special_tokens)
print(f'Before <FILL_ME>\nCalling `tokenizer.encode({repr(TEST_STR)}, add_special_tokens={add_special_tokens})`')
show_tokens(token_ids)
test_tokenizer.encode(TEST_STR_FILL)
token_ids = test_tokenizer.encode(TEST_STR, add_special_tokens=add_special_tokens)
print(f'After <FILL_ME>\nCalling `tokenizer.encode({repr(TEST_STR)}, add_special_tokens={add_special_tokens})`')
show_tokens(token_ids)
print('---------------------------------------------------\n')
demo(use_fast=True)
demo(use_fast=False)
```
When we run the line `demo(use_fast=True)`, it prints:
```
Before <FILL_ME>
Calling `tokenizer.encode('foo', add_special_tokens=False)`
tokens IDs: [7953]
string representations: ['βfoo']
After <FILL_ME>
Calling `tokenizer.encode('foo', add_special_tokens=False)`
tokens IDs: [5431]
string representations: ['foo']
---------------------------------------------------
Before <FILL_ME>
Calling `tokenizer.encode('foo', add_special_tokens=True)`
tokens IDs: [1, 7953]
string representations: ['<s>', 'βfoo']
After <FILL_ME>
Calling `tokenizer.encode('foo', add_special_tokens=True)`
tokens IDs: [5431]
string representations: ['foo']
---------------------------------------------------
```
That is, the tokenizer gives different outputs for the same inputs, depending on whether we have encoded a FILL_ME string yet or not.
The line `demo(use_fast=False)` prints:
```
before fill, add_special_tokens=False
tokens IDs: [7953]
string representations: ['βfoo']
after fill, add_special_tokens=False
tokens IDs: [7953]
string representations: ['βfoo']
---------------------------------------------------
before fill, add_special_tokens=True
tokens IDs: [1, 7953]
string representations: ['<s>', 'βfoo']
after fill, add_special_tokens=True
tokens IDs: [1, 7953]
string representations: ['<s>', 'βfoo']
---------------------------------------------------
```
So the slow tokenizer behaves consistently before and after FILL_ME.
### Expected behavior
The `encode` method should not modify the state of the tokenizer.
If I call `encode` multiple times, without doing anything else in between, I should expect the outputs to be independent of the order in which the calls are made.
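Until that is fixed, a hypothetical workaround is to keep infilling encodes on a throwaway instance so the shared tokenizer never has its post-processor mutated (a sketch; `encode_infilling` is just an illustrative helper name, not a library API):
```python
from transformers import AutoTokenizer

model_name = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # used for normal encodes only


def encode_infilling(prompt: str):
    # fresh instance per call: slower, but any state change stays local to `tmp`
    tmp = AutoTokenizer.from_pretrained(model_name)
    return tmp.encode(prompt)
```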
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26038/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26037
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26037/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26037/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26037/events
|
https://github.com/huggingface/transformers/pull/26037
| 1,886,467,805 |
PR_kwDOCUB6oc5Z0CpZ
| 26,037 |
[bnb] Let's make serialization of 4bit models possible
|
{
"login": "poedator",
"id": 24738311,
"node_id": "MDQ6VXNlcjI0NzM4MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poedator",
"html_url": "https://github.com/poedator",
"followers_url": "https://api.github.com/users/poedator/followers",
"following_url": "https://api.github.com/users/poedator/following{/other_user}",
"gists_url": "https://api.github.com/users/poedator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poedator/subscriptions",
"organizations_url": "https://api.github.com/users/poedator/orgs",
"repos_url": "https://api.github.com/users/poedator/repos",
"events_url": "https://api.github.com/users/poedator/events{/privacy}",
"received_events_url": "https://api.github.com/users/poedator/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada @SunMarc ",
"## On storage format for the quantized parameters.\r\nRight now there is no clear separation of code to load a pretrained model in fp16/fp32 with subsequent quantisation and loading already quantized model. I noticed that for 8-bit it comes down to detecting `SCB` in state_dict.keys(). In this PR the 4bit quantized param storage requires not one but about 10 components per weight to be loaded from the state_dict - it may become unpractical.\r\nAlso saving all quantization params as _buffers into state_dict() may be cluttering the state_dict, and be impractical for some data types.\r\nThis matter is important right now, because of multitude of quantization methods (and more to come): int8, nd4, fp4 in bnb, SpQR, GPTQ, etc. Some guidelines here will simplify transformers code and facilitate adding new quantizations later. \r\nOne possibility is this:\r\nPacking all of the quantisation components into single serializeable object per parameter (protobuf?) and then saving it alongside the main parameter data. Then the loading code may look like:\r\n```\r\nfor k in state_dict:\r\n if k + '_quant_state' in loaded_state_dict:\r\n new_param_weight = load_quantized_weight(loaded_state_dict[k], loaded_state_dict[k+'_quant_state'], )\r\n else:\r\n new_param_weight = loaded_state_dict[k]\r\n...\r\ndef unpack_quantized(data, packed_quant_state):\r\n quant_type, quant_state = unpack_q_state(packed_quant_state)\r\n if quant_type == 'bnb_nf4':\r\n new_param_weight = bnb.nn.Param4bit.from_prequantized(data, quant_state) # call specific unpacker\r\n ...\r\n return new_param_weight # fully unpacked and loaded unto cuda\r\n\r\ndef unpack_q_state(packed_quant_state: protobuf) -> Tuple(str, SomeQuantStateStru):\r\n \"\"\"from serialized format to dict(or some data structure) of quantization state\"\"\"\r\n ...\r\n return quant_type, quant_state\r\n```\r\nAlternatively, quant_type per parameter may be stored in `config.json`, but it becomes too fat then.\r\nI suggest to allow various quantization types per parameter.\r\nWhat do you think? I'll be glad to refactor this PR to make it future-proof. \r\n@younesbelkada @SunMarc @TimDettmers ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26037). All of your documentation changes will be reflected on that endpoint.",
"Thanks a lot @poedator for your great effort here! Let me know once this is ready for review!",
"> Thanks a lot @poedator for your great effort here! Let me know once this is ready for review!\r\n\r\n@younesbelkada , thanks for your friendly attention to my work!\r\n\r\nI updated the PR code to address your comments and also simplify/clarify some areas. Also stole the PR name from your #22177 ).\r\n\r\nFor saving format for 4bit params I changed scheme (see bnb my PR for more details) to this:\r\n- save main weight tensor as Parameter using standard state_dict() code\r\n- save tensor components from quant_state into state_dict() with custom code\r\n- pack all other quant_state components into json->bytestring->int8_tensor and save to state dict using name like `...weight.quant_state.bitsandbytes__nf4`\r\n- no extra per-weight metadata stored to safetensors\r\nAs a result, we have 6 items stored per weight (with double quant) vs ~12 in earlier draft.\r\n\r\nSuch scheme can be easily adopted for other quantizers: when loading, `_load_state_dict_into_meta_model()` checks for presence of `[param_name].quant_state.[QUANT_TYPE]` key in loaded dict and use QUANT_TYPE to call appropriate function to such weight in quantized form. It allows to load models with custom choice of quantizers on per-weight basis. Once this PR goes thru, I want to propose how to refactor model_utils to have a uniform interface for quantied models. \r\n\r\nI added tests covering saving / loading and matching by component of tensors and quant_states, + forward() and generate().\r\n\r\nNow we need to see the enabling [PR in bnb](https://github.com/TimDettmers/bitsandbytes/pull/753) going thru. Other than that, this PR looks ready for your review.",
"Thanks a lot for you work, could you tell me how the 4bit quantized model should be merged with the adapter trained by lora?",
"Also I would like @SunMarc to have a look on the PR whenever possible, as he is also maintaining the quantization feature in transformers, thanks!",
"> Thanks a lot for you work, could you tell me how the 4bit quantized model should be merged with the adapter trained by lora?\r\n\r\nHi, @Yu-Yuqing \r\nI believe, such merge is done by [peft](https://github.com/huggingface/peft) package. It was implemented recently by https://github.com/huggingface/peft/pull/851.\r\nThis PR is not affecting such merges. But if(when) it gets merged, it would be possible to save 4-bit quantized models, including ones that had adapters merged into them. \r\n",
"@younesbelkada, \r\nthank you for the review - I believe that I addressed your comments. Below is update on the tests and `bnb`:\r\n\r\n- the slow 8 bit tests generally work, except 5 of them in `MixedInt8GPT2Test` group (starting from test_generate_quality) failed with `AssertionError: \"Hello my name is John Doe, and I'm a fan of the\" != \"Hello my name is John Doe, and I'm a big fan of\"` However this error is also present in `main` branch. I believe that it is still a legit generation result. Perhaps it was affected by version change somewhere.\r\n- the slow 4bit tests generally work. My new test need some extra testing in multi-GPU setup. I will work on this soon.\r\n- meanwhile [at bnb](https://github.com/TimDettmers/bitsandbytes/pull/753) we had the first round of comments.",
"Update: I am still optimistic that the enabling PR in BitsAndBytes will get merged soon, but can't bet on a specific date. \r\n\r\nAs an alternative, I can try to bring 4-bit serialization code to here. But that would require subclassing `bnb.Linear4bit` with risk of compatibility issues. Does it make sense? @younesbelkada ",
"Thanks a lot @poedator ! \r\nI think it is better to have the bnb serialization live directly in bnb, there is no rush, we can wait until your PR gets merged!",
"rebased to main as of Nov 8. \r\ntested with recent `bitsandbytes==0.41.2.post2` - the 4bit serialization test works OK. \r\n\r\n@SunMarc @younesbelkada, you may want to review \r\n\r\nknown issues:\r\n- errors with `safetensors` serialization - need to restore shared tensors (`quant_map`, rather small) somehow - need a hint. \r\n\r\n- error in CI with ExamplesTestsNoTrainer.test_run_clm_no_trainer - not sure how it is related to this PR?",
"@younesbelkada , thank you for picking this up so fast. Indeed, `bnb_4bit_use_double_quant=False` causes error in serialization, it will be fixed on the bnb side. \r\nMeanwhile, let me know if any other issues pop up when testing with bnb_4bit_use_double_quant=True.\r\nAlso I need advice on shared tensors and on pytorch test - see my previous message.\r\n\r\nUPDATE: fix PR is open now: https://github.com/TimDettmers/bitsandbytes/pull/868",
"Thanks @poedator ! \r\nFor the safetensors issue please refer to this PR: https://github.com/huggingface/transformers/pull/27234 I had to fix something similar for llm.int8 weights. Let me know if this PR helps you digging a bit!",
"> Contributor\r\n\r\nthe example shows how to identify shared pointers. But where is that collection stored then and how it is used in unpacking?",
"I got the same issue for Mistral, too. ",
"> I got the same issue for Mistral, too.\r\n\r\nhi, @KnutJaegersberg , thank you for the interest to bnb 4bit serialization. While most of the changes are already made on bnb side, for successful 4bit serialization one may need to wait for this PR to merge. Hopefully it is a matter of days.\r\n\r\nThe error that you mention comes from code that restores shared tensors. It is still in development. Try saving without safetensors `safe_serialization=False`.",
"Thanks for your answer, @poedator I started thinking I just made a stupid mistake. I'll try what you said as soon as my local compute is available again. ",
"Same exception with save_serialization=False. \r\nhmm\r\nLoading checkpoint shards: 0%| | 0/4 [00:01<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/models/auto/auto_factory.py\", line 566, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 3532, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 3927, in _load_pretrained_model\r\n new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 761, in _load_state_dict_into_meta_model\r\n recover_shared_tensors_values(state_dict, quantized_stats, param_name + \".\", likely_shared_key_endings)\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 799, in recover_shared_tensors_values\r\n target_dict[target_prefix + ending] = replacement_tensors[0]\r\nIndexError: list index out of range\r\n",
"Hi @poedator \r\n\r\n> the example shows how to identify shared pointers. But where is that collection stored then and how it is used in unpacking?\r\n\r\nIt is used directly by storing them on the state dict, that is what we do for 8bit layers, let me know if you want me to have a deeper look and propose something",
"@younesbelkada \r\nI decided to un-share these nested_quant_map tensors, which are only 256 elements each. So there is no need to deal with restoring them for 4bit quantization. \r\nThis change makes earlier saved 4-bit models slightly incompatible. Please discard them if saved any before today. My test models at HF-hub are updated. [pt](https://huggingface.co/poedator/opt-125m-bnb-4bit_pt), [safetensors](https://huggingface.co/poedator/opt-125m-bnb-4bit)\r\n\r\nI completed the needed changes on bnb side ([PR pending](https://github.com/TimDettmers/bitsandbytes/pull/868)). Also added a set of 4-bit serialization tests in a separate file. They are more detailed than you suggested, but this gives me mode peace of mind. \r\n\r\nPlease resume the review.\r\n---------------\r\n@KnutJaegersberg - you may want to re-test this PR now. That problem should be gone.\r\nthe code that gave `IndexError: list index out of range` is removed entirely - no need to restore shared tensors cause there is none. Both torch and safetensors formats should work OK.",
"@poedator thank you :) \r\nI tried again after reinstalling tf, now I get another exception: \r\n\r\nLoading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/models/auto/auto_factory.py\", line 566, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 3512, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 3907, in _load_pretrained_model\r\n new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 760, in _load_state_dict_into_meta_model\r\n set_module_quantized_tensor_to_device(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/integrations/bitsandbytes.py\", line 121, in set_module_quantized_tensor_to_device\r\n new_value = bnb.nn.Params4bit.from_prequantized(\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/bitsandbytes/nn/modules.py\", line 161, in from_prequantized\r\n self.quant_state = QuantState.from_dict(qs_dict=quantized_stats, device=device)\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/bitsandbytes/functional.py\", line 645, in from_dict\r\n shape=torch.Size(qs_dict['shape']),\r\nTypeError: 'NoneType' object is not iterable\r\n",
"@KnutJaegersberg ,\r\nI am not seeing this error in my tests, but to be on the safe side, I edited the line 645 in the BNB PR.\r\nPlease help me understand what triggered it? Could it be that you try to load a model created with earlier version of either PRs? Try to run the whole cycle \"quantize-save-load\" using the latest PR commits. Or try loading [my model form HF](https://huggingface.co/poedator/opt-125m-bnb-4bit).",
"@poedator I I did not use the latest PR of BNB because it gave me a cuda setup exception. It might be a local issue then. Thanks! ",
"I'm trying to do something weird, I guess. I can load your opt model @poedator but not my llamafied yi-34b, though I tried it with the same config, double quantization, no safetensors. \r\n\r\nI'll upload my weights here, it might take a few hours:\r\nhttps://huggingface.co/KnutJaegersberg/Deacon-34b-4bit\r\n\r\n\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/models/auto/auto_factory.py\", line 566, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 3512, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 3907, in _load_pretrained_model\r\n new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/modeling_utils.py\", line 760, in _load_state_dict_into_meta_model\r\n set_module_quantized_tensor_to_device(\r\n File \"/home/knut/Downloads/transformers-save4/src/transformers/integrations/bitsandbytes.py\", line 121, in set_module_quantized_tensor_to_device\r\n new_value = bnb.nn.Params4bit.from_prequantized(\r\n File \"/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/nn/modules.py\", line 161, in from_prequantized\r\n self.quant_state = QuantState.from_dict(qs_dict=quantized_stats, device=device)\r\n File \"/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/functional.py\", line 633, in from_dict\r\n code=qs_dict['nested_quant_map'].to(device),\r\nKeyError: 'nested_quant_map'\r\n",
"@KnutJaegersberg , \r\nThank you for your persistence! I was able to reproduce the error with `llamafied yi-34b` with earlier commit.\r\nThen I retested loading and saving with fresh commits and everything worked OK:\r\n`# bnb commit 079d7afe3468a9f9ad0c8214d5c5055bdedaccbf`\r\n`# transformers commit 8e44fd35712720222b5dcce2be9860f2bebd8f7d`\r\nTesting code: https://hastebin.com/share/uyurinixus.python \r\n\r\nMy guess is that either you used older commit when saving or loading the model, or there is some incompatibility in out environments. Please try my code with fresh save. ",
"@poedator I had issues building bnb for your newest commit on my local environment, have not resolved those so far. I tried to manually do the changes of your bnb commit to my local files, my transformers library should be exactly the same. It's properly my environment which is the cause. \r\nI'll just wait till the next bnb release, thanks :) ",
"cc @Titus-von-Koeller ",
"Hello, any update on this issue? Or is there any other way to save merged 4 bit model?",
"> Hello, any update on this issue? Or is there any other way to save merged 4 bit model?\r\n\r\nHi, @estibi \r\nit is 99% ready, and just need one last(hopefully) fix in bnb: https://github.com/TimDettmers/bitsandbytes/pull/868. Please upvote it."
] | 1,694 | 1,705 | 1,703 |
CONTRIBUTOR
| null |
## What does this PR do?
Purpose: enable saving and loading transformers models in 4bit formats.
tested with Bloom-560 and Llama-2-7b. Save - Load - match tensors and quant_states - match inference results.
## connection with bitsandbytes
Requires this PR in bitsandbytes: https://github.com/TimDettmers/bitsandbytes/pull/753 to be able to save/load models
## testing:
the functionality was tested doing this series of commands:
```
model = transformers.AutoModelForCausalLM.from_pretrained(..., [with 4bit quant config])
model.save_pretrained(SAVE_PATH, safe_serialization= [False / True] )
model2 = transformers.AutoModelForCausalLM.from_pretrained( SAVE_PATH)
# then matching all params and quant_state items between the models, matching inference results
```
Specific tests will be added to this PR once the [bitsandbytes PR](https://github.com/TimDettmers/bitsandbytes/pull/753) merges.
## Open questions to the maintainers:
1. I added and tested code necessary for straightforward save/load operations with LLMs. Yet there may be other kinds of function calls that may need to be updated to handle 4bit save/load - pls suggest if/where to expand this PR.
specifically: this PR covers `load_state_dict_into_meta_model()` but not `load_state_dict_into_model()` for I did not find an example that uses it.
2. Some of my edits may not fit with style / refactoring plans of the maintainers - pls give guidance if needed.
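For context, the round trip described above looks roughly like this once both PRs are in (the exact settings below are an assumption for illustration; any 4-bit `BitsAndBytesConfig` should exercise the same save/load path):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# illustrative 4-bit settings, not prescribed by this PR
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m", quantization_config=quant_config, device_map="auto"
)
model.save_pretrained("bloom-560m-4bit", safe_serialization=True)

# reloading picks the quantization settings back up from the saved checkpoint
model2 = AutoModelForCausalLM.from_pretrained("bloom-560m-4bit", device_map="auto")
```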
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26037/reactions",
"total_count": 11,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 4,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26037/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26037",
"html_url": "https://github.com/huggingface/transformers/pull/26037",
"diff_url": "https://github.com/huggingface/transformers/pull/26037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26037.patch",
"merged_at": 1703156084000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26036
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26036/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26036/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26036/events
|
https://github.com/huggingface/transformers/issues/26036
| 1,886,402,833 |
I_kwDOCUB6oc5wcDkR
| 26,036 |
Option to disable CodeCarbon
|
{
"login": "nebrelbug",
"id": 25597854,
"node_id": "MDQ6VXNlcjI1NTk3ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/25597854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nebrelbug",
"html_url": "https://github.com/nebrelbug",
"followers_url": "https://api.github.com/users/nebrelbug/followers",
"following_url": "https://api.github.com/users/nebrelbug/following{/other_user}",
"gists_url": "https://api.github.com/users/nebrelbug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nebrelbug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nebrelbug/subscriptions",
"organizations_url": "https://api.github.com/users/nebrelbug/orgs",
"repos_url": "https://api.github.com/users/nebrelbug/repos",
"events_url": "https://api.github.com/users/nebrelbug/events{/privacy}",
"received_events_url": "https://api.github.com/users/nebrelbug/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @nebrelbug, thanks for raising this issue! \r\n\r\nSounds like a good idea to me. @muellerzr will know more about it than me and can advise. ",
"That sounds fine to me as well @nebrelbug. \r\n\r\nWe have a few options here:\r\n1. Globally disable `codecarbon` via an environmental variable (similar to `DISABLE_WANDB`, or something along those lines)\r\n2. Have it as a `TrainingArgument` (don't quite like that solution)\r\n3. Or: don't add the codecarbon callback if we know we are in a multi-node scenario. \r\n\r\nI think I'd prefer 3 if possible, otherwise 1 :) WDYT?",
"@muellerzr I don't think that CodeCarbon is necessarily broken for everyone using multiple nodes, so my personal preference would be option 1. \r\n\r\nWould it be alright if I opened a PR to disable the callback when `DISABLE_CODECARBON` is set?",
"@nebrelbug actually looking at this more, does using `--report_to \"none\"` give the desired result? ",
"If so, we should expand the doc on the `report_to` param in `TrainingArguments` to mention codecarbon",
"@muellerzr it does, actually! Setting `report_to` to either `\"none\"` or `\"wandb\"` results in CodeCarbon not running.",
"Thanks for verifying! Lets go ahead and add it to the report_to doc string @nebrelbug ",
"@muellerzr sweet! I actually think the doc string for `report_to` may be complete (though a little hard to find). Instead, I opened up a PR at #26155 to update the Callback doc page. Let me know if that sounds good!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
### Feature request
- Allow disabling of CodeCarbon reporting, even when it's installed
### Motivation
I'm running code using the Trainer class across several nodes in an HPC environment using SLURM. It seems to automatically [use the CodeCarbon callback](https://huggingface.co/docs/transformers/v4.33.0/en/main_classes/callback#transformers.integrations.CodeCarbonCallback), but unfortunately doesn't work correctly in my multi-node environment and floods my log files with hundreds of distracting messages (mentioned earlier in https://github.com/mlco2/codecarbon/issues/252).

### Your contribution
I'd be happy to contribute and/or open a PR, given some guidance about where to start.
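In the meantime, the workaround confirmed in the comments above is to set `report_to` explicitly so the CodeCarbon callback is never attached (sketch; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",   # placeholder
    report_to="none",   # or e.g. ["wandb"]; any value not including codecarbon skips the callback
)
```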
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26036/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26036/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26035
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26035/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26035/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26035/events
|
https://github.com/huggingface/transformers/pull/26035
| 1,886,368,755 |
PR_kwDOCUB6oc5ZztHD
| 26,035 |
[docs] IDEFICS guide and task guides restructure
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"PR with images for the doc: https://huggingface.co/datasets/huggingface/documentation-images/discussions/175",
"_The documentation is not available anymore as the PR was closed or merged._",
"@LysandreJik Please suggest who else should be added as a reviewer. Thanks!",
"Think this is a bit in conflict with this PR: https://github.com/huggingface/transformers/pull/25371",
"I'm not a huge fan of changing the more general \"Task Guide\" title to the more specific \"Fine-tuning Guide\" because I think many of the guides contain very valuable information that's just needed for inference.\r\n\r\nE.g. let's say right now I want to know how to use a fine-tuned speech model from the Hub for speech transcription, I wouldn't necessarily look under \"Fine-tuning Guides\"\r\nTo be it would make more sense to either:\r\n- Leave \"Task Guide\" as a title and integrate LLM prompting and Idefics under it\r\n- Or completely split inference and training into two different areas of the docs.",
"Not sure what @gante thinks here",
"re @patrickvonplaten 's concerns, I've reverted the TOC structure reorg, and placed the IDEFICS guide under Task Guides. \r\nI've also addressed @LysandreJik 's feedback. ",
"Amazing! thank you! looking at it today!"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
There are now two common approaches to solving tasks: fine-tuning specialized models, and prompting large models. While the fine-tuning approach is prominently featured in the docs, the latter is not. At the same time, the generation docs sit on the sidelines.
This PR includes two parts:
1. A guide to using IDEFICS for various image-text tasks such as image captioning, vqa, etc. This helps not only highlight the non-fine-tuning approach in the docs, but also showcase the new open source model.
2. TOC restructure. In this PR, I suggest a restructured table of contents where the task guides are now called "Guides for fine-tuning specialized models", and there's a new section on the same level called "Prompting and Generation Guides for Large Models". This new section currently contains the IDEFICS guide, and the "Text generation strategies" doc. In the future, we can add prompting guides, and updated text generation docs here.
The new structure reflects two different approaches to tasks.
Currently the zero-shot guides for specialized models are still under fine-tuning task guides, and we can discuss whether they should be moved under the new section or not.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26035/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26035",
"html_url": "https://github.com/huggingface/transformers/pull/26035",
"diff_url": "https://github.com/huggingface/transformers/pull/26035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26035.patch",
"merged_at": 1694794508000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26034
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26034/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26034/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26034/events
|
https://github.com/huggingface/transformers/issues/26034
| 1,886,154,730 |
I_kwDOCUB6oc5wbG_q
| 26,034 |
DistilBertForSequenceClassification 0% accuracy when fine-tuning (using from_pretrained())
|
{
"login": "johannes-garstenauer",
"id": 64467583,
"node_id": "MDQ6VXNlcjY0NDY3NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/64467583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johannes-garstenauer",
"html_url": "https://github.com/johannes-garstenauer",
"followers_url": "https://api.github.com/users/johannes-garstenauer/followers",
"following_url": "https://api.github.com/users/johannes-garstenauer/following{/other_user}",
"gists_url": "https://api.github.com/users/johannes-garstenauer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johannes-garstenauer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johannes-garstenauer/subscriptions",
"organizations_url": "https://api.github.com/users/johannes-garstenauer/orgs",
"repos_url": "https://api.github.com/users/johannes-garstenauer/repos",
"events_url": "https://api.github.com/users/johannes-garstenauer/events{/privacy}",
"received_events_url": "https://api.github.com/users/johannes-garstenauer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks a lot for providing such a detailed issue π€ \r\nAs the documentation about the examples mentions:\r\n> While we strive to present as many use cases as possible, the scripts in this folder are just examples. It is expected that they wonβt work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data. This way, you can easily tweak them.\r\n\r\nThis means that it's expected that it might not work on every single model out there. Since the script works for the models it was designed to work on, you probably have to adapt it to work on you own specific model. \r\nNone of us really has the bandwidth to debug your particular training script, but we welcome and appriciate bug fix PR! \r\nWould recommend you to post something on[ the forum](https://discuss.huggingface.co/) as well, to get input from the community! \r\n\r\nTLDR; Almost 100% sure there is nothing wrong with `from_pretrained` but rather with the way you finetune the model no? The logs can also help us debug if maybe you dont't have the latest version of transformers, etc. I could help you if you provided a full reproducer with loading the models, and showing the issue with from pretrained! ",
"Hey! \r\nThank you for the quick response and the input. \r\n\r\nIt is right I was not expecting the official classification script to work out of the box. If I find out why I'll make sure to create a bug fix PR.\r\nRegarding my custom script, I am also not sure if the issue lies with `from_pretrained` however something seems to go wrong when initializing certain models, that prevents the `[CLS]` token embedding from being learned and therefore prevents classification from producing sensible results. I have reduced the script as much as necessary I believe therefor I am not sure where to look next for a solution. I'll make sure to consult the forum, though ;)\r\n\r\nAnyways thanks again",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @johannes-garstenauer , were you able to resolve this problem?",
"Hello @gadregayatri, yes the issue lies with the Tokenizer.\r\nWhen adding special tokens (https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizer.__call__.add_special_tokens) it is important to keep in mind, that the order of the list you pass here determines the id of the tokens. It might be, that the model expects the pad token to be a certain ID (you can check this by printing the config of your model) but the tokenizer has a different ID for the pad token, based on the order of the special tokens list. This mismatch caused the bug for me. \r\n\r\nHope that helps,\r\nJohannes"
] | 1,694 | 1,698 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES - 1x NVIDIA RTX A6000
- Using distributed or parallel set-up in script?: NO
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Custom Script:
```python
import datasets
import evaluate
import torch
from tqdm import tqdm
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
TrainingArguments,
Trainer,
DistilBertConfig,
DistilBertForSequenceClassification
)
model_name = "johannes-garstenauer/distilbert-heaps-masked" # This doesn't work
#model_name = "distilbert-base-uncased" # This works
#model_name="AyoubChLin/distilbert_cnn_news" # This works
do_finetune = True
print("\n")
print(f"Finetuning {do_finetune}")
if do_finetune:
print(f"Model: {model_name}")
print("\n")
tokenizer = AutoTokenizer.from_pretrained(model_name)
def preprocess_function(examples):
return tokenizer(examples["struct"], truncation=True, max_length=512)
ds_name = "johannes-garstenauer/balanced_factor_3_structs_reduced_5labelled_large"
raw_dataset = datasets.load_dataset(ds_name, split="train[:1%]")
tokenized_datasets = raw_dataset.map(preprocess_function, batched=True)
tokenized_datasets = tokenized_datasets.train_test_split(test_size=0.05)
if do_finetune:
print("Finetuning")
# When using model 'AyoubChLin/distilbert_cnn_news', might have to adapt num_labels=6 to avoid error
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)
else:
config = DistilBertConfig(output_hidden_states=True, num_labels=5)
model = DistilBertForSequenceClassification(config)
print(model.config)
args = TrainingArguments(
f"distilbert-finetuned",
evaluation_strategy="epoch",
save_strategy="epoch",
num_train_epochs=3,
)
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = torch.argmax(predictions, dim=-1)
return metric.compute(predictions=predictions, references=labels)
trainer = Trainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
compute_metrics=compute_metrics,
tokenizer=tokenizer
)
# Overfitting the model on one batch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
for batch in trainer.get_train_dataloader():
break
batch = {k: v.to(device) for k, v in batch.items()}
trainer.create_optimizer()
for _ in tqdm(range(40)):
outputs = trainer.model(**batch)
loss = outputs.loss
loss.backward()
trainer.optimizer.step()
trainer.optimizer.zero_grad()
with torch.no_grad():
outputs = trainer.model(**batch)
preds = outputs.logits
labels = batch["labels"]
print(compute_metrics((preds, labels)))
print(batch['labels'])
```
Using the official script:
```python
python run_classification.py \
--model_name_or_path johannes-garstenauer/distilbert-heaps-masked \
--dataset_name johannes-garstenauer/balanced_factor_3_structs_reduced_5labelled_large \
--metric_name accuracy \
--text_column_name struct \
--label_column_name label \
--do_train \
--do_eval \
--max_seq_length 512 \
--per_device_train_batch_size 64 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir ~/tmp/script_model/
```
### Expected behavior
**What the code does (Minimal Reproducible Example + Official Classification Script):**
- The custom script fine-tunes a DistilBertForSequenceClassification model on a slightly unbalanced custom dataset with 5 labels.
- The custom script can also be used to instantiate an empty DistilBertForSequenceClassification
- The custom script will overfit the model on a single batch for 40 iterations
- The official classification script is configured to fine-tune DistilBertForSequenceClassification on the mentioned dataset
**The expected behavior is:**
- An accuracy of 100% after overfitting
- A decreasing loss and an increasing accuracy when using the official script
**The issue:**
- Depending on which model the sequence classifier is fine-tuned from, the resulting model will either work as expected (high accuracy, low loss) or not work at all, meaning it has a consistently high loss and low accuracy throughout training and always predicts the same class
- The predicted class will always be '0'. Using a weighted loss function can force the model to always predict another class, but it does not make the model predict accurately
- Some investigating has shown that the model's embedding of the [CLS] token (which is used internally for the final classification) is the same for whatever input is given; a minimal check is sketched after this list. This is likely the reason why the model always predicts the same label.
- I have not come to an understanding of why using some models as the foundation for fine-tuning causes this error. Some examples of models where the issue arises and some where it doesn't are:
- "johannes-garstenauer/distilbert-heaps-masked"
- This is the model I intend to use for fine-tuning in my project. The issue arises with this model
- "distilbert-base-uncased"
- This is the standard DistilBertModel. Here the issue doesn't arise.
- "AyoubChLin/distilbert_cnn_news"
- This is a DistilBertForSequenceClassification model from the Hub. Here the issue doesn't arise.
- To me it seems like the issue arises mysteriously. I could not determine, for example, that it would arise for certain types of models (e.g. MaskedLM, Sequence Classification) and not for others.
- The logs from when the model is instantiated using 'from_pretrained()' are also unsuspicious and do not indicate any errors
- A theory of mine has been that the issue is related to loading a model with a conflicting configuration; 'vocab_size' and 'dtype_torch' were my main suspects. Experimenting with different configurations has, however, not proven conclusive.
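A minimal sketch of the check described above (same model name and label count as in the script; `output_hidden_states=True` is only enabled here for this diagnostic):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "johannes-garstenauer/distilbert-heaps-masked"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=5, output_hidden_states=True
)

with torch.no_grad():
    a = model(**tokenizer("first example", return_tensors="pt")).hidden_states[-1][:, 0]
    b = model(**tokenizer("a completely different input", return_tensors="pt")).hidden_states[-1][:, 0]

# True here reproduces the symptom: identical [CLS] embeddings for different inputs
print(torch.allclose(a, b))
```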
In conclusion, the documentation for the 'from_pretrained()' method claims that it is "suited for loading models of different tasks"; however, a bug seems to occur.
Sorry for the long issue. I hope someone is able to help.
Thank you,
Johannes
**Notes on reproducing the official classification script:**
- When trying to reproduce the official classification script as outlined above be aware that an exception may occur:
  - TypeError: __init__() got an unexpected keyword argument 'token'
- I resolved this by commenting out line 494 in https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_classification.py
- (this is just a hint and unrelated to the larger issue)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26034/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26033
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26033/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26033/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26033/events
|
https://github.com/huggingface/transformers/pull/26033
| 1,886,042,774 |
PR_kwDOCUB6oc5Zymq3
| 26,033 |
[fix] mc_logits.shape doesn't match mc_labels.shape
|
{
"login": "flybird11111",
"id": 37931082,
"node_id": "MDQ6VXNlcjM3OTMxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/37931082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flybird11111",
"html_url": "https://github.com/flybird11111",
"followers_url": "https://api.github.com/users/flybird11111/followers",
"following_url": "https://api.github.com/users/flybird11111/following{/other_user}",
"gists_url": "https://api.github.com/users/flybird11111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flybird11111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flybird11111/subscriptions",
"organizations_url": "https://api.github.com/users/flybird11111/orgs",
"repos_url": "https://api.github.com/users/flybird11111/repos",
"events_url": "https://api.github.com/users/flybird11111/events{/privacy}",
"received_events_url": "https://api.github.com/users/flybird11111/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @flybird11111, thanks for opening a PR! Could you write in the description what issue this is resolving? ",
"> Hi @flybird11111, thanks for opening a PR! Could you write in the description what issue this is resolving?\r\n\r\nI have described the prolem I encountered in the description.",
"> # What does this PR do?\r\n> when 'loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1))' coompute the mc_loss, it will raise a error that the logits's shape doesn't match the labels' shape when batch > 1, while the logits and labels are the inputs of loss_fcn.\r\n> \r\n> Fixes # (issue)\r\n> \r\n> ## Before submitting\r\n> * [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).\r\n> * [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),\r\n> Pull Request section?\r\n> * [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link\r\n> to it if that's the case.\r\n> * [ ] Did you make sure to update the documentation with your changes? Here are the\r\n> [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and\r\n> [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).\r\n> * [ ] Did you write any new necessary tests?\r\n> \r\n> ## Who can review?\r\n> Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.\r\n\r\n"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When 'loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1))' computes the mc_loss, it raises an error that the logits' shape doesn't match the labels' shape, where the logits and labels are the inputs of loss_fct.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26033/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26033",
"html_url": "https://github.com/huggingface/transformers/pull/26033",
"diff_url": "https://github.com/huggingface/transformers/pull/26033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26033.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26032
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26032/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26032/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26032/events
|
https://github.com/huggingface/transformers/pull/26032
| 1,886,036,790 |
PR_kwDOCUB6oc5ZylXz
| 26,032 |
Make Whisper Encoder's sinusoidal PE non-trainable by default
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @gau-nernst - thanks very much for opening this PR! Looks like a great start already. I pushed the Flax changes in the latest commit. In short, the simplest way of setting the parameters to un-trainable in Flax is by stopping the back-prop through the layers. Otherwise, we need to explicitly pass a dict to the optimiser that defines which parameters are trainable/non-trainable (see https://colab.research.google.com/drive/1K-5bz6R6kt9GAvaUHvzYvvA-IOAO2PhL#scrollTo=BrF6Dtb8GlkJ)\r\n\r\nThere's not a test to check that the embed params are non-trainable, but you could certainly add one. This could follow the style of test that we use to check that we correctly freeze the encoder when we do decoder-only fine-tuning:\r\nhttps://github.com/huggingface/transformers/blob/ce2e7ef3d96afaf592faf3337b7dd997c7ad4928/tests/models/whisper/test_modeling_whisper.py#L338\r\n\r\nRegarding initialising the weights with sinusoidal embeddings - I agree that this should be the default case! In 99% of cases users will just use the model from pre-trained, in which case the embeddings will be initialised with the sinusoids, but if a user were to randomly initialise the model, the embeddings would be initialised incorrectly.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26032). All of your documentation changes will be reflected on that endpoint.",
"That's a great solution! However, from what I understand, it means that in the Flax implementation, it is not possible (or easy) to re-enable training for positional encodings? (something that we discussed previously)",
"It's possible (but a bit involved) to add functionality to toggle whether we train the PE's in Flax. However, to me this PR is a bug fix, rather than a feature addition. I agree with what you said in the issue that we should not train the embeddings, since they used fixed sinusoidal embeddings in the original implementation, so I think it's fine if we do a straight fix and always freeze the embeddings here, since this is the correct behaviour.",
"I added sinusoids weight init for PyTorch implementation. Looking at TF and Flax, I'm not sure where to put weight init. It seems like there is no weight init code in TF? For Flax, I see this but don't really understand what's going on.\r\n\r\nhttps://github.com/huggingface/transformers/blob/0a55d9f7376f72ad3ff296d4249840021b03bcc4/src/transformers/models/whisper/modeling_flax_whisper.py#L865-L895\r\n\r\nFrom other TF Keras and Flax code I have seen, I think the typical pattern is to pass weight init function to a module when it is created? I'm not sure what is the pattern HF is using here.",
"The `init_weights` function in Flax is used to initialise the Flax model's parameters by passing a set of dummy inputs (zeros and ones). Flax traces out the shapes of the weights that you get when you pass these dummy inputs, and initialises weights with the right shapes accordingly (see https://flax.readthedocs.io/en/latest/guides/flax_basics.html#model-parameters-initialization).\r\n\r\nThis `init_weights` function won't actually change the values of the weights, just their shapes and dtypes. To change the initialising function, we can pass an argument `embedding_init` function to the init of the embedding layer: https://flax.readthedocs.io/en/latest/api_reference/flax.linen/_autosummary/flax.linen.Embed.html#flax.linen.Embed.embedding_init\r\n\r\nThe init function should be an instance of a JAX initialiser. That is, it should take the PRNG Key as the first argument, as well as the shape and target dtype of the module: https://jax.readthedocs.io/en/latest/jax.nn.initializers.html",
"@sanchit-gandhi I fixed the embedding init for TF and Flax as you requested. I also add test for TF and Flax. For Flax, I don't add a test for non-trainable sinusoidal embedding, because I don't know how to do it cleanly. For checking the weight init in Flax, I don't know Flax semantics so well, so I added a rather \"crude\" solution to get the encoder position embeddings.",
"> Very nice @gau-nernst - especially the Flax init which is really clean now π Could the PT init go in `_init_weights`? Otherwise it all looks good to me!\r\n\r\n`_init_weights()` does not see the module name, it only sees the module itself. To make `_init_weights()` recognize encoder positional embedding, we probably need to set a private attribute to the module.",
"Could we change the `_init_weights` logic to:\r\n1. Loop through all modules\r\n2. Check if module is encoder. If yes: loop through all the sub-modules. When we get to the pos embeddings, do the sinusoidal init\r\n3. Check if module is decoder. If yes: loop through all the sub-modules. When we get to the pos embeddings, do the normal init",
"I put sinusoidal init in `_init_weights()` but in a different way. Relying on the fact that `nn.Module.apply()` will traverse the children in a depth-first search manner (leaf modules will be applied first), if we check for Whisper encoder in `_init_weights()`, it will override the default initialization for positional embeddings.\r\n\r\n```python\r\n def apply(self: T, fn: Callable[['Module'], None]) -> T:\r\n ...\r\n for module in self.children():\r\n module.apply(fn)\r\n fn(self)\r\n return self\r\n```"
] | 1,694 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25989
I'm not too familiar with Jax/Flax and can't find a simple way to set a variable as non-trainable in Flax. Do advise on how I should approach this.
Should we have a test for this behavior as well, i.e. a test that the Whisper Encoder PE is non-trainable by default?
Another note: should the Encoder's positional encodings be initialized with sinusoids, just like in the official repo?
https://github.com/openai/whisper/blob/main/whisper/model.py#L150
```python
def sinusoids(length, channels, max_timescale=10000):
"""Returns sinusoids for positional embedding"""
assert channels % 2 == 0
log_timescale_increment = np.log(max_timescale) / (channels // 2 - 1)
inv_timescales = torch.exp(-log_timescale_increment * torch.arange(channels // 2))
scaled_time = torch.arange(length)[:, np.newaxis] * inv_timescales[np.newaxis, :]
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26032/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26032",
"html_url": "https://github.com/huggingface/transformers/pull/26032",
"diff_url": "https://github.com/huggingface/transformers/pull/26032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26032.patch",
"merged_at": 1697011735000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26031
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26031/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26031/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26031/events
|
https://github.com/huggingface/transformers/pull/26031
| 1,886,012,376 |
PR_kwDOCUB6oc5ZygCt
| 26,031 |
Update missing docs on `activation_dropout` and fix DropOut docs for SEW-D
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26031). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25854
Adds docs for `activation_dropout` to various audio models. Let me know if I missed any.
For SEW-D, document the behavior that `hidden_dropout` is not used, while `activation_dropout` acts more like `hidden_dropout` in other models.
On a side note, it would be good to have a test that catches undocumented config attributes in the future; a rough sketch of such a check is below.
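A rough, hedged sketch of such a check (this is not an existing `transformers` test; the docstring-format heuristic and the choice of `Wav2Vec2Config` are assumptions):
```python
# Compare a config class's __init__ parameters against its class docstring and
# report any parameter that never appears as a documented argument.
import inspect

from transformers import Wav2Vec2Config  # any config class could be checked the same way

def undocumented_init_args(config_cls):
    doc = config_cls.__doc__ or ""
    params = inspect.signature(config_cls.__init__).parameters
    return [
        name
        for name in params
        if name not in ("self", "kwargs") and f"{name} (" not in doc
    ]

print(undocumented_init_args(Wav2Vec2Config))  # ideally prints []
```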
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26031/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26031",
"html_url": "https://github.com/huggingface/transformers/pull/26031",
"diff_url": "https://github.com/huggingface/transformers/pull/26031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26031.patch",
"merged_at": 1694181114000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26030
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26030/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26030/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26030/events
|
https://github.com/huggingface/transformers/pull/26030
| 1,885,733,873 |
PR_kwDOCUB6oc5ZxjBQ
| 26,030 |
update
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
# What does this PR do?
Test if the tokenizers release will break everything.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26030/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26030",
"html_url": "https://github.com/huggingface/transformers/pull/26030",
"diff_url": "https://github.com/huggingface/transformers/pull/26030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26030.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26029
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26029/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26029/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26029/events
|
https://github.com/huggingface/transformers/pull/26029
| 1,885,613,352 |
PR_kwDOCUB6oc5ZxIri
| 26,029 |
IDEFICS: allow interpolation of vision's pos embeddings
|
{
"login": "leot13",
"id": 17809020,
"node_id": "MDQ6VXNlcjE3ODA5MDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/17809020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leot13",
"html_url": "https://github.com/leot13",
"followers_url": "https://api.github.com/users/leot13/followers",
"following_url": "https://api.github.com/users/leot13/following{/other_user}",
"gists_url": "https://api.github.com/users/leot13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leot13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leot13/subscriptions",
"organizations_url": "https://api.github.com/users/leot13/orgs",
"repos_url": "https://api.github.com/users/leot13/repos",
"events_url": "https://api.github.com/users/leot13/events{/privacy}",
"received_events_url": "https://api.github.com/users/leot13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for all the comments and suggestions! They should all be answered now"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Allows vision position embeddings to be interpolated, thus allowing bigger images to be passed to the model. A sketch of the interpolation idea is included below.
Fixes issue #26154
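For illustration, a minimal sketch (not the PR's actual code) of how 2D patch position embeddings are typically interpolated so a larger image can be processed; the function name, grid sizes, and hidden size are assumptions:
```python
import torch
import torch.nn.functional as F

def interpolate_pos_embeddings(pos_embed: torch.Tensor, new_grid: int) -> torch.Tensor:
    """pos_embed: (1, 1 + old_grid**2, dim), with the CLS embedding first."""
    cls_embed, patch_embed = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    old_grid = int(patch_embed.shape[1] ** 0.5)
    # Reshape to a 2D grid, resize with bicubic interpolation, then flatten back.
    patch_embed = patch_embed.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_embed = F.interpolate(patch_embed, size=(new_grid, new_grid), mode="bicubic", align_corners=False)
    patch_embed = patch_embed.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_embed, patch_embed], dim=1)

# e.g. a 224px image with 14px patches gives a 16x16 grid; a 336px image gives 24x24
pos = torch.randn(1, 1 + 16 * 16, 768)
print(interpolate_pos_embeddings(pos, 24).shape)  # torch.Size([1, 577, 768])
```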
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26029/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26029",
"html_url": "https://github.com/huggingface/transformers/pull/26029",
"diff_url": "https://github.com/huggingface/transformers/pull/26029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26029.patch",
"merged_at": 1694734060000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26028
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26028/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26028/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26028/events
|
https://github.com/huggingface/transformers/pull/26028
| 1,885,551,744 |
PR_kwDOCUB6oc5Zw6_Q
| 26,028 |
fix: allow "inputs" as kwargs or positional arg
|
{
"login": "Keredu",
"id": 29210848,
"node_id": "MDQ6VXNlcjI5MjEwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/29210848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Keredu",
"html_url": "https://github.com/Keredu",
"followers_url": "https://api.github.com/users/Keredu/followers",
"following_url": "https://api.github.com/users/Keredu/following{/other_user}",
"gists_url": "https://api.github.com/users/Keredu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Keredu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Keredu/subscriptions",
"organizations_url": "https://api.github.com/users/Keredu/orgs",
"repos_url": "https://api.github.com/users/Keredu/repos",
"events_url": "https://api.github.com/users/Keredu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Keredu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"When running something like the following script:\r\n```python\r\nfrom src.transformers.__init__ import pipeline\r\n\r\nclassifier = pipeline(\r\n task=\"sentiment-analysis\",\r\n model=\"distilbert-base-uncased-finetuned-sst-2-english\",\r\n)\r\n\r\nres = classifier(\"Hello world args\", inputs=\"Hello world kwargs\")\r\n\r\nprint(res)\r\n```\r\nit returns ``TypeError: Pipeline.__call__() got multiple values for argument 'inputs'`` which is right. Should we add something to control this case? I also happens if we use, for example, ``classifier(3, inputs=\"Hello world kwargs\")`` where someone could try to use it as ``top_k=3`` and ``input=\"Hello world kwargs\"`` ",
"> Hey! Would be easier for me to review if you add the tests now! (the 3 different cases you mention on the comment π\r\n\r\nOf course, I could add as many tests as you consider necessary. However, I don't know what I should check exactly.\r\n\r\nI mean, case 3 looks weird to me. That's because if I pass a string as an argument, it should return a dictionary. If I pass a list of strings, it should return a list of dictionaries. That's exactly what happened in case 1 and 2. However, case 3 always returns a list.\r\n\r\nDo you want me to code the following tests?\r\n1. When a string is passed, then its returned a dictionary\r\n2. When a list of strings is passed then it's returned a list of dictionaries\r\n",
"Hey @keredu, sorry for the delay in responding!\r\n\r\nRegarding tests, I would take your initial statement that failed with the current implementation and implement a test that ensures that it is now working. The rest of our tests should already ensure that your change doesn't break anything else.",
"> Hey @Keredu, sorry for the delay in responding!\r\n> \r\n> Regarding tests, I would take your initial statement that failed with the current implementation and implement a test that ensures that it is now working. The rest of our tests should already ensure that your change doesn't break anything else.\r\n\r\nOk, I'll do it asap",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hey! Would be easier for me to review if you add the tests now! (the 3 different cases you mention on the comment π\r\n\r\n\r\nHello @LysandreJik, apologies for the delay; I've only now managed to allocate time to this.\r\n\r\nI've added two tests. The first addresses the issue raised in #26010. The second covers the three scenarios we discussed. Essentially, when `top_k` is provided as a positional argument, it is disregarded. For instance, executing\r\n\r\n```python\r\nclassifier(\"test_args_kwargs input string\", 2)\r\n```\r\ntriggers a console message: \"Ignoring args: (2,)\" confirming the behavior. If this disregard of the argument is indeed the expected behavior, I could remove the corresponding test. Please advise on whether this is necessary.\r\n\r\nPlease let me know if there are any other changes or additions needed."
] | 1,694 | 1,700 | 1,700 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # Pipelines didn't work when passing "inputs" as a keyword argument instead of a positional argument. This is discussed in #26010.
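A minimal usage sketch of the two call styles this change is meant to support (model name taken from the linked discussion; the keyword form is the one that failed before this fix):
```python
from transformers import pipeline

classifier = pipeline(
    task="sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Hello world"))          # positional input, already supported
print(classifier(inputs="Hello world"))   # keyword input, enabled by this PR
```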
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes, #26010.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26028/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26028",
"html_url": "https://github.com/huggingface/transformers/pull/26028",
"diff_url": "https://github.com/huggingface/transformers/pull/26028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26028.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26027
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26027/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26027/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26027/events
|
https://github.com/huggingface/transformers/pull/26027
| 1,885,551,036 |
PR_kwDOCUB6oc5Zw608
| 26,027 |
[wip testing doc-builder]
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
Testing https://github.com/huggingface/doc-builder/pull/396
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26027/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26027",
"html_url": "https://github.com/huggingface/transformers/pull/26027",
"diff_url": "https://github.com/huggingface/transformers/pull/26027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26027.patch",
"merged_at": null
}
|