repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 16,453 | closed | DeBERTa from official example returns random logits that change every time I run the same example. | EDIT: This happens each time the model is loaded again from disk. Once the model is loaded, the logits are the same on each run for the same input, but when I reload the model the logits change.
I tried to run the official example of DeBERTa and found that the output logits are just random numbers that change every time I run the model. Is this a bug?
The Transformers version is 4.9.2; the behaviour is the same in 4.15.0, and I also tried the current latest version with the same result.
```python
from transformers import DebertaTokenizer, DebertaForMaskedLM
import torch

tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForMaskedLM.from_pretrained("microsoft/deberta-base")
model.eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]

with torch.no_grad():
    outputs = model(**inputs, labels=labels)
    loss = outputs.loss
    logits = outputs.logits

print("logits", logits)
```
example output:
1) tensor([[[-0.3574, -0.4279
2) tensor([[[-1.0341, 0.5181
I also get this as warning:
Some weights of the model checkpoint at microsoft/deberta-base were not used when initializing DebertaForMaskedLM: ['deberta.embeddings.position_embeddings.weight', 'lm_predictions.lm_head.LayerNorm.weight', 'lm_predictions.lm_head.LayerNorm.bias', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.dense.weight', 'lm_predictions.lm_head.dense.bias']
- This IS expected if you are initializing DebertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DebertaForMaskedLM were not initialized from the model checkpoint at microsoft/deberta-base and are newly initialized: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
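As an illustrative aside (this sketch is not part of the original report): the warning above says the masked-LM head weights are newly initialized on every load, so fixing the seed before loading should make the logits identical across reloads, which points at head initialization as the source of the run-to-run variation.
```python
# Sketch under that assumption: seed the RNG used to initialize the missing lm-head weights.
import torch
from transformers import DebertaForMaskedLM, DebertaTokenizer, set_seed

set_seed(42)  # makes the freshly initialized head weights reproducible
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForMaskedLM.from_pretrained("microsoft/deberta-base")
model.eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits[0, :2, :2])  # identical across script runs as long as the same seed is set first
```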
@LysandreJik | 03-28-2022 15:44:28 | 03-28-2022 15:44:28 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,452 | closed | Possible bug: Only truncate works in FeatureExtractionPipeline | Possible bug: only the truncate argument is available for feature extraction. This seems to be explicitly omitted, but I do not see a reason why the other arguments of the tokenizer could not be passed.
See https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/feature_extraction.py#L54
| 03-28-2022 14:26:54 | 03-28-2022 14:26:54 | I also tried a ZeroShotClassificationPipeline.
Having a sentence that is too long raises a "Tensor length mismatch" error.
Could it be possible to truncate to max_length by default?
Or could it be feasible to let us pass the arguments in the pipeline ?
Note that :
```python
my_pipeline(
    reference,
    prediction,
    max_length=512,
    padding=True,
    truncation=True,
    add_special_tokens=True,
)
```
changes nothing, as `max_length` and `truncation` do not seem to be passed down the line (at least for ZeroShot).
A solution could be to implement a custom _parse_thingy (sorry, I don't remember the name) to get a custom tokenizing function, but that seems overkill to do every time.
Thanks in advance
Have a great day<|||||>Hi @Ierezell ,
> Could it be possible to truncate to max_length by default?
This should already be the case, when `truncation=True` the tokenizer will respect `tokenizer.model_max_length` attribute when truncating the input. Do you mind which model is triggering this issue ? It could be a model misconfiguration.
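A minimal illustration of the behaviour described above (not taken from the original thread): with `truncation=True` and no explicit `max_length`, the tokenizer caps the sequence at `tokenizer.model_max_length`.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("word " * 5000, truncation=True)

print(tokenizer.model_max_length)   # 512 for this checkpoint
print(len(encoded["input_ids"]))    # also 512, thanks to the implicit cap
```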
> Or could it be feasible to let us pass the arguments in the pipeline ?
Adding arguments is definitely doable `max_length` is a tricky beast since it means something both linked to the `tokenizer` and the `generate` function, so what the user really mean might actually be ambiguous.
In general, the idea is to add arguments as they're needed since a lot of them are not useful at inference time, or extremely rarely used.
To get *really* customized inference, you do need to step away from pipelines and either subclass or use custom code with the relative components.
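As a rough sketch of that subclassing route (the class below is hypothetical, not an official API), one can override `preprocess` to hard-wire tokenizer arguments the pipeline does not expose:
```python
from transformers import FeatureExtractionPipeline


class TruncatingFeatureExtractionPipeline(FeatureExtractionPipeline):
    def preprocess(self, inputs, **tokenize_kwargs):
        # force truncation to the model's maximum length unless the caller overrides it
        tokenize_kwargs.setdefault("truncation", True)
        tokenize_kwargs.setdefault("max_length", self.tokenizer.model_max_length)
        return self.tokenizer(inputs, return_tensors=self.framework, **tokenize_kwargs)
```
It could then be plugged in via `pipeline("feature-extraction", model=..., pipeline_class=TruncatingFeatureExtractionPipeline)`; treat this as a starting point under those assumptions rather than a guaranteed recipe.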
That being said, what other arguments do you need in the tokenizer ? We can definitely add some !<|||||>Hi @Narsil,
here is a snippet to reproduce the problem
```python
nli = pipeline("zero-shot-classification", model="sentence-transformers/paraphrase-MiniLM-L6-v2")
reference = "a "*52
prediction = "b "*513
res = nli(reference, prediction)
```
Note that changing reference to a long string (`"a "*1000`) is still fine but prediction isn't: `"b "*520` raises the error.
(Even if I agree that every input should be of the good format and less than 512 tokens, it's a nice fallback to truncate).
have a great day. <|||||>Hi @Ierezell ,
Multiple things:
The model you are using triggers this warning:
```python
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at sentence-transformers/paraphrase-MiniLM-L6-v2 and are newly initialized: ['classifier.weight', 'classifier.bias']
```
This is a pretty worrying warning, since it means this model probably wasn't fine-tuned for this task, so the results are probably going to be awful.
Then you are using `zero-shot-classification`, so what you call `prediction` is actually the `candidate_labels`, i.e. the classes the model is supposed to classify the incoming text (`references`) into. If this gets truncated, it basically means you are not going to classify along the classes you expect, which IMO warrants a true error; hiding it by truncating seems pretty bad here.
Is this what you intended to do ?
We *can* update the rules to allow more customization on truncation in this pipeline, but doing that is IMO very risky (as results will be super biased/wrong)
```python
nli = pipeline("zero-shot-classification", model="sentence-transformers/paraphrase-MiniLM-L6-v2")
sentence_to_classify = "I like chocolate and pudding"
res = nli(sentence_to_classify, candidate_labels=["food", "science", "fashion"])
```
I took the liberty of rewriting your example into something which feels slightly more explicit about what the pipeline will do with your data.
<|||||>Hi @Narsil,
My bad for the model, it's a test I'm making and for sure I would need to change it :) Thanks for pointing that out.
For the "UI": I agree that truncating the labels is a really bad thing to do... Maybe just changing the error message could be the solution? For now, it's a "tensor size error" which does not reflect this "bad behavior/bad inputs".
Something like "It seems your predictions are too long; the model cannot deal with that and we will not auto-truncate." could reconcile warning the user with not doing horrible things behind the scenes?
(In my case, paraphrasing models are a good metric for information retrieval, question generation, or similar tasks. "It's freezing today in Canada" vs "It's really cold in Canada today" should match, so I can have long labels.)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,451 | closed | Avoid accessing .dataset of a DataLoader in Trainer | # What does this PR do?
* Respects get_train_dataloader and such, rather than going back and looking at .train_dataset or requiring attributes in the dataloader to be accessible directly.
* This allows for overriding it by any object which implements the methods required by a DataLoader (`__len__` and `__iter__`) without additional requirements.
* The original motivation was to train on a multi-task dataloader which defers to multiple dataloaders.
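A minimal sketch of the kind of object this enables (all names below are invented for illustration, not taken from the PR): any wrapper exposing `__len__` and `__iter__` can now be returned from an overridden `get_train_dataloader`.
```python
import random
from torch.utils.data import DataLoader
from transformers import Trainer


class MultiTaskLoader:
    """Samples batches from several task-specific dataloaders until all are exhausted."""

    def __init__(self, loaders):
        self.loaders = loaders

    def __len__(self):
        return sum(len(loader) for loader in self.loaders)

    def __iter__(self):
        iterators = [iter(loader) for loader in self.loaders]
        while iterators:
            it = random.choice(iterators)
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)


class MultiTaskTrainer(Trainer):
    def get_train_dataloader(self):
        # assumes self.train_dataset is a dict mapping task name -> dataset
        # (an assumption made for this sketch, not something the Trainer requires)
        loaders = [
            DataLoader(ds, batch_size=self.args.train_batch_size, shuffle=True, collate_fn=self.data_collator)
            for ds in self.train_dataset.values()
        ]
        return MultiTaskLoader(loaders)
```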
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- Discussed in #16388
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 03-28-2022 14:19:33 | 03-28-2022 14:19:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@[sgugger](https://github.com/sgugger) this should be ready for review.
You were right that there were a couple more places to change, and the logic is quite inconsistent in places. I've tried to be on the defensive side in covering cases:
* dataloader.dataset can exist or not, and have a length or not
* dataloader always has a len, but it can raise an exception in fairly common cases
This implementation works for my particular case, giving the same output in training+evaluation as before, but without the really painful workarounds.
I had a look at tests and they look complicated, so I will add some after getting confirmation that this is ok otherwise.
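A tiny sketch of the defensive pattern described in the bullets above (not the PR's actual code): `.dataset` may be missing entirely, and `len()` can raise even though `__len__` is defined, e.g. for iterable-style datasets.
```python
def inspect_dataloader(dataloader):
    dataset = getattr(dataloader, "dataset", None)  # .dataset may simply not exist
    try:
        num_batches = len(dataloader)  # defined, but raises TypeError for iterable-style datasets
    except TypeError:
        num_batches = None
    return dataset, num_batches
```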
<|||||>Thanks for implementing all the tweaks! |
transformers | 16,450 | closed | [GLPN] Improve code example | # What does this PR do?
This PR makes the code example of GLPNForDepthEstimation more meaningful, showing an end-to-end example of how to visualize the predicted depth on the cats image. | 03-28-2022 14:07:11 | 03-28-2022 14:07:11 | _The documentation is not available anymore as the PR was closed or merged._ |
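A sketch of what such an end-to-end example might look like (the checkpoint name and post-processing choices here are assumptions, not copied from the merged PR):
```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # the usual "cats" image
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = GLPNFeatureExtractor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth

# resize back to the original resolution and rescale to an 8-bit image for visualization
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False
).squeeze()
depth_image = Image.fromarray((depth / depth.max() * 255.0).cpu().numpy().astype(np.uint8))
depth_image.save("depth.png")
```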
transformers | 16,449 | closed | Remove kwargs argument from IBERT MLM forward pass | # What does this PR do?
This PR fixes a small bug that was introduced in #16389 where a `kwargs` argument was added to the `forward` pass of the MLM IBERT class.
Once this is merged, the slow ONNX test for this model should also pass.
| 03-28-2022 13:46:30 | 03-28-2022 13:46:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,448 | closed | Fix doc example | # What does this PR do?
This PR fixes the doc example of `xxxForSequenceClassification` models. I wonder how this test passes currently, because for me it returned an error, as the labels are of shape `(batch_size, num_labels)` but the `problem_type` wasn't set to "multi_label_classification".
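A minimal sketch of the configuration being referred to (model name and label values are placeholders): with float targets of shape `(batch_size, num_labels)`, the model needs `problem_type="multi_label_classification"` so that it uses `BCEWithLogitsLoss`.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3, problem_type="multi_label_classification"
)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([[1.0, 0.0, 1.0]])  # one row per example, one column per label
loss = model(**inputs, labels=labels).loss
```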
| 03-28-2022 12:20:35 | 03-28-2022 12:20:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,447 | open | HF TF models can be used with TFX | # 🚀 Feature request
Hugging Face TensorFlow models can be used with [TensorFlow Extended](https://www.tensorflow.org/tfx).
## Motivation
TensorFlow Extended goes beyond Tensorflow with respect to production-ready capabilities. Ensuring our models can be used with it just like [TensorFlow Hub](https://www.tensorflow.org/hub) models would open new possibilities for TF/TFX users.
This issue will be used to discuss, plan, and track work related to the goal of making HF TF models TFX-compatible.
| 03-28-2022 11:41:09 | 03-28-2022 11:41:09 | Sounds exciting. I'm curious to hear if you all at HF plan to collaborate with @rcrowe-google and the TFX folks on this and, if so, in what capacity.<|||||>Yup. We're just starting to meet now, with a goal of strong alignment.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Mr. bot, certainly not stale ;)<|||||>Hey there, I've done some work on investigating HF usage within my own organization and thought I'd braindump some of my findings on the topic so far in case you'd find it helpful.
Hopefully folks from the TF(X) teams can tell me if I'm off base with anything I describe here, or otherwise confirm this analysis.
The problem of supporting HF within TFX breaks down into a few key issues that are somewhat intertwined - Tokenizers, and model formats.
## Model Formats
HF Transformers supports models in pytorch, Tensorflow, and Flax (["Supported frameworks" chart](https://huggingface.co/docs/transformers/index#supported-frameworks)). TFX is primarily designed with Tensorflow in mind, so for the easiest scope of addressing the problem, I'd focus on supporting TF models.
TFX *can* support other frameworks, however clunky, and indeed examples exist for training things like sklearn, xgboost and flax, but the support is limited at time of writing to training and to some degree eval through the use of a [custom extractor](https://github.com/tensorflow/tfx-addons/blob/main/examples/sklearn_penguins/sklearn_predict_extractor.py), but support is missing for say [BulkInferrer](https://www.tensorflow.org/tfx/api_docs/python/tfx/v1/components/BulkInferrer) which assumes a TF model, although perhaps this could be addressed through similar functionality to the evaluator.
In either case, one will need to use a custom image for TFX which includes the extra dependencies for HuggingFace (and potentially pytorch if one wishes to use a pytorch model)
## Tokenization
This is more the crux of the issue as I see it. In TF(X), it is common to try and include the preprocessing of the data as part of the model graph, so that a model can predict on raw data, without relying on tokenization happening on client side or implementing it separately in the serving layer. This is commonly tackled by either using [Tensorflow Transform](https://www.tensorflow.org/tfx/transform/install) or [Keras Preprocessing Layers](https://www.tensorflow.org/guide/keras/preprocessing_layers). For TF Hub models, usually the preprocessing code is added to either of these pieces, or something from TF Text is used.
For Huggingface, it's hard but not impossible to figure how this might fit into the picture. In the most optimistic scenario, I think it might be possible that the slow tokenizers can be annotated with `@tf.function` and used in TFT or KPL, and included in the graph (I have not attempted this). [Some related docs here](https://www.tensorflow.org/guide/function), at least for the case of TF-based HF models. If this is possible, then HF should be able to fit somewhat neatly in the rest of the TFX ecosystem when using a TF model.
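As a minimal sketch of one pragmatic bridge for WordPiece models (an assumption-laden illustration, not something proposed in the comment itself): export the Hugging Face vocab and feed it to `tensorflow_text`, whose tokenizer does run inside a TF graph.
```python
import os

import tensorflow_text as tf_text
from transformers import AutoTokenizer

os.makedirs("hf_vocab", exist_ok=True)
hf_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
vocab_path = hf_tokenizer.save_vocabulary("hf_vocab")[0]  # writes hf_vocab/vocab.txt

# tensorflow_text consumes the same WordPiece vocab and can live inside the serving graph
tf_tokenizer = tf_text.BertTokenizer(vocab_path, lower_case=True)
token_ids = tf_tokenizer.tokenize(["hello world"])  # RaggedTensor of wordpiece ids
```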
If this is not possible, or one needs to use the faster rust-based tokenizers (see "Supported frameworks" chart linked above), either work needs to be done to make them usable within a tensorflow graph (far beyond my knowledge) or there will need to be a TFX component that can take TF Records, tokenize them, and output tokenized TF Records, but ALSO somehow indicate what tokenizer is to be used in the Eval process (which would require a custom extractor whether youre using TF or another framework), and potentially as well in a batch prediction component like Bulk Inferrer (or perhaps you could chain the aforementioned tokenizer component).
This is also not considering what will need to happen if you wish to serve the model online, where it seems [HF's current example](https://huggingface.co/blog/tf-serving) implies tokenization on client side, or at least a python based intermediary service as opposed to using TFServing directly or pytorch equivalents (I'm less knowledgable there). If tokenization can be included in the model graph, then this could at least be avoided for TF models.<|||||>Hi @rclough 👋 Thank you for your notes, they are very helpful! Curiously, today we also talked internally about the tokenizers and their interoperability with downstream TF Graphs (the model) -- in a perfect world, tokenizer + model would go in a single serializable graph. We may have news soon, stay tuned!
cc @Rocketknight1 <|||||>Great to hear! I would love to see that happen!
I also forgot to mention a 3rd approach to tokenization - IIRC it may be possible to use the metadata for some of HF's tokenizers to instantiate TF Text tokenizers with the same implementation, for example on issue #5066, one user described a way to use HF tokenization with the TF Text SentencePiece implementation, [gist here](https://gist.github.com/noahtren/6f9f6ecf2f81d0975c4f54afaeb95318). Note I have not tried this either.<|||||>> Great to hear! I would love to see that happen!
>
> I also forgot to mention a 3rd approach to tokenization - IIRC it may be possible to use the metadata for some of HF's tokenizers to instantiate TF Text tokenizers with the same implementation, for example on issue #5066, one user described a way to use HF tokenization with the TF Text SentencePiece implementation, [gist here](https://gist.github.com/noahtren/6f9f6ecf2f81d0975c4f54afaeb95318). Note I have not tried this either.
I tried this and it works! The only thing is that you shouldn't use "Fast" implemantation. I've changed AutoTokenizer with AlbertTokenizer and it worked. Also, you need to install sentencepiece with "pip install sentencepiece" <|||||>+Jiayi Zhao ***@***.***> +Laurence Moroney ***@***.***>
That's great news, thanks Nusret! Have you written any documentation or
examples for this yet?
Robert Crowe | TensorFlow Developer Engineer | ***@***.*** |
@robert_crowe <https://twitter.com/robert_crowe>
On Wed, Jun 22, 2022 at 12:52 AM Nusret Ozates ***@***.***>
wrote:
> Great to hear! I would love to see that happen!
>
> I also forgot to mention a 3rd approach to tokenization - IIRC it may be
> possible to use the metadata for some of HF's tokenizers to instantiate TF
> Text tokenizers with the same implementation, for example on issue #5066
> <https://github.com/huggingface/transformers/issues/5066>, one user
> described a way to use HF tokenization with the TF Text SentencePiece
> implementation, gist here
> <https://gist.github.com/noahtren/6f9f6ecf2f81d0975c4f54afaeb95318>. Note
> I have not tried this either.
>
> I tried this and it works! The only thing is that you shouldn't use "Fast"
> implemantation. I've changed AutoTokenizer with AlbertTokenizer and it
> worked. Also, you need to install sentencepiece with "pip install
> sentencepiece"
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/16447#issuecomment-1162771724>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AKVWSW63MFF6LILDGP4AKTDVQLA5BANCNFSM5R23PU3Q>
> .
> You are receiving this because you were mentioned.Message ID:
> ***@***.***>
>
<|||||>I've tried the example in this [gist](https://gist.github.com/noahtren/6f9f6ecf2f81d0975c4f54afaeb95318) but now I've created a testable code [here](https://colab.research.google.com/drive/1Ufe6Umnj97ma8MqZPK9mDAL6QZmcwq_7?usp=sharing) . Btw, I'm currently taking the MLOps Specialization on Coursera and the lessons are great, thanks for it!<|||||>Interesting that the shape is different, do you foresee any problems with that?
```
tf_tokenizer.tokenize(tf.strings.lower("merhaba"))
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([ 55, 16849, 969], dtype=int32)>
hf_tokenizer.encode("merhaba", add_special_tokens=False, return_tensors ="tf")
<tf.Tensor: shape=(1, 3), dtype=int32, numpy=array([[ 55, 16849, 969]], dtype=int32)>
```<|||||>Probably the reason is hf tokenizer encodes the input as a batch of strings. So a tf.reshape operation can fix the difference but now I wonder how we can create other outputs. Normally we use:
```
# I generally not use return_tensors parameter
hf_tokenizer(["hi", "this is me"], add_special_tokens=False, return_tensors ="tf", padding=True, truncation=True)
```
and get the output:
```
{
'input_ids': <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[4148, 0, 0], [ 48, 25, 55]], dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[0, 0, 0], [0, 0, 0]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(2, 3), dtype=int32, numpy=array([[1, 0, 0], [1, 1, 1]], dtype=int32)>
}
```
it seems like there is no easy way to use Tokenizers with TensorFlow. I can only think of looking at the tokenizer source code and writing the TensorFlow version of that to create these outputs and add other features of the tokenizer. So that a graph could be created to use as @rclough mentioned. Maybe a tokenization layer by subclassing [base layer class](https://keras.io/api/layers/base_layer/#layer-class) could help.<|||||>@NusretOzates we are precisely working on that [native TF tokenizers for transformers] at the moment -- see https://github.com/huggingface/transformers/pull/17701<|||||>@gante I just checked the code and tests to see the usage and it looks great! Thanks a lot for the effort!<|||||>Nice to see progress getting BERT tokenization available for TFX.
On a side note, I found this interesting repo that converts a number of huggingface models (including tokenization) to TF Hub: https://github.com/jeongukjae/huggingface-to-tfhub<|||||>@rclough Wow, it looks like they did a lot of work on precisely reimplementing tokenizers in TF there, that's extremely interesting!<|||||>@Rocketknight1 Defintely! I'm working with a team that's using their DistilBERT port they found through TF Hub (after first doing an MVP with HF), and discovered the repo through that. The implementation seems pretty high quality to me, mostly just reusing the vocab files and aligning the configuration with how TF Text does things.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,446 | closed | Making the impossible to connect error actually report the right URL. | # What does this PR do?
When a user overrides `HUGGINGFACE_CO_RESOLVE_ENDPOINT` (or `HF_ENDPOINT`),
then the errors messages still point to `https://huggingface.co` even if the
code is not trying to fetch from there.
This PR proposes to change and make the error message reflect the correct
endpoint that was looked up.
Linked to https://github.com/huggingface/transformers/pull/16445
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 03-28-2022 11:13:32 | 03-28-2022 11:13:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,445 | closed | [TENTATIVE] Updating variable names HUGGINGFACE_CO_RESOLVE_ENDPOINT into HF_ENDPOINT. | # What does this PR do?
`HUGGINGFACE_CO_RESOLVE_ENDPOINT` is deprecated and will go away in favor
of `HF_ENDPOINT`.
In order to maximize consistency, this PR renames the variable name itself.
This is tentative, since there might have been reasons to keep the old name for
the variable.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 03-28-2022 11:03:48 | 03-28-2022 11:03:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> This is purely cosmetic and internal, so I don't really see the point. We usually don't bother with renaming in those situations.
Is there any downsides to making the modifications ?
Consistency in naming has a lot of merits, especially within HF ecosystem where the name is used in many different places and different repos. Here for instance, it took me several tries to find all variables that needed to be overridden in `datasets`, `transformers` and `huggingface_hub`. Changing the name increases the chances that new code respects that convention.<|||||>I think the renaming here makes sense as that's the variable name in other repositories; since there are no downsides, ok for me!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Ok, merging, feel free to revert if anything breaks ! |
transformers | 16,444 | closed | [FlaxSpeechEncoderDecoderModel] Ensure Input and Output Word Embeddings Are **Not** Tied | In a seq2seq Speech-Encoder-Text-Decoder Model, the word embeddings of the text decoder should **not** be tied to the word embeddings of the speech encoder. The embedding matrices lie in two completely different vector spaces that have no affiliation. The embedding matrix for the decoder should be specific to the `XXXForCausalLM` decoder model used. Thus, the encoder and decoder word embeddings should not be constrained to being equal. This is observed in the PyTorch script for the `SpeechEncoderDecoderModel`:
https://github.com/huggingface/transformers/blob/e02f95b2298997016cd01fdb182442093b34e8d2/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L437-L438
This PR adds the same change to the `config` for the `FlaxSpeechEncoderDecoderModel`, and adds tests to ensure that the `tie_word_embeddings` config is set correctly for both the `SpeechEncoderDecoderModel` and the `FlaxSpeechEncoderDecoderModel`.
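A small illustration of the behaviour this enforces (checkpoint names are placeholders): after composing a speech encoder with a text decoder, `tie_word_embeddings` on the combined config should come out as `False`.
```python
from transformers import SpeechEncoderDecoderModel

model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-base-960h", "bert-base-uncased"
)
print(model.config.tie_word_embeddings)  # False: decoder embeddings stay independent of the encoder
```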
| 03-28-2022 09:25:15 | 03-28-2022 09:25:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,443 | closed | How can i train wav2vec2 with my dataset? | # 🚀 Feature request
I wonder how to preprocess it.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md -->
| 03-28-2022 09:04:19 | 03-28-2022 09:04:19 | Hi,
We do have several blog posts on that:
* Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers: https://huggingface.co/blog/fine-tune-wav2vec2-english
* Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,442 | closed | Is there a possible way to monitor the real-time training in the example? | In `transformers/examples/pytorch/translation/run_translation.py`, it seems it only produces the `runs` log file after stopping training. Is it possible to generate the runs log during training, at certain steps, in that script? I fail to see any readme talking about it. Any advice will be appreciated. | 03-28-2022 07:47:08 | 03-28-2022 07:47:08 | Hi, please consider using the forum to ask such general questions, as we use issues for bug reports and feature requests, thanks!
The `Trainer` should automatically log during training, you can control the logging using `--logging_strategy` and `--logging_steps` argument. You can also use TensorBoard or w&B for logging, refer to this [doc](https://github.com/huggingface/transformers/tree/main/examples/pytorch#logging--experiment-tracking).<|||||>Thank you, I will use forum in the future.😊 |
transformers | 16,441 | closed | Doctest longformer | # What does this PR do?
adds LONGFORMER PT to doctests
@patrickvonplaten - `LongformerForMaskedLM` and `LongformerForQuestionAnswering` have `@replace_return_docstrings `instead of `@add_code_sample_docstrings`. Should I be replacing them with that instead so that I can test for `expected_output` & `expected_loss`?
| 03-28-2022 07:32:18 | 03-28-2022 07:32:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @KMFODA, thanks for your PR! Could you rebase/merge on `main` so that the CI passes? Thank you!<|||||>apologies @LysandreJik, had merged with master not main. Fixed now :).<|||||>@KMFODA would it be ok if we go into the PR to finish it in case you don't have enough time? :-)<|||||>hey @patrickvonplaten, apologies for the delay. I've managed to work on the proposed changes. Let me know if anything else is missing?<|||||>Great! Looks good to me actually :-) @ydshieh could you take a look maybe?<|||||>@patrickvonplaten could you also check if my comment about `randomly initialized` is valid, please? I haven't run the examples myself though. I can double check tomorrow.
**Update**: I ran it, and the 2 tests failed as I think. Depending on @KMFODA, maybe we can merge this PR with only the PT file (and we work on TF part internally)? To address this situation, either a conversion of PT -> TF checkpoint, or using a tiny model checkpoint is required.<|||||>@KMFODA
If you are willing to deal with the TensorFlow version , could you try to use the TF checkpoint [hf-internal-testing/tiny-random-longformer](hf-internal-testing/tiny-random-longformer) for `TFLongformerForSequenceClassification` and `TFLongformerForTokenClassification`.
Otherwise, it's totally OK that we revert the changes in `src/transformers/models/longformer/modeling_tf_longformer.py` + remove `src/transformers/models/longformer/modeling_tf_longformer.py` from `utils/documentation_tests.txt`, and we can merge this PR :-). Thank you!<|||||>hey @ydshieh thanks for the comments. Yes sure I just switched those 3 checkpoints now. I'm having issues with TF and my new M1 Mac so I can't test these now unfortunately. Thought I'd just commit these changes now in case you're in a rush but will try and test these changes soon as I get my TF library working.<|||||>Hi, @KMFODA , no rush on our side :-). But if you have trouble with TF on Mac, we can run it and update the expect values, just let us know. Thank you!<|||||>No problem at all. Thanks for all the helpful comments. Changes made :).<|||||>Thanks a mille for all your work here @KMFODA ! |
transformers | 16,440 | closed | How can I use MLM and NSP to train Bert from scratch? | How can I use MLM and NSP to train Bert from scratch? | 03-28-2022 03:51:26 | 03-28-2022 03:51:26 | Hi,
I've answered this question on Stackoverflow [here](https://stackoverflow.com/questions/65646925/how-to-train-bert-from-scratch-on-a-new-domain-for-both-mlm-and-nsp/65760008#65760008).<|||||>OK, thanks!!! |
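A rough sketch along the lines of the linked answer (the corpus path and hyperparameters are placeholders): `BertForPreTraining` carries both the MLM and NSP heads, so a randomly initialized instance can be trained from scratch on both objectives.
```python
from transformers import (
    BertConfig,
    BertForPreTraining,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    TextDatasetForNextSentencePrediction,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # or your own trained tokenizer
model = BertForPreTraining(BertConfig())  # random initialization = training from scratch

dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer, file_path="corpus.txt", block_size=128
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-from-scratch", per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```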
transformers | 16,439 | closed | Add Doc Test GPT-2 | # What does this PR do?
Fixes the broken doc tests for GPT-2
Part of the documentation sprint work.
Fixes [Github issue](https://github.com/huggingface/transformers/issues/16292)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a [Github issue](https://github.com/huggingface/transformers/issues/16292)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
gpt2: @patrickvonplaten, @LysandreJik
Documentation: @sgugger
| 03-28-2022 01:35:54 | 03-28-2022 01:35:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think this is what is required something up with CI failing code quality check?
```
#!/bin/bash -eo pipefail
black --check examples tests src utils
Skipping .ipynb files as Jupyter dependencies are not installed.
You can fix this by running ``pip install black[jupyter]``
would reformat src/transformers/models/gpt2/modeling_gpt2.py
Oh no! 💥 💔 💥
1 file would be reformatted, 1510 files would be left unchanged.
Exited with code exit status 1
CircleCI received exit code 1
```<|||||>Yes, you need to run `make style` on your branch to make that test pass :-)
Pinging @ydshieh on this PR since Patrick is on vacation this week.<|||||>Hi, @ArEnSc
Thank you for this PR!
In order to run `make style`, you will need to run
```
pip install transformers[quality]
```
If you haven't done this before.<|||||>```Run python -m pytest -n [2](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:2) --dist=loadfile -s --make-reports=tests_new_models tests/bert_new/test_modeling_bert_new.py
python -m pytest -n 2 --dist=loadfile -s --make-reports=tests_new_models tests/bert_new/test_modeling_bert_new.py
shell: /usr/bin/bash -e {0}
/usr/lib/python[3](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:3)/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.9) or chardet (3.0.[4](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:4)) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/runner/.local/lib/python3.8/site-packages/pytest/__main__.py", line [5](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:5), in <module>
raise SystemExit(pytest.console_main())
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 187, in console_main
code = main()
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 145, in main
config = _prepareconfig(args, plugins)
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 324, in _prepareconfig
config = pluginmanager.hook.pytest_cmdline_parse(
File "/home/runner/.local/lib/python3.8/site-packages/pluggy/_hooks.py", line 2[6](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:6)5, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/runner/.local/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/runner/.local/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/helpconfig.py", line 102, in pytest_cmdline_parse
config: Config = outcome.get_result()
File "/home/runner/.local/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/runner/.local/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1016, in pytest_cmdline_parse
self.parse(args)
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1304, in parse
self._preparse(args, addopts=addopts)
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 118[7](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:7), in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "/home/runner/.local/lib/python3.[8](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:8)/site-packages/pluggy/_manager.py", line 287, in load_setuptools_entrypoints
plugin = ep.load()
File "/usr/lib/python3.8/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line [9](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:9)91, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line [10](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:10)[14](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:14), in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line [16](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:16)8, in exec_module
exec(co, module.__dict__)
File "/home/runner/.local/lib/python3.8/site-packages/dash/__init__.py", line 5, in <module>
from .dash import Dash, no_update # noqa: F401,E402
File "/home/runner/.local/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 168, in exec_module
exec(co, module.__dict__)
File "/home/runner/.local/lib/python3.8/site-packages/dash/dash.py", line [18](https://github.com/huggingface/transformers/runs/5729180598?check_suite_focus=true#step:6:18), in <module>
from werkzeug.debug.tbtools import get_current_traceback
ImportError: cannot import name 'get_current_traceback' from 'werkzeug.debug.tbtools' (/home/runner/.local/lib/python3.8/site-packages/werkzeug/debug/tbtools.py)
````
```
I did run
make fixup
make style
Then I also merged master what am I missing for this last piece.
lmk if I am missing something
```
<|||||>Hi, @ArEnSc
For this sprint, you don't need to test the model, but just to test the docstrings in model files.
You can see a guide here [For Python files](https://github.com/huggingface/transformers/tree/main/docs#for-python-files).
Before you run, you need to
```python
pip install -e ".[dev]"
```
Let me know if this works for you.<|||||>> Hi, @ArEnSc
>
> For this sprint, you don't need to test the model, but just to test the docstrings in model files.
>
> You can see a guide here [For Python files](https://github.com/huggingface/transformers/tree/main/docs#for-python-files).
>
> Before you run, you need to
>
> ```python
> pip install -e ".[dev]"
> ```
>
> Let me know if this works for you.
Yes I did run the required commands specifically:
```
python utils/prepare_for_doc_test.py src docs #line1 This command I didn't run because I was specifically working on modeling_gpt2
python utils/prepare_for_doc_test.py src/transformers/utils/doc.py src/transformers/models/gpt2/modeling_gpt2.py
pytest --doctest-modules src/transformers/models/gpt2/modeling_gpt2.py -sv --doctest-continue-on-failure # I ran this to run
the test
python utils/prepare_for_doc_test.py src docs --remove_new_line # ran this line to get everything back to normal
```
I am unsure about how to stop CI from running the add model like runner I suppose as that error came from CI
Thanks let me know!
<|||||>@ArEnSc
For now, You can ignore the errors on **build_pr_documentation** and **Add new model like template tests** from the CI.
We are currently working on these issues internally.<|||||>@ydshieh @sgugger I think this should address the comments =)<|||||>Hi, @ArEnSc Ping me for the merge once you finish the `# fmt: off` thing mentioned by Sylvain :-)<|||||>@ydshieh this one is good to go now! =)<|||||>@ArEnSc
```
Hopefully ignores the formatting issue.
```
--> not just a hope, dream comes True now :-)
Thank you again for the contribution. Merged! |
transformers | 16,438 | closed | clean_up_tokenization_spaces=True won't clean up spaces | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux-5.10.0-051000-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (the same on CPU)
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik, @Narsil, @SaulLu
## Information
Model I am using (Bert, XLNet ...): BERT (`bert-large-cased`)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
encoded = tokenizer("This thing costs £4.56")
decoded = tokenizer.decode(encoded["input_ids"], clean_up_tokenization_spaces=True)
print (decoded)
```
Real output: `[CLS] This thing costs £4. 56 [SEP]`
I tried it also with NER pipelines and other text inputs.
Additional example: got `[CLS] ( including once - a - week tapping ) [SEP]` instead of `[CLS] (including once-a-week tapping) [SEP]`
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expected output: `[CLS] This thing costs £4.56 [SEP]`.
I expected the tokenizer to cleanup all the spaces introduced. Is there any different way to do so? Am I missing some trivial parameter? | 03-27-2022 17:57:46 | 03-27-2022 17:57:46 | Hi @MorenoLaQuatra,
This is normal, as punctuation is considered as token splitter by this tokenizer. The `clean_up` is only meant to remove duplicate spaces, and normalize other space characters into the regular ascii one.
What are you trying to achieve ?
Using `.offset_mapping` is in general, could help you understand more what's going on without modifying the original string.
```python
from transformers import AutoTokenizer
sentence = "This thing costs £4.56"
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
encoded = tokenizer(sentence, return_offsets_mapping=True)
for _id, (start, stop) in zip(encoded.input_ids, encoded.offset_mapping):
print(f"Token {_id} ({start}-{stop}) comes from {repr(sentence[start:stop])}")
```
```
Token 101 (0-0) comes from ''
Token 2023 (0-4) comes from 'This'
Token 2518 (5-10) comes from 'thing'
Token 5366 (11-16) comes from 'costs'
Token 1069 (17-18) comes from '£'
Token 2549 (18-19) comes from '4'
Token 1012 (19-20) comes from '.'
Token 5179 (20-22) comes from '56'
Token 102 (0-0) comes from ''
```
This allows you to understand what is happening without having to "recreate" the original string (`decode` cannot in general recover the original string)<|||||>Thank you for the explanation. I'm trying to build a custom NER-like system using token classification. I leverage offset_mapping during training to identify the token class. While trying to integrate it in a NER-like pipeline (TokenClassification) I found out it was generating entities from the tokenized version.
However, from your answer, I think it would be better to use char indexes to identify the correct span in the input sentence right? <|||||>> However, from your answer, I think it would be better to use char indexes to identify the correct span in the input sentence right?
I cannot answer in general as I don't know if you have access to the original string for instance.
But if you do have access to it, then `offset_mapping` will be closer to what you expect most of the time. `decode` is a best effort way to represent what the model has seen, but it cannot in general output exactly what you sent in.
The reason is that `tokenizer.encode` is destructive and loses information. A simple example is that some tokenizers start by applying `.lower()`, so we cannot in general recover the capitalization. The same goes for spaces: `decode` will try to add them where they belong, but it cannot work 100% of the time.
Here around punctuation for instance You would like to get `"£4.56"` but you would want `"Hi. How are you doing ?"` (with the extra space after the dot). Since it looks the same to the input ids, `decode` has to make a choice and just use one form.<|||||>Ok, I got exactly your point. Thank you and sorry for the issue, I'll close it.<|||||>No worries, you're not alone having this issue, we're looking into how to communicate overall better the information.<|||||>@Narsil I also have trouble in the same problem. And your explanation is informative. But is there any way to customize this tokenizer behaviors? |
transformers | 16,437 | closed | 1 | s | 03-27-2022 09:32:36 | 03-27-2022 09:32:36 | |
transformers | 16,430 | closed | m2m-100 finetuning messes up lang pairs | ## Issue
Currently, if I attempt to fine-tune M2M100 (many-to-many 100) on one language pair, the training data is convoluted and output translations from other language pairs are messed up. Furthermore, in the compute_metric function, which is run when evaluate() is called and validation data is processed, the model/trainer returns predictions which, when decoded, are in a variety of languages rather than in the target language of the reference text, making the BLEU calculations immensely low on trainer.evaluate(). I am hoping to be able to fine-tune specific language pairs and preserve the rest of the model's accuracy in its other language pairs.
## Environment info
- `transformers` version: 4.17.0
- Platform: `Windows-10-10.0.22000-SP0`
- Python version: 3.9.7
- PyTorch version (GPU?): `1.11.0+cu113 (True)`
- Tensorflow version (GPU?): `2.8.0 (True)`
- Flax version (CPU?/GPU?/TPU?): `not installed (NA)`
- Jax version: `not installed`
- JaxLib version: `not installed`
- Using GPU in script?: `Yes`
- Using distributed or parallel set-up in script?: `No`
- GPU Type: `RTX 3070`
Datasets used: [Totoeba Challenge EN-ES](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/dev/eng-fra/dev.txt)
## Information
Model I am using: **`M2M100 (418M)`**
The problem arises when using my own modified scripts of the official example scripts regarding the fine-tuning of translation models. Guide used is [here](https://huggingface.co/course/chapter7/4)
The tasks I am working on is finetuning the m2m-100 418M model on one language pair while preserving the other language pairs' weights and accuracy
## To reproduce
1. Use `Trainer` to finetune m2m-100 on one language pair with bitext data with script below
OR
1. Load [this finetuned m2m100 huggingface hub model](https://huggingface.co/NDugar/m2m100_418M-fr) and generate predictions for a variety of language pairs. (this model utilized the script below)
3. Try inferencing with english to french, then try influencing from english to russian, or any other language pair. You will receive messed up results with french vocab, punctuation and grammar.
```python
from datasets import load_dataset, load_metric
from transformers import M2M100Tokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer
import torch
raw_datasets = load_dataset("text", data_files="./en-es.txt") #load bitext data file, can download from tatoeba challenge
split_datasets = raw_datasets["train"].train_test_split(train_size=0.9,test_size=0.001, seed=20) #i set the test size really low so I could view compute_metrics and evaluate faster to see messed up predictions clearer
split_datasets["validation"] = split_datasets.pop("test")
metric = load_metric("sacrebleu")
model_checkpoint = "facebook/m2m100_1.2B"
tokenizer = M2M100Tokenizer.from_pretrained(model_checkpoint)
tokenizer.src_lang = "en"
tokenizer.tgt_lang = "es"
max_input_length = 128
max_target_length = 128
def preprocess_function(examples):
inputs = []
targets = []
for ex in examples['text']: #split the dataset as it is still in bitext format (lang1 TAB lang2)
split = ex.split(" ")
if len(split) >= 2: #dont add empty lines or messed up data if there is any
inputs.append(split[2]) #for totoeba challenge, first two inputs in array are the language codes
targets.append(split[3])
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Set up the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(targets, max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
def compute_metrics(eval_preds): #compute bleu metrics of predictions and references
#eval_preds made up op label_ids (reference translations) and predictions (the predicted translation) among other values
reference_text = tokenizer.batch_decode(eval_preds.label_ids, skip_special_tokens=True) #reference translated texts
translated_text = tokenizer.batch_decode(eval_preds.predictions, skip_special_tokens=True)
metric_result = metric.compute(predictions=translated_text, references=reference_text) #compute bleu score
print(metric_result) #will be outrageously low (i got 0.69 BLEU ~)
return {"bleu": metric_result["score"]}
tokenized_datasets = split_datasets.map(
preprocess_function,
batched=True,
remove_columns=split_datasets["train"].column_names,
) #use preprocess function on all data in dataset in batches of 1000
del raw_datasets
del split_datasets #delete the text versions of the dataset as it has been tokenized by now
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) #load m2m100 model
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) #load data collator
batch = data_collator([tokenized_datasets["train"][i] for i in range(1, 3)])
args = Seq2SeqTrainingArguments(
f"m2m100_412M-test",
evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=1,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
eval_accumulation_steps=1,
predict_with_generate=True,
push_to_hub=False,
do_train=True,
do_eval=False,
fp16=True,
)
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
torch.cuda.empty_cache() #ensure torch cache is empty and ready
trainer.evaluate(max_length=max_target_length) #this will return a super super low bleu score, if you breakpoint the compute metrics function you'll see the decoded predictions are not in the target language but in an assortment of random ones, changing for every sentence
trainer.train() #training functions fine, however if you attempt to use the saved model after training from any other lang pair but en - es it will be messed up
```
## Expected behavior
The M2M-100 model and trainer should use the tokenized results to train only that specific language pair (e.g. en-es) instead of shifting all language pairs/weights toward the target language. Hopefully you would be able to fine-tune language pairs while preserving the rest of the model, increasing the accuracy and translation output of specific highly used language pairs while keeping untouched language pairs the same as in the original.
@patrickvonplaten @LysandreJik
| 03-27-2022 01:54:12 | 03-27-2022 01:54:12 | cc @patil-suraj <|||||>Hi @ArtanisTheOne ,
As specified in the [docs](https://huggingface.co/docs/transformers/model_doc/m2m_100), to generate output in a specific language with M2M100, we should set the `forced_bos_token_id` in `config` or pass it to `generate`.
For example to generate translations in Spanish, in your script you should set.
```python
model.config.forced_bos_token_id = tokenizer.lang_code_to_id["es"]
```
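The same thing can also be done per call at inference time, without touching the config (a minimal sketch reusing the `model`/`tokenizer` from the script above; `get_lang_id` follows the documented M2M100 tokenizer API):
```python
# illustrative only: force Spanish as the target language for this particular generate() call
encoded = tokenizer("Others have dismissed him as a joke.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("es"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```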
> Hopefully you would be able to finetune language pairs while preserving the rest of the model, increasing the accuracy and translation output of specific highly used language pairs while keeping untouched language pairs alone and the same as original.
When we fine-tune a multilingual model on a specific language pair, it's very likely that the model's performance on other languages will drop. How to avoid this is more of a research question.
> The m2m-100 model and trainer should use the tokenized results to only train that specific language pair, (eg; en-es) instead of changing all language-pairs/weights with the target language.
It's a single model for all 100 language pairs and there are no language-pair specific weights, so when fine-tuning the model all weights will be updated. <|||||>I understand, I'll try to see if I can modify anything to achieve better results. Thanks for taking the time to respond.<|||||>@ArtanisTheOne, as @patil-suraj said, did that modification bring you any changes in the results, or are there any changes that you made that worked for you?
transformers | 16,429 | closed | Adding missing type hints for mBART model (PyTorch) | Added the missing type hints to the PyTorch implementation of the mBART model
# What does this PR do?
Added type hints for mBART PyTorch as described in https://github.com/huggingface/transformers/issues/16059
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 @gante
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-26-2022 19:08:49 | 03-26-2022 19:08:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, this looks good, and I'm sorry for the delay! I'm guessing the failing tests are due to issues we fixed recently, and would go away if you rebased the PR onto the most recent `huggingface/transformers/main` and force-pushed. If you're not sure how to do that, just ping me here or on Discord and I'll give you instructions! |
transformers | 16,428 | closed | Jia multi gpu eval | # What does this PR do?
This PR provides a script to run the CodeParrot human eval in multi-GPU mode.
The code is an enhanced version of human_eval.py.
# Performance data
The performance is checked using the following command:
```
accelerate launch scripts/human_eval.py --model_ckpt lvwerra/codeparrot-small --do_sample True --temperature 0.2 --top_p 0.95 --n_samples=40 --HF_ALLOW_CODE_EVAL="1" --device_int=0 --num_tasks=8 --batch_size=10
```
One should obtain something similar to
```
Results: {'pass@1': 0.1375, 'pass@10': 0.21251641317430792}
```
# Time benchmark
First configure accelerate
```
accelerate config
```
Remember to configure more than 1 process if you want to leverage multiple GPUs.
Then run
```
accelerate launch scripts/human_eval.py --model_ckpt lvwerra/codeparrot-small --do_sample True --temperature 0.2 --top_p 0.95 --n_samples=200 --HF_ALLOW_CODE_EVAL="1" --batch_size=10
```
One should obtain something similar to
```
Results: {'pass@1': 0.03807926829268292, 'pass@10': 0.056158491636524525, 'pass@100': 0.0739473539338153}
```
Here is time performance benchmark under a GPU VM with 4xT4
| Number of processes | Evaluation time for lvwerra/codeparrot-small |
|---------------------|----------------------------------------------|
| 1 | 2:10:00 |
| 4 | 35:00 |
You could also test the large model by setting `--model_ckpt lvwerra/codeparrot`; you will probably need to set a small batch size (1 usually works).
Warning: when testing with the large model, accelerate loads the model on all the processes, so you would need about 60 GB of RAM on your VM. (I am not sure there is an alternative.)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-26-2022 19:03:50 | 03-26-2022 19:03:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @liyongsea, thanks for your PR! Could you rebase/merge on `main` so that the CI passes? Thank you!<|||||>There seems to be also a minor issue with the code style. The CI enforces some standards and you can fix any formatting issues with:
```bash
make fixup
```
For more information on both rebasing/merging and quality checks you can checkout the [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).<|||||>> There seems to be also a minor issue with the code style. The CI enforces some standards and you can fix any formatting issues with:
>
> ```shell
> make fixup
> ```
>
> For more information on both rebasing/merging and quality checks you can checkout the [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).
Thank you for your feedback @lvwerra, here is what I am going to do:
- [x] Rebase and fix the formatting issue
- [x] use task id to reorganize the output
- [x] remove padding before generation to avoid the 'n_copies' tricks<|||||>hi @lvwerra
I finished the implementation. Please start review. Here are some area you might need to pay extra attention.
- I keep the behavior of sending copies of prompts, such that when generating 1M prediction per prompt, the same task is also distributed across worker
- Padding is removed before generation. However, the code still requires n_tasks x n_samples to be multiple of n_processes. Because the length of the dataloader should be a multiple of n_processes (do you know how to easily avoid this). Do you think it is ok? Or shall I change the behavior?<|||||>@lvwerra Thank you for your suggestions. I learnt a lot ! It is integrated into the PR, I will test it.
> Regarding your second point: I don't think this is true - accelerate should be able to handle that! It just means that during the last batch of jobs not all GPUs will be utilized but I think that is fine. Did you run into any problems? If not we could remove the ValueError.
I have an error message when doing so, I will dig into it<|||||>Hi @lvwerra I updated the code according to your suggestion. And I remove the exception regarding the division issue (please see my comment above).
I really appreciated your time. Tell me if you spot anything.<|||||>You can go the the branch [empty_tensor](https://github.com/liyongsea/transformers/tree/empty_tensor), it is the same PR as this one by remove the if condition and an exception.
Then config accelerate to 4 process and run
```
accelerate launch scripts/human_eval.py --model_ckpt lvwerra/codeparrot-small --do_sample True --temperature 0.2 --top_p 0.95 --n_samples=10 --HF_ALLOW_CODE_EVAL="1" --num_tasks=6 --batch_size=10
```
You will notice something like
```
TypeError: : only integer tensors of a single element can be converted to an index
```
Where accelerate is sending an empty tensor to some processes during the last iteration
I will also take a look into accelerate to see if I find anything<|||||>@lvwerra for the empty tensor issue, I found this in the documentation
https://github.com/huggingface/accelerate/blob/1d95ebdaa40ed42d0bb2b41ba5531b49eb8948b1/src/accelerate/data_loader.py#L204
I am really not sure if it is related. Tomorrow I will make a smaller example to verify this<|||||>Hi @liyongsea , thank you for your contributions. I made a test run on my side and the code works fine!
For an NVIDIA Tesla A100 with 2 GPUs and the parameters you're using, I get the following results
|#processes | Time | pass@1 | pass@10| pass@100|
| - | - | - | - | - |
| 1 | 3:07:26 | 3.41% | 5.39% | 7.01% |
| 2 | 1:36:34 | 3.51% | 5.63% | 6.67% |<|||||>So as the numbers seem to add up I think we can merge this once the `set_seed` is added. The check for empty batches is simple enough so we can leave it like that for now in my opinion - no reason to spend much time on that. Thanks again for adding this! <|||||>> So as the numbers seem to add up I think we can merge this once the `set_seed` is added. The check for empty batches is simple enough so we can leave it like that for now in my opinion - no reason to spend much time on that. Thanks again for adding this!
Hi @lvwerra
I have finished the modifications.
- accelerate.utils.set_seed is used with device_specific=True
- I have updated the accelerated version which solve the empty tensor issue
Feel free to run the whole script again and merge if you are ok ! |
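For reference, the seeding mentioned above would look roughly like this (a sketch, not a verbatim excerpt from the script):
```python
from accelerate.utils import set_seed

# device_specific=True offsets the seed by the process index, so each GPU samples
# different completions while the run as a whole stays reproducible
set_seed(0, device_specific=True)
```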
transformers | 16,427 | closed | add Bigbird ONNX config | # What does this PR do?
Add Bigbird OnnxConfig to make this model available for conversion.
## Who can review?
@lewtun @LysandreJik | 03-26-2022 16:06:10 | 03-26-2022 16:06:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>5040f17eba15504bad66b14a645bddd9b015ebb7 #15622 <|||||>@lewtun Thank you for reviewing my code. I have fixed the merge conflicts and all tests by running this `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "bigbird"` was passed.<|||||>This all looks good to me @vumichien - thanks for iterating! Gently pinging @LysandreJik or @sgugger for final approval |
transformers | 16,426 | closed | Add bigbird onnx config | # What does this PR do?
Adds Bigbird OnnxConfig to make this model available for conversion.
## Who can review?
@michaelbenayoun @lewtun
| 03-26-2022 15:14:50 | 03-26-2022 15:14:50 | |
transformers | 16,425 | closed | Add type hints for PyTorch Models | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Add type hints to as many PyTorch models as possible.
This PR targets the following models to type hint entire files:
- Albert
- Bart
- Bert
- BertGeneration
- BigBird
- BigBirdPegasus
- Canine
- ConvBert
- ConvNext
- CTRL
- Data2VecText
- Data2VecAudio
- Hubert
- Marian
- MBart
- Nystromformer
- Wav2Vec2
- WavLM
- XGLM
- XLMRobertaXL
- Yoso
Any other file that has been edited is a result of running `make fix-copies`.
In the next PR, I will target a few other models to type hint complete files.
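For context, the annotations added follow the pattern already used across the library; roughly (an illustrative sketch, not an excerpt from the diff):
```python
from typing import Optional, Tuple, Union

import torch
from transformers.modeling_outputs import BaseModelOutput


class ExampleModel(torch.nn.Module):
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutput]:
        ...
```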
<!-- Remove if not applicable -->
Fixes #16059
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Rocketknight1 | 03-26-2022 14:36:10 | 03-26-2022 14:36:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16425). All of your documentation changes will be reflected on that endpoint.<|||||>Hello all,
I changed the cookiecutter template code because the `check_copies` script couldn't correct code that got changed to multiple lines.
Changing a method in Bart model:
```py
##########
## From ##
##########
# Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
########
## To ##
########
# Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
def _prepare_decoder_attention_mask(
self,
attention_mask: torch.Tensor,
input_shape: torch.Size,
inputs_embeds: torch.FloatTensor,
past_key_values_length: int,
) -> Optional[torch.Tensor]:
```
The code corrector script would make the following change:
```py
# Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
self,
attention_mask: torch.Tensor,
input_shape: torch.Size,
inputs_embeds: torch.FloatTensor,
past_key_values_length: int,
) -> Optional[torch.Tensor]:
```
Which caused the errors in [one of the test runs](https://github.com/huggingface/transformers/runs/5709841712?check_suite_focus=true).
Just commenting the reason here in case someone is taking a look at this PR later.<|||||>Hello all,
I have updated the code base with type hints for a few models.
I will open a new PR for the remaining models after this one is merged, since this PR is getting bigger.
Thanks
cc: @Rocketknight1 <|||||>
Hello all,
The `Add model like runner` test is failing with an `ImportError` when starting to run the `Run all PyTorch modeling test` section of the tests with the following error:
```bash
ImportError: cannot import name 'get_current_traceback' from 'werkzeug.debug.tbtools' (/home/runner/.local/lib/python3.8/site-packages/werkzeug/debug/tbtools.py)
```
I am unsure as to what is causing the error and any leads on how to resolve this issue would be appreciated.
Thank you<|||||>Wow, this is a huge PR! Did you do this manually, or have you figured out some kind of tool for it?<|||||>Hello @Rocketknight1 ,
Yeah, I made all this manually. This was how I spent my weekend 😛.
<|||||>That's amazing! I'll try to review now.<|||||>This is a huge and very impressive PR, thank you! The main suggestion I have is that bools are not annotated in some cases, e.g. `output_attentions=False` should be `output_attentions: bool = False`, or `output_attentions: Optional[bool] = None` when the default is `None`. I'll try to recruit a couple of people from Huggingface to help me review the whole thing once that's resolved!<|||||>Hello,
Sure, I can also a take a look once again to fix the missing ones.
I had some doubts with a few of the types and I will post them here later to get the types for them and later update.
Thanks for the update and glad you liked the work.<|||||>Absolutely! I saw in some cases `past` was missing annotations - if you're unsure about annotations like that, you can usually check the docstrings for the `past` or `past_key_values` argument for that model - it'll be something like `Tuple[torch.Tensor]`<|||||>Sure, later in the process I figured out the type and I had added for a few files. Fill fix for others as well.<|||||>Note that `past`/`past_key_values` can have different structure in different models!<|||||>Ohhh, thanks for the heads up.<|||||>@karthikrangasai The best way to make sure the type hints are correct is to check the [Model Name]_INPUTS_DOCSTRING, right before the first user interfaced forward method<|||||>Hello @Tegzes ,
I checked that for all forward methods. But it might be possible that I missed it for a few files.
I have type hinted the entire file, from first function to last class. So i might have missed something in other places.<|||||>Hi @karthikrangasai ! This is totally my bad - other PRs came in and I reviewed them without realizing they would create conflicts with your one. Would it be possible to break this PR up into a few separate ones and submit them one at a time? That greatly reduces the chances of conflicts for each one, and it'll make it possible for me to add specific comments/suggestions, whereas at this size I really can just give general advice!<|||||>Hello @Rocketknight1 ,
Yeah sure.
I would like to completely work on the typing issues if that's fine with you ( for all PyTorch Models - complete file).
I will break the PR into multiple ones based on the corrections made or the model that was type hinted.
Should I close this one then ?<|||||>Hi @karthikrangasai, sorry for the delay! Yeah, it's probably easiest to close this one, make new ones and just tag me in them. Thank you! |
transformers | 16,424 | closed | PerceiverIO Attention maps | Hi, I am quite amazed with the usage and the implementation of PerceiverIO so far.
I am trying to visualize the attention maps for an arbitrary image using "PerceiverForImageClassificationConvProcessing".
I want to visualize the attention maps like in https://arxiv.org/pdf/2103.03206.pdf Figure 3.
I can hook into the layers to get attention vectors of the decoder, and most probably I need a pixel-wise multiplication with the query vector to achieve such images. But if there is any other faster approach, I would really like to use it for this purpose.
Thanks | 03-26-2022 14:13:16 | 03-26-2022 14:13:16 | Hi,
You can get the attention scores by passing `output_attentions=True` to the forward method of any model in the Transformers library. See an example of how you can visualize the attention maps of DETR in [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
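For anyone landing here later, a minimal sketch of the `output_attentions=True` suggestion above (the checkpoint and output attribute names are the usual ones for this model family, but double-check them against the model card/docs):
```python
import requests
from PIL import Image
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationConvProcessing

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-conv")
model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs, output_attentions=True)

# tuples with one tensor per layer; cross-attentions relate the latent array to the input
print(len(outputs.attentions), outputs.attentions[-1].shape)
print(len(outputs.cross_attentions), outputs.cross_attentions[0].shape)
```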
transformers | 16,423 | closed | Train on custom translation dataset - machine translation | After my reading the readme in pytorch translation example, it says
> And here is how you would use the translation finetuning on your own files, after adjusting the values for the arguments --train_file, --validation_file to match your setup:
>
> python examples/pytorch/translation/run_translation.py \
> --model_name_or_path t5-small \
> --do_train \
> --do_eval \
> --source_lang en \
> --target_lang ro \
> --source_prefix "translate English to Romanian: " \
> --dataset_name wmt16 \
> --dataset_config_name ro-en \
> --train_file path_to_jsonlines_file \
> --validation_file path_to_jsonlines_file \
> --output_dir /tmp/tst-translation \
> --per_device_train_batch_size=4 \
> --per_device_eval_batch_size=4 \
> --overwrite_output_dir \
> --predict_with_generate
> The task of translation supports only custom JSONLINES files, with each line being a dictionary with a key "translation" and its value another dictionary whose keys is the language pair. For example:_
> { "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } }
> { "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } }
I do have the jsonl file in the format above on my disk, but I just don't know how to get the correct _dataset_name_ and _dataset_config_name_. And I tried to just ignore them, but it pops up:
> run_translation.py --model_name_or_path t5-small --do_train --do_eval --source_lang Cls --target_lang Mdn --source_prefix "translate Classical to Modern: " --train_file C:\Users\Gare\PycharmProjects\Gare\Classical_Chinese\train.jsonl --validation_file C:\Users\Gare\PycharmProjects\Gare\Classical_Chinese\valid.jsonl --output_dir /tmp/Gare-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate
> 03/26/2022 20:47:20 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
> 03/26/2022 20:47:20 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
> _n_gpu=1,
> adafactor=False,
> adam_beta1=0.9,
> adam_beta2=0.999,
> adam_epsilon=1e-08,
> bf16=False,
> bf16_full_eval=False,
> data_seed=None,
> dataloader_drop_last=False,
> dataloader_num_workers=0,
> dataloader_pin_memory=True,
> ddp_bucket_cap_mb=None,
> ddp_find_unused_parameters=None,
> debug=[],
> deepspeed=None,
> disable_tqdm=False,
> do_eval=True,
> do_predict=False,
> do_train=True,
> eval_accumulation_steps=None,
> eval_delay=0,
> eval_steps=None,
> evaluation_strategy=IntervalStrategy.NO,
> fp16=False,
> fp16_backend=auto,
> fp16_full_eval=False,
> fp16_opt_level=O1,
> generation_max_length=None,
> generation_num_beams=None,
> gradient_accumulation_steps=1,
> gradient_checkpointing=False,
> greater_is_better=None,
> group_by_length=False,
> half_precision_backend=auto,
> hub_model_id=None,
> hub_strategy=HubStrategy.EVERY_SAVE,
> hub_token=<HUB_TOKEN>,
> ignore_data_skip=False,
> label_names=None,
> label_smoothing_factor=0.0,
> learning_rate=5e-05,
> length_column_name=length,
> load_best_model_at_end=False,
> local_rank=-1,
> log_level=-1,
> log_level_replica=-1,
> log_on_each_node=True,
> logging_dir=/tmp/Gare-translation\runs\Mar26_20-47-20_GareNg,
> logging_first_step=False,
> logging_nan_inf_filter=True,
> logging_steps=500,
> logging_strategy=IntervalStrategy.STEPS,
> lr_scheduler_type=SchedulerType.LINEAR,
> max_grad_norm=1.0,
> max_steps=-1,
> metric_for_best_model=None,
> mp_parameters=,
> no_cuda=False,
> num_train_epochs=3.0,
> optim=OptimizerNames.ADAMW_HF,
> output_dir=/tmp/Gare-translation,
> overwrite_output_dir=True,
> past_index=-1,
> per_device_eval_batch_size=4,
> per_device_train_batch_size=4,
> predict_with_generate=True,
> prediction_loss_only=False,
> push_to_hub=False,
> push_to_hub_model_id=None,
> push_to_hub_organization=None,
> push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
> remove_unused_columns=True,
> report_to=['tensorboard'],
> resume_from_checkpoint=None,
> run_name=/tmp/Gare-translation,
> save_on_each_node=False,
> save_steps=500,
> save_strategy=IntervalStrategy.STEPS,
> save_total_limit=None,
> seed=42,
> sharded_ddp=[],
> skip_memory_metrics=True,
> sortish_sampler=False,
> tf32=None,
> tpu_metrics_debug=False,
> tpu_num_cores=None,
> use_legacy_prediction_loop=False,
> warmup_ratio=0.0,
> warmup_steps=0,
> weight_decay=0.0,
> xpu_backend=None,
> )
> 03/26/2022 20:47:40 - INFO - datasets.utils.file_utils - HEAD request to https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/jsonl/jsonl.py timed out, retrying... [1.0]
> 03/26/2022 20:48:45 - INFO - datasets.utils.file_utils - HEAD request to https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py timed out, retrying... [1.0]
> Traceback (most recent call last):
> File "examples/pytorch/translation/run_translation.py", line 624, in <module>
> main()
> File "examples/pytorch/translation/run_translation.py", line 322, in main
> raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
> File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\datasets\load.py", line 1671, in load_dataset
> **config_kwargs,
> File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\datasets\load.py", line 1492, in load_dataset_builder
> data_files=data_files,
> File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\datasets\load.py", line 1237, in dataset_module_factory
> ) from None
> FileNotFoundError: Couldn't find a dataset script at C:\Users\Gare\Desktop\transformers\jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py
So what how can I help with it to just get my data trained? | 03-26-2022 13:00:01 | 03-26-2022 13:00:01 | Hi @Gare-Ng , if you have the dataset in jsonl files
I don't think you need the `dataset_name` and `dataset_config_name` parameters; you should use this instead to run the script:
```
--train_file train.json \
--validation_file dev.json \
--test_file test.json \
```<|||||>@ToluClassics Thanks for your prompt reply, that is exactly what I believe it ought to be. But it still pops up the same error
> Traceback (most recent call last):
File "run_translation.py", line 624, in <module>
main()
File "run_translation.py", line 322, in main
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\datasets\load.py", line 1671, in load_dataset
**config_kwargs,
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\datasets\load.py", line 1492, in load_dataset_builder
data_files=data_files,
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\datasets\load.py", line 1237, in dataset_module_factory
) from None
FileNotFoundError: Couldn't find a dataset script at C:\Users\Gare\PycharmProjects\Gare\transformers\examples\pytorch\translation\jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py
I know that I need a config file to tell it some details about training, but I just do not know how.<|||||>I finally found a solution 😎 It seems that Hugging Face removed support for jsonl, but its readme remains unchanged, which misled me for quite a long time 😤
https://github.com/huggingface/transformers/issues/10820<|||||>Currently, running https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py with a custom JSON dataset still fails if the file extension is `jsonl`.
The script itself has a check for file extensions that incorrectly accepts `jsonl`:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py#L238
This issue could be re-opened until fixed<|||||>I am facing the same issue. Any solutions? |
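For later readers, the workaround referenced in the linked issue amounts to renaming the files so the script picks the `json` dataset builder (a sketch with hypothetical paths; the content stays in JSON-lines format):
```python
from pathlib import Path

for name in ("train", "valid"):
    src = Path(f"{name}.jsonl")
    if src.exists():
        src.rename(src.with_suffix(".json"))  # then pass --train_file train.json etc.
```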
transformers | 16,422 | closed | Add TF implementation of XGLM model | # 🚀 Feature request
Add TF implementation of XGLM model
## Motivation
Narrow the gap between PT and TF implementations.
## Your contribution
I'd like to work on this. | 03-26-2022 12:47:11 | 03-26-2022 12:47:11 | Thanks a lot for willing to work on this, happy to help with it :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still active<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>still active |
transformers | 16,421 | closed | Fix typo in language modeling example comment | # What does this PR do?
Fix typo in comment: proprocessed -> preprocessed
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review this PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-26-2022 11:57:52 | 03-26-2022 11:57:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,420 | closed | Providing a easier way to download pre-trained models. | # 🚀 Feature request
I tried to download T5-base manually because of my special environment. I tried to git clone T5-base from https://huggingface.co/t5-base. The model in PyTorch (which I am using) is only about 800M. However, I need to download files used in other DL frameworks (like TensorFlow). Also, I need to download the git files (.git directory). Finally, I need to download about 7G files, which is much larger than the model itself.
So, could there be a way to manually download pre-trained models in a specific DL framework only?
## Motivation
I need to download many more files than the model itself, which is too slow.
## Your contribution
| 03-26-2022 11:34:52 | 03-26-2022 11:34:52 | Hello! I recommend using `snapshot_download`, from the `huggingface_hub` package:
```
from huggingface_hub import snapshot_download
```
See https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/snapshot_download.py#L21, especially the `allow_regex` and `ignore_regex` which should allow you to specify which files exactly you'd like to get.<|||||>@LysandreJik is 100% right, or an alternative if it's just a one-off download is you could just download individual files from the `Files and versions` tab (note that depending on your browser you might need to rename the files):
<img width="1833" alt="Screenshot 2022-03-28 at 12 04 16" src="https://user-images.githubusercontent.com/326577/160375553-4f920b72-8644-4300-a0b6-d782b0bafd91.png">
<|||||>Thank you guys a lot! |
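Putting the suggestions above together, a minimal sketch (the weight filenames are the usual ones for this kind of repo, and the `allow_regex`/`ignore_regex` arguments were later renamed to `allow_patterns`/`ignore_patterns`, so check your `huggingface_hub` version):
```python
from huggingface_hub import snapshot_download

# download everything except the non-PyTorch weight files
local_dir = snapshot_download(
    "t5-base",
    ignore_regex=["tf_model.h5", "flax_model.msgpack", "rust_model.ot"],
)
print(local_dir)
```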
transformers | 16,419 | closed | `Wav2Vec2FeatureExtractor` and `Speech2TextFeatureExtractor` tests fail for truncating `max_length` values | # Misleading use of `max_length` in `FeatureExtraction` classes
**TL;DR**
- Currently, setting `max_length` and having features be longer than `max_length` does not necessarily mean it should be `truncation=True`.
- Further look into testing code makes me believe that this is by design, as explained [below](https://github.com/huggingface/transformers/issues/16419#issuecomment-1079705990).
- I believe this is confusing because padding to `length==X` for shorter sequences and not truncating sequences greater than `X` is what you'd expect if `X` is `min_length`, not `max_length`.
---
The current tests for the existing `FeatureExtractor` classes are using somewhat meaningless `max_length` values that are greater than length of the input features.
**When using lower values that truncate the input features (for a meaningful effect of `max_length` parameter), the test code fails.**
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-3.10.0-1062.el7.x86_64-x86_64-with-debian-10.1
- Python version: 3.7.9
- Huggingface_hub version: 0.2.0
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
- Jax version: 0.3.1
- JaxLib version: 0.3.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @anton-l
## Information
Model I am using (Bert, XLNet ...): `Wav2Vec2FeatureExtractor` and `Speech2TextFeatureExtractor`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
### Existing Working Test 1
From `tests/wav2vec2/test_feature_extraction_wav2vec2.py` line 127:
```python
def test_zero_mean_unit_variance_normalization_np(self):
feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
paddings = ["longest", "max_length", "do_not_pad"]
max_lengths = [None, 1600, None]
for max_length, padding in zip(max_lengths, paddings):
processed = feat_extract(speech_inputs, padding=padding, max_length=max_length, return_tensors="np")
input_values = processed.input_values
self._check_zero_mean_unit_variance(input_values[0][:800])
self.assertTrue(input_values[0][800:].sum() < 1e-6)
self._check_zero_mean_unit_variance(input_values[1][:1000])
self.assertTrue(input_values[0][1000:].sum() < 1e-6)
self._check_zero_mean_unit_variance(input_values[2][:1200])
```
### Failing Test 1 (after changing `max_lengths`)
```python
def test_zero_mean_unit_variance_normalization_np(self):
feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
paddings = ["longest", "max_length", "do_not_pad"]
max_lengths = [None, 100, None]
for max_length, padding in zip(max_lengths, paddings):
processed = feat_extract(speech_inputs, padding=padding, max_length=max_length, return_tensors="np")
....
```
### Existing working test 2
From `tests/speech_to_text/test_feature_extraction_speech_to_text.py` Line 140
```python
def test_cepstral_mean_and_variance_normalization(self):
feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
paddings = ["longest", "max_length", "do_not_pad"]
max_lengths = [None, 16, None]
for max_length, padding in zip(max_lengths, paddings):
inputs = feature_extractor(
speech_inputs, padding=padding, max_length=max_length, return_attention_mask=True
)
input_features = inputs.input_features
attention_mask = inputs.attention_mask
fbank_feat_lengths = [np.sum(x) for x in attention_mask]
self._check_zero_mean_unit_variance(input_features[0][: fbank_feat_lengths[0]])
self._check_zero_mean_unit_variance(input_features[1][: fbank_feat_lengths[1]])
self._check_zero_mean_unit_variance(input_features[2][: fbank_feat_lengths[2]])
```
### Failing test 2
```python
def test_cepstral_mean_and_variance_normalization(self):
feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
paddings = ["longest", "max_length", "do_not_pad"]
max_lengths = [None, 4, None]
for max_length, padding in zip(max_lengths, paddings):
inputs = feature_extractor(
speech_inputs, padding=padding, max_length=max_length, return_attention_mask=True
)
input_features = inputs.input_features
....
```
For identical error messages:
```
if self.do_ceptral_normalize:
attention_mask = (
np.array(attention_mask, dtype=np.int32)
> if self._get_padding_strategies(padding, max_length=max_length) is not PaddingStrategy.DO_NOT_PAD
else None
)
E ValueError: setting an array element with a sequence.
```
## Why It Fails
The working tests produce `attention_mask` values like this:
```
[[1 1 1 0 0 0]
[1 1 1 1 0 0]
[1 1 1 1 1 1]]
```
While the failing tests produce `attention_mask` values **at the same point in the code** like this:
```
[array([1, 1, 1, 0], dtype=int32) array([1, 1, 1, 1], dtype=int32)
array([1, 1, 1, 1, 1, 1], dtype=int32)]
```
A very easy fix would just be doing type checking and changing the latter type to the same as the former.
But maybe it's worth looking into `SequenceFeatureExtractor`'s `pad` function that returns `List[np.array]` for truncated outputs and `List[List]` for untruncated outputs.
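A tiny standalone NumPy sketch of why the second, ragged shape blows up (the exact error wording varies with the NumPy version):
```python
import numpy as np

rectangular = [[1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1]]
np.array(rectangular, dtype=np.int32)  # fine: all rows have the same length

ragged = [np.array([1, 1, 1, 0]), np.array([1, 1, 1, 1]), np.array([1, 1, 1, 1, 1, 1])]
np.array(ragged, dtype=np.int32)  # ValueError: setting an array element with a sequence.
```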
<!-- A clear and concise description of what you would expect to happen. -->
| 03-26-2022 11:11:00 | 03-26-2022 11:11:00 | So despite the `max_length` parameter, it's not actually truncating (rather, it's forgetting to for the last item):
```
[
array([1, 1, 1, 0], dtype=int32)
array([1, 1, 1, 1], dtype=int32)
array([1, 1, 1, 1, 1, 1], dtype=int32)
]
```
The above is a list of "ragged" nested sequences, and turns out:
```
feature_extraction_utils.py:172: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a
list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must
specify 'dtype=object' when creating the ndarray.
```<|||||>Okay, so it seems like this behavior is by design:
https://github.com/huggingface/transformers/blob/main/tests/test_sequence_feature_extraction_common.py#L244
```python
# truncate to smallest
input_1 = feat_extract.pad(
processed_features, padding="max_length", max_length=len(speech_inputs[0]), truncation=True
)
input_1 = input_1[input_name]
input_2 = feat_extract.pad(processed_features, padding="max_length", max_length=len(speech_inputs[0]))
input_2 = input_2[input_name]
self.assertTrue(_inputs_have_equal_length(input_1))
# suggests we don't want equal lengths, despite max_length, since we didn't set `truncation=True`
self.assertFalse(_inputs_have_equal_length(input_2))
```
but isn't this confusing?
Rather, the current behavior is akin to `min_length`, where we pad all the inputs that are shorter than `max_length` to be equal to `max_length`, and **not truncate** longer sequences.
Found this while making a small PR to add and adjusting the test codes accordingly:
```python
def pad(self, ....):
if truncation is False and max_length is not None:
raise ValueError(
"If `max_length` is provided, `trunction` must be True."
)
...
# tests/speech_to_text/test_feature_extraction_speech_to_text.py
paddings = ["longest", "max_length", "do_not_pad"]
max_lengths = [None, 4, None]
for max_length, padding in zip(max_lengths, paddings):
do_truncation = max_length is not None
inputs = feature_extractor(
speech_inputs, padding=padding, max_length=max_length,
return_attention_mask=True, truncation=do_truncation
)
```
Could someone let me know if I should continue?
<|||||>@cwkeam Indeed, this logic might be confusing for sequence feature extraction in particular!
However, the fact that setting `max_length` doesn't make `truncate=True` stems from the base padding and truncation strategies of `transformers` (see for example [here](https://github.com/huggingface/transformers/blob/b320d87eceb369ea22d5cd73866499851cb2cca3/docs/source/pad_truncation.mdx)).
Maybe @LysandreJik or @sgugger have a nicer explanation for why that's the case?<|||||>Transformers does not do any magic behind the scenes. If you want truncation, you have to enable it. If you want padding, you have to enable it. `max_length` itself doesn't set any of those because what default are we supposed to take? There are several truncation strategies even when a max length is set (to deal with pairs of sentences for instance).
Our design choice, in accordance with our [philosophy](https://huggingface.co/docs/transformers/philosophy) was to avoid any magic default and let the user manually control everything. I agree it can be confusing and the documentation could definitely be improved (if you wanted to make a contribution ;-) )<|||||>Right, thanks for kindly explaining! I think I had to understand that `padding` and `truncation` must be two well-separated functionalities. That makes sense.
Thank you! |
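To make the "enable everything explicitly" point above concrete, a minimal runnable sketch:
```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0)
speech_inputs = [np.random.randn(n).astype(np.float32) for n in (800, 1000, 1200)]

batch = feature_extractor(
    speech_inputs,
    padding="max_length",
    max_length=1000,
    truncation=True,  # opted into explicitly; max_length alone does not truncate
    return_tensors="np",
)
print(batch.input_values.shape)  # (3, 1000) once both padding and truncation are requested
```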
transformers | 16,418 | closed | [Flax] Improve Robustness of Back-Prop Tests | This PR makes several modifications to the `test_freeze_feature_encoder` tests for two Flax models, FlaxWav2Vec2 and FlaxSpeechEncoderDecoderModel. This test is in place to verify that feature encoder of the FlaxWav2Vec2 model is correctly frozen when the keyword argument `freeze_feature_encoder` is set to `True`. This is achieved through one dummy forward and backward pass, in which the losses and gradients are compared for the case when the feature encoder is frozen and the case when it is not.
The modifications made include:
- Implementing a check that asserts the outputs of the unfrozen and frozen feature encoder models are _precisely equal_;
- Asserting that the losses of the unfrozen and frozen models are _precisely equal_, instead of _almost equal_ as was previously;
- Asserting that the gradients of the unfrozen feature encoder layers contain at least one non-zero entry, instead of asserting that at least one entry that surpasses a specified tolerance;
- Asserting that the gradients of all unfrozen layers remain _precisely equal_, instead of _almost equal_ as was previously.
The statements that assert unfrozen and frozen gradient values are _precisely equal_ instead of _almost equal_ provides more rigour to the back-propagation test. It verifies that `jax.stop_gradient` has no impact on the forward or backward pass, other than skipping computation of the gradients of any frozen layers. **Crucially**, these changes bypass the need to set a specified tolerance (`tol`) in the `assert_almost_equal` or `assert_difference` functions. Even with seemingly appropriate values for this tolerance (`1e-5`), the tests were prone to failing a small proportion of the time (<2.5%). With this PR, the tests should now pass 100% of the time. | 03-26-2022 11:01:55 | 03-26-2022 11:01:55 | _The documentation is not available anymore as the PR was closed or merged._ |
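A toy illustration (not code from the PR) of why exact equality is a reasonable expectation when the only difference between the two models is an inserted `stop_gradient`:
```python
import jax
import jax.numpy as jnp

def loss(params, x):
    h = jnp.tanh(x @ params["enc"])      # stand-in for the feature encoder
    return jnp.sum(h @ params["head"])   # stand-in for the rest of the model

def loss_frozen(params, x):
    h = jnp.tanh(x @ params["enc"])
    h = jax.lax.stop_gradient(h)         # freeze everything upstream of h
    return jnp.sum(h @ params["head"])

params = {"enc": jnp.ones((4, 3)), "head": jnp.ones((3, 2))}
x = jnp.ones((5, 4))

g = jax.grad(loss)(params, x)
g_frozen = jax.grad(loss_frozen)(params, x)

assert (g_frozen["enc"] == 0).all()              # frozen part receives zero gradients
assert (g["head"] == g_frozen["head"]).all()     # unfrozen gradients are bitwise identical
assert loss(params, x) == loss_frozen(params, x) # forward pass is unchanged
```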
transformers | 16,417 | closed | RuntimeError: params[0] in this process with sizes [253991, 1024] appears not to match sizes of the same param in process 0 | I am fine tuning masked language model from XLM Roberta large on google machine specs. `I have extended vocabulary by adding extra tokens.
`
I am using a pre-trained Hugging Face model.
I launch it as a train.py file which I copy inside a Docker image, and use Vertex AI (GCP) to launch it using a ContainerSpec:
machineSpec = MachineSpec(machine_type="a2-highgpu-4g",accelerator_count=4,accelerator_type="NVIDIA_TESLA_A100")
`python -m torch.distributed.launch --nproc_per_node 4 train.py --gradient_accumulation_steps 16 --per_device_train_batch_size 4 --optim adamw_hf --tf32 --bf16"])
`
`device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')`
`Traceback (most recent call last):\n', ' File "train.py", line 215, in <module>\n trainer.train()\n', ' File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1258, in train\n model = self._wrap_model(self.model_wrapped)\n', ' File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1088, in _wrap_model\n **kwargs,\n', ' File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 641, in __init__\n dist._verify_params_across_processes(self.process_group, parameters)\n', 'RuntimeError: params[0] in this process with sizes [253991, 1024] appears not to match sizes of the same param in process 0.\n'`
```
torch==1.11.0+cu113
torchvision==0.12.0+cu113
torchaudio==0.11.0+cu113
transformers==4.17.0
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: Yes
```
### Who can help
@sgugger
Models:
- Roberta-xlm-large
Library:
- Tokenizers: @SaulLu
- Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
tokenizer = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large",local_files_only=True)
tokenizer.add_tokens(joined_keywords)
tokenizer_org = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large",local_files_only=True)
model = tr.XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-large", return_dict=True,local_files_only=True)
# add embedding params for new vocab words
model.resize_token_embeddings(len(tokenizer))
weights = model.roberta.embeddings.word_embeddings.weight
# initialize new embedding weights as mean of original tokens
with torch.no_grad():
emb = []
for i in range(len(joined_keywords)):
word = joined_keywords[i]
# first & last tokens are just string start/end; don't keep
tok_ids = tokenizer_org(word)["input_ids"][1:-1]
tok_weights = weights[tok_ids]
# average over tokens in original tokenization
weight_mean = torch.mean(tok_weights, axis=0)
emb.append(weight_mean)
weights[-len(joined_keywords):,:] = torch.vstack(emb).requires_grad_()
# tokenizer.convert_ids_to_tokens(encoded_input["input_ids"][0])
tokenizer_out_files = tokenizer.save_pretrained("tokenizer_xlm")
```
`model.to(device)`
`train_encodings = tokenizer(train_df, truncation=True, padding=True, max_length=512, return_tensors="pt")`
```
class SEDataset(torch.utils.data.Dataset):
def __init__(self, encodings):
self.encodings = encodings
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
return item
def __len__(self):
return len(self.encodings["attention_mask"])
train_data = SEDataset(train_encodings)
# print("train data created")
training_args = tr.TrainingArguments(
output_dir='results_mlm_vocab_exp'
,logging_dir='logs_mlm_vocab_exp' # directory for storing logs
,save_strategy="epoch"
# ,run_name="MLM_Exp1"
,learning_rate=2e-5
,logging_steps=2000
,overwrite_output_dir=True
,num_train_epochs=20
,per_device_train_batch_size=4
,prediction_loss_only=True
,gradient_accumulation_steps=16
# ,sharded_ddp='zero_dp_3'
# ,gradient_checkpointing=True
,bf16=True #Ampere GPU
# ,fp16=True
,optim="adamw_hf"
# ,dataloader_num_workers=20
# ,logging_strategy='no'
# per_device_train_batch_size
# per_gpu_train_batch_size
# disable_tqdm=True
)
# print("training sample is 200001")
# print("Included ,gradient_accumulation_steps=8 ,bf16=True and per_device_train_batch_size=16 " )
print("start time",start)
trainer = tr.Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_data
)
# print("training to start without bf16")
trainer.train()
```
| 03-26-2022 10:11:41 | 03-26-2022 10:11:41 | As the error says, a parameter in your model is not the same size in all processes. It looks like it's your number of tokens, zso make sure you are doing the same preprocessing on each process.<|||||>> As the error says, a parameter in your model is not the same size in all processes. It looks like it's your number of tokens, zso make sure you are doing the same preprocessing on each process.
Hi @sgugger, thanks for quick response.
But, when you say process how should I do it same on all processes?
I execute my `train.py` by this line:
`python -m torch.distributed.launch --nproc_per_node 4 train.py --gradient_accumulation_steps 16 --per_device_train_batch_size 4 --optim adamw_hf --tf32 --bf16"`
**And it internally calls 4 GPU's of A100. Do I need to pass some parameters here or in train.py?**
Inside train.py I simply do:
`device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')`
<|||||>You should ensure the code you execute will result in the same preprocessing for the dataset on the fours GPUs. This is currently not the case.
This is not a bug in Transformers AFAICT, so if you need for help, I'd suggest asking on the [forums](https://discuss.huggingface.co/) :-)<|||||>> You should ensure the code you execute will result in the same preprocessing for the dataset on the fours GPUs. This is currently not the case.
>
> This is not a bug in Transformers AFAICT, so if you need for help, I'd suggest asking on the [forums](https://discuss.huggingface.co/) :-)
@[sgugger](https://github.com/sgugger)
Ok. Can you point me where in the code above should I fix this. It will be great help. I will definitely post in forum.
Also, when I didn't use `.add_tokens` then I was able to do multi GPU training. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
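For later readers, a sketch of what "same preprocessing on each process" can mean here (hypothetical keyword list; not the poster's confirmed fix): make the added-token list identical and identically ordered on every rank before the model is wrapped by DDP.
```python
import transformers as tr

tokenizer = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large")
model = tr.XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-large")

joined_keywords = ["tokA", "tokB", "tokC"]  # hypothetical stand-in for the real keyword list

new_tokens = sorted(set(joined_keywords))   # deterministic order and content on every rank
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# every DDP process must end up with the same embedding size, otherwise the
# "params[0] ... appears not to match" error above is raised when the model is wrapped
print(model.get_input_embeddings().weight.shape[0])
```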
transformers | 16,416 | closed | added typehints for RAG pytorch models | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adding type hints for RAG pytorch model as a part of " Add missing type hints #16059 " issue.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Rocketknight1 | 03-26-2022 06:36:27 | 03-26-2022 06:36:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,415 | closed | Can't load config for 'DeepPavlov/xlm-roberta-large-en-ru' | Similar to this issue: https://github.com/huggingface/transformers/issues/6226
I'm unable to load a config for a specific model when I run the following:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/xlm-roberta-large-en-ru")
model = AutoModel.from_pretrained("DeepPavlov/xlm-roberta-large-en-ru")
```
Version:
```python
>>> import transformers
>>> transformers.__version__
'4.17.0'
```
Then I get this error from [this part](https://github.com/huggingface/transformers/blob/main/src/transformers/configuration_utils.py#L634) of the code
```
OSError: Can't load config for 'DeepPavlov/xlm-roberta-large-en-ru'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'DeepPavlov/xlm-roberta-large-en-ru' is the correct path to a directory containing a config.json file
``` | 03-25-2022 23:43:32 | 03-25-2022 23:43:32 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It's not completed. Just never received a response.<|||||>I'm facing the same issue . Please let me know if you have resolved it @doyled-it<|||||>> I'm facing the same issue . Please let me know if you have resolved it @doyled-it
I believe what I did was git clone the huggingface repository and then point to it when I try to load the model instead of forcing huggingface to download it for me. |
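For reference, a rough sketch of that workaround (the local path is illustrative):

```python
# Assumes the model repo was cloned locally first, e.g. with
#   git lfs install && git clone https://huggingface.co/DeepPavlov/xlm-roberta-large-en-ru
from transformers import AutoTokenizer, AutoModel

local_dir = "./xlm-roberta-large-en-ru"  # path to the cloned repo
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModel.from_pretrained(local_dir)
```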
transformers | 16,414 | closed | Removed inputs_processing and replaced with decorator for lxmert | # What does this PR do?
Replaces usage of `inputs_processing` with decorator `unpack_input` in `lxmert`
See #16051
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante | 03-25-2022 19:42:12 | 03-25-2022 19:42:12 | All tests pass after running `RUN_SLOW=1 py.test -vv tests/lxmert/test_modeling_tf_lxmert.py`
```
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_attention_outputs PASSED [ 2%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_compile_tf_model PASSED [ 5%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_config PASSED [ 8%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_determinism <- tests\test_modeling_tf_common.py PASSED [ 11%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_forward_signature <- tests\test_modeling_tf_common.py PASSED [ 14%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_generate_with_headmasking <- tests\test_modeling_tf_common.py PASSED [ 17%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_headmasking <- tests\test_modeling_tf_common.py PASSED [ 20%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_hidden_states_output PASSED [ 23%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_initialization <- tests\test_modeling_tf_common.py PASSED [ 26%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_inputs_embeds <- tests\test_modeling_tf_common.py PASSED [ 29%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keras_save_load <- tests\test_modeling_tf_common.py PASSED [ 32%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keyword_and_dict_args <- tests\test_modeling_tf_common.py PASSED [ 35%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests\test_modeling_tf_common.py PASSED [ 38%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests\test_modeling_tf_common.py PASSED [ 41%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_random_beam_search_generate <- tests\test_modeling_tf_common.py PASSED [ 44%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_random_no_beam_search_generate <- tests\test_modeling_tf_common.py PASSED [ 47%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_load_with_mismatched_shapes <- tests\test_modeling_tf_common.py PASSED [ 50%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_loss_computation <- tests\test_modeling_tf_common.py PASSED [ 52%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lxmert_for_pretraining PASSED [ 55%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lxmert_model PASSED [ 58%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_common_attributes PASSED [ 61%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_from_pretrained SKIPPED (test is slow) [ 64%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_main_input_name <- tests\test_modeling_tf_common.py PASSED [ 67%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence <- tests\test_modeling_tf_common.py PASSED [ 70%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_numpy_arrays_inputs <- tests\test_modeling_tf_common.py PASSED [ 73%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_onnx_compliancy <- tests\test_modeling_tf_common.py PASSED [ 76%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_onnx_runtime_optimize <- tests\test_modeling_tf_common.py SKIPPED (test is slow) [ 79%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_pt_tf_model_equivalence PASSED [ 82%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_resize_token_embeddings <- tests\test_modeling_tf_common.py PASSED [ 85%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_save_load PASSED [ 88%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_save_load_config <- tests\test_modeling_tf_common.py PASSED [ 91%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation PASSED [ 94%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation_extended SKIPPED (test is slow) [ 97%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelIntegrationTest::test_inference_masked_lm SKIPPED (test is slow) [100%]
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi again @gante remade all the changes in this fresh PR..so sorry that you need to review once again. In turn I realized I need to learn more `git` 😅 |
transformers | 16,413 | closed | `sequences_scores` does not compute the expected quantity for beam search | ## Environment info
- `transformers` version: v.4.17.0
Reported issue is with the implementation of the algorithm, and thus is platform independent.
### Who can help
@patrickvonplaten
## Information
From reading the code, it seems the user expectation for the beam search implementation should be that sequences are ranked by the average per-token log-prob of the generated tokens (at least when `length_penalty==1`, which is the default).
This, however, is not really the case for (purely) autoregressive models such as GPT-2, **when there is a prompt with many tokens**.
To see where the issue is coming from, recall this piece of code from [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_beam_search.py#L829).
```python
def add(self, hyp: torch.LongTensor, sum_logprobs: float):
"""
Add a new hypothesis to the list.
"""
score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty)
```
`sum_logprobs` is the sum of the log-probs of the **generated tokens**, and does not include the log-probs of any tokens in the prompt. On the other hand, `hyp.shape[-1]` is the total length of the prompt plus the generated tokens. So when `length_penalty==1` (the default), the resulting `score` is not the average per-token log-prob of the generated tokens.
Note the encoder-decoder models don't have this issue, since the prompt/input is not really part of the output sequence.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
import torch
import transformers

print('test gpt2')
tokenizer = transformers.GPT2Tokenizer.from_pretrained('gpt2')
model = transformers.GPT2LMHeadModel.from_pretrained('gpt2')
inputs = tokenizer.encode('Tom is ', return_tensors='pt', add_special_tokens=False)
outputs = model.generate(
inputs=inputs, output_scores=True,
num_beams=3, num_return_sequences=1, return_dict_in_generate=True, max_length=15, do_sample=False,
forced_eos_token_id=tokenizer.eos_token_id, length_penalty=1
)
sequences = outputs.sequences
sequences_scores = outputs.sequences_scores
scores = outputs.scores
beam_indices = outputs.beam_indices
beam_scores = model.compute_transition_beam_scores(sequences=sequences, scores=scores, beam_indices=beam_indices)
term1 = beam_scores.sum() / (sequences[0] != 50256).sum()
term2 = sequences_scores[0]
print(f'diff: {(term1 - term2).abs()}')
torch.testing.assert_allclose(term1, term2, atol=1e-4, rtol=0)
# Expected actual sequences_scores
exp1 = beam_scores.sum() / (beam_scores != 0.).sum()
exp2 = beam_scores.mean()
print(f'expected sequences_scores: {exp1} or {exp2}')
```
## Expected behavior
Having `sequences_scores` return the actual average per-token log-prob for the generated sequence would be helpful when one tries to report results for the generated sequences.
This likely requires changing how `beam_scores` is initialized in `beam_search` [here](https://github.com/huggingface/transformers/blob/aa4c0a86dcb4387513074d0c7d60fb9caccb0880/src/transformers/generation_utils.py#L2139).
| 03-25-2022 19:02:13 | 03-25-2022 19:02:13 | Hey @lxuechen,
I don't think we can really change this in the code due to backward-compatibility concerns. What you could do is:
- You can set `self.length_penalty = 0` to not apply any penalty and thus get the correct log probs.
- You could also run a forward pass on the prompt first; from its logits you can derive the log-probabilities of the prompt tokens.
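A rough sketch of the first option, reusing the variables from the snippet in the issue (illustrative only, not an official recipe):

```python
# With length_penalty=0, `sequences_scores` is just the sum of the generated tokens'
# log-probs, so an average over *generated* tokens can be recovered like this:
gen_scores = model.compute_transition_beam_scores(
    sequences=outputs.sequences, scores=outputs.scores, beam_indices=outputs.beam_indices
)
num_generated = (gen_scores != 0.0).sum(dim=-1)               # count only real generated tokens
avg_logprob_per_generated_token = gen_scores.sum(dim=-1) / num_generated
```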
This test might also be helpful: https://github.com/huggingface/transformers/blob/9fd5e6bbe605941b707b0e1aa223a5c51c183550/tests/generation/test_generation_utils.py#L2177<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,412 | closed | Use doc builder styler | # What does this PR do?
This PR removes the `style_doc` script to rely on the `doc-builder style` command instead. The only change (in commands.lfs) is tiny and not unwelcome. | 03-25-2022 17:59:50 | 03-25-2022 17:59:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,411 | closed | How to calculate GPT-2 sentence loss (for each sentence) if batch has 2 or more sentences? | I need to calculate the loss for a batch of sentences, but when I do this I only get the average loss over all the sentences, not the individual losses.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer
    tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
    model = GPT2LMHeadModel.from_pretrained('distilgpt2')
    input_ids = tokenizer.encode("Tom likes pizza.", return_tensors="pt").repeat(2, 1)  # toy batch of 2 sequences
    loss_list = model(input_ids, labels=input_ids)  # .loss is a single averaged value, not one loss per sentence
Is it possible to somehow calculate the loss of each sentence separately within one batch, or do I have to use batch_size=1, which will be much slower?
| 03-25-2022 17:12:07 | 03-25-2022 17:12:07 | Hello,
[Not an official answer, I'm not part of the team]
The `GPT2LMHeadModel` uses the `CrossEntropyLoss` with its default parameters. The default for `reduction` is 'mean', so the loss is averaged. One thing that you can do is to define your custom `GPT2LMHeadModel` and replace this:
```
loss_fct = CrossEntropyLoss()
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```
with:
```
loss_fct = CrossEntropyLoss(reduction='none')
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```
This gives you the loss for every token. If you want the average by sequence, you need to undo the flattening first:
```
loss_by_sequence = loss.view(logits.size(0), -1).mean(dim=1)
```
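Alternatively, here is a sketch that avoids editing the model file; it recomputes the same per-token losses outside the model from the returned logits (reusing `model` and `input_ids` from above):

```python
import torch.nn.functional as F

logits = model(input_ids).logits                      # no labels, so no internal loss reduction
shift_logits = logits[:, :-1, :]
shift_labels = input_ids[:, 1:]
per_token_loss = F.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)),
    shift_labels.reshape(-1),
    reduction="none",
).view(shift_labels.size())
per_sequence_loss = per_token_loss.mean(dim=1)        # one value per sequence in the batch
```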
Be aware that I say "by sequence" and not "by sentence" because that depends on how you structure the inputs and nothing prevents a sequence from containing more than 1 sentence.<|||||>Awesome. Thanks very much. This should speed up things considerably.
So I need to change this in the file transformers/models/gpt2/modeling_gpt2.py and I get a loss for every token. So if I have 2 sequences, do I get this: [ [loss_word1a, loss_word2a, ...], [loss_word1b, loss_word2b, ...] ]?
<|||||>Yes, that's what you'd get in `loss`.
In `loss_by_sequence` you'd get [loss_seq1, loss_seq2, ...]<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,410 | open | Edgeformer | # 🌟 New model addition
## Model description
[EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation](https://arxiv.org/abs/2202.07959)
EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation. Tao Ge and Furu Wei
March 2022: release code and pretrained checkpoints.
## Open source status
* [x] the model implementation is available: https://github.com/microsoft/unilm/blob/900f5416c8137a753b1c8f53cd5015d0ceca7061/edgelm/fairseq/models/transformer/transformer_legacy.py#L226
* [x] the model weights are available: https://github.com/microsoft/unilm/tree/900f5416c8137a753b1c8f53cd5015d0ceca7061/edgelm#pretrained-models
* [x] who are the authors: Maybe @gitnlp
Happy to help with a model contribution here! | 03-25-2022 15:23:46 | 03-25-2022 15:23:46 | @patrickvonplaten If I can work on this and contribute, do let me know. Meanwhile, I will proceed to read and understand the paper.<|||||>Hey @reichenbch feel free to work on this if you are interested. Patrick is on vacation this week so I would be happy to help with this :) <|||||>@patil-suraj I was thinking first to read the paper once and then look into the available implementation (they are using fairseq library) and checkpoints. Is that the correct approach ? Secondly, what all things would I need for this ? Any model creation guide available ? I know model templates are available in the repo.<|||||>For implementing the model I would suggest a code-first approach. My approach is to
- First setup the original code base
- Load model and be able to do inference and inspect the outputs, intermediate values.
- Add the modeling code for transformers
- Convert the weights
- Do a forward pass with both the original model and the transformers model and compare whether the outputs match; if not, debug and iterate (a rough sketch of this comparison step is below).
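A rough sketch of that last comparison step (everything here is a placeholder; the real script would load the fairseq checkpoint and the ported `transformers` model):

```python
import torch

def compare_outputs(original_model, ported_model, dummy_input, atol=1e-3):
    """Placeholder comparison helper: both models and the input are stand-ins."""
    original_model.eval()
    ported_model.eval()
    with torch.no_grad():
        original_out = original_model(dummy_input)                  # tensor from the original code base
        ported_out = ported_model(dummy_input).last_hidden_state    # transformers output object
    print("max abs diff:", (original_out - ported_out).abs().max())
    return torch.allclose(original_out, ported_out, atol=atol)
```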
Here are some docs that might help when adding a new model :slightly_smiling_face:
- https://huggingface.co/docs/transformers/add_new_model
- https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model#adding-a-new-model
Most seq2seq models in `fairseq` are similar to bart/mbart, so I would suggest referring to those models and using the `transformers-cli add-new-model-like` command, which can create a bart/mbart-like template.
Hope this helps!<|||||>Hey @reichenbch how's it going ? Let us know if you need any help :) <|||||>Hey @patil-suraj Work is in progress, I had the misfortune to witness some health issues sometime back. I will update the files and try to get back on track<|||||>Hope you are feeling okay now! And no rush, just wanted to check-in. Take your time 🤗 <|||||>Hey @patrickvonplaten @patil-suraj @reichenbch i am interested to work on this issue, let me know if I can contribute.<|||||>@inderpreetsingh01, feel free to open a PR if you want :-)<|||||>@inderpreetsingh01 @patrickvonplaten is anyone actively working on this issue. I was wondering if either I could take it up or shadow someone working on it. I'd like to start learning how to contribute models to huggingface.<|||||>@pramodith , I started working on it but got occupied with some personal things, I had gone through the resources shared what I understood from the paper:
EdgeFormer uses:
- Layer adaptation
- Interleaved transformer decoder layer
- Load balanced and encoder favored parametrization
- Each parameter is used a minimum of 4 to a maximum of 6 times
I am not clear on the layer adaptation part and couldn’t find any parameter related to that in fairseq or edge_architecture function used to define the model.
Let me know if you want to discuss and work on it.
<|||||>@inderpreetsingh01 I believe that for the layer adaptation technique new parameters are only required for the LoRA method. The parameters for this are defined in [this](https://github.com/microsoft/unilm/blob/db95173cd050de0a58fce48077026c2f0247f782/edgelm/fairseq/modules/transformer_layer.py#L70) file in the fairseq repository. The file also contains the code for the Interleaved decoder.
If you're busy with other things I can definitely have a go at adding this model to the huggingface repo.<|||||>@pramodith thanks for clearing it, i actually looked at the original fairseq repository which is not having the adaptation part. I can contribute on this, we can connect [here](https://join.slack.com/t/slack-9ke1548/shared_invite/zt-1eln74e78-JtH6aQ83yEJCiR6j89~~hw)<|||||>Let me know if you need any help :-) <|||||>Hey @patrickvonplaten, I wanted to start porting the edgeformer model into the transformers library so I used the `transformers-cli add-new-model-like` command, however one of the questions that follows is `Please give a checkpoint identifier (on the model Hub) for this new model.` does this mean that I need to upload the pretrained weights file to the Huggingface hub?<|||||>Hey @pramodith,
It means that you should specify the checkpoint name that you intend to use when uploading the weights to the Hub.<|||||>Hi @patrickvonplaten, @pramodith Is this issue still open? I would like to contribute but I don't see a related PR. |
transformers | 16,409 | closed | TF PushToHubCallback fixes and updates | We had a couple of reports of slowdown and hanging with the `PushToHubCallback`. In particular, when training finished, there was a very long wait for the last training checkpoint to be pushed before the end-of-training commit could start, which sometimes seemed like it had hung completely. As a workaround, the end-of-training push now cancels any previous pushes with `terminate()` (thanks @LysandreJik!) before it starts, to save time and avoid this problem. This works well in testing.
However, there is a secondary problem - when we do an end-of-training push, we probably don't want to save a whole checkpoint with things like optimizer state, as they can be very large. Right now the `PushToHubCallback` does not delete checkpoint data in the output dir when it pushes at the end of training, and also does not squash commits (so if it cancels a checkpoint push and then starts the end-of-training push, it has to push the commit that includes the checkpoint data even if it's been deleted in the end-of-training commit).
I think we should definitely delete the checkpoint data in the final push, and possibly also squash commits to make the upload a lot faster (and reduce the size of the target repo). I can make these changes in this PR - WDYT?
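For context, a minimal sketch of how the callback is typically wired up (argument values are illustrative, and `model`, `tokenizer`, and `train_dataset` are assumed to already exist):

```python
from transformers.keras_callbacks import PushToHubCallback

push_callback = PushToHubCallback(
    output_dir="./model_output",   # local dir mirrored to the Hub repo
    save_strategy="epoch",         # intermediate checkpoint pushes happen here
    tokenizer=tokenizer,           # pushed alongside the weights
)
model.fit(train_dataset, epochs=3, callbacks=[push_callback])
```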
CC: @merveenoyan | 03-25-2022 15:20:33 | 03-25-2022 15:20:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Will do! |
transformers | 16,408 | closed | Is it possible to train all the models available in hub using deepspeed? | If not, how to know which ones?
Specifically, I am interested in sentence-transformers models.
(Is it possible at all to train them using the transformers API?)
@nreimers @stas00
thanks!
| 03-25-2022 15:18:22 | 03-25-2022 15:18:22 | more or less yes, please see https://github.com/huggingface/transformers/pull/12695 - I don't have 100% coverage of all models but the majority of popular models work just fine as seen by the tests.
I'm not rushing to merge that PR since it's just many tests.
If you find any that don't work please report and we will make it work.<|||||>amazing. thanks for your help. |
transformers | 16,407 | closed | [FlaxSpeechEncoderDecoder] Fix feature extractor gradient test | Applies new logic to the `assert_difference` statement in the FlaxSpeechEncoderDecoderModel test. For two arrays `a` and `b`, we compute the absolute difference between the arrays: `diff = np.abs(a - b)`. Only one element of the absolute difference array `diff` has to be greater than the specified tolerance `tol` for the statement to pass. This is in contrary to the previous implementation, in which the _minimum_ of `diff` had to exceed `tol`. This provides more robustness to the `test_freeze_feature_encoder` test and enables the tolerances to be set to more reasonable values. The modified test passes locally over 20 repeat runs. | 03-25-2022 15:09:52 | 03-25-2022 15:09:52 | _The documentation is not available anymore as the PR was closed or merged._ |
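A minimal illustration of the check described in the PR above (not the actual test code):

```python
import numpy as np

a, b, tol = np.array([1.0, 2.0]), np.array([1.0, 2.5]), 1e-3
diff = np.abs(a - b)
assert diff.max() > tol   # new logic: at least one element differs by more than `tol`
# previous, stricter logic required every element to differ: assert diff.min() > tol
```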
transformers | 16,406 | closed | fixed typo from enable to disable in disable_progress_bar function | # What does this PR do?
This PR fixes the typo from `Enable progress bar` to `Disable progress bar` in `disable_progress_bar()` function's documentation.
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). | 03-25-2022 12:23:52 | 03-25-2022 12:23:52 | @sgugger can you please review this PR?<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,405 | closed | Fix PerceiverMLP and test | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixed widening factor of PerceiverMLP
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-25-2022 11:55:58 | 03-25-2022 11:55:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,404 | closed | Unable to install transformers & its related dependencies due Issue with Python Versions | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: NA
- Platform: Windows 10 (64 bit)
- Python version: 3.6 / 3.10
- PyTorch version (GPU?): NA
- Tensorflow version (GPU?): NA
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help
@gante @Rocketknight1
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: Following the contribution guidelines, wherein it's written to run the command `pip install -e ".[dev]"` inside a virtual environment
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: The task was to add type hints & decorators for the various models [part of code cleanup 2022]
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a virtual environment with python version 3.10 (latest) or with 3.6 .
2. Install the required dependencies mentioned in the setup.py file via the command `pip install -e ".[dev]"`
3. During installation it gives out this error ->

which causes no further installation of any dependencies ie transformer was not installed.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
On running the pip install command, every dependency along with transformers should get installed completely with Python 3.6 or 3.10, since the setup.py file mentions `python>=3.6.0`, but it still didn't work with the 3.6/3.10 versions.

**Note**: But I was able to install all the dependencies completely & smoothly with **`python version 3.8`**
<!-- A clear and concise description of what you would expect to happen. -->
| 03-25-2022 11:15:59 | 03-25-2022 11:15:59 | cc @ydshieh -- @robotjellyzone was the user I mentioned yesterday, that had the installation uses. It seems like we have related problems for python 3.6 and 3.10 :( <|||||>I could reproduce the issue with Python 3.10.
From what I could see (quickly), `ray` doesn't yet support Python 3.10.
See https://github.com/ray-project/ray/pull/21221
And for Python 3.6, we have some discussion to drop its support soon, but at this moment, I am not very sure about the process. Let's see what Lysandre says regarding this part.
(I am not even able to create a py 3.6 env via conda ...)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>is this issue resolved or is it just fine to let it as it is in a close state? @gante @ydshieh <|||||>Hi, @robotjellyzone
- Python 3.6 will be dropped over the next few weeks: #16832
- And as far as I know, `ray` still doesn't support Python 3.10 (yet). See https://github.com/ray-project/ray/pull/21221
I believe closing this issue is reasonable.<|||||>> Hi, @robotjellyzone
>
> * Python 3.6 will be dropped over the next few weeks: [Dropping support for Python 3.6 #16832](https://github.com/huggingface/transformers/issues/16832)
> * And as far as I know, `ray` still doesn't support Python 3.10 (yet). See [Add support for Python 3.10 ray-project/ray#21221](https://github.com/ray-project/ray/pull/21221)
>
> I believe closing this issue is reasonable.
Oh great! If Python 3.6 will be dropped then I guess everything will be fine, but then it needs to be mentioned in the setup file as well so that others can use versions greater than 3.6.
so shall i create a pr for changing the version mentioned in the setup.py file? <|||||>Hi, @robotjellyzone Thank you for the offer. We will do this when v4.19.0 is released. Currently we are still in v4.18.0 release, and we can't change setup.py before we officially drop it :-)<|||||>i have an issue with installing transformers using packer. Can someone help me with transformers not getting installed ?
"inline": [
"python3 -m venv .env",
"source .env/bin/activate",
"source activate tensorflow2_p38",
"pip install --upgrade pip",
"pip install transformers",
"pip install torch",
"pip install tensorflow-gpu==2.7.0"
],
"type": "shell"
},<|||||>Hi @krishnacloud77 👋 Reading above, it seems like you are activating two python environments (assuming `tensorflow2_p38` is a python environment) -- try using only one of them. Also, I'd recommend installing TF and PT from the extras, i.e. `pip install transformers[tf,torch]`<|||||>Hi gante
> Hi @krishnacloud77 👋 Reading above, it seems like you are activating two python environments (assuming `tensorflow2_p38` is a python environment) -- try using only one of them. Also, I'd recommend installing TF and PT from the extras, i.e. `pip install transformers[tf,torch]`
Hi gante
still issue not resolved after removing one venv.
"inline": [
"python3 -m venv .env",
"source .env/bin/activate",
"source activate tensorflow2_p38",
"pip install --upgrade pip",
"pip install transformers[tf,torch]",
"pip install torch",
"pip install --user tensorflow-gpu==2.7.0"
],
"type": "shell"
},<|||||>The series of commands above is still activating two environments, and it's also installing torch and tensorflow externally. Try this:
```
"source activate tensorflow2_p38",
"pip install --upgrade pip",
"pip install transformers[tf,torch]",
```<|||||>I have tried those commands already. Amazon ebs error. Says source activate command not found. This is getting installed through packer, to build an AMI. |
transformers | 16,403 | open | R3M: A Universal Visual Representation for Robot Manipulation | # 🌟 New model addition
## Model description
We pre-train a visual representation using the Ego4D human video dataset using a combination of time-contrastive learning, video-language alignment,and an L1 penalty to encourage sparse and compact representations. The resulting representation, R3M, can be used as a frozen perception module for downstream policy learning. Across a suite of 12 simulated robot manipulation tasks, we find that R3M improves task success by over 20% compared to training from scratch and by over 10% compared to state-of-the-art visual representations like CLIP and MoCo. Furthermore, R3M enables a Franka Emika Panda arm to learn a range of manipulation tasks in a real, cluttered apartment given just 20 demonstrations.
<!-- Important information -->
## Open source status
* [x] the model implementation is available:(https://github.com/facebookresearch/r3m)
* [x] the model weights are available: https://github.com/facebookresearch/r3m/blob/main/r3m/example.py
* [x] who are the authors: @suraj-nair-1
| 03-25-2022 10:57:48 | 03-25-2022 10:57:48 | |
transformers | 16,402 | closed | M-CTC-T Model | # M-CTC-T Model
See: https://twitter.com/PatrickPlaten/status/1504875528469331972
Very early stage WIP for the mCTC model from the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/pdf/2111.00161.pdf).
Making this WIP PR in this particularly early stage to track many of the links & issues that keep coming up in development (PR code itself is not worth reading into atm.
## NOTES
1. This work is mainly based on the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/pdf/2111.00161.pdf), which is essentially a continuation of the following works (ASR -> slimIPL -> mCTC):
- [SlimIPL: Language-Model-Free Iterative Pseudo-Labeling](https://arxiv.org/abs/2010.11524)
- [End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures](https://arxiv.org/abs/1911.08460)
- which were all developed with the [Flashlight wav2letter++](https://github.com/flashlight/wav2letter) package
2. The original C++ code reference for `mCTC` is [here](https://github.com/flashlight/wav2letter/blob/main/recipes/mling_pl/mling_large.cpp)
- Which is an adoption of the original [slimIPL code](https://github.com/flashlight/wav2letter/blob/main/recipes/slimIPL/100h_supervised.cpp) which provides useful reference.
- Both of which are very simple scripts with major abstractions (like `layer = std::make_shared<fl::Transformer>(768, 192, 3072, 4, 920, 0.3, 0.3, false, false)`, so the following code must be referenced:
- [Transformers](https://github.com/flashlight/flashlight/blob/main/flashlight/fl/contrib/modules/Transformer.cpp)
- [Multi-Head Self Attention](https://github.com/flashlight/flashlight/blob/main/flashlight/fl/autograd/Functions.h)
- [Conv2D](https://github.com/flashlight/flashlight/blob/main/flashlight/fl/nn/modules/Conv2D.cpp)
- And the weights for the `mctc-large` model can be found [here](https://dl.fbaipublicfiles.com/wav2letter/mling_pl/checkpoint_cv_finetune.bin).
- Weights for the other model sizes reported in the paper are not released.
3. [Position Embeddings](https://github.com/flashlight/flashlight/blob/main/flashlight/fl/contrib/modules/Transformer.cpp#L93): `posEmb = tile(params_[0].as(encoderInput.type()), af::dim4(1, 1, nHeads_ * bsz));`
- Pseudo-Labeling Paper: "...self-attention dimension 768, using the relative position embeddings of [15]."
```
# https://github.com/flashlight/flashlight/blob/main/flashlight/fl/contrib/modules/Transformer.cpp#L93
auto encoderInput = input.at(input.size() - 2);
int n = input[0].dims(1), bsz = input[0].dims(2);
# ...
posEmb = tile(params_[0].as(encoderInput.type()), af::dim4(1, 1, nHeads_ * bsz));
# https://github.com/flashlight/flashlight/blob/main/flashlight/fl/autograd/Functions.cpp#L1276
auto scores = matmulNT(q, k);
if (!posEmb.isempty()) {
int n = posEmb.dims(0) / 2 - offset;
auto pscores =
relativePositionEmbeddingRotate(matmulNT(posEmb.as(q.type()), q));
scores = scores + transpose(pscores.rows(n, n + k.dims(0) - 1));
}
```
4. Gated Linear Unit. Though not stated in the original paper, the slimIPL paper, or the End-to-End ASR paper, the code shows:
```
convFrontend_->add(
// std::make_shared<fl::Conv2D>(nFeature, 1536, 7, 1, 3, 1, -1, 0, 1,
// 1));
std::make_shared<fl::Conv2D>(nFeature, 3072, 7, 1, 3, 1, -1, 0, 1, 1));
convFrontend_->add(std::make_shared<fl::GatedLinearUnit>(2));
...
# The documentation doesn't state its parameters and I did a long search of the code base but couldn't find this info.
# But I think it's safe to assume 2 refers to the dimensions of GLU
```
## Issue Tracking
- [ ] **Naming**: Currently we've opted for `mCTC` since throughout the above works the author refer to it as a "CTC Transformer Model", but this leads to awkward uses later on when talking about `mCTCForCTC` for downstream tasks (and even `CTCForCTC` for single-language models). Seems like using a common loss function to define a model might lead to troubles later down the line.
- [x] **Feature Extraction (raw audio -> model inputs)**: Unlike other audio processing modules in `transformers`, the input data is a log-mel spectrogram. As explained in the paper:
[From the original Pseudo-Labeling paper]
*The input to the encoder is a sequence of **80**-dimensional **log-mel filterbank** frames, extracted using **25ms Hamming** windows every **10ms** from the **16kHz** audio signal.*
[For more details, from the slimIPL paper]
*We keep the original **16kHz** sampling rate and compute **log-mel** filterbanks with **80** coefficients for a **25ms** sliding window, strided by **10ms**. All features are **normalized** to have zero mean and unit variance per input sequence before feeding them into the acoustic model.*
For us, such feature processing could be implemented with the following process:
- [librosa melspectrogram](https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html) function that calls the [Scipy Hamming windows](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.hamming.html#scipy.signal.windows.hamming) function.
- I'm conflicted between creating a `SpectrogramFeatureExtractor` in the `transformers` library that would involve `import librosa` in `feature_extraction_sequence_utils.py`, or adding a `spectrogram.py` in [datasets.features](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py). The latter separates this task of `audio signal -> spectrogram` to the user-side, while providing an easy interface with the `datasets` library.
- [x] **Testing**: Under development
- [x] **MCTCModel**
- [x] **MCTCForCTC**
- [x] **MCTCTokenizer**
- [x] **MCTCFeatureExtractor**:
- [x] **MCTCProcessor**
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 03-25-2022 05:49:50 | 03-25-2022 05:49:50 | Pinging @patrickvonplaten @anton-l @patil-suraj for a midway check-in.
Still have the following to resolve (some mentioned in slack):
- [ ] Loading weights from the binary file found [here](https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl).
- [ ] `Wav2Vec2` has a `Wav2Vec2Tokenizer` and a `Wav2Vec2CTCTokenizer`. Should there be both for this too (`MCTCTokenizer`, `MCTCCTCTokenizer`)? For now I've just made an `MCTCTokenizer`, taking the role analogous to `Wav2Vec2CTCTokenizer`.
- [ ] As the initial description states, the original model code uses a GLU layer @ dim=2, which requires the input to that layer to have even length. Due to CNN resizing, this isn't guaranteed, and I've encountered a related error during development.
- [ ] Properly integrating attention masks (right now it's ignored in several places due to temporary issues)
- [ ] The vocabulary in the original code doesn't have tokens analogous to `<pad>`, `</s>`, and such, which occupy indices 0~3 in the Wav2Vec2Tokenizer implementation. So currently the vocab.json doesn't have tokens like them (it's copied directly from the token list in the original repo)
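On the GLU point above, a minimal illustration of the even-size requirement and one possible padding workaround (illustrative only, not the actual fix used):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

glu = nn.GLU(dim=-1)                 # GLU halves the chosen dimension, so it must be even
x = torch.randn(1, 10, 3071)         # odd last dimension, as can happen after conv resizing
if x.shape[-1] % 2 != 0:
    x = F.pad(x, (0, 1))             # pad the last dim to an even size
out = glu(x)                         # shape (1, 10, 1536)
```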
<|||||>Hey @cwkeam,
Thanks for making so much progress already!
I think in a next step now it's very important that we are able to run the following original code:
https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl#inference
to be able to compare intermediate results of the model in C++ with the ported model in PyTorch.
Once we get this script to run, we should try to save the C++ weights somehow in PyTorch. Think for this we need to adapt the code here:
https://github.com/flashlight/flashlight/blob/942414d530f19a1fca965499c8a5441405bd3a16/flashlight/app/asr/Test.cpp#L82
<|||||>Model working after feature extraction step:
https://colab.research.google.com/drive/10-03jHBUcp2P4DMF-OeeSE0lDS0xux7y?usp=sharing
This was a result of a lot of debugging and manual C++ translations, the results of which still need a lot of optimizing and cleaning. The process I took can be found in this separate branch: https://github.com/cwkeam/transformers/tree/mctc-model-debugging/src
<|||||>Amazing job @cwkeam ! Feature extraction seems to be :heavy_check_mark: :-) Next the modeling code should be tackled? Think a similar debugging strategy as for the feature extraction can be used here :-)<|||||>@patrickvonplaten yep I've started the feature extraction debugging process in [this notebook](https://colab.research.google.com/drive/1Gw4DcnwvPNKxgMFkeDaADbJ7G6fhhcQk?usp=sharing). Hopefully I'll have a solution soon!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>MFSC feature extraction is done and the functionality works as expected!
```python
!pip install gdown
!gdown --id 1GvsB9aqrlpJSvLNlz1Yv6lT_Je9jyNtB # audio.flac
import torch
import torchaudio
from transformers import MCTCForCTC, MCTCTokenizer, MCTCFeatureExtractor
model = MCTCForCTC.from_pretrained("cwkeam/mctc-large")
tokenizer = MCTCTokenizer.from_pretrained("cwkeam/mctc-large")
feature_extractor = MCTCFeatureExtractor.from_pretrained("cwkeam/mctc-large")
waveform, sample_rate = torchaudio.load("audio.flac")
features = feature_extractor(waveform[0])
input_features = torch.Tensor(features.input_features)
my_logits = model(input_features).logits[0]
pred_ids = torch.argmax(my_logits, dim=-1)  # greedy CTC decoding
pad_token = tokenizer.convert_tokens_to_ids(["<pad>"])[0]
pred_ids = torch.stack([token for token in pred_ids if token != pad_token])
output = tokenizer.batch_decode(pred_ids)
print(output)
>>> ['', 'H', 'e', '', 'm', 'u', 's', 's', 't', '', 'h', 'a', 'v', 'e', 'e', '', 'r', 'e', 'e', 'a', 'a', 'l', 'l', 'i', 'z', 'e', 'd', '', '', 'I', '', 'w', 'a', 's', 's', '', '', 'a', '', 's', 't', 'r', 'a', 'n', 'n', 'g', 'e', 'r', 'r', '', 'a', 'n', 'd', '', 'w', 'i', 's', 'h', 'e', 'd', '', 'y', 'o', 'u', 'u', '', 't', 'u', 'n', 'n', 'd', 'e', 'r', '', '', 'h', 'i', 's', '', '', 'h', 'o', 's', 's', 'p', 'i', 't', 'a', 'l', 'l', 'i', 'i', 't', 'y', '', 't', 'o', '', 'm', 'e', ';', '', 'I', '', 'a', 'c', 'c', 'e', 'p', 't', 't', 'e', 'd', '', '', 'i', 't', '', '', 'g', 'r', 'a', 'a', 't', 'e', 'f', 'f', 'u', 'l', 'l', 'y', ',', '', '', 'I', '', 'c', 'l', 'a', 's', 'p', '', 'h', 'i', 's', '', 'h', 'a', 'n', 'd', '', 'h', 'e', '', 'p', 'r', 'e', 's', 's', 'e', 'd', '', '', 'm', 'i', 'n', 'd', '.', '']
```
The following notebook shows the replicable step-by-step process behind developing the feature extraction process: [Debugging Flashlight MFSC Feature Extraction.ipynb](https://colab.research.google.com/drive/1Gw4DcnwvPNKxgMFkeDaADbJ7G6fhhcQk?usp=sharing)
[This notebook](https://colab.research.google.com/drive/1NnL819XoCGzo3fqKn3BySz0pyOnjGKpQ) directly plugs in the results of the above notebook, and [this notebook](https://colab.research.google.com/drive/16I-lJvG5EwlwY5f7AucTEO49pPGwIyGS?usp=sharing) tests directly importing `MCTCXXX` modules from `cwkeam/transformers`, loading the weights with `from_pretrained("cwkeam/mctc-large")`, and running ASR on the same audio.
Few things that might need fixing & added:
- [ ] Might need to add tests & further optimize some of my manual translations of the C++ code in the modeling & feature extraction code.
- [ ] The character/word offset values are a bit different from those found in `Wav2Vec2ForCTC`. The Wav2Vec2 one is [here](https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_tokenization_wav2vec2.py#L654) and mine is [here](https://github.com/cwkeam/transformers/blob/mctc-model/tests/mctc/test_tokenization_mctc.py#L426).
- [ ] Been using the name `MCTC` because that's what I started off with initially; now wondering if I should replace everything with `MCTCT`, though after development I've felt like the current name might be preferable to ending up with `MCTCTTokenizer` / `MCTCTForCTC`.
- [ ] Language identification head, since I think the published weights include one too
- [ ] Replacing the \<pad\> token with "\<blank\>"
<|||||>@cwkeam - amazing job on making this model work! This is an extremely difficult port.
I think there are still some things left to clean up, but overall the PR is already in a great shape. For now, I'd like to focus on two things:
1. IMO, we can fully remove the tokenizer no? The tokenizeir is 1-to-1 the same as the Wav2Vec2 tokenizer. I adapted your repo here: https://huggingface.co/cwkeam/mctc-large/commit/ce4eb45713b5f6873dcbb0e7bf35902c440cccb4 so that the Wav2Vec2CTCTokenizer can be used by default. I.e. now the following code snippet should work out of the box:
```python
#!/usr/bin/env python3
import torch
import torchaudio
from transformers import MCTCForCTC, AutoTokenizer, MCTCFeatureExtractor
model = MCTCForCTC.from_pretrained("./mctc-large")
tokenizer = AutoTokenizer.from_pretrained("./mctc-large")
feature_extractor = MCTCFeatureExtractor.from_pretrained("./mctc-large")
waveform, sample_rate = torchaudio.load("audio.flac")
features = feature_extractor(waveform[0], sampling_rate=sample_rate, return_tensors="pt")
input_features = features.input_features
my_logits = model(input_features).logits
# CTC
pred_ids = torch.argmax(my_logits, dim=-1)
output = tokenizer.batch_decode(pred_ids)
print("output", output)
```
-> As you can see one can use the `AutoTokenizer` functionality and the tokenizer re-uses to 100% the existing Wav2Vec2 one. Would it be ok if we delete all tokenizer logic in this PR? The advantage would be:
- less maintenance
- All features of Wav2Vec2 are tested and can be used (time stamps, etc...)
- Phoneme tokenizers would work as well
2. Let's discuss the naming of the model maybe a bit in Slack :-) Once we have a good name (or we keep this one) - let's update everything
After 1. and 2. I think we can take another review iteration.<|||||>@patrickvonplaten 100% agree with the tokenization part and have removed them accordingly. Everything else should be fixed as well! You can also find the model card [here](https://huggingface.co/cwkeam/mctct-large) that we should move over to SpeechBrain's page. <|||||>@cwkeam, let me know if you need help with anything or when your model is ready for a final review<|||||>@patrickvonplaten I was stuck on some importing issue but now it's all working! The one test that it's failing right now is [this](https://app.circleci.com/pipelines/github/huggingface/transformers/39945/workflows/62fc3038-3d21-45f5-a097-003932b2d3cb/jobs/450467?invite=true#step-111-5088)
```
SKIPPED [1] tests/pipelines/test_pipelines_image_segmentation.py:300: test is slow
FAILED tests/pipelines/test_pipelines_image_classification.py::ImageClassificationPipelineTests::test_pt_DeiTConfig_DeiTForImageClassification_notokenizer_DeiTFeatureExtractor
===== 1 failed, 464 passed, 350 skipped, 526 warnings in 113.76s (0:01:53) =====
```
I think it's ready for a final review!<|||||>@cwkeam - amazing job! Think we are really close to merging the PR here :-)
I've run the slow tests locally and I'm getting some errors for the following tests:
```bash
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal - AssertionE...
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal_batched - Ty...
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_robust_batched - Ty...
```
The rest all passes - could you try fixing those?
Also we should make sure that the documentation example works correctly. The doc tests seem to fail at the moment. Could you follow the explanations here: https://github.com/huggingface/transformers/tree/main/docs#testing-documentation-examples to test the example doc strings? :-)
Let me know if something is not clear!
<|||||>> @cwkeam - amazing job! Think we are really close to merging the PR here :-)
>
> I've run the slow tests locally and I'm getting some errors for the following tests:
>
> ```shell
> FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal - AssertionE...
> FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal_batched - Ty...
> FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_robust_batched - Ty...
> ```
>
> The rest all passes - could you try fixing those?
>
> Also we should make sure that the documentation example works correctly. The doc tests seem to fail at the moment. Could you follow the explanations here: https://github.com/huggingface/transformers/tree/main/docs#testing-documentation-examples to test the example doc strings? :-)
>
> Let me know if something is not clear!
Also let me know if you'd like me to take over the final steps of the PR if you're busy with other things, happy to help you out here!<|||||>Thanks for the final fixes! Could you maybe resolve the conflicting files and then we're ready for a final review :heart_eyes: ? :-)<|||||>@patrickvonplaten just checked that the docs are correct as well!
Thanks for your patience on this project I've been so preoccupied with other work<|||||>Hey @cwkeam,
I'll can do the final changes (and move the checkpoints) once the tests are green and you give me the green light :-) <|||||>is there an specific reason why it's named "input_features" instead of "input_values" as in wav2vec2 and there is no pad function in the processor?<|||||>> is there an specific reason why it's named "input_features" instead of "input_values" as in wav2vec2 and there is no pad function in the processor?
Yeah good question! It's `input_features` because the input is a sequnce of vectors not a sequence of just float values<|||||>I guess there probably should be a pad function in the processor <|||||>@lorenlugosch right I missed that one. Should be good now!<|||||>@patrickvonplaten re: removing `encoder_hidden_states` logic: to be clear, you want to remove the (unused) `encoder_hidden_states` but not `output_hidden_states`? I personally would prefer keeping the ability to access the intermediate layer outputs --- e.g. I used them for this thread https://twitter.com/lorenlugosch/status/1533457200819183617 , and they might be useful for people trying transfer learning stuff
Other than that, I've tested it out, and everything seems good to me (would love to add the LID logits but we can do that in another PR as you said)<|||||>> @patrickvonplaten re: removing `encoder_hidden_states` logic: to be clear, you want to remove the (unused) `encoder_hidden_states` but not `output_hidden_states`? I personally would prefer keeping the ability to access the intermediate layer outputs --- e.g. I used them for this thread https://twitter.com/lorenlugosch/status/1533457200819183617 , and they might be useful for people trying transfer learning stuff
>
> Other than that, I've tested it out, and everything seems good to me (would love to add the LID logits but we can do that in another PR as you said)
Hey @lorenlugosch,
exactly, the `output_hidden_states` field should be kept.<|||||>Amazing job @cwkeam! Test failure is unrelated -> merging! |
transformers | 16,401 | closed | Unable to load BART model, AttributeError: module 'jax.ops' has no attribute 'index_update' | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.31
- Python version: 3.10.4
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (tpu)
- Jax version: 0.3.4
- JaxLib version: 0.3.2
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patil-suraj @patrickvonplaten
## Information
Model I am using: BART
The problem arises when using:
* [x] the official example scripts
## To reproduce
Steps to reproduce the behavior:
```python
>>> from transformers import FlaxBartForSequenceClassification
>>> model = FlaxBartForSequenceClassification.from_pretrained('facebook/bart-base')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ayaka/venv310/lib/python3.10/site-packages/transformers/modeling_flax_utils.py", line 550, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/ayaka/venv310/lib/python3.10/site-packages/transformers/models/bart/modeling_flax_bart.py", line 933, in __init__
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
File "/home/ayaka/venv310/lib/python3.10/site-packages/transformers/modeling_flax_utils.py", line 116, in __init__
random_params = self.init_weights(self.key, input_shape)
File "/home/ayaka/venv310/lib/python3.10/site-packages/transformers/models/bart/modeling_flax_bart.py", line 939, in init_weights
input_ids = jax.ops.index_update(input_ids, (..., -1), self.config.eos_token_id)
AttributeError: module 'jax.ops' has no attribute 'index_update'
```
| 03-25-2022 05:22:25 | 03-25-2022 05:22:25 | https://jax.readthedocs.io/en/latest/jax.ops.html#module-jax.ops
> The functions jax.ops.index_update, jax.ops.index_add, etc., which were deprecated in JAX 0.2.22, have been removed. Please use the [jax.numpy.ndarray.at](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html#jax.numpy.ndarray.at) property on JAX arrays instead.
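For anyone hitting this before upgrading, the removed call can be rewritten with the `.at` property; a minimal sketch (the concrete `eos_token_id` value here is just for illustration):

```python
import jax.numpy as jnp

eos_token_id = 2  # placeholder value for the sketch
input_ids = jnp.zeros((1, 8), dtype="i4")

# Old, removed API: jax.ops.index_update(input_ids, (..., -1), eos_token_id)
input_ids = input_ids.at[..., -1].set(eos_token_id)
```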
<|||||>Fixed in #16078 |
transformers | 16,400 | closed | TFDecoderEncoder: The following keyword arguments are not supported by this model: ['position_ids', 'token_type_ids'] | ## Environment info
- `transformers` version: v4.17.0
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?): NA
- Tensorflow version (GPU?): 2.8.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten for EncoderDecoder
Models: transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFEncoderDecoderModel. The encoder is distilroberta and the decoder is GPT2. I have my own `position_ids` and `token_type_ids` initialized for distilroberta when I tokenize my inputs. In the PyTorch version of EncoderDecoderModel, I can easily pass these two arguments to the forward function. However, when I do the same thing with TFEncoderDecoderModel, it raises an error saying that the `position_ids` and `token_type_ids` arguments are not supported by this model.
## To reproduce
```python
encoder_config = AutoConfig.from_pretrained('distilroberta-base')
decoder_config = AutoConfig.from_pretrained('gpt2')
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = TFEncoderDecoderModel(config)
outputs = model(
input_ids=input_ids_batch,
attention_mask=attention_mask_batch,
decoder_input_ids=decoder_input_ids_batch,
decoder_attention_mask=decoder_attn_mask_batch,
encoder_outputs=None,
past_key_values=None,
inputs_embeds=None,
decoder_inputs_embeds=None,
labels=labelled_batch,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
training=False,
position_ids = pos_ids_batch,
token_type_ids = token_type_batch
)
```
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-19-da93c7339615> in <module>()
33 return_dict=None,
34 training=False,
---> 35 **new_args
36 )
2 frames
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py in call(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs)
541 kwargs_encoder = {}
542
--> 543 encoder_inputs = input_processing(**encoder_processing_inputs)
544
545 # Handle the case where the inputs are passed as a single dict which contains `labels`.
/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
401 if len(kwargs["kwargs_call"]) > 0:
402 raise ValueError(
--> 403 f"The following keyword arguments are not supported by this model: {list(kwargs['kwargs_call'].keys())}."
404 )
405
ValueError: Exception encountered when calling layer "tf_encoder_decoder_model" (type TFEncoderDecoderModel).
The following keyword arguments are not supported by this model: ['position_ids', 'token_type_ids'].
Call arguments received:
• input_ids=tf.Tensor(shape=(1, 512), dtype=int32)
• attention_mask=tf.Tensor(shape=(1, 512, 512), dtype=int32)
• decoder_input_ids=tf.Tensor(shape=(1, 512), dtype=int32)
• decoder_attention_mask=tf.Tensor(shape=(1, 512), dtype=int32)
• encoder_outputs=None
• past_key_values=None
• inputs_embeds=None
• decoder_inputs_embeds=None
• labels=tf.Tensor(shape=(1, 512), dtype=int32)
• use_cache=None
• output_attentions=None
• output_hidden_states=None
• return_dict=None
• training=False
• kwargs={'position_ids': 'tf.Tensor(shape=(1, 512), dtype=int32)', 'token_type_ids': 'tf.Tensor(shape=(1, 512), dtype=int32)'}
```
## Expected behavior
This should handle additional arguments correctly.
| 03-25-2022 03:43:15 | 03-25-2022 03:43:15 | @khang-nguyen2907 thank you for raising this. We have been making some changes to simplify our TF model code, and it seems to have caused this issue. I will look into it and keep you in the loop.<|||||>Old versions of `transformers` seem to have the same behavior, so the root cause is not what I thought. Diving deeper 🔍 <|||||>@khang-nguyen2907 the PR liked to this issue, which was just merged, should fix your issue -- try (re)installing `transformers==4.18.0.dev0`.
If it doesn't solve the issue, or if you run into new problems, feel free to reopen :) |
transformers | 16,399 | closed | Add type hints for UniSpeech | Adding type hints for forward methods in user-facing class for UniSpeech as mentioned in #16059
@Rocketknight1 | 03-24-2022 22:43:55 | 03-24-2022 22:43:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, this looks good but I'm going to leave a final review until the `Wav2vec` issue is resolved - for the record, I think it's totally fine for you to update that file too!<|||||>Yeah, I think I will work on it and make a commit right away<|||||>There are some other models that are dependent on Wave2Vec2, I will modify them asap<|||||>@Rocketknight1 I think it's good to go now |
transformers | 16,398 | closed | Adding DocTest to TrOCR | # Adding TrOCR to DocTests
For this model, there was actually a single docstring in the entire file (for the forward method).
A couple of comments:
- As far as I'm aware, there is no TF version of this model
- TrOCR is an edge case because it's meant to be used as the decoder for a VisionEncoderDecoder. So the forward function of the TrOCR is not meant to be called directly. As a result, I gave some example code to run a forward pass with TrOCR within a VisionEncoderDecoder
Let me know if the docstring is relevant to the problem. I can also adapt it to just showcase the forward pass for `TrOCRForCausalLM` on its own, outside of a VisionEncoderDecoder.
@patrickvonplaten @ydshieh @patil-suraj | 03-24-2022 22:18:51 | 03-24-2022 22:18:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I took a quick look for now, and think it is indeed a nice addition, considering there is currently no example in `modeling_trocr.py` at all.
Thank you, @arnaudstiegler!
<|||||>Addressed the comments and re-ran the test locally!
There's one thing though: `make fixup` messes up the formatting of the docstring, and removes the additional blank at the end of the docstring ([here](https://github.com/huggingface/transformers/blob/ae63ca7033aa1d6220a855498a5403d846ba51b3/src/transformers/models/trocr/modeling_trocr.py#L933-L934)). The issue is that without the additional line, doctest will fail when comparing outputs. Not sure why to be honest, but do you know how to prevent that `make fixup` from doing this? Otherwise, next time the file is touched, make fixup will introduce a bug in the docstring.
Here's the failure:
```
Expected:
['industry, " Mr. Brown commented icily. " Let us have a']
```
Got:
['industry, " Mr. Brown commented icily. " Let us have a']
```
Essentially, without the additional line, the test considers that ``` is part of the expected output. But that additional line gets removed by `make fixup`. I haven't found a good way around this yet.
@ydshieh <|||||>Those blank lines issues should be treated by the places regarding `utils/prepare_for_doc_test.py` as shown in this guide [doc](https://github.com/huggingface/transformers/tree/main/docs#for-python-files).
This means you don't need to add extra blank line in the file. Did you run `python utils/prepare_for_doc_test.py` as in the guide before and after running the doctest?<|||||>Got it! I did run the script and it seems like it's not being applied to the trocr (but I can see the added lines on other model files). I'll debug that today<|||||>OK, thank you. Don't hesitate to report it if you find this is some bug in `prepare_for_doc_test.py`.<|||||>> OK, thank you. Don't hesitate to report it if you find this is some bug in `prepare_for_doc_test.py`.
Figured out the issue: there was some formatting issue in one of the docstring (see last commit) that prevented the script to correctly add a line. From `utils.prepare_for_doc_test.process_doc_file`
```
splits = code.split("```")
splits = [s if i % 2 == 0 else process_code_block(s, add_new_line=add_new_line) for i, s in enumerate(splits)]
```
splits were incorrect because of that, and the script wouldn't add an additional \n
Now the doctest runs fine post `make fixup`
Should be good to go<|||||>@ydshieh Is there anything else to do? I don't have writing access here, so I can't merge this<|||||>Rebased on latest main and addressed the comments, the failing test is coming from main (failing on main as well)<|||||>Hi, the failed check seems irrelevant to your PR. No need to fix it here 😃. I will give a final look and merge. Thank you 💗<|||||>Alright, thank you!<|||||>By the way, may I wonder why you choose to display 3 decimal numbers for the loss value? I remember we use 2 decimal numbers in doc.py. I will check once I am available.<|||||>Applied your changes, tested them locally, and ran make fixup as this was needed after the changes<|||||>If the remaining failed checks are **build_pr_documentation** and **Add new model like template tests**, you can keep the PR as it is. It is irrelevant I think.<|||||>Ok, I tried rebasing on latest master and it doesn't seem to be doing the trick. Not sure what's causing that unfortunately, but unlikely to be due to the code changes I've done :)
<|||||>Thank you again @arnaudstiegler ! (also for your patience)
I merge this PR now ❤️. |
transformers | 16,397 | closed | Rename master to main for notebooks links and leftovers | # What does this PR do?
This PR prepares the switch in the notebooks repo default branch from master to main and also catches a few leftovers master that were in the links to transformers GitHub. There are also a few links that were badly converted when moving to the new doc frontend that are fixed (transformers-doc2mdx -> transformers) | 03-24-2022 19:02:25 | 03-24-2022 19:02:25 | |
transformers | 16,396 | closed | Big file_utils cleanup | # What does this PR do?
This follows up form the split of `file_utils` into several submodules and updates the documentation links (or others) to reflect the new place of the objects. | 03-24-2022 18:36:08 | 03-24-2022 18:36:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,395 | closed | QDQBert example update | # What does this PR do?
This PR updates the quantization-qdqbert example (https://github.com/huggingface/transformers/tree/main/examples/research_projects/quantization-qdqbert)
Update the `Dockerfile` with the latest NGC PyTorch container (pytorch:22.02-py3), in which the TensorRT >= 8.2 requirement is met out of the box. Users no longer have to reinstall TensorRT 8.2+ in the container.
Update the corresponding `README`.
Update `utils_qa.py` following https://github.com/huggingface/transformers/pull/15438.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-24-2022 18:18:54 | 03-24-2022 18:18:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,394 | closed | "Go to Definition" in VS Code doesn't work properly for Transformers | It's not a bug in the library itself, but rather a glitch that happens because of the design of the package.
The problem is that when you use VS Code as an editor for your development and try to go to the definition of some model, it routes you to the wrong file. For example:
```
from transformers import SwinForImageClassification
model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window7-224")
```
When you right click on **SwinForImageClassification** class and press "Go to Definition" it routes you to **transformers/utils/dummy_pt_objects.py** line 3634:

I don't know what this code does, but it's definitely not something that you want.
It's not specific to the Swin transformer; I also experienced the same issue while working with Wav2Vec2 (I guess it happens for all models, since the code is similar).
The correct file that should've been opened is **transformers/models/swin/modeling_swin.py**:

I'm not sure why this happens, but maybe you should consider researching this issue and redesigning the library somehow, so that VS Code would open model code properly.
Please also let me know if you also have this problem (maybe it is just my local issue?).
| 03-24-2022 16:55:54 | 03-24-2022 16:55:54 | Hi @Sorrow321 -- this structure is convenient for multiple reasons, including ensuring a given import doesn't fail because you imported something that you are not planning to use, but that has a missing dependency.
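For reference, the stub that "Go to Definition" lands on looks roughly like this (illustrative, written from memory rather than copied from a specific release):

```python
from transformers.utils import DummyObject, requires_backends


class SwinForImageClassification(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        # Raises a helpful error if torch is not installed; when torch is
        # available, the real class is exposed via lazy imports instead of this stub.
        requires_backends(self, ["torch"])
```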
I had the same problem, VS Code is a bit silly here. Go to your VS Code settings, search for `python.languageServer`, and change it to `Jedi` (see image below). It should solve your issue -- it did the trick for me! Let us know if it worked.
<img width="626" alt="Screenshot 2022-03-28 at 15 24 45" src="https://user-images.githubusercontent.com/12240844/160419773-a57f1ce4-e9f5-4c65-89d2-a27a415d2495.png">
<|||||>Hi @gante, thank you so much, it worked. |
transformers | 16,393 | closed | Removed input_processing and added decorator to lxmert | # What does this PR do?
This PR changes the unpacking of the inputs of TF `lxmert` model to use the decorator instead.
See https://github.com/huggingface/transformers/issues/16051
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante @Rocketknight1 | 03-24-2022 16:14:10 | 03-24-2022 16:14:10 | All tests pass locally after running `RUN_SLOW=1 py.test -vv tests/[model_name]/test_modeling_tf_[model_name].py`....Output of running tests
```
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_attention_outputs PASSED [ 2%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_compile_tf_model PASSED [ 5%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_config PASSED [ 8%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_determinism <- tests\test_modeling_tf_common.py PASSED [ 11%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_forward_signature <- tests\test_modeling_tf_common.py PASSED [ 14%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_generate_with_headmasking <- tests\test_modeling_tf_common.py PASSED [ 17%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_headmasking <- tests\test_modeling_tf_common.py PASSED [ 20%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_hidden_states_output PASSED [ 23%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_initialization <- tests\test_modeling_tf_common.py PASSED [ 26%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_inputs_embeds <- tests\test_modeling_tf_common.py PASSED [ 29%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keras_save_load <- tests\test_modeling_tf_common.py PASSED [ 32%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keyword_and_dict_args <- tests\test_modeling_tf_common.py PASSED [ 35%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests\test_modeling_tf_common.py PASSED [ 38%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests\test_modeling_tf_common.py PASSED [ 41%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_random_beam_search_generate <- tests\test_modeling_tf_common.py PASSED [ 44%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_random_no_beam_search_generate <- tests\test_modeling_tf_common.py PASSED [ 47%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_load_with_mismatched_shapes <- tests\test_modeling_tf_common.py PASSED [ 50%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_loss_computation <- tests\test_modeling_tf_common.py PASSED [ 52%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lxmert_for_pretraining PASSED [ 55%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lxmert_model PASSED [ 58%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_common_attributes PASSED [ 61%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_from_pretrained SKIPPED (test is slow) [ 64%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_main_input_name <- tests\test_modeling_tf_common.py PASSED [ 67%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence <- tests\test_modeling_tf_common.py PASSED [ 70%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_numpy_arrays_inputs <- tests\test_modeling_tf_common.py PASSED [ 73%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_onnx_compliancy <- tests\test_modeling_tf_common.py PASSED [ 76%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_onnx_runtime_optimize <- tests\test_modeling_tf_common.py SKIPPED (test is slow) [ 79%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_pt_tf_model_equivalence PASSED [ 82%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_resize_token_embeddings <- tests\test_modeling_tf_common.py PASSED [ 85%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_save_load PASSED [ 88%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_save_load_config <- tests\test_modeling_tf_common.py PASSED [ 91%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation PASSED [ 94%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation_extended SKIPPED (test is slow) [ 97%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelIntegrationTest::test_inference_masked_lm SKIPPED (test is slow) [100%]
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @gante I added the decorator to the top...but after fetching and rebasing I'm not sure why so many files got changed...and some tests are failing...what shud I do?<|||||>> Hi @gante I added the decorator to the top...but after fetching and rebasing I'm not sure why so many files got changed...and some tests are failing...what shud I do?
Oh my god...I spotted the mistake..will push another commit to fix
<|||||>Hello @gante I added the decorators to the top but I was mistakenly rebasing my branch instead of just pushing the changes. Even though I was finally able to realize my mistake and push the commit finally...the previous attempts made the PR quite messy as you can see with all the fetching/rebasing and I see that a CI test is failing...what should I do? If it helps to close this PR and open a fresh PR with a new branch I can do so as well...let me know<|||||>> Hello @gante I added the decorators to the top but I was mistakenly rebasing my branch instead of just pushing the changes. Even though I was finally able to realize my mistake and push the commit finally...the previous attempts made the PR quite messy as you can see with all the fetching/rebasing and I see that a CI test is failing...what should I do? If it helps to close this PR and open a fresh PR with a new branch I can do so as well...let me know
on ho 😱 haha yeah, it's probably easier to open a new PR and reapply the changes, since they are straightforward. The alternative is `git` magic, so it depends on how comfortable you are with `git` :)<|||||>> > Hello @gante I added the decorators to the top but I was mistakenly rebasing my branch instead of just pushing the changes. Even though I was finally able to realize my mistake and push the commit finally...the previous attempts made the PR quite messy as you can see with all the fetching/rebasing and I see that a CI test is failing...what should I do? If it helps to close this PR and open a fresh PR with a new branch I can do so as well...let me know
>
> on ho 😱 haha yeah, it's probably easier to open a new PR and reapply the changes, since they are straightforward. The alternative is `git` magic, so it depends on how comfortable you are with `git` :)
Not really sure how to fix with `git` 😅..no worries I'll close this one and tag you in a fresh PR |
transformers | 16,392 | closed | Fix readme links and add CI check | # What does this PR do?
This PR fixes links that are broken in the main README or don't add the necessary "docs". It also make `check_utils` automatically treat those problems when a user runs `make fix-copies`, or error when the CI runs the repo consistency checks and there is a wrong doc links.
The mistakes fixed/checked are:
- link to https://huggingface.co/transformers (missing the "docs")
- main not in the right place: https://huggingface.co/main/transformers
| 03-24-2022 15:52:22 | 03-24-2022 15:52:22 | CI failure has been fixed on master, so merging :-) |
transformers | 16,391 | closed | Fix style | Fixes the style issue. | 03-24-2022 15:40:54 | 03-24-2022 15:40:54 | |
transformers | 16,390 | closed | TypeError: expected str, bytes or os.PathLike object, not NoneType | ## Environment info
- `transformers` version:4.17.0
- Platform:Linux-4.15.0-142-generic-x86_64-with-debian-buster-sid
- Python version:3.6.6
- PyTorch version (GPU?): 1.7.1 (True)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten This issue is similar to https://github.com/huggingface/transformers/issues/9328, but I'm not so sure. If adding a merges.txt file fixes the error, maybe the model hub should update many models / notify the creators.
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
* [x] the official example scripts: (give details below)
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("fnlp/bart-base-chinese")
model = AutoModel.from_pretrained("fnlp/bart-base-chinese")
```
## To reproduce
Just run the official script above. The error is reported as below
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/huangbz/.conda/envs/NLP/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py", line 546, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/huangbz/.conda/envs/NLP/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1795, in from_pretrained
**kwargs,
File "/home/huangbz/.conda/envs/NLP/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1819, in _from_pretrained
**(copy.deepcopy(kwargs)),
File "/home/huangbz/.conda/envs/NLP/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1923, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/huangbz/.conda/envs/NLP/lib/python3.6/site-packages/transformers/models/bart/tokenization_bart.py", line 220, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
``` | 03-24-2022 14:57:45 | 03-24-2022 14:57:45 | Hey @skpig, I think the model `fnlp/bart-base-chinese` is broken sadly. We would need to ping the author here to notify her/him. Soon this will be possible on the Hub :heart: cc @julien-c <|||||>Thanks for your reply, @patrickvonplaten. |
transformers | 16,389 | closed | Added type hints | # What does this PR do?
Added type hints as requested in #16059.
@Rocketknight1 | 03-24-2022 14:46:17 | 03-24-2022 14:46:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,388 | closed | Confusing interaction between training dataloaders and datasets in Trainer | Trainer documentation suggests to override `get_train_dataloader` for custom behaviour.
However, the `train` method does not respect this.
It directly goes back to `train_dataset`
```
train_dataset_is_sized = has_length(self.train_dataset)
```
And then it gets `num_examples` from `train_dataloader`, rather than the length it just checked.
```
# this accesses train_dataloader.dataset, which could very well not exist
num_examples = (
self.num_examples(train_dataloader) if train_dataset_is_sized else total_train_batch_size * args.max_steps
)
```
I suggest changing it to something like
```
try:
num_examples = self.num_examples(train_dataloader)
# other code for train_dataset_is_sized being true before
except TypeError:  # no length (len() on an unsized dataloader raises TypeError)
# other code for train_dataset_is_sized being false before
num_examples = total_train_batch_size * args.max_steps
```
This way one can override `num_examples` to handle more exotic dataloaders, even in the absence of a train_dataset property.
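For illustration, this is the kind of override I have in mind (a sketch; the dataset here stands in for any iterable without a usable `__len__`, so `args.max_steps` would need to be set):

```python
from torch.utils.data import DataLoader
from transformers import Trainer


class StreamingTrainer(Trainer):
    def get_train_dataloader(self) -> DataLoader:
        # self.train_dataset is assumed to be an IterableDataset with no __len__
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.per_device_train_batch_size,
            collate_fn=self.data_collator,
        )
```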
Happy to create a PR, just wanted to check if there is a good reason for the current setup.
| 03-24-2022 14:37:23 | 03-24-2022 14:37:23 | I'm not sure this is the only thing that would require some changes, but I'm happy to review a PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,387 | closed | Bump cookiecutter version | # What does this PR do?
Fix
```
ImportError: cannot import name 'json' from 'itsdangerous'
```
due to the flask/itsdangerous versions issue, when we run `pytest`. | 03-24-2022 14:36:14 | 03-24-2022 14:36:14 | |
transformers | 16,386 | closed | Replaced usage of input_processing with unpack_input decorator | # What does this PR do?
This PR changes the unpacking of the inputs of TF `lxmert` model to use the `unpack_inputs` decorator instead.
See https://github.com/huggingface/transformers/issues/16051
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante @Rocketknight1
| 03-24-2022 14:14:48 | 03-24-2022 14:14:48 | Output after running tests
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_attention_outputs PASSED [ 2%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_compile_tf_model PASSED [ 5%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_config PASSED [ 8%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_determinism <- tests\test_modeling_tf_common.py PASSED [ 11%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_forward_signature <- tests\test_modeling_tf_common.py PASSED [ 14%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_generate_with_headmasking <- tests\test_modeling_tf_common.py PASSED [ 17%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_headmasking <- tests\test_modeling_tf_common.py PASSED [ 20%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_hidden_states_output PASSED [ 23%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_initialization <- tests\test_modeling_tf_common.py PASSED [ 26%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_inputs_embeds <- tests\test_modeling_tf_common.py PASSED [ 29%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keras_save_load <- tests\test_modeling_tf_common.py PASSED [ 32%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keyword_and_dict_args <- tests\test_modeling_tf_common.py PASSED [ 35%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests\test_modeling_tf_common.py PASSED [ 38%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests\test_modeling_tf_common.py PASSED [ 41%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_random_beam_search_generate <- tests\test_modeling_tf_common.py PASSED [ 44%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lm_head_model_random_no_beam_search_generate <- tests\test_modeling_tf_common.py PASSED [ 47%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_load_with_mismatched_shapes <- tests\test_modeling_tf_common.py PASSED [ 50%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_loss_computation <- tests\test_modeling_tf_common.py PASSED [ 52%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lxmert_for_pretraining PASSED [ 55%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_lxmert_model PASSED [ 58%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_common_attributes PASSED [ 61%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_from_pretrained SKIPPED (test is slow) [ 64%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_main_input_name <- tests\test_modeling_tf_common.py PASSED [ 67%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence <- tests\test_modeling_tf_common.py PASSED [ 70%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_numpy_arrays_inputs <- tests\test_modeling_tf_common.py PASSED [ 73%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_onnx_compliancy <- tests\test_modeling_tf_common.py PASSED [ 76%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_onnx_runtime_optimize <- tests\test_modeling_tf_common.py SKIPPED (test is slow) [ 79%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_pt_tf_model_equivalence PASSED [ 82%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_resize_token_embeddings <- tests\test_modeling_tf_common.py PASSED [ 85%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_save_load PASSED [ 88%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_save_load_config <- tests\test_modeling_tf_common.py PASSED [ 91%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation PASSED [ 94%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation_extended SKIPPED (test is slow) [ 97%]
tests/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelIntegrationTest::test_inference_masked_lm SKIPPED (test is slow) [100%] |
transformers | 16,385 | closed | How to build a custom question-answering head? | Using the ```TFBertForQuestionAnswering.from_pretrained()``` function, we get a predefined head on top of BERT together with a loss function that are suitable for this task.
My question is how to create a custom head without relying on ```TFAutoModelForQuestionAnswering.from_pretrained()```.
I want to do this because there is no place where the architecture of the head is explained clearly. By reading the code [here][1] we can see the architecture they are using, but I can't be sure I understand their code 100%.
Starting from https://stackoverflow.com/questions/69025750/how-to-fine-tune-huggingface-bert-model-for-text-classification is good. However, it covers only the classification task, which is much simpler.
`'start_positions'` and `'end_positions'` are created following [this][2] tutorial.
So far, I've got the following:
```
train_dataset
# Dataset({
# features: ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'],
# num_rows: 99205
# })
train_dataset.set_format(type='tensorflow', columns=['input_ids', 'token_type_ids', 'attention_mask'])
features = {x: train_dataset[x] for x in ['input_ids', 'token_type_ids', 'attention_mask']}
labels = [train_dataset[x] for x in ['start_positions', 'end_positions']]
labels = np.array(labels).T
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(16)
input_ids = tf.keras.layers.Input(shape=(256,), dtype=tf.int32, name='input_ids')
token_type_ids = tf.keras.layers.Input(shape=(256,), dtype=tf.int32, name='token_type_ids')
attention_mask = tf.keras.layers.Input((256,), dtype=tf.int32, name='attention_mask')
bert = TFAutoModel.from_pretrained("bert-base-multilingual-cased")
output = bert([input_ids, token_type_ids, attention_mask]).last_hidden_state
output = tf.keras.layers.Dense(2, name="qa_outputs")(output)
model = tf.keras.models.Model(inputs=[input_ids, token_type_ids, attention_mask], outputs=output)
num_train_epochs = 3
num_train_steps = len(tfdataset) * num_train_epochs
optimizer, schedule = create_optimizer(
init_lr=2e-5,
num_warmup_steps=0,
num_train_steps=num_train_steps,
weight_decay_rate=0.01
)
def qa_loss(labels, logits):
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)
start_loss = loss_fn(labels[0], logits[0])
end_loss = loss_fn(labels[1], logits[1])
return (start_loss + end_loss) / 2.0
model.compile(
loss=loss_fn,
optimizer=optimizer
)
model.fit(tfdataset, epochs=num_train_epochs)
```
And I am getting the following error:
```
ValueError: `labels.shape` must equal `logits.shape` except for the last dimension. Received: labels.shape=(2,) and logits.shape=(256, 2)
```
It is complaining about the shape of the labels. This should not happen since I am using ```SparseCategoricalCrossentropy``` loss.
[1]: https://github.com/huggingface/transformers/blob/198c335d219a5eb4d3f124fdd1ce1a9cd9f78a9b/src/transformers/models/bert/modeling_tf_bert.py#L2065
[2]: https://huggingface.co/course/chapter7/7?fw=pt | 03-24-2022 13:50:19 | 03-24-2022 13:50:19 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>Got it! |
transformers | 16,384 | closed | variable naming for Distilbert model | # What does this PR do?
As part of #16051, this PR changes the unpacking of the inputs of the TF DistilBERT model to use the decorator instead.
See #16051
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 @gante
| 03-24-2022 13:39:40 | 03-24-2022 13:39:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Awesome 🚀 Can you confirm that the tests have passed locally? :)<|||||>Nvm, reread your comment! Merging 👍 |
transformers | 16,383 | closed | AST: Audio Spectrogram Transformer | # 🌟 New model addition
## Model description
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
Index Terms: audio classification, self-attention
## Open source status
* [x] the model implementation is available: https://github.com/YuanGongND/ast
* [x] the model weights are available: https://github.com/YuanGongND/ast#Pretrained-Models
* [x] who are the authors: @YuanGongND
Happy to supervise anyone interested in porting the model :-) | 03-24-2022 13:26:39 | 03-24-2022 13:26:39 | Hi Patrick,
Thanks so much for bringing this up. I'd love to add AST to Hugging Face.
I am checking https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model to see how to do that; is that the right tutorial to read?
I also have a quick question: the AST model is built on the `timm` package, is such a dependency OK?
Thanks!
-Yuan<|||||>Oh wow super cool to get such a quick answer from you :-) It would be amazing if you could give it a try with the https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model docs!
If possible, it'd be great to avoid a `timm` dependency and instead more or less copy paste the relevant code of `timm` into `transformers`. @NielsRogge mentioned that the model is more or less a ViT, so maybe we can get some inspiration from the existing ViT model: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.py . It's also totally fine to add the `timm` dependency in a first step and we'll help you adjust the PR afterwards :-) <|||||>cc @sgugger @LysandreJik @anton-l for visibility <|||||>Sure. Thanks for the suggestion. I will have a try and let you know if I have a question.
<|||||>@YuanGongND if I may pitch in -- without `timm` TF folks like me can easily port your model to TF 🧡 <|||||>@YuanGongND Did it work ?<|||||>Hi, I just need to find some time to do that, will be soon!<|||||>@YuanGongND did you manage to get this going or do you need some help? I'd be interested in contributing if possible! 😄 <|||||>Hi @thefirebanks, I apologize that I won't have time to do it in near future. It would be great if you could contribute! FYI, we have a repo at [here](https://github.com/YuanGongND/ast) (implemented in PyTorch and depends on `timm`) and a colab demo at [here](https://colab.research.google.com/github/YuanGongND/ast/blob/master/colab/AST_Inference_Demo.ipynb). Please let me know if you have any question!
Best,
Yuan<|||||>Hi,
I actually have a working implementation which I need to finish. Will open a PR soon, hopefully <|||||>Wow @NielsRogge, that's great!<|||||>Gotcha Niels, thanks! |
transformers | 16,382 | closed | Cache the files in get_fast_tokenizer_file() | # 🚀 Feature request
`transformers.BertTokenizer.from_pretrained()` calls `get_fast_tokenizer_file()` which downloads a file from the HuggingFace server but never adds it to the cache.
Would be useful to cache that file (in the same folder as the models) to make CI runs more robust.
## Motivation
Try to avoid having issues such as the one below in our CI runs:
```
E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/models/mrm8488/bert-small-finetuned-squadv2
```
This is a duplicate of https://github.com/huggingface/transformers/issues/14862 which was closed without getting fixed and I don't seem able to re-open it. | 03-24-2022 09:47:53 | 03-24-2022 09:47:53 | Hello, thanks for opening an issue! This should be fixed by https://github.com/huggingface/transformers/pull/16362 which should drop in the next release.<|||||>Closing for now, feel free to reopen if it doesn't solve your issue. |
transformers | 16,381 | closed | run_mlm_wwm.py if set(load_result.missing_keys) == set(self.model._keys_to_ignore_on_save): TypeError: 'NoneType' object is not iterable | ## Environment info
- `transformers` version: 4.7.0
- Platform: linux
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2 (cu111)
- Tensorflow version (GPU?): no
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
- roberta@LysandreJik
Library:
- Trainer: @sgugger
Model hub:
- https://huggingface.co/hfl/chinese-roberta-wwm-ext
Examples:
- research_projects/[mlm_wwm](https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm)/run_mlm_wwm.py: @julien-c
## Information
Model I am using roberta:
The problem arises when using:
* [x] the official example scripts: (give details below) https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) chinse-demo.txt as description in here :https://github.com/huggingface/transformers/blob/main/examples/research_projects/mlm_wwm/run_chinese_ref.py#L137
* [ ] my own task or dataset: (give details below)
this script has been right:
```sh
export TRAIN_FILE=/path/to/train/file
export LTP_RESOURCE=/path/to/ltp/tokenizer
export BERT_RESOURCE=/path/to/bert/tokenizer
export SAVE_PATH=/path/to/data/ref.txt
python run_chinese_ref.py \
--file_name=$TRAIN_FILE \
--ltp=$LTP_RESOURCE \
--bert=$BERT_RESOURCE \
--save_path=$SAVE_PATH
```
but got error:
```sh
export TRAIN_FILE=/path/to/train/file
export VALIDATION_FILE=/path/to/validation/file
export TRAIN_REF_FILE=/path/to/train/chinese_ref/file
export VALIDATION_REF_FILE=/path/to/validation/chinese_ref/file
export OUTPUT_DIR=/tmp/test-mlm-wwm
python run_mlm_wwm.py \
--model_name_or_path roberta-base \
--train_file $TRAIN_FILE \
--validation_file $VALIDATION_FILE \
--train_ref_file $TRAIN_REF_FILE \
--validation_ref_file $VALIDATION_REF_FILE \
--do_train \
--do_eval \
--output_dir $OUTPUT_DIR
```
error info is :
```log
[INFO|trainer.py:1047] 2022-03-24 17:08:46,271 >> Loading model from chinese-roberta-wwm-ext).
Traceback (most recent call last):
File "/home/20031211375/pretrain/run_language_modeling.py", line 364, in <module>
main()
File "/home/20031211375/pretrain/run_language_modeling.py", line 328, in main
trainer.train(model_path=model_path)
File "/home/20031211375/.conda/envs/search/lib/python3.9/site-packages/transformers/trainer.py", line 1066, in train
self._load_state_dict_in_model(state_dict)
File "/home/20031211375/.conda/envs/search/lib/python3.9/site-packages/transformers/trainer.py", line 1387, in _load_state_dict_in_model
if set(load_result.missing_keys) == set(self.model._keys_to_ignore_on_save):
TypeError: 'NoneType' object is not iterable
```
## Expected behavior
how to fix it up?
 | 03-24-2022 09:39:53 | 03-24-2022 09:39:53 | I don't think this script works for any model other than BERT, as it relies on the assumption that subword tokens have the prefix ##.<|||||>transformers=4.5.0<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,380 | closed | Why does a larger max_length in the Dataset affect the performance of T5? | ## Environment info
- `transformers` version: 4.12.5
- Platform: Linux-4.15.0-172-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten @Narsil
## Information
I use t5-base for text generation tasks. When I ran experiments on the benchmark, I found that an overly large max_length (the max length in the original dataset is about 128, and I set max_length to 150 or 200) affects the text generation performance a lot. In theory, an overly large max length should not affect the loss computation thanks to padding and masking, so I'm curious why it affects performance.
The code is available at https://github.com/IsakZhang/ABSA-QUAD
## Expected behavior
This issue may be trivial, but it is critical for my results. Although I have been debugging for several days, I still haven't found the problem and look forward to your replies.
Hi, `max_length` affects how many tokens are generated by the model, and even if padding an so on shouldn't apply (I haven't even looked at but it shouldn't).
However, compute is `O(n²)` in `seq_length`, so if your generation is generating much larger sequences, then it's normal that it takes much longer.
In order to confirm, could you provide a *short* example please that we could try on ? Thanks the code you shared, but it's too long to properly look over to run and identify issues.
Could you maybe also print somewhere your sequences and their lengths to confirm my hypothesis ?
Cheers.<|||||>@Narsil @patrickvonplaten
Thanks for your quick reply. I don't think you get my point. See the demo below. When specified different `max_input_len` of `MyDataset` (all of them are larger than the length of input sentences), the loss should be the same. But, it wasn't.
You can compare the result by uncommenting `L86`.
```py
from pytorch_lightning import seed_everything
from transformers import AdamW, T5ForConditionalGeneration, T5Tokenizer, AutoConfig
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
model = T5ForConditionalGeneration.from_pretrained('t5-base')
model = model.cuda()
model.train()
tokenizer = T5Tokenizer.from_pretrained('t5-base')
seed_everything(42)
class MyDataset(Dataset):
def __init__(self, tokenizer, raw_inputs, raw_targets, max_input_len=128, max_output_len=128):
self.max_input_len = max_input_len
self.max_output_len = max_output_len
self.tokenizer = tokenizer
self.inputs = []
self.targets = []
self._build_examples(raw_inputs, raw_targets)
def __len__(self):
return len(self.inputs)
def __getitem__(self, index):
source_ids = self.inputs[index]["input_ids"].squeeze()
target_ids = self.targets[index]["input_ids"].squeeze()
src_mask = self.inputs[index]["attention_mask"].squeeze() # might need to squeeze
target_mask = self.targets[index]["attention_mask"].squeeze() # might need to squeeze
return {"source_ids": source_ids, "source_mask": src_mask,
"target_ids": target_ids, "target_mask": target_mask}
def _build_examples(self, raw_inputs, raw_targets):
for i in range(len(raw_inputs)):
# change input and target to two strings
input = ' '.join(raw_inputs[i])
target = raw_targets[i]
tokenized_input = self.tokenizer.batch_encode_plus(
[input], max_length=self.max_input_len, padding="max_length",
truncation=True, return_tensors="pt"
)
tokenized_target = self.tokenizer.batch_encode_plus(
[target], max_length=self.max_output_len, padding="max_length",
truncation=True, return_tensors="pt"
)
self.inputs.append(tokenized_input)
self.targets.append(tokenized_target)
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": 0,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=3e-4, eps=1e-8)
# toy dataset
raw_inputs = ["can't wait wait for my next visit.",
"their sake list was extensive, but we were looking for purple haze, which wasn't listed but made for us upon request!",
"the spicy tuna roll was unusually good and the rock shrimp tempura was awesome, great appetizer to share!",
"we love th pink pony."
]
raw_targets = ['restaurant general is great because it is NULL',
"drinks style options is great because sake list is extensive [SSEP] service general is great because it is NULL",
"food quality is great because spicy tuna roll is good [SSEP] food quality is great because rock shrimp tempura is awesome",
"restaurant general is great because pink pony is love"
]
# max_input_len is max length of input text.
train_dataset = MyDataset(tokenizer, raw_inputs, raw_targets, max_input_len=180, max_output_len=128)
# train_dataset = MyDataset(tokenizer, raw_inputs, raw_targets, max_input_len=200, max_output_len=128)
dataloader = DataLoader(train_dataset, batch_size=4,
drop_last=True, shuffle=True, num_workers=4)
for idx, batch in enumerate(dataloader):
batch = {k:v.cuda() for k,v in batch.items()}
lm_labels = batch["target_ids"]
lm_labels[lm_labels[:, :] == tokenizer.pad_token_id] = -100
optimizer.zero_grad()
outputs = model(
input_ids=batch["source_ids"],
attention_mask=batch["source_mask"],
labels=lm_labels,
# decoder_attention_mask=batch['target_mask']
)
print(f'{idx}: {outputs[0]}')
if idx>1:
break
# loss = outputs[0]
# loss.backward()
# optimizer.step()
# print(f'{idx}:{loss}')
```
<|||||>> I don't think you got my point
I didn't, sorry; I work mostly on inference, so I assumed "performance" meant speed, not model performance (loss).
Thanks very much for the simpler script !
Unfortunately I am not really versed in that part of the library.
Thanks to your script I could test a little, and it seems the loss changes even without modifying the script when I remove the seeding; couldn't it be that the random dropouts differ if you change the `max_length`?
I tried moving the model to `.eval()` but the difference is still there.
Edit: I looked at the actual `input_ids`, and there are some non-padded values after position `180` for the last input in the batch. If you use `batch_size=3` and `.eval()` then I get the same loss (within float approximation).
Can you confirm this works for you ?<|||||>Thanks for your reply. I am so sorry that I made a mistake in `_build_examples`.
Now, I create a **new** [gist](https://gist.github.com/SinclairCoder/3eee9d1cd78e81745de515ec594e6e2c) to test this problem.
I found that with `T5`, different `max_input_len` values give the same loss. However, with `BART` under the same conditions, the loss differs when different `max_input_len` values are specified. I think they should be the same as well.
I'm totally confused. Looking forward to your reply.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>
@patrickvonplaten @Narsil Looking forward to your reply.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @SinclairCoder,
We sadly don't have the time to analyse custom code. We try to keep Transformers issues for bugs and questions related only to existing features. Could you try to use the forum instead: https://discuss.huggingface.co/ ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,379 | closed | when I using pytorch to load "distilbert-base-uncased", it's get an error:"HTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt". But load other models can't get this error. | ## Environment info
| 03-24-2022 02:30:54 | 03-24-2022 02:30:54 | To see #16351 <|||||>> To see #16351
thank you<|||||>Closing this for now 🙏 |
transformers | 16,378 | closed | Can't seem to run GPT-J in CPU mode: "LayerNormKernelImpl" not implemented for 'Half' | ## Environment info
- `transformers` version: 4.15.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): 2.5.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @narsil
## Information
Model I am using (KoboldAI/GPT-J-6B-Adventure):
The problem arises when using:
* A simple script that calls model.generate() after loading it via `GPTJForCausalLM.from_pretrained` and `input_ids = tokenizer(prompt, return_tensors="pt").input_ids` **without** using anything cuda-related.
The tasks I am working on is:
* run GPT-J in CPU mode for calibration purposes for the game I am making called AI Roguelite (I am willing to wait a long time as this is a calibration preprocessing task rather than a real-time task).
## To reproduce
Steps to reproduce the behavior:
1. Call generate.py for gpt-j in cpu-only mode
2. Observe the error was `"LayerNormKernelImpl" not implemented for 'Half'`
## Expected behavior
Runs it without that error | 03-23-2022 23:34:48 | 03-23-2022 23:34:48 | Hey @monsieurpooh , this is because the model was saved in `fp16` as you can see here https://huggingface.co/KoboldAI/GPT-J-6B-Adventure/blob/main/config.json#L34
You can pass the `torch_dtype` argument to `from_pretrained`, to convert it to fp32 for CPU.
```python
model = GPTJForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Adventure", torch_dtype=torch.float32)
```<|||||>Thanks for the quick response; however, I tried your suggestion and it did not work and I got the same error.
Here's the minimal repro code:
```
import os
import re
import random
from transformers import GPTNeoForCausalLM, GPTJForCausalLM, GPT2Tokenizer
import torch
from pynvml import *
import json
import sys
model = GPTJForCausalLM.from_pretrained("..\\gpt-neo-master\\saved_models_dir\\KoboldAI_GPT-J-6B-Adventure", low_cpu_mem_usage=True, torch_dtype=torch.float32)
tokenizer = GPT2Tokenizer.from_pretrained("..\\gpt-neo-master\\saved_models_dir\\KoboldAI_GPT-J-6B-Adventure", low_cpu_mem_usage=True, torch_dtype=torch.float32)
input_ids = tokenizer("test prompt", return_tensors="pt").input_ids
generated_outputs = model.generate(input_ids)
```
The output was:
```
C:\Max\gpt_calibration>python gpt-j-bug.py
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Traceback (most recent call last):
File "gpt-j-bug.py", line 16, in <module>
generated_outputs = model.generate(input_ids)
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\generation_utils.py", line 1109, in generate
return self.greedy_search(
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\generation_utils.py", line 1406, in greedy_search
outputs = self(
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\gptj\modeling_gptj.py", line 786, in forward
transformer_outputs = self.transformer(
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\gptj\modeling_gptj.py", line 640, in forward
outputs = block(
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\gptj\modeling_gptj.py", line 279, in forward
hidden_states = self.ln_1(hidden_states)
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\normalization.py", line 189, in forward
return F.layer_norm(
File "C:\Users\jerkm\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\functional.py", line 2347, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
```<|||||>Nevermind. I removed "low_cpu_mem_usage" arg. It seems to be working now. Thanks again. <|||||>@patil-suraj I have same problem for GPT-Neox model. Any quick treatment |
transformers | 16,377 | closed | Add type hints for ConvBert model | # What does this PR do?
Adding type hints for ConvBert’s PyTorch and TensorFlow flavored models as requested in #16059.
@Rocketknight1 | 03-23-2022 23:09:03 | 03-23-2022 23:09:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,376 | closed | Type hints and decorator for TF T5 | @Rocketknight1 @gante | 03-23-2022 23:05:53 | 03-23-2022 23:05:53 | Thanks for this, the PR looks perfect to me! I see XLA generation tests failing, though - @gante could this be another situation like GPT-2 with some weird input names that are exposed by the decorator?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@gante I think some of those were changes to main that came in during the PR, including the missing XLA function that caused those test failures! Let me fix it up real quick.<|||||>@Dahlbomii Tests are green so we're going to merge now. Thanks for your help with this, and thanks for your patience as we both edited the file you were working on and renamed `master` to `main` midway through your PR! |
transformers | 16,375 | closed | `cached_download ∘ hf_hub_url` is `hf_hub_download` | For a few versions already, `huggingface_hub` has added the `hf_hub_download` method which encompasses both `cached_download` and `hf_hub_url`. | 03-23-2022 21:30:20 | 03-23-2022 21:30:20 | _The documentation is not available anymore as the PR was closed or merged._ |
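For reference, the replacement this PR relies on presumably looks like this (a sketch; see the `huggingface_hub` documentation for the exact signatures):
```python
from huggingface_hub import cached_download, hf_hub_download, hf_hub_url

# old two-step pattern
url = hf_hub_url(repo_id="bert-base-uncased", filename="config.json")
path = cached_download(url)

# equivalent single call
path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
```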
transformers | 16,374 | closed | TF generate refactor - Beam Search | # What does this PR do?
As discussed in the original TF generate refactor plan (https://github.com/huggingface/transformers/pull/15562), adds `beam_search`.
This Beam Search implementation was inspired by our FLAX implementation, which is XLA-friendly. However, this PR is not yet XLA-ready (😭). To pass existing tests, a few tweaks were added on top of the FLAX adaptation -- I added some comments in the PR to explain the differences (and why they were needed), hopefully making the review process easier.
Tests ran (and passing):
- [x] GPT-2
- [x] T5
- [x] BART
- [x] Vision Encoder Decoder
- [x] Encoder Decoder
- [x] Speech to Text
- [x] RAG | 03-23-2022 21:29:06 | 03-23-2022 21:29:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>
|
transformers | 16,373 | closed | Is there a way to filter models by task and group the results? | # 🚀 Feature request
Is there a way to filter groups of models by task? Is it possible to add these tags/filters to https://huggingface.co/models
E.g. if I use the `translation` tag, multiple `Helsinki-NLP` models appear, and I had to do the following programmatically to find out all the possible translation directions:
```python
for s in list(all_possible_lang_code):
for t in list(all_possible_lang_code):
try:
model_name = f"Helsinki-NLP/opus-mt-{s}-{t}"
mtokenizer = MarianTokenizer.from_pretrained(model_name)
print(s, t, mtokenizer.supported_language_codes)
supported.append((s,t))
except:
continue
```
## Motivation
When I was trying to get all the available models for machine translation, I was going through each encoder-decoder model on the list and trying to run their decoders to see whether they can do translation out of the box.
## Your contribution
Currently, I have a list that looks like this for `Helsinki-NLP` models, not sure if it's complete though.
```
{'af': {'sv', 'fi', 'ru', 'nl', 'de', 'es', 'fr', 'en'},
'am': {'sv'},
'ar': {'he', 'ru', 'de', 'it', 'el', 'pl', 'tr', 'es', 'fr', 'en'},
'az': {'es', 'tr', 'en'},
'be': {'es'},
'bg': {'uk', 'sv', 'fi', 'ru', 'de', 'it', 'tr', 'es', 'fr', 'en'},
'bn': {'en'},
'ca': {'uk', 'nl', 'pt', 'it', 'de', 'es', 'fr', 'en'},
'ceb': {'sv', 'fi', 'es', 'fr', 'en'},
'cs': {'uk', 'sv', 'fi', 'de', 'fr', 'en'},
'cy': {'en'},
'da': {'fi', 'ru', 'de', 'es', 'no', 'fr', 'en'},
'de': {'af', 'ar', 'bg', 'ca', 'cs', 'da', 'de', 'el', 'en', 'es',
'et', 'fi', 'fr', 'ha', 'he', 'hr', 'ht', 'hu', 'ig', 'ilo',
'is', 'it', 'ln', 'lt', 'ms', 'nl', 'no', 'pl', 'tl', 'uk',
'vi'},
'el': {'sv', 'fi', 'fr', 'ar'},
'en': {'af', 'ar', 'az', 'bg', 'ca', 'ceb', 'cs', 'cy', 'da', 'de',
'el', 'es', 'et', 'fi', 'fr', 'ga', 'gl', 'ha', 'he', 'hi',
'ht', 'hu', 'hy', 'id', 'ig', 'ilo', 'is', 'it', 'lg', 'ln',
'mg', 'mk', 'ml', 'mr', 'nl', 'ro', 'ru', 'sk', 'sq', 'ss',
'sv', 'sw', 'tl', 'tn', 'uk', 'ur', 'vi', 'xh', 'zh'},
'es': {'af', 'ar', 'bg', 'ca', 'ceb', 'cs', 'da', 'de', 'el', 'en',
'es', 'et', 'fi', 'fr', 'gl', 'ha', 'he', 'hr', 'ht', 'id',
'ig', 'ilo', 'is', 'it', 'ln', 'lt', 'mk', 'nl', 'no', 'pl',
'ro', 'ru', 'sl', 'tl', 'tn', 'uk', 'vi', 'xh', 'yo'},
'et': {'sv', 'fi', 'ru', 'de', 'es', 'fr', 'en'},
'fi': {'af', 'bg', 'ceb', 'cs', 'de', 'el', 'en', 'es', 'et', 'fi',
'fr', 'ha', 'he', 'hr', 'ht', 'hu', 'id', 'ig', 'ilo', 'is',
'it', 'lg', 'ln', 'lv', 'mg', 'mk', 'nl', 'no', 'ro', 'ru',
'sk', 'sl', 'sq', 'sv', 'sw', 'tn', 'tr', 'uk', 'xh',
'yo'},
'fr': {'af', 'ar', 'bg', 'ca', 'ceb', 'de', 'el', 'en', 'es', 'ha',
'he', 'hr', 'ht', 'hu', 'id', 'ig', 'ilo', 'lg', 'ln', 'ms',
'no', 'pl', 'ro', 'ru', 'sk', 'sl', 'sv', 'tl', 'tn', 'uk',
'vi', 'xh', 'yo'},
'ga': {'en'},
'gl': {'es', 'pt', 'en'},
'ha': {'sv', 'fi', 'es', 'fr', 'en'},
'he': {'uk', 'sv', 'fi', 'ru', 'it', 'de', 'ar', 'es'},
'hi': {'ur', 'en'},
'hr': {'fi', 'sv', 'es', 'fr'},
'ht': {'sv', 'fi', 'es', 'fr', 'en'},
'hu': {'uk', 'sv', 'fi', 'de', 'fr', 'en'},
'hy': {'ru', 'en'},
'id': {'sv', 'fi', 'es', 'fr', 'en'},
'ig': {'sv', 'fi', 'de', 'es', 'fr', 'en'},
'ilo': {'sv', 'fi', 'de', 'es', 'en'},
'is': {'sv', 'fi', 'de', 'it', 'es', 'fr', 'en'},
'it': {'ar', 'bg', 'ca', 'de', 'en', 'es', 'fr', 'is', 'lt', 'ms',
'sv', 'uk', 'vi'},
'ja': {'ar', 'bg', 'da', 'de', 'en', 'es', 'fi', 'fr', 'he', 'hu',
'it', 'ms', 'nl', 'pl', 'pt', 'ru', 'sv', 'tr', 'vi'},
'ka': {'ru', 'en'},
'ko': {'sv', 'fi', 'ru', 'de', 'hu', 'es', 'fr', 'en'},
'lg': {'sv', 'fi', 'es', 'fr', 'en'},
'ln': {'fr', 'es', 'de', 'en'},
'lt': {'sv', 'ru', 'it', 'de', 'pl', 'tr', 'es', 'fr'},
'lv': {'sv', 'fi', 'ru', 'es', 'fr', 'en'},
'mg': {'es', 'en'},
'mk': {'fi', 'es', 'fr', 'en'},
'ml': {'en'},
'mr': {'en'},
'ms': {'fr', 'it', 'ms', 'de'},
'nl': {'uk', 'sv', 'fi', 'af', 'es', 'no', 'ca', 'fr', 'en'},
'no': {'da', 'de', 'es', 'fi', 'fr', 'nl', 'no', 'pl', 'ru', 'sv',
'uk'},
'pa': {'en'},
'pl': {'uk', 'sv', 'de', 'ar', 'es', 'no', 'lt', 'fr', 'en'},
'pt': {'ca', 'uk', 'tl', 'gl'},
'ro': {'sv', 'fi', 'fr'},
'ru': {'af', 'ar', 'bg', 'da', 'en', 'es', 'et', 'fi', 'fr', 'he',
'hy', 'lt', 'lv', 'no', 'sl', 'sv', 'uk', 'vi'},
'sk': {'sv', 'fi', 'es', 'fr', 'en'},
'sl': {'uk', 'sv', 'fi', 'ru', 'es', 'fr'},
'sq': {'sv', 'es', 'en'},
'ss': {'en'},
'sv': {'af', 'bg', 'ceb', 'cs', 'el', 'en', 'es', 'et', 'fi', 'fr',
'ha', 'he', 'hr', 'ht', 'hu', 'id', 'ig', 'ilo', 'is', 'lg',
'ln', 'lv', 'nl', 'no', 'ro', 'ru', 'sk', 'sl', 'sq', 'sv',
'th', 'tn', 'uk', 'xh', 'yo'},
'th': {'fr', 'en'},
'tl': {'pt', 'es', 'de', 'en'},
'tn': {'sv', 'es', 'fr', 'en'},
'tr': {'uk', 'az', 'sv', 'ar', 'es', 'lt', 'fr', 'en'},
'uk': {'bg', 'ca', 'cs', 'de', 'en', 'es', 'fi', 'fr', 'he', 'hu',
'it', 'nl', 'no', 'pl', 'pt', 'ru', 'sl', 'sv', 'tr'},
'ur': {'en'},
'vi': {'ru', 'it', 'de', 'es', 'fr', 'en'},
'xh': {'sv', 'es', 'fr', 'en'},
'yo': {'sv', 'fi', 'es', 'fr', 'en'},
'zh': {'bg', 'de', 'en', 'fi', 'he', 'it', 'ms', 'nl', 'sv', 'uk',
'vi'}}
``` | 03-23-2022 21:16:01 | 03-23-2022 21:16:01 | cc @muellerzr :)<|||||>Hi @alvations! You can do this via the `huggingface_hub` actually, as it's designed to talk with huggingface.co/models, datasets, etc.
Here's a slightly faster way to do what you want:
```python
from huggingface_hub import HfApi, ModelFilter
filt = ModelFilter(author="Helsinki-NLP", task="translation")
api = HfApi()
models = api.list_models(filter=filt)
# And now we make a list of your supported:
supported = []
for model in models:
_, _, _, s, t = model.modelId.split('-')
supported.append((s,t))
```
Which should be _much_ faster since we don't have to get the weights!
If we want to search for exact languages, we can pass them in as a tag to the `ModelFilter` as well, so for example all `zh` models would look like so:
```python
filt = ModelFilter(author="Helsinki-NLP", task="translation", tags="zh")
models = api.list_models(filter=filt)
```
Now we sadly can't specify the from or to, but this should largely speed up your searching 😄
You can then grab the name of whatever model it is by doing `models[idx].modelId` and pass that into your `.from_pretrained`
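And if you want the same source-to-target mapping as the dict above, a small sketch on top of that snippet could look like this (it assumes every returned model ID follows the `Helsinki-NLP/opus-mt-{src}-{tgt}` pattern, which a few multilingual checkpoints don't, hence the length check):
```python
from collections import defaultdict

pairs = defaultdict(set)
for model in models:
    parts = model.modelId.split("-")
    if len(parts) != 5:
        continue  # skip IDs that don't match the opus-mt-{src}-{tgt} pattern
    _, _, _, src, tgt = parts
    pairs[src].add(tgt)
```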
@LysandreJik let me know if I happened to miss anything you can think of!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,372 | closed | Infer padding in GPT2ForSequenceClassification when inputs_embeds and attention_mask are given | # 🚀 Feature request
Infer padding in `GPT2ForSequenceClassification` when `inputs_embeds` and `attention_mask` are given
## Motivation
When `config.pad_token_id is not None` and `inputs_embeds` are passed instead of `input_ids`, the forward method of `GPT2ForSequenceClassification` raises a warning telling that _"GPT2ForSequenceClassification will not detect padding tokens in `inputs_embeds`. Results may be unexpected if using padding tokens in conjunction with `inputs_embeds.`"_.
This happens because you have nowhere to look for the padding tokens start index when `input_ids` is not given. As a consequence, the last position (of the tensor) is taken as the classification token.
In practice, one may need to pass a padded batch of `inputs_embeds` with different sequence lengths, and I think there are options to solve or work around this.
## Your contribution
One alternative that comes to my mind is to infer the sequence lengths from the attention mask (only when we don't have `input_ids`)
For instance:
a) Optimistic approach: the sequence length is given by the position of the last 1 in the corresponding row of the attention mask
or
b) Not so optimistic approach: the sequence length is given by the position of the last 1 in the attention mask only if all the previous positions are 1's and only if this is the case for every row of the attention mask; in any other case, `sequence_lengths = -1` and the warning would be raised.
Examples for approach b):
```
attention_mask = [
[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0]
]
```
=> We infer sequence lengths = [5, 3] (in the code I think it should be [4, 2] actually)
```
attention_mask = [
[0, 1, 1, 1, 1],
[1, 1, 1, 0, 0]
]
```
=> We don't infer anything, sequence lengths = [-1, -1] (in the code I think it's just `sequence_lengths = -1` as broadcasting happens)
For reference, the modification would be done here: https://github.com/huggingface/transformers/blob/8a69e023bf81160e64848b256441e64a4cb47992/src/transformers/models/gpt2/modeling_gpt2.py#L1406-L1411
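A minimal sketch of approach (a), assuming `attention_mask` has shape `(batch_size, seq_len)` (this is illustrative, not the exact code in `modeling_gpt2.py`):
```python
import torch

if input_ids is None and attention_mask is not None:
    # Approach (a): take the position of the last 1 in each row of the
    # attention mask as the classification token position.
    positions = torch.arange(attention_mask.shape[-1], device=attention_mask.device)
    sequence_lengths = (attention_mask.long() * positions).argmax(dim=-1)
else:
    sequence_lengths = -1
```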
Does this seem reasonable, or is it too much of an assumption?
A safer alternative is to add a new parameter that is only used when `inputs_embeds` are passed, but I'm not sure that is justified just for this specific case.
I can submit a PR that implements this or a different solution.
Thanks in advance. | 03-23-2022 20:03:56 | 03-23-2022 20:03:56 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,371 | closed | update smddp api to v1.4.0 | 1. Update SMDDP APIs to v1.4.0
2. Use vanilla PyTorch dist module and DDP class
3. replace `dist.get_local_rank()` with an env var query, as PyTorch does not have this API (see the sketch below).
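For point 3, the replacement presumably follows the usual PyTorch launcher convention (a sketch; the exact environment variable available on SageMaker may differ):
```python
import os

# torch.distributed launchers export LOCAL_RANK for each worker process.
local_rank = int(os.getenv("LOCAL_RANK", "0"))
```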
fixes https://github.com/huggingface/transformers/issues/16313 | 03-23-2022 19:51:24 | 03-23-2022 19:51:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger, I removed bunch of unused imports since it's guaranteed that process group will be initialized by `training_args_sm.py`.
Could you give some pointers on how to fix CI errors?
`F401 'smdistributed.dataparallel.torch.torch_smddp' imported but unused`
this import is needed to register SMDDP as a pytorch backend, can we disable style check on this line?
`FAILED tests/funnel/test_modeling_funnel.py::FunnelModelTest::test_pt_tf_model_equivalence`<|||||>To silence the flake8 error on those lines, you need to add a comment at the end of the import line: `# noqa: F401`<|||||>Thanks again for your work on this! |
transformers | 16,370 | closed | [Doctests] Make TFRoberta-like meaningfull | # What does this PR do?
Similar to #16363, but for TF
I made sure the doctests run without failure, but 2 models currently use `from_pt=True` --> need to convert and upload the TF ckpts.
@patrickvonplaten | 03-23-2022 19:20:37 | 03-23-2022 19:20:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten I uploaded the TF checkpoints and removed `from_pt`.
~~I will need to check and apply Sylvain's suggestion in your PR to finalize this PR.~~
Ready for review :-) |
transformers | 16,369 | closed | Create concept guide section | This PR wraps up the IA migration:
- Created a Conceptual Guide section and migrated docs that are explanatory.
- Created a separate page in Conceptual Guides for this [section](https://huggingface.co/docs/transformers/main/en/preprocessing#everything-you-always-wanted-to-know-about-padding-and-truncation) about padding and truncation from the Preprocess doc.
- Reorganized table of contents to group similar topics together like contributing (how to add model/pipeline/testing etc.). Feel free to suggest any changes :)
- Renamed [Create a custom ~~model~~](https://huggingface.co/docs/transformers/main/en/create_a_model) to `architecture` based on @NielsRogge feedback that this guide only describes how to customize the architecture from the config. Also feel free to suggest another title here, I'm not too excited with what I have right now. The title should be distinct from [Sharing custom models](https://huggingface.co/docs/transformers/main/en/custom_models), which actually shows you how to write your own custom config and model.
- Updated the `index` to reflect how the docs are organized.
- Removed older versions of docs that have been updated ([fine-tune ](https://huggingface.co/docs/transformers/main/en/custom_datasets )and [multilingual](https://huggingface.co/docs/transformers/main/en/multilingual)). | 03-23-2022 18:54:25 | 03-23-2022 18:54:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,368 | closed | Swap inequalities | # What does this PR do?
The eval delay is a feature that allows a user to specify the number of steps/epochs to wait before the first evaluation can take place. This fixes a bug where the opposite behavior occurs - evaluation stops after eval_delay!
Fixes #16365
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger (sorry for the repeated tagging)
| 03-23-2022 17:13:59 | 03-23-2022 17:13:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for fixing! |
transformers | 16,367 | closed | adds FLAUBERT to doc tests | # What does this PR do?
adds FLAUBERT to doc tests
| 03-23-2022 17:06:51 | 03-23-2022 17:06:51 | Hi, @abdouaziz
Since `Flaubert` is not to be changed, and I guess you won't work on `CamemBERT` either (right?), the changes in those 2 files have to be reverted to their original versions :-) Thank you.<|||||>It also looks like your local clone and working branch don't have the recent update of HF `transformers`, which is necessary for this sprint. Maybe you could try to pull the latest changes and rebase your working branch?
|
transformers | 16,366 | closed | Request on application/pipeline for Text Regression | # 🚀 Feature request
The Text Regression task aims at delivering a scalar output for each input sequence and is widely used in NLP applications.
In my own work, I'm focusing on the research topic of translation evaluation, including metric and quality estimation (QE) tasks. Here are some available baselines:
- [BLEURT](https://aclanthology.org/2020.acl-main.704/)
- [COMET](https://aclanthology.org/2020.emnlp-main.213/)
- [TransQuest](https://aclanthology.org/2020.coling-main.445/)
Those models are built on top of available pretrained language models (PLMs, like BERT, mBERT and XLM-R). In addition, these translation evaluation models end with a multi-layer perceptron (MLP), which receives pooled hidden states and delivers a scalar output as the final prediction describing how well the generated translation expresses the semantics of the referred sentence (the source sentence for the QE task, the reference for the metric task).
The Text Regression task is mostly related to Text Classification, where two things raise my concern:
- **The number of linear modules in the MLP** The available Text Classification pipeline only provides one or two linear modules inside the MLP, whereas modern approaches involve multiple linear modules (sometimes more than 2), with activation functions arranged between adjacent ones.
- **Output of model** The output of Text Classification is a discrete output, e.g. 0 or 1 for binary classification. The output of Text Regression is a scalar.
So if possible, I wish both features could be supported:
- Increase the flexibility of the MLP module design, so that more such models can be uploaded to the Hugging Face Hub and used easily. For reference code, see the [COMET github repository](https://github.com/Unbabel/COMET/blob/c772b679e20725e6cc79b2107d50594f9ea7a4ae/comet/modules/feedforward.py);
- Create a new task named Text Regression that meets the needs of translation evaluation approaches as well as other forthcoming applications. I think it should be easy to implement by reusing much of the Text Classification code.
Finally, it would be much appreciated if a Text Regression model could benefit from an MLP with multiple linear modules.
## Motivation
Recently I've been working on the repository for my own work, which is also a model-based translation evaluation approach like COMET and TransQuest. I want to make my approach public and easy to use.
## Your contribution
I'd be very glad to submit a PR and help develop the related functionality.
| 03-23-2022 17:05:21 | 03-23-2022 17:05:21 | @Narsil <|||||>Here are related links on this Request:
[Huggingface Discuss](https://discuss.huggingface.co/t/demand-on-text-regression-pipeline-application/15935)
[Huggingface Doc](https://github.com/huggingface/hub-docs/issues/65)<|||||>Tagging @ricardorei, the 1st author and main developer of [COMET](https://github.com/Unbabel/COMET) -- he is also interested in HF <> COMET interoperability<|||||>Yes, I would like to work on this.
It would be useful to make COMET models and BLEURT models available through hugging face.
Also, it would help people develop new metrics!
Maybe a first step would be to add COMET/BLEURT as a model?
<|||||>Hi Pinging a core maintainer for feasibility
@LysandreJik .
**Adding a model is a task on its own and is a necessary preliminary step to adding it to a pipeline.**
Pipelines work on any model that implement `AutoModelForXXX` (XXX being `TextRegression` maybe here), because all those models will have a fixed API to be called on (regardless of implementations details like number of MLP in the heads).
Then adding a pipeline is described here: https://huggingface.co/docs/transformers/add_new_pipeline
I can help once we've started that stage.
> Output of model The output of Text Classification is a discrete output, e.g. 0 or 1 for binary classification. The output of Text Regression is a scalar.
This is actually not true, the output of a text classification model is a `N` classes tensor (floats) that is usually `softmaxed` and the max is then taken (but that comes later, the model really outputs N floats).
<|||||>Hi all,
Great thanks to @gante @ricardorei !
Thanks to @Narsil for pointing out my misunderstanding on Text Classification pipeline.
So... it seems that the first step should be adding this TextRegression pipeline into huggingface? Then @ricardorei and I can prepare our own models to upload to huggingface. I can try developing TextRegression pipeline recently.
<|||||>> So... it seems that the first step should be adding this TextRegression pipeline into huggingface? Then @ricardorei and I can prepare our own models to upload to huggingface. I can try developing TextRegression pipeline recently.
The first step is actually getting the models into `transformers` and adding the corresponding AutoModelForXXX.
I am not 100% familiar with that process, but I think what you did in documenting where the models come from, the research associated is a good first step.<|||||>I wonder if what was implemented here wasn't sufficient for the purpose: https://github.com/huggingface/transformers/pull/8328
The text classification pipeline adapts to the number of labels, and adapts either a sigmoid, a softmax, or no function at all. Is it insufficient for what you want to do here, and if so, can it be adapted to support your use case in a way that may be leveraged by other users?<|||||>Hi thanks to @Narsil @LysandreJik, I check the doc on recent topics about SequenceClassification. Take XLMRoBERTa as an example, I noticed that, the output of ```XLMRoBERTaForSequenceClassification``` can contain the logits before classification. I think the ```logits``` in this output format can suit my use.
The difference lies in the MLP of ```XLMRoBERTaForSequenceClassification```, as it inherits ```RobertaClassificationHead``` class. It contains two linear layers, and it takes the hidden states at the first place as input. For some modern approaches (like COMET), the number of linear layers may be not two, and the pooling strategy can be average pooling rather than first-place pooling.
Besides the functionalities of used PLM, I think for my model named ```XXX```, I have several solutions:
1. All I need to do additionally is to provide a new MLP class named ```XXXClassificationHead``` for the functionalities of my model. The ```XXX``` model can be applied in existing ```TextClassification``` pipeline, and I can directly collect the logits of output as the real output;
2. Design a new pipeline aside from ```TextClassification```. The new pipeline named ```TextRegression``` can receive related model classes named like ```XXXForSequenceRegression```;
3. ```TextClassification``` and ```TextRegression``` can somewhat share a great content. Maybe ```TextRegression``` can inherit from ```TextClassification```, or design a new parent class (maybe ```TextLogitsPrediction```) where ```TextClassification``` and ```TextRegression``` can both inherit from.
If we want a faster implementation, choice 1 is the best.
For a more precise definition in the forthcoming research and application, I think the difference between text classification and text regression can not be ignored, where choices 2&3 are recommended. We should not use the concept of classification instead of regression as this is a misdirection.
<|||||>Hi all @Narsil @gante @LysandreJik,
Sorry for my late response, I've been working on something else recently.
I've uploaded the model files of mine, named [UniTE-UP](https://huggingface.co/ywan/unite-up/) and [UniTE-MUP](https://huggingface.co/ywan/unite-mup/), and I'm implementing the UniTEForSequenceClassification class for my models.
But here, if I want to use the from_pretrained method to initialize the model, like:
```
from transformers import UniTEForSequenceClassification
unite_up_model = UniTEForSequenceClassification.from_pretrained('unite-up')
unite_mup_model = UniTEForSequenceClassification.from_pretrained('unite-mup')
```
How should I collect related materials to make those commands work?
Great thanks!
<|||||>Hi all,
I've uploaded the functionalities of my own model in the pull request.
Also, I've uploaded my model named UniTE, and you can check the [paper link](https://arxiv.org/abs/2204.13346) and [existing repo](https://github.com/NLP2CT/UniTE). Personally, I can use my model like this:
```
# import models (take UniTE-MUP as an example)
>>> from transformers import UniTEForSequenceClassification, UniTETokenizerFast
>>> tokenizer = UniTETokenizerFast.from_pretrained('ywan/unite-mup')
>>> model = UniTEForSequenceClassification.from_pretrained('ywan/unite-mup')
# construct sources (src), references (ref) and hypotheses (hyp)
>>> src = tokenizer(['你好!', '很高兴认识你!'], return_tensors='pt', padding=True)
>>> ref = tokenizer(['Hello!', 'Nice to meet you!'], return_tensors='pt', padding=True)
>>> hyp = tokenizer(['Hi!', 'Nice to see you!'], return_tensors='pt', padding=True)
# evaluating with different input formats
>>> model(hyp=hyp, src=src).cpu().tolist()
[0.714469850063324, 0.6583192944526672]
>>> model(hyp=hyp, ref=ref).cpu().tolist()
[0.746547281742096, 0.7588061094284058]
>>> model(hyp=hyp, src=src, ref=ref).cpu().tolist()
[0.6857070326805115, 0.7172597050666809]
```
What do I need to do next? Many thanks!
@Narsil @LysandreJik @gante <|||||>+1 on interest in application/pipeline (plus documenting!!!) Text Regression. While preparing for a workshop with movie reviews (1-5), I found this code:
```python
AutoModelForSequenceClassification.from_pretrained('model_name', num_labels=1, ignore_mismatched_sizes=True)
```
The word 'regression', and an example link should be on the same page as [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification). Try searching [the main docs](https://huggingface.co/docs) for 'regression', or 'ignore_mismatched_sizes' and you will find very little to go on.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Text regression is a special case of text classification. Hence the existing classes for text classification are sufficient to train and do predictions for regression problems. As @mapmeld described, the right way to train a regression model is by passing `num_labels=1` to the `AutoModelForSequenceClassification.from_pretrained` method.
Coming to the pipeline for inferencing, it can be easily created like this
`pipe = pipeline('text-classification', model=trained_model_dir, function_to_apply="none")`
Remember to set the value of the parameter `function_to_apply` as `"none"` and not None.
|
transformers | 16,365 | closed | Swap `eval_delay signs` | ### Who can help
@sgugger
## Information
The signs for the `eval_delay` comparisons need to be switched. This is pretty important, @sgugger, as there are a lot of breakages involved here.
The greater-than sign must be changed to a less-than sign [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py#L422) and [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py#L446).
I'll put in a PR if need be.
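For context, the intended behaviour is roughly the following (an illustrative sketch, not the exact code in `trainer_callback.py`):
```python
# eval_delay: number of steps (or epochs) to wait before the first evaluation.
# Evaluation should only be allowed once the delay has passed:
evaluate_allowed = state.global_step >= args.eval_delay
# The current comparison effectively does the opposite, so evaluation
# stops after the delay instead of starting after it.
```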
| 03-23-2022 16:53:31 | 03-23-2022 16:53:31 | |
transformers | 16,364 | closed | question about the specific text generation question | Hello, I am new to transformers, and I have a question.
If I want to use several concepts like "I", "question", "english", and I want the output to be a full sentence such as "I have an English question" or "I have a question", rather than just "english question":
is there any project on the Hub that can handle this? If not, can you give me some useful advice on how to use and modify a current model to accomplish the task described above?
Thank you very much | 03-23-2022 16:45:31 | 03-23-2022 16:45:31 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
transformers | 16,363 | closed | [Doctests] Make roberta-like meaningfull | # What does this PR do?
This makes RoBERTa-like doc examples meaningful
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-23-2022 16:15:35 | 03-23-2022 16:15:35 | Thank you, @patrickvonplaten ! I do have one question: what should we do if other models give **not so good** results on these examples? We just try and use other input strings (which will give good outputs), and put the doc examples inside the model file?
<|||||>> Thank you, @patrickvonplaten ! I do have one question: what should we do if other models give **not so good** results on these examples? We just try and use other input strings (which will give good outputs), and put the doc examples inside the model file?
Very good question! I think the nice tendency we have is:
- the more a model architecture is used, the higher the chance that there is a checkpoint that gives good results
So this gives us a good incentive to start with model architectures that are highly used: BERT, Electra, etc... once we come to model architectures that are less used we can think about adapting the template in `utils.doc.py` to allow the user to pass any input string through the decorator :-)<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers | 16,362 | closed | Make Transformers use cache files when hf.co is down | # What does this PR do?
This PR makes sure that we actually go look in the cached files when hf.co is down (or the internet connection of the user). The main issue was that we were raising the `HTTPError` too fast and didn't execute the code path that goes look in the cached files.
Since that error is now delayed, in case of a connection error and file not in the cache, we end up in the `ValueError` raised by `get_from_cache` [here](https://github.com/huggingface/transformers/blob/12428f0ef15bb3631e7a5f04672ddb05f363de97/src/transformers/utils/hub.py#L538), so I had to adapt a bit the cascade of errors in the `from_pretrained` methods.
This is then tested for all objects with a `from_pretrained` method (except Flax models since I didn't find a tiny flax model). | 03-23-2022 15:48:00 | 03-23-2022 15:48:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,361 | closed | output_scores causes trainer.predict to error out | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@[patrickvonplaten](https://github.com/patrickvonplaten)
Models: BARTForConditionalGeneration
Library:
- Text generation: @patrickvonplaten @narsil
- Trainer: @sgugger
## Information
Model I am using : BARTForConditionalGeneration
The problem arises when using:
* [ x] my own modified scripts: (give details below):
I am using the official example from https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py
with one modification to add output_scores and return_dict_in_generate in the PretrainedConfig
The tasks I am working on is:
* [ x] my own task or dataset: (give details below)
Finetune a BART model with my own dataset and predict on a testset
## To reproduce
Steps to reproduce the behavior:
1. Get the script from https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py
2. add the following lines (in asterisks) to the pretrained config loading, based on the documentation at: https://huggingface.co/transformers/v4.7.0/_modules/transformers/configuration_utils.html#PretrainedConfig
```
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
**return_dict_in_generate=True,
output_scores=True,**
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
**assert config.return_dict_in_generate==True
assert config.output_scores==True**
```
3. Run the script as follows:
`python -m torch.distributed.launch --nproc_per_node=1 run_summarization.py --model_name_or_path sshleifer/distilbart-xsum-12-6 --do_train False --do_predict True --do_eval False --train_file trainAll_allDesc_noCQA.csv --validation_file val.csv --test_file testAll.csv --output_dir /home/ec2-user/SageMaker/output/ --per_device_train_batch_size=1 --per_device_eval_batch_size=1 --overwrite_output_dir --predict_with_generate True --num_train_epochs 10 --text_column text --summary_column summary --learning_rate 3e-5 --weight_decay 0.01 --adam_beta2 0.98 --warmup_steps 5000 --generation_max_length 35`
The error snippet:
```
03/23/2022 13:31:38 - INFO - __main__ - *** Predict ***
[INFO|trainer.py:2389] 2022-03-23 13:31:38,756 >> ***** Running Prediction *****
[INFO|trainer.py:2391] 2022-03-23 13:31:38,756 >> Num examples = 5126
[INFO|trainer.py:2394] 2022-03-23 13:31:38,756 >> Batch size = 1
Traceback (most recent call last):
File "run_summarization.py", line 737, in <module>
main()
File "run_summarization.py", line 690, in main
predict_dataset, metric_key_prefix="predict", max_length=max_length, num_beams=num_beams
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/trainer_seq2seq.py", line 119, in predict
return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/trainer.py", line 2332, in predict
test_dataloader, description="Prediction", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/trainer.py", line 2431, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/trainer_seq2seq.py", line 180, in prediction_step
if generated_tokens.shape[-1] < gen_kwargs["max_length"]:
AttributeError: 'BeamSearchEncoderDecoderOutput' object has no attribute 'shape'
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
```
## full script:
https://github.com/tanyaroosta/summarization/blob/main/run_summarization.py
## Expected behavior
If I use the same exact command to run the script, when I don't have the two added config arguments, everything runs fine, and it generates the predictions. However, as soon as I add the output_scores=True to the pretrained config, it errors out with the error shown above. I also noticed that if I print the model.config after loading the model, the two added parameters don't show up, but I am not sure if that is necessarily an issue.
| 03-23-2022 14:03:57 | 03-23-2022 14:03:57 | Yes, the `Seq2SeqTrainer` does not support `return_dict_in_generate` for now, so that is expected. In the meantime, using the `no_trainer` example would allow you to customize everything to your needs.<|||||>thanks @sgugger for the quick reply. Does it support `output_scores`? I took out the `return_dict_in_generate` and left `output_scores=True`. However, it seems no scores are printed in the generated_predictions.txt. It didn't error out though...<|||||>Not sure it will accumulate them either, the loop is focused on the generation, not additional outputs.<|||||>thanks, so for the time being, the safest thing is to use the `no_trainer` script? @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger
In my case, when running inference,
I need to get the output scores, and the error still occurs even now 😥
I think the code would be used like this:
``` python3
trainer = Seq2SeqTrainer(
model=model,
data_collator=data_collator,
args=training_args,
)
gen_kwargs = {"output_scores": True, "return_dict_in_generate": True}
output = trainer.predict(test_dataset, **gen_kwargs)
```
and, override `Seq2SeqTrainer.prediction_step`
pseudo code...
```
output= self.model.generate(
generation_inputs,
**gen_kwargs,
)
if gen_kwargs.get("return_dict_in_generate") or isinstance(
    output, (BeamSearchDecoderOnlyOutput, BeamSearchEncoderDecoderOutput)
):
    generated_tokens = output["sequences"]
else:
    generated_tokens = output
# ... padding / loss computation as in the current prediction_step ...
return (loss, generated_tokens, labels, output)
```
How about this? If it looks fine, can I contribute it to trainer_seq2seq.py (not as an override, but as an update to `prediction_step`)? |
transformers | 16,360 | closed | Add doctests for albert, bert, bigbird, mobilebert | # What does this PR do?
Add doctests for albert, bert, bigbird, mobilebert
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
@patrickvonplaten @ydshieh | 03-23-2022 12:03:15 | 03-23-2022 12:03:15 | |
transformers | 16,359 | closed | Casting to device inside of the Tokenizer | # 🚀 Feature request
I think it would make sense if tokenizer.encode(), and in particular tokenizer.encode_plus() when accepting a string as input, also took a "device" argument and cast the resulting tensors to the given device. Otherwise, in the case of encode_plus(), one has to loop through the output dict and manually cast the created tensors. | 03-23-2022 11:12:16 | 03-23-2022 11:12:16 | You can just do:
```
from transformers import AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
device = "cuda" if torch.cuda.is_available() else "cpu"
text = "hello world"
encoding = tokenizer(text, return_tensors="pt").to(device)
```<|||||>Cool, thanks! |
transformers | 16,358 | closed | Can't load Longformer Encoder Decoder converted from a MBart. | Hello there,
I have built a Longformer Encoder Decoder on top of an MBart architecture by simply following the instructions provided at https://github.com/allenai/longformer/blob/master/scripts/convert_bart_to_longformerencoderdecoder.py.
This is the huggingface MBart model --> **ARTeLab/mbart-summarization-fanpage**
In doing so I first updated the imports so they come from the 'transformers' library, and second, since I am working on Google Colab to use a GPU, I moved all the necessary classes into a .ipynb file.
When I try to load the model I get a **size mismatch for model.encoder.embed_positions.weight** error. I have tried to load the model with different functions provided by transformers, but none of them seem to be compatible with the model.
Interestingly, when I load the model via the _LongformerModel.from_pretrained(load_model_from)_ function the model seems to load correctly, but I can't find a way to run inference.
Snippets of code and more detailed explanations are given below.
## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, Backend (GPU) Google Compute Engine
- Using distributed or parallel set-up in script?: no
### Who can help
@ydshieh
@patil-suraj
## Information
Model I am using (Longformer Encoder Decoder For Conditional Generation, MBART):
The problem arises when using:
* [ ] model = LongformerEncoderDecoderForConditionalGeneration.from_pretrained(load_model_from)
* [ ] my own modified scripts: (give details below)
The task I am trying to work on is:
* Summarization in Italian
## To reproduce
Steps to reproduce the behavior:
1. Run the code
```
import argparse
import logging
import os
import copy
from transformers import AutoTokenizer
from transformers import MBartForConditionalGeneration
from typing import List, Optional, Tuple, Dict
from torch import nn, Tensor
from transformers.models.longformer.modeling_longformer import LongformerSelfAttention
#from transformers.models.bart.modeling_bart import BartConfig
#from transformers.models.led.modeling_led import LEDForConditionalGeneration, LEDConfig
from transformers import MBartConfig
class LongformerSelfAttentionForBart(nn.Module):
def __init__(self, config, layer_id):
super().__init__()
self.embed_dim = config.d_model
self.longformer_self_attn = LongformerSelfAttention(config, layer_id=layer_id)
self.output = nn.Linear(self.embed_dim, self.embed_dim)
def forward(
self,
query,
key: Optional[Tensor],
key_padding_mask: Optional[Tensor] = None,
layer_state: Optional[Dict[str, Optional[Tensor]]] = None,
attn_mask: Optional[Tensor] = None,
need_weights=False,
output_attentions=False,
) -> Tuple[Tensor, Optional[Tensor]]:
tgt_len, bsz, embed_dim = query.size()
assert embed_dim == self.embed_dim
assert list(query.size()) == [tgt_len, bsz, embed_dim]
assert attn_mask is None
outputs = self.longformer_self_attn(
query.transpose(0, 1), # LongformerSelfAttention expects (bsz, seqlen, embd_dim)
attention_mask=key_padding_mask.unsqueeze(dim=1).unsqueeze(dim=1) * -1,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=output_attentions,
)
attn_output = self.output(outputs[0].transpose(0, 1))
return (attn_output,) + outputs[1:] if len(outputs) == 2 else (attn_output, None)
class LongformerEncoderDecoderForConditionalGeneration(MBartForConditionalGeneration):
def __init__(self, config):
super().__init__(config)
# if config.attention_mode == 'n2':
# pass # do nothing, use BertSelfAttention instead
# else:
for i, layer in enumerate(self.model.encoder.layers):
layer.self_attn = LongformerSelfAttentionForBart(config, layer_id=i)
class LongformerEncoderDecoderConfig(MBartConfig):
def __init__(self, attention_window: List[int] = None, attention_dilation: List[int] = None,
autoregressive: bool = False, attention_mode: str = 'sliding_chunks',
gradient_checkpointing: bool = False, **kwargs):
"""
Args:
attention_window: list of attention window sizes of length = number of layers.
window size = number of attention locations on each side.
For an affective window size of 512, use `attention_window=[256]*num_layers`
which is 256 on each side.
attention_dilation: list of attention dilation of length = number of layers.
attention dilation of `1` means no dilation.
autoregressive: do autoregressive attention or have attention of both sides
attention_mode: 'n2' for regular n^2 self-attention, 'tvm' for TVM implemenation of Longformer
selfattention, 'sliding_chunks' for another implementation of Longformer selfattention
"""
super().__init__(**kwargs)
self.attention_window = attention_window
self.attention_dilation = attention_dilation
self.autoregressive = autoregressive
self.attention_mode = attention_mode
self.gradient_checkpointing = gradient_checkpointing
assert self.attention_mode in ['tvm', 'sliding_chunks', 'n2']
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
def create_long_model(
save_model_to,
base_model,
tokenizer_name_or_path,
attention_window,
max_pos
):
model = MBartForConditionalGeneration.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path, model_max_length=max_pos)
config = MBartConfig.from_pretrained(base_model)
model.config = config
# in BART attention_probs_dropout_prob is attention_dropout, but LongformerSelfAttention
# expects attention_probs_dropout_prob, so set it here
config.attention_probs_dropout_prob = config.attention_dropout
config.architectures = ['LongformerEncoderDecoderForConditionalGeneration', ]
# extend position embeddings
tokenizer.model_max_length = max_pos
tokenizer.init_kwargs['model_max_length'] = max_pos
current_max_pos, embed_size = model.model.encoder.embed_positions.weight.shape
assert current_max_pos == config.max_position_embeddings + 2
config.max_encoder_position_embeddings = max_pos
config.max_decoder_position_embeddings = config.max_position_embeddings
del config.max_position_embeddings
max_pos += 2 # NOTE: BART has positions 0,1 reserved, so embedding size is max position + 2
assert max_pos >= current_max_pos
# allocate a larger position embedding matrix for the encoder
new_encoder_pos_embed = model.model.encoder.embed_positions.weight.new_empty(max_pos, embed_size)
# copy position embeddings over and over to initialize the new position embeddings
k = 2
step = current_max_pos - 2
while k < max_pos - 1:
new_encoder_pos_embed[k:(k + step)] = model.model.encoder.embed_positions.weight[2:]
k += step
model.model.encoder.embed_positions.weight.data = new_encoder_pos_embed
# allocate a larger position embedding matrix for the decoder
# new_decoder_pos_embed = model.model.decoder.embed_positions.weight.new_empty(max_pos, embed_size)
# # copy position embeddings over and over to initialize the new position embeddings
# k = 2
# step = current_max_pos - 2
# while k < max_pos - 1:
# new_decoder_pos_embed[k:(k + step)] = model.model.decoder.embed_positions.weight[2:]
# k += step
# model.model.decoder.embed_positions.weight.data = new_decoder_pos_embed
# replace the `modeling_bart.SelfAttention` object with `LongformerSelfAttention`
config.attention_window = [attention_window] * config.num_hidden_layers
config.attention_dilation = [1] * config.num_hidden_layers
for i, layer in enumerate(model.model.encoder.layers):
longformer_self_attn_for_bart = LongformerSelfAttentionForBart(config, layer_id=i)
longformer_self_attn_for_bart.longformer_self_attn.query = layer.self_attn.q_proj
longformer_self_attn_for_bart.longformer_self_attn.key = layer.self_attn.k_proj
longformer_self_attn_for_bart.longformer_self_attn.value = layer.self_attn.v_proj
longformer_self_attn_for_bart.longformer_self_attn.query_global = copy.deepcopy(layer.self_attn.q_proj)
longformer_self_attn_for_bart.longformer_self_attn.key_global = copy.deepcopy(layer.self_attn.k_proj)
longformer_self_attn_for_bart.longformer_self_attn.value_global = copy.deepcopy(layer.self_attn.v_proj)
longformer_self_attn_for_bart.output = layer.self_attn.out_proj
layer.self_attn = longformer_self_attn_for_bart
logger.info(f'saving model to {save_model_to}')
model.save_pretrained(save_model_to)
tokenizer.save_pretrained(save_model_to)
return model, tokenizer
def main(base_model, tokenizer, save_model_to, attention_window = 512, max_pos = 4096 * 4):
if not os.path.exists(save_model_to):
os.mkdir(save_model_to)
model, tokenizer_ = create_long_model(
save_model_to=save_model_to,
base_model=base_model,
tokenizer_name_or_path=tokenizer,
attention_window=attention_window,
max_pos=max_pos
)
return model, tokenizer
model_, tokenizer_ = main(base_model = 'ARTeLab/mbart-summarization-fanpage', tokenizer = 'ARTeLab/mbart-summarization-fanpage', save_model_to = "/content/model", attention_window = 512, max_pos = 4096 * 4)
```
2. Load the Model
```
from transformers.models.bart.tokenization_bart_fast import BartTokenizerFast
load_model_from = "/content/model"
tokenizer = BartTokenizerFast.from_pretrained(load_model_from)
model = LongformerEncoderDecoderForConditionalGeneration.from_pretrained(load_model_from)
```
3. Get the following output
**Tokenizer:**
_The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'MBartTokenizer'.
The class this function is called from is 'BartTokenizerFast'._
**Model**
_RuntimeError: Error(s) in loading state_dict for LongformerEncoderDecoderForConditionalGeneration:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 1024]) from checkpoint, the shape in current model is torch.Size([1026, 1024])._
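A quick way to see where the 1026 above comes from is to inspect the saved config (sketch, assuming `/content/model` is the folder written by `create_long_model` above):
```python
from transformers import MBartConfig

cfg = MBartConfig.from_pretrained("/content/model")
# the conversion script stores the enlarged encoder size under a custom key...
print(getattr(cfg, "max_encoder_position_embeddings", None))  # expected: 16384
# ...while max_position_embeddings was deleted before saving, so MBartConfig
# falls back to its default, which is what the model class allocates
print(cfg.max_position_embeddings)  # 1024 -> 1024 + 2 = the 1026 in the error above
```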
4. The other loading methods I have tried to call:
- **model = LEDForConditionalGeneration.from_pretrained(load_model_from)**
Error message
_ValueError: The state dictionary of the model you are training to load is corrupted. Are you sure it was properly saved?_
- **model = EncoderDecoderModel.from_pretrained(load_model_from)**
Error message
_AssertionError: Config has to be initialized with encoder and decoder config_
- **model = LongformerModel.from_pretrained(load_model_from)**
Model is loaded but with warnings
_You are using a model of type mbart to instantiate a model of type longformer. This is not supported for all configurations of models and can yield errors.
Some weights of the model checkpoint at /content/model were not used when initializing LongformerModel:
This IS expected if you are initializing LongformerModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing LongformerModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference._
When I try to make inference:
```
from transformers.models.bart.tokenization_bart_fast import BartTokenizerFast
from transformers.models.bart.modeling_bart import shift_tokens_right
tokenizer = BartTokenizerFast.from_pretrained(load_model_from)
TXT = "an article..."
data = tokenizer([TXT], return_tensors='pt', padding='max_length', max_length=2048)
input_ids = data['input_ids']
attention_mask = data['attention_mask']
decoder_input_ids = shift_tokens_right(input_ids[:, :5], tokenizer.pad_token_id, decoder_start_token_id = 250011)
logits = model.generate(main_input_name = input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, use_cache=False)[0]
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.convert_ids_to_tokens(predictions))
```
Error message
_AttributeError: 'LongformerEncoder' object has no attribute 'main_input_name'_
or
```
import torch
text = " ".join(["Hello world! "] * 1000) # long input document
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) # batch of size 1
attention_mask = torch.ones(
input_ids.shape, dtype=torch.long, device=input_ids.device
) # initialize to local attention
global_attention_mask = torch.zeros(
input_ids.shape, dtype=torch.long, device=input_ids.device
) # initialize to global attention to be deactivated for all tokens
global_attention_mask[
:,
[
1,
4,
21,
],
] = 1 # Set global attention to random tokens for the sake of this example
# Usually, set global attention based on the task. For example,
# classification: the <s> token
# QA: question tokens
# LM: potentially on the beginning of sentences and paragraphs
outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
```
Error message
_IndexError: index out of range in self_
or
```
inputs = tokenizer(TXT, return_tensors="pt")
#inputs = {k: v.cuda() for k, v in inputs.items()}
outputs = model(**inputs)
features = outputs[0][:,0,:].detach().numpy().squeeze()
print(tokenizer.decode(features, skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
Error message
_TypeError: 'float' object cannot be interpreted as an integer_
## Expected behavior
Model loaded successfully!
Inference likewise :) | 03-23-2022 10:52:23 | 03-23-2022 10:52:23 | Hi @VioletRaven
Considering this question involves:
- an external script: `allenai/longformer`
- a full, custom model architecture
it is beyond the scope of maintenance for the original `transformers` library.
[Hugging Face Forums](https://discuss.huggingface.co/) is a better place for such questions instead.
If you can find the root cause and it turns out to be an issue in the existing model code, don't hesitate to open a new issue for it.<|||||>I have now opened an issue directly on that forum.
Thank you for your suggestion! |
transformers | 16,357 | closed | Make FeaturesManager.get_model_from_feature a static method | # What does this PR do?
This makes `FeaturesManager.get_model_from_feature` a static method. It was already implied but there was a missing `@staticmethod` decorator.
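Roughly, the shape of the change (illustrative sketch, not the exact signature in `transformers.onnx`):
```python
class FeaturesManager:
    @staticmethod
    def get_model_from_feature(feature, model):
        # no `self`/`cls` is used inside, so the method can be declared static
        # and called as FeaturesManager.get_model_from_feature(...) directly
        ...
```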
Fixes #16347
| 03-23-2022 09:56:21 | 03-23-2022 09:56:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,356 | closed | Trainer evaluation delay | # What does this PR do?
Attempts to implement #16327 by adding two new `TrainingArguments` arguments, one to delay evaluation until a certain number of epochs have passed and another for steps (which overrides the former). The aim is to delay evaluation to free up more time for training at the initial stages.
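A sketch of the intended usage (the argument name below is a placeholder for whatever this PR settles on):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    eval_delay=2000,  # placeholder name: skip evaluation until ~2000 steps have passed
)
```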
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Link](https://github.com/huggingface/transformers/issues/16327)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Let me know your thoughts @sgugger
| 03-23-2022 07:50:53 | 03-23-2022 07:50:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16356). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,355 | closed | _tokenizer.decode TypeError: 'list' object cannot be interpreted as an integer | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-4.15.0-162-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
Library:
- Tokenizers: @SaulLu
- Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...):
BERT 2 BERT FINETUNED FOR PARAPHRASING
https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-paus-x-paraphrasing
## Error

```
File "bert2bert_paraphraser.py", line 204, in <module>
paraphraser.train(True)
File "bert2bert_paraphraser.py", line 107, in train
trainer.train()
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/trainer.py", line 1399, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/trainer.py", line 1521, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/trainer_seq2seq.py", line 70, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/trainer.py", line 2165, in evaluate
metric_key_prefix=metric_key_prefix,
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/trainer.py", line 2401, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "bert2bert_paraphraser.py", line 159, in compute_metrics
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3208, in batch_decode
for seq in sequences
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3208, in <listcomp>
for seq in sequences
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3244, in decode
**kwargs,
File "/data/anaconda3/envs/motoria_paraphrase/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 531, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
TypeError: 'list' object cannot be interpreted as an integer
```
## To reproduce
```python
import numpy as np
from transformers import AutoTokenizer, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer, EarlyStoppingCallback
from transformers import EncoderDecoderModel
from paraphraser import *
import sys
import os
APP_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, APP_ROOT)
sys.stdout.flush()
# STATIC VALUES
PATH_FILE_CSV_DATASETS =
PRETRAINED_MODEL =
TRAIN_EPOCHS = 5
MAX_LEN = 512
encoder_max_length = MAX_LEN
decoder_max_length = MAX_LEN
# TODO: change this to the correct dataset
# load rouge for validation
print("Load metrics, models and tokenizer")
model = EncoderDecoderModel.from_pretrained(PRETRAINED_MODEL)
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL)
# 2 - Prepare TrainingArguments
# Arguments for training
class Bert2BertParaphrase(Paraphraser):
#PATH, local_files_only=True
def __init__(self,
batch_size=2,
model_pretrained="bert2bert",
epoch_size=10, # change to 16 for full training
number_of_steps=5000,
cuda_id=0
):
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = str(cuda_id)
# MODEL THAT IS GOING TO BE USED
# I think it was mrm8488/bert2bert_shared-spanish-finetuned-paus-x-paraphrasing
self.output_dir = str(model_pretrained + '-motoria-paraphrasing')
print("Preparing arguments for training...")
self.args = Seq2SeqTrainingArguments(
output_dir=self.output_dir,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
#predict_with_generate=True,
# evaluate_during_training=True,
evaluation_strategy='steps',
do_train=True,
# do_train (bool, optional, defaults to False) — Whether to run training or not.
# This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead.
# See the example scripts for more details.
do_eval=True,
# do_eval (bool, optional) — Whether to run evaluation on the validation set or not.
# Will be set to True if evaluation_strategy is different from "no".
# This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead.
# See the example scripts for more details.
save_steps=number_of_steps,
# max_steps=1500, # delete for full training
overwrite_output_dir=True,
save_total_limit=10,
fp16=True,
num_train_epochs=epoch_size,
# fp16 (bool, optional, defaults to False) — Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training.
load_best_model_at_end = True,
push_to_hub=False,
#metric_for_best_model="bleu",
#eval_accumulation_steps=1,
eval_steps=number_of_steps
)
def train(self, bool_local, dataset_path=PATH_FILE_CSV_DATASETS):
print("Lets the training begin!")
# 1 - Load BERT AS TOKENIZER
# Loading the BERT Tokenizer
# change to 16 for full training
print("Preprocessing file...")
tokenized_datasets = self.preprocess_datasets(dataset_path, bool_local)
print("Preprocess done!")
# Data collator is used to putting together all the examples inside a batch
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
print("Preparing arguments")
trainer = Seq2SeqTrainer(
model,
self.args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=1)]
)
print("It is time to process!")
trainer.train()
print("Training done")
trainer.save_model(self.output_dir)
# 3 - PROCESS DATA AS BERT REQUIRES
def preprocess_function(self, batch):
# Tokenize the input and target data
"""
Parameters according to:
https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel
"""
inputs = tokenizer(batch["source"], padding="max_length",
truncation=True, max_length=encoder_max_length)
outputs = tokenizer(batch["target"], padding="max_length",
truncation=True, max_length=decoder_max_length)
batch["input_ids"] = inputs.input_ids
# input_ids (torch.LongTensor of shape (batch_size, sequence_length))
# Indices of input sequence tokens in the vocabulary.
batch["attention_mask"] = inputs.attention_mask
# attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional)
# Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
batch["decoder_input_ids"] = outputs.input_ids
# decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional)
# Indices of decoder input sequence tokens in the vocabulary.
batch["decoder_attention_mask"] = outputs.attention_mask
# decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional)
# Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
batch["labels"] = outputs.input_ids.copy()
# labels (torch.LongTensor of shape (batch_size, sequence_length), optional)
# Labels for computing the masked language modeling loss for the decoder.
# Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring)
# Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
batch["labels"] = [[-100 if token == tokenizer.pad_token_id else token for token in labels]
for labels in batch["labels"]]
return batch
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds_bleu, decoded_labels_bleu = postprocess_text(
decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds_bleu,
references=decoded_labels_bleu)
meteor_result = meteor.compute(
predictions=decoded_preds_bleu, references=decoded_labels_bleu)
prediction_lens = [np.count_nonzero(
pred != tokenizer.pad_token_id) for pred in preds]
result = {'bleu': result['score']}
# Rouge expects a newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip()))
for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip()))
for label in decoded_labels]
result_rouge = rouge.compute(
predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
result["rouge"] = result_rouge['rougeL'].mid.fmeasure
result["gen_len"] = np.mean(prediction_lens)
result["meteor"] = meteor_result["meteor"]
result = {k: round(v, 4) for k, v in result.items()}
return result
if __name__ == "__main__":
print(f"Arguments count: {len(sys.argv)}")
for i, arg in enumerate(sys.argv):
print(f"Argument {i:>3}: {arg}")
paraphraser = Bert2BertParaphrase(
batch_size=int(sys.argv[1]),
epoch_size=int(sys.argv[2]),
model_pretrained="BERT2BERT-model-"+str(sys.argv[1])+"-"+str(sys.argv[2]),
cuda_id=0
)
paraphraser.train(True)
```
This function is found in other file.
```python
def preprocess_datasets(self,dataset_path, bool_local):
"""
Read the CSV file which contains all the data and transforms
it into a datasets divided by train set and test set
"""
if bool_local:
dataset_path = PAWS_X_CSV
dataset_content = pd.read_csv(
dataset_path,engine='python', sep="\;\;")
#"sentence1", "sentence2", "label"
dataset_paws_x = dataset_content[["sentence1", "sentence2", "label"]]
dataset_paws_x.columns = ['source', 'target','label']
dataset_paws_x.dropna(axis=0, how='any')
dataset_paws_x = dataset_paws_x.loc[dataset_paws_x['label'] != "0"]
df2= dataset_paws_x[['source', 'target']]
dataset_path = FORTEC_DATASET
dataset_conent = pd.read_csv(
dataset_path, engine='python', header=None, sep="\;\;")
created_dataset = dataset_conent[[0, 1]]
created_dataset.columns = ['source', 'target']
created_dataset.dropna(axis=0, how='any')
df1= created_dataset.loc[created_dataset['target'] != None]
frames = [df1, df2]
dataset = pd.concat(frames)
dataset = dataset[['source', 'target']]
dataset = Dataset.from_pandas(dataset)
dataset = dataset.remove_columns('__index_level_0__')
dataset = dataset.filter(
lambda example: example['target'] != None)
dataset = dataset.filter(
lambda example: example['source'] != None)
print(dataset)
train_testvalid = dataset.train_test_split(test_size=0.1)
print(train_testvalid)
tokenized_dataset = train_testvalid.map(
self.preprocess_function, batched=True)
else:
dataset = load_dataset(dataset_path, 'labeled_final')
dataset_paraphrase = dataset.filter(
lambda example: example['label'] > 0)
dataset_paraphrase = dataset_paraphrase.remove_columns("label")
dataset_paraphrase = dataset_paraphrase.remove_columns("id")
dataset_paraphrase.shuffle(seed=42, buffer_size=10_000)
tokenized_dataset = dataset_paraphrase.map(
self.preprocess_function, batched=True)
return tokenized_dataset
```
## Problem Description
Hello! Sorry, I've only recently started to work with language models. I apologize in case I've forgotten to include something that may help you help me with this issue.
I was fine-tuning the model https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-paus-x-paraphrasing and during the evaluation step I got the error shown in the screenshot.
I don't actually understand why this is happening, because I have built the compute metrics function according to this tutorial https://neptune.ai/blog/hugging-face-pre-trained-models-find-the-best
Sorry if it is a naive mistake, I don't mean to bother but I don't know how I can solve it.
Thank you so much!
| 03-23-2022 07:48:48 | 03-23-2022 07:48:48 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) for additional chances of getting an answer?
Thanks!<|||||>@LysandreJik Sorry, I didn't know about the forum; I will write a post there as well.
Thank you!
<|||||>Reading through the Hugging Face Community forum I found the [solution here](https://discuss.huggingface.co/t/type-error-list-object-cannot-be-interpreted-as-integer-while-evaluating-a-summarization-model-seq2seq-bart/11590/4)
What's wrong with my code is that I completely forgot to set predict_with_generate to True:
```python
predict_with_generate=True
```
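In the `Seq2SeqTrainingArguments` above, that means roughly (values here are illustrative):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="bert2bert-motoria-paraphrasing",
    evaluation_strategy="steps",
    eval_steps=5000,
    # makes the evaluation loop call generate(), so compute_metrics receives
    # token ids that tokenizer.batch_decode can handle
    predict_with_generate=True,
)
```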
Now it is traning perfectly, thank you! |
transformers | 16,354 | closed | Add CANINE and TAPAS to doc tests | # What does this PR do?
Adds CANINE and TAPAS to the doc tests. | 03-23-2022 07:29:55 | 03-23-2022 07:29:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16354). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you working on this, @NielsRogge
Do you think it's possible to:
- Add more realistic examples for `TapasForQuestionAnswering`, `TapasForSequenceClassification`, etc.
- Add the expected output shape for `TapasModel` and make its docstring print the shape instead of the raw logits (a rough sketch of that pattern is below).
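Something along these lines (a sketch of the pattern only, not the exact TAPAS docstring; the checkpoint and table are just examples):
```python
import pandas as pd
from transformers import TapasModel, TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
model = TapasModel.from_pretrained("google/tapas-base")

table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]})
inputs = tokenizer(table=table, queries=["How old is Brad Pitt?"], return_tensors="pt")
outputs = model(**inputs)
# print the shape rather than the raw hidden states, so the doctest stays stable
print(list(outputs.last_hidden_state.shape))  # [1, sequence_length, 768] for tapas-base
```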
@patrickvonplaten Do you think it is good to (always) ask the contributors to change the docstrings of `ModelForXXX` to something more meaningful (i.e. instead of the logits), as my comments here above?
Or should I just review only the changes done in their PRs?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,353 | open | bert.embeddings.position_ids is not loaded in TFBertForSequenceClassification | ## Environment info
- `transformers` version: 4.17.0
- Python version: 3.7.11
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
### Who can help
@LysandreJik
## Information
`TFBertForSequenceClassification` will not load `bert.embeddings.position_ids` from pytorch checkpoint, which will raise a warning.
**Related issue**: https://github.com/huggingface/transformers/issues/7797
## To reproduce
```python
from transformers import BertForSequenceClassification, TFBertForSequenceClassification
pt_model = BertForSequenceClassification.from_pretrained("uer/chinese_roberta_L-4_H-512")
pt_model.save_pretrained("pt_model")
# load pytorch model
tf_model = TFBertForSequenceClassification.from_pretrained("pt_model", from_pt=True)
# Expect: No warning/error
# Actual: This IS NOT expected if you are initializing TFBertForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
```
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForSequenceClassification: ['bert.embeddings.position_ids']
- This IS expected if you are initializing TFBertForSequenceClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFBertForSequenceClassification were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForSequenceClassification for predictions without further training.
```
| 03-23-2022 07:03:23 | 03-23-2022 07:03:23 | Hey @z-bookworm 👋 From the name of the folder on the hub, it seems like you are trying to load a Roberta model ("uer/chinese_roberta_L-4_H-512") in a BERT model class. Can you try replacing `TFBertForSequenceClassification` with `TFRobertaForSequenceClassification`?
Let us know if it solves your problem :)<|||||>Hi @gante 👋
The model name may be a little bit misleading. According to the config.json and [README](https://huggingface.co/uer/chinese_roberta_L-4_H-512), it should indeed use the BERT model class.
Trying to load the checkpoint with `RobertaForSequenceClassification` will raise a lot of warnings:
```
>>> pt_model = RobertaForSequenceClassification.from_pretrained("uer/chinese_roberta_L-4_H-512", local_files_only=True)
You are using a model of type bert to instantiate a model of type roberta. This is not supported for all configurations of models and can yield errors.
Some weights of the model checkpoint at uer/chinese_roberta_L-4_H-512 were not used when initializing RobertaForSequenceClassification: ['bert.embeddings.token_type_embeddings.weight', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.3.attention.self.key.weight', 'cls.predictions.decoder.bias', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'cls.predictions.transform.LayerNorm.weight', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.embeddings.LayerNorm.weight', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.3.attention.self.query.bias', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.layer.2.attention.output.dense.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.embeddings.position_ids', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'cls.predictions.bias', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.1.output.LayerNorm.bias', 
'bert.embeddings.position_embeddings.weight', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.embeddings.word_embeddings.weight', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.self.key.bias']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at uer/chinese_roberta_L-4_H-512 and are newly initialized: ['encoder.layer.0.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.2.attention.self.value.bias', 'embeddings.LayerNorm.weight', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'embeddings.token_type_embeddings.weight', 'encoder.layer.1.output.LayerNorm.weight', 'classifier.dense.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.1.output.dense.bias', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.0.attention.output.dense.bias', 'embeddings.LayerNorm.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.output.LayerNorm.bias', 'classifier.out_proj.weight', 'embeddings.word_embeddings.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.3.attention.self.value.bias', 'classifier.dense.weight', 'encoder.layer.2.output.LayerNorm.bias', 'classifier.out_proj.bias', 'embeddings.position_embeddings.weight', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.1.output.dense.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.3.output.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
According to the [source code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_tf_roberta.py#L117), it seems `position_ids ` is calculated deterministically without any parameters. It won't affect model performance. So can we remove this warning?<|||||>That's interesting! 🤔
Because it is an unexpected combination of code/data (having a roberta model that should be loaded with bert, according to the instructions), that can happen. The only way to confirm that it is correct is to do some standard checks, like confirming that the PT model has good performance on the expected downstream task, and that the TF model has the same output as the PT model for the same input. Because it is a model submitted by a user, I don't think there is much more we can do, and you should try to get in touch with the user if there is any remaining problem.
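For the equivalence part, a quick check could look like this (sketch):
```python
import numpy as np
import torch
from transformers import BertForSequenceClassification, BertTokenizer, TFBertForSequenceClassification

tok = BertTokenizer.from_pretrained("uer/chinese_roberta_L-4_H-512")
pt_model = BertForSequenceClassification.from_pretrained("uer/chinese_roberta_L-4_H-512")
pt_model.save_pretrained("pt_model")
tf_model = TFBertForSequenceClassification.from_pretrained("pt_model", from_pt=True)

with torch.no_grad():
    pt_logits = pt_model(**tok("你好", return_tensors="pt")).logits.numpy()
tf_logits = tf_model(tok("你好", return_tensors="tf")).logits.numpy()
# if this prints True, the missing `position_ids` buffer had no effect on the outputs
print(np.allclose(pt_logits, tf_logits, atol=1e-4))
```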
Since this is not a bug in the code, I'm closing this issue. For further (non-bug) discussions, please refer to our [forums](https://discuss.huggingface.co/) :) <|||||>@gante Sorry for using a misleading checkpoint as the example. **The problem should arise for any BERT model**. Take the official checkpoint `bert-base-uncased` for example:
```python
from transformers import BertForSequenceClassification, TFBertForSequenceClassification
pt_model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("pt_model")
tf_model = TFBertForSequenceClassification.from_pretrained("pt_model", from_pt=True)
```
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForSequenceClassification: ['bert.embeddings.position_ids']
- This IS expected if you are initializing TFBertForSequenceClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFBertForSequenceClassification were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForSequenceClassification for predictions without further training.
```<|||||>@z-bookworm that is a good point, I'm assigning the issue to me :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@gante Any progress on this?
I also had this issue when converting a bunch of models. It looks like the `position_ids` buffer is not used for anything anymore in the `Embeddings` class, and is just taking up memory.
```
from transformers import RobertaForMaskedLM, TFRobertaForMaskedLM
pt_model = RobertaForMaskedLM.from_pretrained('roberta-base')
pt_model.save_pretrained('./testing')
tf_model = TFRobertaForMaskedLM.from_pretrained('./testing', from_pt=True)
```
produces
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFRobertaForMaskedLM: ['roberta.embeddings.position_ids']
- This IS expected if you are initializing TFRobertaForMaskedLM from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFRobertaForMaskedLM from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFRobertaForMaskedLM were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFRobertaForMaskedLM for predictions without further training.
```<|||||>Hey @AndreasMadsen 👋 No update -- since it is not breaking, just an annoying warning that shouldn't be there, it's low in my priority.
If you'd like to have a go in fixing it, I'd be happy to provide pointers :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>You can set the strict argument to False in the load_state_dict() function to ignore non-matching keys.
`model.load_state_dict(torch.load(path, map_location=torch.device('cpu')), strict=False)` |
transformers | 16,352 | closed | Make REST server support arbitrary pipeline params | # What does this PR do?
This PR makes the `forward` endpoint of the REST server support arbitrary pipeline params. This allows for easy configuration, such as setting `max_length` and `temperature`.
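For example, a request could then look something like this (hypothetical request shape: the endpoint name comes from the PR description, while the port and payload fields are assumptions):
```python
import requests

payload = {
    "inputs": "My name is Clara and I am",
    "max_length": 50,     # extra pipeline params, forwarded as-is to the pipeline call
    "temperature": 0.7,
}
print(requests.post("http://localhost:8888/forward", json=payload).json())
```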
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-23-2022 04:02:08 | 03-23-2022 04:02:08 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16352). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,351 | closed | The huggingface web site crashed | <img width="800" src="https://user-images.githubusercontent.com/16131917/159614484-741db4c9-5c60-442a-8df2-efa39869369d.png">
**Set `local_files_only` to True to read the local cache first.**
```python
tokenizer = BertTokenizer.from_pretrained(tokenizer_config.model_path, local_files_only=True)
```
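Alternatively, per the offline-mode docs, the environment-variable route works too (assuming the files are already in the local cache):
```python
import os

# must be set before transformers is imported
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
```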
# Is there any mirror of this web site?
| 03-23-2022 02:52:33 | 03-23-2022 02:52:33 | yes, and I couldn't use from_pretrained() in my code<|||||>+1<|||||>+1<|||||>+1<|||||>+1<|||||>+1<|||||>+1
500 Server Error: Internal Server Error for url: https://huggingface.co/bert-base-uncased/resolve/main/config.json
OSError: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like bert-base-uncased is not the path to a directory containing a config.json file.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.<|||||>+1<|||||>+1<|||||>How to prevent Tokenizer/Model from requesting Internet when local cache is available? Force HuggingFace read the local cache first?<|||||>> How to prevent Tokenizer/Model from requesting Internet when local cache is available? Force HuggingFace read the local cache first?
You can set `local_files_only` to True.
```python
tokenizer = BertTokenizer.from_pretrained(tokenizer_config.model_path, local_files_only=True)
```<|||||>@z-bookworm Thanks it worked.<|||||>+1<|||||>+1<|||||>+1<|||||>Sorry about that!
See this tweet for some context on our work on Hub scalability and performance: https://twitter.com/julien_c/status/1496129354023288838
Will close this issue for now, thanks for reporting 🙏 |
transformers | 16,350 | closed | Fix pipeline loading of custom specified config | # What does this PR do?
Supplying a manual config to a pipeline does not work: the model defaults are used instead.
This is because the config supplied to `pipeline` is not passed on to the instantiation of the model.
This PR solves this by simply passing the config object from the pipeline on to the model instantiation.
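A sketch of the scenario this fixes (the checkpoint and the config tweak below are just illustrative):
```python
from transformers import AutoConfig, pipeline

config = AutoConfig.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
config.id2label = {0: "BAD", 1: "GOOD"}  # hypothetical custom setting

# without the fix, the model inside the pipeline is built from the default config,
# so the tweak above is silently ignored; with it, the supplied config is used
pipe = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    config=config,
)
print(pipe("I love this!"))
```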
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-23-2022 01:21:45 | 03-23-2022 01:21:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16350). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,349 | open | [WIP] New Model Add FastPitch 1.1 | # 🌟 New model addition
## Model description
<!-- Important information -->
**What type of model is FastPitch 1.1?**
It is a mel-spectrogram generator (the acoustic-model stage of a text-to-speech engine) that mainly comprises two feed-forward Transformer stacks.
FastPitch is used to transform text into a spectrogram **that is then used for waveform generation in speech synthesis**.
FastPitch 1.1 includes multi-speaker embeddings.
**What is the novel feature of the model making it different from other spectrogram generators?**
A fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference; by altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and in the end be more engaging to the listener. Uniformly increasing or decreasing pitch with FastPitch generates speech that resembles the voluntary modulation of voice.
FastPitch is meant to be used with a *neural vocoder* like WaveNet or WaveGlow.
Text (Feature Extraction) → **Audio Synthesis (spectrogram)** → **Waveform Synthesis (waveform)**
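A minimal sketch of this two-stage flow (the three callables below are placeholders standing in for a text front-end, FastPitch, and a vocoder; this is not the actual FastPitch API):

```python
import torch

def synthesize(text_encoder, fastpitch, vocoder, text: str) -> torch.Tensor:
    token_ids = text_encoder(text)           # (1, T) phoneme/character IDs
    with torch.no_grad():
        mel, *extras = fastpitch(token_ids)  # (1, n_mels, frames) mel spectrogram
        audio = vocoder(mel)                 # (1, samples) waveform
    return audio
```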
From the [paper](https://arxiv.org/pdf/2006.06873.pdf) abstract:
> We present FastPitch, a fully-parallel text-to-speech model
> based on FastSpeech, conditioned on fundamental frequency
> contours. The model predicts pitch contours during inference.
> By altering these predictions, the generated speech can be
> more expressive, better match the semantic of the utterance,
> and in the end more engaging to the listener. Uniformly increasing or decreasing pitch with FastPitch generates speech
> that resembles the voluntary modulation of voice. Conditioning on frequency contours improves the overall quality of
> synthesized speech, making it comparable to state-of-the-art.
> It does not introduce an overhead, and FastPitch retains the
> favorable, fully-parallel Transformer architecture, with over
> 900× real-time factor for mel-spectrogram synthesis of a typical utterance.
## Open source status
**Samples**
https://fastpitch.github.io/
**My Own Generated Samples**
https://voca.ro/1eYmqidRhGi6
**Pros of the model:**
It plays a part in achieving a high MOS score, comparable to Tacotron 2, without Tacotron 2's high inference cost.
Its high performance and quality make it useful for giving a voice (or soul) to digital assistants or metaverse assistants.
Training is straightforward: unlike FastPitch 1.0, this model does not require durations or alignments to be generated by Tacotron 2 or the Montreal Forced Aligner.
**Cons of the model:**
It isn't in the Hugging Face repository yet, so it can't easily be adapted to product use cases. =)
* [x] the model implementation is available:
It is available here:
https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/FastPitch
* [x] the model weights are available:
Automatic Mixed Precision (AMP) checkpoint for FastPitch 1.1:
https://catalog.ngc.nvidia.com/orgs/nvidia/models/fastpitch_pyt_amp_ckpt_v1_1/files?version=21.12.0
* [x] who are the authors: @alancucki
*I am sorry if I missed anyone.*
cc @anton-l @patrickvonplaten
will assign to whoever is available once the draft is complete. | 03-23-2022 01:06:00 | 03-23-2022 01:06:00 | Preliminary Tasks:
- [x] Load the model and run inference
- [ ] Explore the Hugging Face encoder functions and determine what API or options are available to the encoder for FastPitch and other TTS models
- [ ] Port the CMU dictionary, text processing, and cleaners over to HF for the encoder (see the sketch below)
Scope out more tasks later; these are enough for now:
- [ ] Start inquiring about porting WaveGlow as a decoder model for mel-spectrogram-to-waveform generation.
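A rough sketch of what the CMU-dict/text-cleaner front-end task above might involve; the dictionary entries, cleaner rules, and function names are illustrative assumptions, not the NVIDIA implementation:

```python
import re

# Tiny placeholder pronunciation dictionary; the real CMUdict has on the order of 134k entries.
CMU_DICT = {"HELLO": ["HH", "AH0", "L", "OW1"], "WORLD": ["W", "ER1", "L", "D"]}

def clean_text(text: str) -> str:
    # Minimal cleaner: uppercase, drop unsupported characters, collapse whitespace.
    text = re.sub(r"[^A-Z' ]", " ", text.upper())
    return re.sub(r"\s+", " ", text).strip()

def text_to_phonemes(text: str) -> list:
    # Dictionary lookup per word, falling back to spelling out the characters.
    phonemes = []
    for word in clean_text(text).split(" "):
        phonemes.extend(CMU_DICT.get(word, list(word)))
    return phonemes

print(text_to_phonemes("Hello, world!"))  # ['HH', 'AH0', 'L', 'OW1', 'W', 'ER1', 'L', 'D']
```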
<|||||>Very cool! @anton-l do you know whether we are allowed to use checkpoints from Microsoft regarding licensing? <|||||>@patrickvonplaten looks like there's no custom NVIDIA license this time, the checkpoint's license refers to the BSD-3 bundled with the code: https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechSynthesis/FastPitch/LICENSE
I think we're able to concatenate it with the transformers' Apache license?
Also FYI @jaketae as you were interested in porting FastPitch too :slightly_smiling_face:
<|||||>Cool - if @anton-l and @jaketae think it's worth adding this model, happy to give it a try
transformers | 16,348 | closed | Added spanish translation of autoclass_tutorial.mdx | # Translation of autoclass_tutorial.mdx into spanish
I made the translation of autoclass_tutorial.mdx into Spanish (fixes https://github.com/huggingface/transformers/issues/15947). The document is located in the transformers/docs/source_es folder.
@omarespejel
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
| 03-23-2022 00:25:55 | 03-23-2022 00:25:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16348). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you @Duedme! 🤗
@sgugger LGTM 👍 <|||||>There are a lot of conflicts in your branch, for just one added file. I think the easiest way to fix this is to copy your file, create a fresh branch from the main branch, and open a new PR, as I can't review the diff here :-)