Dataset columns:
- repo: string (1 class)
- number: int64 (1 – 25.3k)
- state: string (2 classes)
- title: string (length 1 – 487)
- body: string (length 0 – 234k)
- created_at: string (length 19)
- closed_at: string (length 19)
- comments: string (length 0 – 293k)
transformers
14,839
open
Fine-tuning GPT-J-6B in colab: 8-bit weights with low-rank adaptors
# 🌟 New model addition ## Model description This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate and fine-tune the model in colab or equivalent desktop GPU (e.g. single 1080Ti). The original GPT-J takes 22+ GB memory for float32 parameters. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it on TPU or CPUs, but fine-tuning is way more expensive. ## Implementation Proof-of-concept notebook is available here: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es?usp=sharing#scrollTo=P8Y75B6WDIN-) [Model card](https://huggingface.co/hivemind/gpt-j-6B-8bit) has more detailed explanations and auxiliary notebooks (e.g. model conversion and perplexity check). The current implementation is somewhat hacky, but it can be integrated easily with modelling_gptj.py if you like the idea. ## Open source status * the model implementation is available [here](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) * the model weights are available [here](https://huggingface.co/hivemind/gpt-j-6B-8bit) * who are the authors: - the [original GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) was trained by [Eleuther AI](https://www.eleuther.ai/) (citation: Ben Wang and Aran Komatsuzaki) - fast quantization from [bitsandbytes](https://github.com/facebookresearch/bitsandbytes) by [Tim Dettmers](https://github.com/TimDettmers) - low-rank adapters were proposed for GPT-like models by [Hu et al (2021)](https://arxiv.org/abs/2106.09685) - this notebook was created by me ( @deniskamazur ) with some help from Yozh ( @justheuristic)
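For readers unfamiliar with the low-rank adapter (LoRA) pattern used here, the sketch below shows the general idea of freezing a base linear layer and training only a small low-rank bypass. It is a simplified illustration, not the notebook's actual `FrozenBNBLinear` code: the 8-bit dequantization step is omitted, and the names `LowRankAdapterLinear` and `adapter_dim` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LowRankAdapterLinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank adapter (LoRA-style)."""

    def __init__(self, base: nn.Linear, adapter_dim: int = 16):
        super().__init__()
        self.base = base
        # Freeze the (possibly quantized) base weights; only the adapter trains.
        for p in self.base.parameters():
            p.requires_grad = False
        self.adapter = nn.Sequential(
            nn.Linear(base.in_features, adapter_dim, bias=False),
            nn.Linear(adapter_dim, base.out_features, bias=False),
        )
        # Zero-init the up-projection so training starts from the frozen model's output.
        nn.init.zeros_(self.adapter[1].weight)

    def forward(self, x):
        return self.base(x) + self.adapter(x)

# Usage: wrap a projection layer and fine-tune only the adapter parameters.
layer = LowRankAdapterLinear(nn.Linear(4096, 4096), adapter_dim=16)
out = layer(torch.randn(2, 4096))
```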
12-19-2021 23:53:52
12-19-2021 23:53:52
Just read the LORA paper and your implementation combined with weight quantization is very neat, @deniskamazur. Thank you! a few comments: 1. If and when this is integrated into transformers may I just suggest not to override `__repr__` in the frozen modules? I have puzzled for a while over why the `adapter` weights don't show up when dumping the model until I have noticed a custom `__repr__` that hides them. I totally get it that it was added for brevity of the demo. Nothing needs to be changed in the demo notebook. 2. With the little I know of BNB, `Adam8bit` usually requires a `StableEmdedding` - which is the same as `nn.Embedding` but with `layer_norm` inited to kaiming uniform at the end of `forward`. Do you think it's not needed for LoRA? We will be discussing this in https://github.com/huggingface/transformers/issues/14819 as well, once Tim is back from vacation. But I thought I'd bring it up here as well as it's relevant. 3. It would be good to finetune it for a bit and see that LoRA actually delivers on the promise. Unless someone already did so, then it's not needed. 4. How do we decide on a good `adapter_dim` (rank) to recommend to users? what should be the default? and this hparam definitely should be user-configurable. 5. We surely will want to make it available to more than just GPT-J if it works well. But it's good to start with one model.<|||||>The KoboldAI community is really looking forward to seeing these 8-bit models implemented, since many users of our software use it at their own home computers this allows more people to run 6B at good speeds. Ideally we'd see a way to easily convert the models for our users on the fly, similar to how _half() works so they can load unconverted versions and still have the gains this brings. If this is not possible i hope it will be easy to detect that a model is the 8-bit variant, so we can avoid executing half() on the model.<|||||>Thank you so much for creating this, Denis! > 2. With the little I know of BNB, `Adam8bit` usually requires a `StableEmdedding` - which is the same as `nn.Embedding` but with `layer_norm` inited to kaiming uniform at the end of `forward`. Do you think it's not needed for LoRA? We will be discussing this in [RFC: Integrating bitsandbytes 8-bit optimizer / adding Embedding Norm #14819](https://github.com/huggingface/transformers/issues/14819) as well, once Tim is back from vacation. But I thought I'd bring it up here as well as it's relevant. See the discussion in this issue for more information, but in short, StableEmbedding layer is only required if the model was pretrained with the StableEmbedding layer. In the case of regular finetuning with 8-bit Adam, it is better to have 32-bit optimizers for the embedding layer. It is currently unclear if this is required for LoRA since the frozen 8-bit weights will provide some stability. Just to be sure, it is probably better to optimize the LoRA embedding layer in 32-bit (no change to the model). You can integrate this in your embedding layer class as shown [here](https://github.com/facebookresearch/bitsandbytes/blob/main/bitsandbytes/nn/modules.py#L51). Optionally, you can use the `bnb.nn.StableEmbedding` in place of the LoRA embedding layer and optimize the linear projection normally: ```python elif isinstance(module, FrozenBNBEmbedding): module.adapter = nn.Sequential( bnb.nn.StableEmbedding(module.num_embeddings, adapter_dim), nn.Linear(adapter_dim, module.embedding_dim, bias=False), ) ``` > 3. 
It would be good to finetune it for a bit and see that LoRA actually delivers on the promise. Unless someone already did so, then it's not needed. This is definitely a good idea. From my experience with 8-bit weights is that they work fine as long as they are not optimized over time. So keeping them frozen and optimizing the low-rank matrices should work just fine and produce results similar to the LoRA paper. However, I have never tried the setup of 8-bit weights + 16/32-bit low-rank matrices, so its better to check this. <|||||>> If this is not possible i hope it will be easy to detect that a model is the 8-bit variant, so we can avoid executing half() on the model. 1. This quant+lora will most likely require new architecture, in which case the model should automatically do the right thing on load. or at the very least there should be a config entry which will tell transformers what to do. 2. I'm trying to find a way to automatically detect the dtype here https://github.com/stas00/ml-ways/blob/master/numbers/detect-model-pretrained-in-bf16-fp16-fp32.ipynb, so now we can try int8 as well - I could use more inputs to help with that work. 3. Also I proposed a while ago to have a model save how it was trained in its `config.json` https://github.com/huggingface/transformers/issues/11209 - my proposal didn't go far, but perhaps this new development might give it some push.<|||||>Hey, everyone! Thanks for your interest and comments! 1. I'd like to discuss if we actually need LoRa adapters in the possible implementation. As I see it, they are not necessarily a part of the 8bit model. Maybe, we could just add an `add_low_rank_adaptors_` function or method. 2. @stas00, I like your idea of generalizing this to other models. Though I don't have any ideas regarding the possible implementation of this. Would be glad to hear yours. 3. I could open a PR with the 8bit GPT-J without adapters like tomorrow. Should I do it, or is there anything we should discuss before that? <|||||>> * I'd like to discuss if we actually need LoRa adapters in the possible implementation. As I see it, they are not necessarily a part of the 8bit model. Maybe, we could just add an `add_low_rank_adaptors_` function or method. These are orthogonal features so probably they can be implemented separately. Separating these surely would make the PRs simpler to manage. But it'd be good to keep in mind the ensemble from the get going. > * @stas00, I like your idea of generalizing this to other models. Though I don't have any ideas regarding the possible implementation of this. Would be glad to hear yours. Since you're overriding pytorch components, this is already generic enough. So the unique to model changes are the `post_init` code where you call 1x or 2x of `convert_to_int8`. By post init I mean literally post init (we don't have such method yet I think). So here we need a sort of a map/policy per arch that will run the right post_init after the map lookup if the model config says so, so .e.g. So this is a hardcoded way (to replace monkeypatch) ``` class GPTJBlock(): def __init__(self, config): super().__init__(config) [...] if config.8bits: convert_to_int8(self.attn) convert_to_int8(self.mlp) ``` and the more generic way which can then be expanded to other archs easily: ``` # in another file 8bit_map = dict( gptj=dict( GPTJBlock = ["self.attn", "self.mlp"], GPTJModel = ["self"], GPTJForCausalLM = ["self"], ), gptneo=dict(), gpt2=dict() ) # gptj_modeling class GPTJBlock(): def __init__(self, config): super().__init__(config) #[...] 
if config.8bits: to_int8_params = 8bit_map["GPTJBlock"] for param in to_int8_params: # XXX: figure out the getattr for self vs self.foo convert_to_int8(getattr(self, "param")) ``` which of course should be refactored into a simple: ``` if config.8bits: self.to_init8() # do all of the above ``` and since we will likely to have other similar maps as we try to integrate all the new development this then again can be abstracted away: ``` post_init_maps = dict( 8bit=8bit_map, featureX=featureX_map, # doesn't exist yet ) [....] self.post_init() ``` which will do this and other future feature enabling and not make the code noisy. On the other hand it's possible that my proposal will be supported by others and an explicit code will be used for each class/arch. This is all very incomplete pseudo code, just to show what I'm trying to propose conceptually Here is another example where a policy map is created for different archs: https://github.com/huggingface/transformers/blob/10a382bb85e0ea75e34623adad4bdd521b16b16a/src/transformers/deepspeed.py#L36-L41 This is from a very early deepspeed-inference PR https://github.com/huggingface/transformers/pull/14426 > * I could open a PR with the 8bit GPT-J without adapters like tomorrow. Should I do it, or is there anything we should discuss before that? I'm sure others will have a lot more to say, but since you have the code written already, probably the best way is to just open an PR and go from there. You can start with the hardcoded version or you can try to do something like I suggested, which will immediately prepare a foundation to support other architectures. As I said earlier w/o hearing from other maintainers I'm not sure what is the best first step. The lowest risk is hardcoded I'd say. Alternatively you can wait till Monday when many devs should be back and may have a chance to comment.<|||||>I like your suggestion with the policy map. I think I'll wait for the other maintainer's opinions before opening the PR. Thanks!<|||||>I also wonder whether the policy should be arch-specific, or model-specific - what if someone wants to do 8-bit only for FFN or only for Embedding? If model-specific than the specific params to convert to 8-bit can be declared in the model config. or perhaps there could be an arch-specific default and then the model-specific could override it? Not sure...<|||||>> 1. I'd like to discuss if we actually need LoRa adapters in the possible implementation. As I see it, they are not necessarily a part of the 8bit model. Maybe, we could just add an `add_low_rank_adaptors_` function or method. From my experience training with 8-bit dynamic block-wise quantization degrades performance over time but it is fine if only applied once and used for inference or as in the case of LoRA as some sort of "base output" that is adapted. As such, I think that LoRA might be required to maintain good performance. That being said, I have never tried _finetuning_ a model and I only worked on _pretraining_ -- it might be that finetuning with 8-bit weights works just fine. I think the solution with a map to specify 8-bit parameters would be very handy. I think that would give the flexibility that is needed. What I would add is what kind of int8 data type is used. > I also wonder whether the policy should be arch-specific, or model-specific - what if someone wants to do 8-bit only for FFN or only for Embedding? If model-specific than the specific params to convert to 8-bit can be declared in the model config. 
or perhaps there could be an arch-specific default and then the model-specific could override it? Not sure... I think it should be model specific. There are certain tradeoffs and important differences having certain things in 8-bit and others in 16-bit for the same model architecture. So it would be very useful to be able to have more flexibility overall to accommodate that.<|||||>> What I would add is what kind of int8 data type is used. Did you mean to say something different here, Tim? Unless I misunderstood, int8 is already a single data type. Perhaps you meant having a flexibility on how many quantization bits are used for different components, so it's not always 8, but can be 4, 16, etc.? Same as `optim_bits` param in the BNB's optim: ``` GlobalOptimManager.get_instance().register_module_override(module, 'weight', {'optim_bits': 32}) ``` <|||||>> Did you mean to say something different here, Tim? Unless I misunderstood, int8 is already a single data type. Currently, the bnb quantization by default uses dynamic block-wise quantization so the int8 data type represents that data type which is defined by the int8 data + int-to-float map + normalization constants. This data type is storage optimized. Soon, I will also add another data type to bnb which will be compute optimized. It is still represented by int8 data + int-to-float map + normalization constants but these will be different and incompatible from the storage optimized variant. At this point, it is already clear to me that the storage data type can be improved quite easily. So it might also be helpful to support that to make sure future variants can be supported easily. On the other hand, it might be better defined separately. That one defines int8 + a quantization method which is defined somewhere else.<|||||>Sounds good, Tim. So I trust you will come up with the different names then. We just need to think how to make it easily expandable in the future to support other types. My thinking is that perhaps BNB won't be the only library providing quantization support so the more generic it is the better. We can start with one model, flag it experimental until we sort out the config.<|||||>Hi, everyone! Thank you for your suggestions. I'm currently busy with my uni exams, but I'll be back with a PR in a couple of weeks.<|||||>I have a question and I am writing. quantized model(hivemind/gpt-j-6B-8bit) and of the original model(EleutherAI/gpt-j-6B) The generate inference speed is almost doubled(quantized model is much slower) I wonder if it is normal to come out at that speed or if it can be reduced<|||||>Hi! The inference speed is indeed slower due to the fact that you de-quantize weight matrices for every token. You can increase the batch size (i.e. generate several sequences in parallel) to reduce that overhead. The same is true for training: the fine-tuning speed is not significantly different from the original model because training is parallel over `sequence_length` tokens (while inference is inherently sequential). You can combine the two setups (vanilla and 8-bit) to better fit your hardware. For instance, if you have a T4 or rtx3090 gpu, it is enough to inference the model but not enough to fine-tune it. The optimal pipeline would be to fine-tune using 8-bit weights, then de-quantize for inference. In turn, if you have a 10-12GB GPU such as rtx 2080Ti or 3080, inferencing should run in 8-bit mode as well.<|||||>I just really can not hold back from saying, this is awesome! 
Thank you 🙏 Good luck on your studies, hope when you're finished I can assist you somehow with next steps.<|||||>Hey, everyone! I've [implemented](http://github.com/deniskamazur/transformers/tree/gpt-j-8bit) the «hardcoded» version of this issue. You can verify it's functional over [here](https://colab.research.google.com/drive/1m3KQYva980cQnRoycCMAMEEcAyeallZJ?usp=sharing). Should I add any tests before opening a PR? I'd also be glad to implement LoRA and a generalized version of this issue in future PRs. <|||||>> I've [implemented](https://github.com/deniskamazur/transformers/tree/gpt-j-8bit) the «hardcoded» version of this issue. Awesome news, @deniskamazur! I won't have time at this moment to support this process very closely but I trust there will be other maintainers who will have a closer look and provide feedback once you open a PR. > Should I add any tests before opening a PR? Definitely, and you can use this tiny model for functionality tests: https://huggingface.co/hf-internal-testing/tiny-random-gptj but I guess you will need the 8bit version which we currently don't have, perhaps then start with what you have and then we can reduce it to a tiny size at the end of the PR process (we want functional tests to run fast). As we have a massive test suite it should be relatively easy to build upon/mimic some of the existing tests. And if you get stuck please don't hesitate to ask in the PR. <|||||>Great, thanks! I'll open a PR as soon as I write the test then.<|||||>Hey! I've noticed this [PR](https://github.com/huggingface/transformers/pull/17901), that seems to generalize what we are doing with gpt-j-8bit. What should I do with this issue?<|||||>Hi Denis, it has been a long time.... perhaps there has been a misunderstanding - as we have been waiting for you to complete the PR so nothing has happened here until now. Let's tag @younesbelkada, whose PR you linked to. Younes, not to load more work on you, but a quick question - does your PR supercedes Denis' work? or is there some collaboration that can happen here?<|||||>Hi @deniskamazur @stas00 Sorry for getting back late on this! I don't think there will be a conflict in both methods, but our PR aims to support all models on `transformers` by replacing their Linear layers by the one that will be provided by `bitsandbytes` - so naturally GPT-J should be supported too. But I am not sure the quantization method you want to integrate here is the same as the one we are aiming to integrate on the other PR. In our implementation the weights should not need to be loaded/pushed in int8 and could be directly casted from any fp16 weights, therefore we could just do something like `AutoModel.from_pretrained(load_in_8bit=True)` and it should be fine (which is different to what is described here?). Though, I will be definitely happy to discuss any possible collaboration with you if you see any! Feel free to jump in the other PR tagging also @TimDettmers <|||||>I suppose the advantage of loading in int8, is that with fp16 you need 2x memory upfront, but since we now have sharded checkpoints this can be overcome by sharding into smaller shards if someone is really tight on memory, so only the embedding will be the largest param. But otherwise I'll let you guys to discuss the pros and cons of which way, as I'm still busy with the BigScience, but would love to study this closer / support you guys once the marathon is over. May be let's also cc @justheuristic to this discussion. 
So between the four of you this domain is in the good hands.<|||||>> Hey Thanks for your notebook I am trying to run this notebook how ever I am getting the following error when installing bitesandbytes-cuda111 with your specified version 0.26.0: ERROR: Could not find a version that satisfies the requirement bitsandbytes-cuda111==0.26.0 (from versions: 0.26.0.post2) ERROR: No matching distribution found for bitsandbytes-cuda111==0.26.0 Please let me know if any other version should be replaced. Thanks <|||||>> > > > Hey Thanks for your notebook > > I am trying to run this notebook how ever I am getting the following error when installing bitesandbytes-cuda111 with your specified version 0.26.0: ERROR: Could not find a version that satisfies the requirement bitsandbytes-cuda111==0.26.0 (from versions: 0.26.0.post2) ERROR: No matching distribution found for bitsandbytes-cuda111==0.26.0 > > Please let me know if any other version should be replaced. Thanks Change `!pip install bitsandbytes-cuda111==0.26.0` to `!pip install bitsandbytes` and this notebook works for now. > I suppose the advantage of loading in int8, is that with fp16 you need 2x memory upfront, but since we now have sharded checkpoints this can be overcome by sharding into smaller shards if someone is really tight on memory, so only the embedding will be the largest param. > > But otherwise I'll let you guys to discuss the pros and cons of which way, as I'm still busy with the BigScience, but would love to study this closer / support you guys once the marathon is over. > > May be let's also cc @justheuristic to this discussion. So between the four of you this domain is in the good hands. As per the `hivemind/gpt-j-6b-8bit` model card, I'm trying to use `load_in_8bit=True` with `EleutherAI/gpt-j-6B` but I can't seem to get it to work without crashing due to too much RAM usage. What would the RAM requirements be? <|||||>@petertjmills Same here. Using int8, the original model fits on an 8 GB NVIDIA GeForce GTX 1080, but crashes after the first generation. The Hivemind model [uses float16 or float32 for computation](https://huggingface.co/hivemind/gpt-j-6B-8bit), so it's even more unlikely to succeed. 
Probably at least 9-10 GB VRAM are needed.<|||||>I am getting the following error when attempting to fine-tune: Traceback (most recent call last): File "/opt/gpt-j-8bit/gpt-j-6b-8-bit.py", line 242, in <module> out = gpt.forward(**batch,) File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 782, in forward transformer_outputs = self.transformer( File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 636, in forward outputs = block( File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 291, in forward feed_forward_hidden_states = self.mlp(hidden_states) File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 254, in forward hidden_states = self.fc_in(hidden_states) File "/opt/gpt-j-8bit/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/opt/gpt-j-8bit/gpt-j-6b-8-bit.py", line 48, in forward output += self.adapter(input) RuntimeError: Output 0 of DequantizeAndLinearBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function. Any idea on how to solve this? Edit: Was able to get the fine-tuning going by modifying the following part: `def forward(self, input): output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias) if self.adapter: output += self.adapter(input) return output` To: ` def forward(self, input): output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias) if self.adapter: output_cloned = torch.clone(output + self.adapter(input)) return output_cloned else: return output`<|||||>After training the model with this notebook, how can it be saved and loaded back? If I try `gpt.save_pretrained(some_folder)` I can save the model, but then if I try to load it back in another script with `model = AutoModelForCausalLM.from_pretrained(some_folder).cuda()` I get the following warning: > Some weights of the model checkpoint at some_folder were not used when initializing GPTJForCausalLM: ['transformer.h.0.mlp.fc_in.code', 'transformer.h.21.attn.k_proj.adapter.1.weight', 'transformer.h.17.attn.k_proj.code', 'transformer.h.12.attn.v_proj.absmax', 'transformer.h.0.attn.q_proj.absmax', 'transformer.h.2.attn.out_proj.code (...) And the loaded model only produces garbage output. 
Alternatively, if I try to load it with `model = AutoModelForCausalLM.from_pretrained(some_folder, load_in_8bit=True, device_map='auto')` I get an error: > RuntimeError: Only Tensors of floating point and complex dtype can require gradients<|||||>To the best of my knowledge, you will need to manually extract and save model state dict -- containing only the modules you have trained -- and then load the state dict with model.load_state_dict .<|||||>Hi, thanks for your very nice work! I tried to almost blindly copy-past your notebook on a blank colab notebook (simple standard free account). I only encountered one error, almost at the beginning CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source? CUDA SETUP: Defaulting to libbitsandbytes_cpu.so... /usr/local/lib/python3.8/dist-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable. warn("The installed version of bitsandbytes was compiled without GPU support. " I ignored it (of course I felt that something was not right) and I arrived with no other errors up to the point where it tries to gpt.generate new text, i.e. before fine-tuning. The command gtp.generate is running since 25 minutes. I suspect this slowness is not normal, but rather is an effect of not using gpu. Is that correct? any suggestion how to solve it? Thanks Andrea<|||||>Hi @andreo73, You need to install the CUDA version of bitsandbytes, `pip install bitsandbytes-cuda111` <|||||>Has anyone already tried fine-tuning this with the alpaca approach?<|||||>### Runtime error when batching I'm having issues with getting [the proof-of-concept notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) to work with a batch size > 1. The original notebook just iterates over the sample dataset row by row (one example at a time), which works fine also for my dataset. However, when I feed batches of more than one example to the model (in `out = gpt.forward(**batch,)`), I get a `RuntimeError: The size of tensor a (64) must match the size of tensor b (4) at non-singleton dimension 3`. The same happens when I use the `Trainer` API. Does anyone have an idea what's going on here? My batches are of the form ``` { "input_ids": [[123, 456, ...], [321, 654, ...], ...], "attention_mask": [[1,1,1, ...0], [1,1,1,...0], ...] } ``` <|||||>![image](https://user-images.githubusercontent.com/432168/234852102-3d5b0b20-d136-49e9-bc96-7e3e0eccd15c.png) out of memory on colab every time ((
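Following up on the suggestion above to save only the modules that were actually trained, here is a hedged sketch of that approach. It assumes the model variable is the notebook's `gpt` and that the trainable low-rank modules have "adapter" in their parameter names, as in the proof-of-concept notebook; adjust the filter and file name to your own setup.

```python
import torch

# After fine-tuning: keep only the adapter tensors (the only weights that changed).
adapter_state = {k: v.cpu() for k, v in gpt.state_dict().items() if "adapter" in k}
torch.save(adapter_state, "gptj_8bit_adapters.pt")

# Later: rebuild the 8-bit model exactly as in the notebook (quantize + attach adapters),
# then restore just the trained tensors on top of it.
adapter_state = torch.load("gptj_8bit_adapters.pt", map_location="cpu")
missing, unexpected = gpt.load_state_dict(adapter_state, strict=False)
assert not unexpected  # every saved tensor should map onto an existing adapter module
```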
transformers
14,838
closed
Subword Tokenization Bug after Non-Space Word Boundaries (AlbertTokenizerFast)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.0 - Platform: macOS Big Sur - Python version: 3.8.5 ### Who can help @LysandreJik ## Information When using the AlbertTokenizer or AlbertTokenizerFast, words that appear right after word boundaries (except spaces) are not tokenized properly. I have been able to replicate this problem for word boundaries such as brackets `[{(`, slashes `/` or dashes `-`, but there may be others. However, if a space is introduced between these word boundaries and the subsequent word, the phrase is tokenized properly. For example, `(presentation)` is tokenized incorrectly by AlbertTokenzerFast, while `( presentation)` is tokenized correctly by AlbertTokenzerFast. This issue doesn't exist for BertTokenizerFast (see detailed example below). It seems like words that follow non-space word boundaries are treated as a continuation of the word boundary. I would expect that the word would be treated as a new word. This leads to incorrect subword tokenization. For example, AlbertTokenizerFast is tokenizing `(presentation)` with the subword tokens `present, ation` rather than with the start word token `▁presentation`. Model I am using: **ALBERT** The problem arises when using: * [x] my own modified scripts ## To reproduce Steps to reproduce the behavior: ``` python from transformers import AlbertTokenizerFast, BertTokenizerFast test_phrase = "Knowledge of (presentation) would be great." albert_tokenizer = AlbertTokenizerFast.from_pretrained('albert-base-v2') bert_tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') albert_tokenizer_tokens = albert_tokenizer(test_phrase).tokens() bert_tokenizer_tokens = bert_tokenizer(test_phrase).tokens() print("Albert Tokenizer Tokens: {}".format(albert_tokenizer_tokens)) print("Bert Tokenizer Tokens: {}".format(bert_tokenizer_tokens)) ``` The print statements produce the following: ``` Albert Tokenizer Tokens: ['[CLS]', '▁knowledge', '▁of', '▁', '(', 'present', 'ation', ')', '▁would', '▁be', '▁great', '.', '[SEP]'] Bert Tokenizer Tokens: ['[CLS]', 'knowledge', 'of', '(', 'presentation', ')', 'would', 'be', 'great', '.', '[SEP]'] ``` Notably, the word that appears after the parenthesis, presentation, is tokenized into the tokens `present` and `ation`, rather than `▁presentation`. ## Expected behavior I would expect the print statements to produce the following: ``` Albert Tokenizer Tokens: ['[CLS]', '▁knowledge', '▁of', '▁', '(', '▁presentation', ')', '▁would', '▁be', '▁great', '.', '[SEP]'] Bert Tokenizer Tokens: ['[CLS]', 'knowledge', 'of', '(', 'presentation', ')', 'would', 'be', 'great', '.', '[SEP]'] ``` <!-- A clear and concise description of what you would expect to happen. -->
12-19-2021 23:22:21
12-19-2021 23:22:21
cc @SaulLu <|||||>Hi @mcrchopra , Could you tell us more about why you think the tokenization of `(presentation)` with Albert's tokenizer should be `'(', '▁presentation', ')'` and not `'(', 'present', 'ation', ')'`? Apriori, there is no reason to have similar tokenizations produced with Albert's and Bert's tokenizers. From my point of view, the tokenization returned by the Albert tokenizer corresponds well to the rules we gave it but I would be really keen to know your opinion on it. :blush:<|||||>Hi @SaulLu, thanks for the quick response! This specifically becomes an issue, when you introduce new tokens to the Albert vocabulary and continue pre-training. For the sake of continuity, let's say that the word "presentation" (from the example above) was not present in the original vocabulary. Given the current behavior, phrases with `(presentation)` would tokenize the term into subwords rather than use the new token `presentation`. This defeats the purpose of adding the new token altogether (i.e. lessens the amount of training data that the model has in it's purview). In the dataset I'm using to further pre-train Albert, there are a large number of parentheticals and slashes, which is "reducing the training signal".<|||||>Hi @mcrchopra , thank you for detailing your use case! > Given the current behavior, phrases with (presentation) would tokenize the term into subwords rather than use the new token presentation. This defeats the purpose of adding the new token altogether (i.e. lessens the amount of training data that the model has in it's purview). Could you share with me what you did to add the token `presentation` to your tokenizer? :relaxed: On my side, if I realize this addition to the tokenizer of "albert-base-v2", the token `presentation` is well preserved on the example that you shared: ```python from transformers import AlbertTokenizerFast tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2") text = "Knowledge of (presentation) would be great." tokenizer.tokenize(text) ``` output ``` ['▁knowledge', '▁of', '▁', '(', 'present', 'ation', ')', '▁would', '▁be', '▁great', '.'] ``` and here's the result after I add the token: ```python tokenizer.add_tokens(["presentation"]) tokenizer.tokenize(text) ``` output ``` ['▁knowledge', '▁of', '▁', '(', 'presentation', '▁', ')', '▁would', '▁be', '▁great', '.'] ```<|||||>Hi @SaulLu, thank you so much for looking into this, especially before the holidays! I was wrong -- as you show your in your example, this totally works for added tokens. To provide a little more context, in my use case I am training ALBERT on a dataset from a different domain. Within the new domain, there were certain tokens that were already are captured in the vocabulary of the original ALBERT tokenizer. I was **not adding these tokens** via the "add_tokens" method. It seems like these tokens are the ones having the issue I presented above. My hypothesis is that these tokens are quite in-frequent in the original data distribution upon which ALBERT was trained; so the tokenizer uses subword representations rather than the full word, when word boundaries are present. For now, my solution will be to add these overlapping tokens as new tokens, essentially forcing the model to re-learn the embeddings for these tokens, on my given domain. Again, thanks so much for looking into this! You're feedback help me figure it out :) <|||||>I'm glad to read that it helped you! 
:hugs: If I understood your use case correctly: you plan to modify the tokenizer a bit and to continue training the model a bit on this modified tokenizer. If that is the case, I think you can have I have another solution to show you. Of course, adding tokens with the `add_tokens` method may be the best solution for in the end, but I just want to show you an alternative (which necessarily requires some re-training of your model). Actually, if your goal is that the text contained between parentheses is treated as text that would be at the beginning of a sentence, then you can also change the pre-tokenization rules of your tokenizer in order to "force" a division at the level of parentheses. Here is how to do it (if it is more commanding you can also directly open [this google colab](https://colab.research.google.com/drive/16XLwof-eCRMJZRIutT2n9bDltRPTMRAq?usp=sharing)): ### A. We load the original albert tokenizer ```python import json from transformers import AlbertTokenizerFast from tokenizers import pre_tokenizers ``` ```python text = "Knowledge of (presentation) would be great." ``` ```python tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2") ``` ```python print(tokenizer.tokenize(text)) ``` ['▁knowledge', '▁of', '▁', '(', 'present', 'ation', ')', '▁would', '▁be', '▁great', '.'] We store the encoding of this sentence in order to compare it at the end with the encoding of the same sentence done with our new tokenizer ```python ids_tokenization_original_tokenizer = tokenizer.encode(text) ids_tokenization_original_tokenizer_map = [(id, tokenizer.convert_ids_to_tokens([id])) for id in ids_tokenization_original_tokenizer] print(ids_tokenization_original_tokenizer_map) ``` [(2, ['[CLS]']), (1918, ['▁knowledge']), (16, ['▁of']), (13, ['▁']), (5, ['(']), (3914, ['present']), (857, ['ation']), (6, [')']), (83, ['▁would']), (44, ['▁be']), (374, ['▁great']), (9, ['.']), (3, ['[SEP]'])] ### B. We adapt the tokenizer to treat the content of parenthesis as a regular sentence beginning with a space 1. We add a rule to pre-tokenize the text on parenthesis ```python parenthesis_1 = "(" parenthesis_2 = ")" behavior = "isolated" tokenizer.backend_tokenizer.pre_tokenizer = pre_tokenizers.Sequence( [ pre_tokenizers.WhitespaceSplit(), pre_tokenizers.Split(parenthesis_1, behavior=behavior), pre_tokenizers.Split(parenthesis_2, behavior=behavior), pre_tokenizers.Metaspace(replacement="▁", add_prefix_space=True), ] ) ``` 2. We test the result ```python print(tokenizer.tokenize(text)) ``` ['▁knowledge', '▁of', '▁', '(', '▁presentation', '▁', ')', '▁would', '▁be', '▁great', '.'] We observe here that `" ("` is tokenized into 2 tokens `['▁', '(']`. This is probably due to the fact that the tokenizer vocabulary was not learned with this pre-tokenization rule. We can tweak the vocabulary to solve this problem. ```python tokenizer.save_pretrained('local') ``` 3. We change the vocabulary to replace the token `"("` with the token `"▁("` and `")"` with `"▁)"` because with the new pre-tokenization rule, a parenthesis will always be preceded by a space. ```python with open('local/tokenizer.json', "r") as f: tokenizer_json = json.loads(f.read()) vocab = tokenizer_json['model']['vocab'] ``` we first verify that the tokens `"▁("` and `"▁)"` are not contained into our vocabulary to confirm our guess. 
```python tokens_to_check = ["▁(", "▁)"] for token, prob in vocab: if token in tokens_to_check: print(token) ``` Now we are sure that `"▁("` and `"▁)"` are not in the vocabulary, we can do the replacement (in order to continue to use the same id!) ```python new_vocab = [] tokens_to_change = ["(", ")"] for token, prob in vocab: if token in tokens_to_change: token = f"▁{token}" new_vocab.append([token, prob]) tokenizer_json['model']['vocab'] = new_vocab with open('local/tokenizer.json', "w") as f: f.write(json.dumps(tokenizer_json)) ``` 4. That's it, we can load our new tokenizer and compare the tokenization with the original one! ```python tokenizer = AlbertTokenizerFast.from_pretrained('local') print(tokenizer.tokenize(text)) ``` ['▁knowledge', '▁of', '▁(', '▁presentation', '▁)', '▁would', '▁be', '▁great', '.'] We do a last check to verify that we didn't mess with the ids (which is the thing that matter to the model) ```python ids_tokenization_new_tokenizer = tokenizer.encode(text) ids_tokenization_new_tokenizer_map = [(id, tokenizer.convert_ids_to_tokens([id])) for id in ids_tokenization_new_tokenizer] print(ids_tokenization_original_tokenizer_map) print(ids_tokenization_new_tokenizer_map) ``` [(2, ['[CLS]']), (1918, ['▁knowledge']), (16, ['▁of']), (13, ['▁']), (5, ['(']), (3914, ['present']), (857, ['ation']), (6, [')']), (83, ['▁would']), (44, ['▁be']), (374, ['▁great']), (9, ['.']), (3, ['[SEP]'])] [(2, ['[CLS]']), (1918, ['▁knowledge']), (16, ['▁of']), (5, ['▁(']), (6364, ['▁presentation']), (6, ['▁)']), (83, ['▁would']), (44, ['▁be']), (374, ['▁great']), (9, ['.']), (3, ['[SEP]'])] So the ids look good: the token `" ("` is now tokenized into one token which use the same id as the token `"("` before and "presentation" is no longer divided into 2 tokens 🚀
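One practical note if you follow the `add_tokens` route discussed above and then continue pre-training: the model's embedding matrix also has to be resized to cover the new ids. A minimal sketch (the checkpoint name and the masked-LM head are just examples):

```python
from transformers import AlbertTokenizerFast, AlbertForMaskedLM

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

# Domain terms you want treated as whole tokens during continued pre-training.
num_added = tokenizer.add_tokens(["presentation"])
if num_added > 0:
    # Grow the input (and tied output) embeddings so the new ids have rows to train.
    model.resize_token_embeddings(len(tokenizer))

print(tokenizer.tokenize("Knowledge of (presentation) would be great."))
```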
transformers
14,837
closed
Adding S4 Model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @Rocketknight1
12-19-2021 20:03:38
12-19-2021 20:03:38
@kamalkraj This looks great! Please let us know if you're encountering any problems, and of course ping me when it's ready for a final review.<|||||>Hi @kamalkraj ! We noticed there haven't been any commits here for a while - do you want us to find someone to take it over and resolve the issues from here? Either way, we appreciate everything you've done so far, so don't stress about it!<|||||>Hi @Rocketknight1, Got a little busy with some work. Will work on it this week <|||||>Hi @kamalkraj! We've actually just been speaking with the authors, and they said they're planning a refactor of the model that's going to be released soon. We're going to speak with them about adding it to the hub at that point - either way, though, we'll probably use your existing PR as a base and we'll include you as a co-author.<|||||>Sure @Rocketknight1. After the refactor, if you need any help from my side, please ping here. Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, any news on this implementation? Some time ago I finished the @kamalkraj implementation, but I did not refactor the code. @Rocketknight1 was mentioning that the authors were planning to release a cleaner implementation. Do you have any update from the authors on it? I also think that it would be veeeeeery valuable if they release their pre-trained models. Thanks!<|||||>@gaceladri This is something we're keeping an eye on. We were planning to do a port of the newer implementation, but we've seen [this paper ](https://twitter.com/ankgup2/status/1508807766093217804) recently. We believe this indicates that we're liable to see some significant updates to that architecture before the first foundation models arrive, and so we're backing away a little from porting S4 in its current form, at least until a significant foundation model arrives! That said, we still think the architecture is likely to be very important, and we're seeing what we can do to get it to the point where those models exist. I know that's a little mysterious, but I don't want to over-promise anything just yet!<|||||>@Rocketknight1 Thanks a lot for the answers and for the paper as well. I had not seen it! Thanks!!<|||||>@Rocketknight1 do you think S4D is already mature enough to get into transformers? On https://github.com/HazyResearch/state-spaces/ it is already on version 3 and there is already a module with few dependencies https://github.com/HazyResearch/state-spaces/blob/main/src/models/s4/ I'm getting extraordinary results compared to regular transformers with small datasets
transformers
14,836
closed
Segmentation fault (core dumped) when converting GPT-J to ONNX
## Environment info - `transformers` version: 4.14.1 - `PyTorch` version: ``` nvidia-dlprof-pytorch-nvtx 1.7.0 pytorch-quantization 2.1.2 torch 1.11.0a0+b6df043 torch-tensorrt 1.0.0a0 torchtext 0.12.0a0 torchvision 0.11.0a0 ``` - onnx: 1.10.1 - Platform: GCP A100 Instance - NVIDIA Driver Version: 495.44 - Docker Image: nvcr.io/nvidia/pytorch:21.11-py3 - Docker version: 20.10.12, build e91ed57 ## Information I want to convert GPT-J model(https://huggingface.co/NovelAI/genji-jp) to onnx file, but I have a trouble with the conversion by using the following scripts. ``` from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig import torch torch.device("cuda", index=0) torch.set_default_tensor_type('torch.cuda.HalfTensor') from transformers.onnx import OnnxConfig, export from typing import Any, List, Mapping, Optional from transformers import TensorType, LayoutLMv2Processor, PreTrainedTokenizer from collections import OrderedDict from pathlib import Path MAX_MODEL_INPUT = 256 config = AutoConfig.from_pretrained("NovelAI/genji-jp") class GPTJOnnxConfig(OnnxConfig): @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("token_type_ids", {0: "batch", 1: "sequence"}), ] ) @property def outputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"}), ("pooler_output", {0: "batch"})]) onnx_config = GPTJOnnxConfig(config) tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained( "NovelAI/genji-jp", torch_dtype=torch.float16, low_cpu_mem_usage=True ).eval().cuda() export( tokenizer=tokenizer, model=model, config=onnx_config, opset=13, output=Path.cwd() / "outputs", ) ``` ## To reproduce Steps to reproduce the behavior: 1. run docker image `nvcr.io/nvidia/pytorch:21.11-py3` as > docker run --rm -it --gpus all \ --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \ -v $(curr_dir):/mashim \ nvcr.io/nvidia/pytorch:$(container_version)-py3 \ bash 2. install transformer==4.14.1 3. 
run the above scripts Log is as follows: ``` Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 836/836 [00:00<00:00, 874kB/s] {'input_ids': {0: 'batch', 1: 'sequence'}, 'attention_mask': {0: 'batch', 1: 'sequence'}, 'token_type_ids': {0: 'batch', 1: 'sequence'}} Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 619/619 [00:00<00:00, 629kB/s] Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 779k/779k [00:00<00:00, 2.13MB/s] Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 446k/446k [00:00<00:00, 1.79MB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.31M/1.31M [00:00<00:00, 3.55MB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.94k/3.94k [00:00<00:00, 3.90MB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 357/357 [00:00<00:00, 354kB/s] {'input_ids': [[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256], [50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1]]} Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.3G/11.3G [02:29<00:00, 81.1MB/s] /opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py:117: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.CheckerError. warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in " /opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py:130: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers. warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next " /opt/conda/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py:558: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert batch_size > 0, "batch_size has to be defined and > 0" Segmentation fault (core dumped) ``` ## Expected behavior onnx file is generated successfully. <!-- A clear and concise description of what you would expect to happen. -->
12-19-2021 17:39:23
12-19-2021 17:39:23
I figured out the above reason. While ONNXconfig file is wrong, I explicit use of GPU, leading to segmentation fault. Correct config is here ``` class GPTJOnnxConfig(OnnxConfig): @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ] ) @property def outputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("last_hidden_state", {0: "batch", 1: "sequence"}), ] ) ``` After modification and use of CPU I can pass the error, but many files are generated... How can I fix? ``` root@8909446e3559:/mashim# ls outputs/ 14164 14316 14443 14595 14722 14874 15001 transformer.h.12.ln_1.weight transformer.h.19.ln_1.bias transformer.h.24.mlp.fc_out.bias transformer.h.6.mlp.fc_in.bias 14165 14317 14444 14596 14723 14875 15002 transformer.h.12.mlp.fc_in.bias transformer.h.19.ln_1.weight transformer.h.25.ln_1.bias transformer.h.6.mlp.fc_out.bias 14166 14318 14445 14597 14724 14876 15003 transformer.h.12.mlp.fc_out.bias transformer.h.19.mlp.fc_in.bias transformer.h.25.ln_1.weight transformer.h.7.ln_1.bias 14192 14319 14471 14598 14750 14877 15029 transformer.h.13.ln_1.bias transformer.h.19.mlp.fc_out.bias transformer.h.25.mlp.fc_in.bias transformer.h.7.ln_1.weight 14193 14320 14472 14599 14751 14878 15030 transformer.h.13.ln_1.weight transformer.h.2.ln_1.bias transformer.h.25.mlp.fc_out.bias transformer.h.7.mlp.fc_in.bias 14194 14321 14473 14600 14752 14879 15031 transformer.h.13.mlp.fc_in.bias transformer.h.2.ln_1.weight transformer.h.26.ln_1.bias transformer.h.7.mlp.fc_out.bias 14195 14347 14474 14626 14753 14905 15032 transformer.h.13.mlp.fc_out.bias transformer.h.2.mlp.fc_in.bias transformer.h.26.ln_1.weight transformer.h.8.ln_1.bias 14196 14348 14475 14627 14754 14906 gpt-j.onnx transformer.h.14.ln_1.bias transformer.h.2.mlp.fc_out.bias transformer.h.26.mlp.fc_in.bias transformer.h.8.ln_1.weight 14197 14349 14476 14628 14755 14907 lm_head.bias transformer.h.14.ln_1.weight transformer.h.20.ln_1.bias transformer.h.26.mlp.fc_out.bias transformer.h.8.mlp.fc_in.bias 14223 14350 14502 14629 14781 14908 transformer.h.0.attn.bias transformer.h.14.mlp.fc_in.bias transformer.h.20.ln_1.weight transformer.h.27.ln_1.bias transformer.h.8.mlp.fc_out.bias 14224 14351 14503 14630 14782 14909 transformer.h.0.ln_1.bias transformer.h.14.mlp.fc_out.bias transformer.h.20.mlp.fc_in.bias transformer.h.27.ln_1.weight transformer.h.9.ln_1.bias 14225 14352 14504 14631 14783 14910 transformer.h.0.ln_1.weight transformer.h.15.ln_1.bias transformer.h.20.mlp.fc_out.bias transformer.h.27.mlp.fc_in.bias transformer.h.9.ln_1.weight 14226 14378 14505 14657 14784 14936 transformer.h.0.mlp.fc_in.bias transformer.h.15.ln_1.weight transformer.h.21.ln_1.bias transformer.h.27.mlp.fc_out.bias transformer.h.9.mlp.fc_in.bias 14227 14379 14506 14658 14785 14937 transformer.h.0.mlp.fc_out.bias transformer.h.15.mlp.fc_in.bias transformer.h.21.ln_1.weight transformer.h.3.ln_1.bias transformer.h.9.mlp.fc_out.bias 14228 14380 14507 14659 14786 14938 transformer.h.1.ln_1.bias transformer.h.15.mlp.fc_out.bias transformer.h.21.mlp.fc_in.bias transformer.h.3.ln_1.weight transformer.ln_f.bias 14254 14381 14533 14660 14812 14939 transformer.h.1.ln_1.weight transformer.h.16.ln_1.bias transformer.h.21.mlp.fc_out.bias transformer.h.3.mlp.fc_in.bias transformer.ln_f.weight 14255 14382 14534 14661 14813 14940 transformer.h.1.mlp.fc_in.bias transformer.h.16.ln_1.weight transformer.h.22.ln_1.bias transformer.h.3.mlp.fc_out.bias transformer.wte.weight 14256 14383 14535 14662 
14814 14941 transformer.h.1.mlp.fc_out.bias transformer.h.16.mlp.fc_in.bias transformer.h.22.ln_1.weight transformer.h.4.ln_1.bias 14257 14409 14536 14688 14815 14967 transformer.h.10.ln_1.bias transformer.h.16.mlp.fc_out.bias transformer.h.22.mlp.fc_in.bias transformer.h.4.ln_1.weight 14258 14410 14537 14689 14816 14968 transformer.h.10.ln_1.weight transformer.h.17.ln_1.bias transformer.h.22.mlp.fc_out.bias transformer.h.4.mlp.fc_in.bias 14259 14411 14538 14690 14817 14969 transformer.h.10.mlp.fc_in.bias transformer.h.17.ln_1.weight transformer.h.23.ln_1.bias transformer.h.4.mlp.fc_out.bias 14285 14412 14564 14691 14843 14970 transformer.h.10.mlp.fc_out.bias transformer.h.17.mlp.fc_in.bias transformer.h.23.ln_1.weight transformer.h.5.ln_1.bias 14286 14413 14565 14692 14844 14971 transformer.h.11.ln_1.bias transformer.h.17.mlp.fc_out.bias transformer.h.23.mlp.fc_in.bias transformer.h.5.ln_1.weight 14287 14414 14566 14693 14845 14972 transformer.h.11.ln_1.weight transformer.h.18.ln_1.bias transformer.h.23.mlp.fc_out.bias transformer.h.5.mlp.fc_in.bias 14288 14440 14567 14719 14846 14998 transformer.h.11.mlp.fc_in.bias transformer.h.18.ln_1.weight transformer.h.24.ln_1.bias transformer.h.5.mlp.fc_out.bias 14289 14441 14568 14720 14847 14999 transformer.h.11.mlp.fc_out.bias transformer.h.18.mlp.fc_in.bias transformer.h.24.ln_1.weight transformer.h.6.ln_1.bias 14290 14442 14569 14721 14848 15000 transformer.h.12.ln_1.bias transformer.h.18.mlp.fc_out.bias transformer.h.24.mlp.fc_in.bias transformer.h.6.ln_1.weight ```<|||||>Thanks for raising the error! cc @michaelbenayoun and @lewtun for knowledge<|||||>I was able to reproduce this behaviour but am not entirely sure (yet) what is causing the ONNX export to generate so many files (there should only be a single `.onnx` file in the output). My current best guess is that it is something peculiar with the `torch.onnx.export` function that we call internally in `transformers.onnx`. One possibility is that the sheer size of the model is causing a problem with protocol buffers (until recently it was only possible to export 2GB-sized models). Some more investigation is needed to figure this out, and I'll report back here when I have a better insight. Incidentally, @shimoshida were you able to use your `gpt-j.onnx` model in ONNX Runtime? I'm curious whether the extra files are harmless or signal a deeper problem with the export. <|||||>@lewtun Thank you for your reply. I also try to call `torch.onnx.export` function directly, but the result is the same as the above one. The script is here. 
<details>

```
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
import torch

torch.device('cpu')
torch.set_default_tensor_type('torch.FloatTensor')

import transformers
from transformers.onnx import OnnxConfig, export
from typing import Any, List, Mapping, Optional
from transformers import TensorType, LayoutLMv2Processor, PreTrainedTokenizer
from collections import OrderedDict
from pathlib import Path

dir_path = Path.cwd() / "outputs"
dir_path.mkdir(exist_ok=True)
MAX_MODEL_INPUT = 256

config = AutoConfig.from_pretrained("NovelAI/genji-jp")
model = AutoModelForCausalLM.from_pretrained(
    "NovelAI/genji-jp", low_cpu_mem_usage=True
).eval()
model.float()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")


class GPTJOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
            ]
        )

    @property
    def outputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("last_hidden_state", {0: "batch", 1: "sequence"}),
            ]
        )


onnx_config = GPTJOnnxConfig(config)
dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer)
dummy_inputs = torch.Tensor(dummy_inputs["input_ids"])
dummy_inputs = dummy_inputs.to(torch.int64)

input_names = list(onnx_config.inputs.keys())
output_names = list(onnx_config.outputs.keys())

with torch.no_grad():
    outputs = model(dummy_inputs)

dynamic_axes = {
    input_names[0]: {0: 'batch_size', 1: 'seq_len'},
    output_names[0]: {0: 'batch_size', 1: 'seq_len'},
}

torch.onnx.export(
    model,
    dummy_inputs,
    str(dir_path / "gpt-j.onnx"),
    input_names=input_names,
    output_names=output_names,
    example_outputs=outputs,
    dynamic_axes=dynamic_axes,
    opset_version=13,
    do_constant_folding=True,
    verbose=True
)
```

</details>

I tested loading `gpt-j.onnx` with ONNX Runtime and got the following error:

```
Traceback (most recent call last):
  File "runtime_test.py", line 5, in <module>
    ort_sess = ort.InferenceSession('outputs/gpt-j.onnx')
  File "/opt/conda/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/opt/conda/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 368, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from outputs/gpt-j.onnx failed:Type Error: Type parameter (T) of Optype (Einsum) bound to different types (tensor(int64) and tensor(float) in node (Einsum_110).
```

> One possibility is that the sheer size of the model is causing a problem with protocol buffers (until recently it was only possible to export 2GB-sized models)

Oh, I didn't know about that limitation until now... If so, I should raise this issue in the torch repository. <|||||>Thanks for testing the model with ONNX Runtime @shimoshida!

> Oh, I didn't know about that limitation until now... If so, I should raise this issue in the torch repository.

I think the limitation is actually on the `onnx` side (which is used by `torch.onnx`). For example, here's an [issue](https://github.com/onnx/onnx/issues/3275) where someone tries to export a >2GB sized model.
I tracked down the `onnx` [PR](https://github.com/onnx/onnx/pull/678) where support for large models was introduced, and one can see the potentially relevant comment:

> We need a method for optionally storing tensor data in separate files, which can be loaded on demand.

So my current understanding is that the multiple-file export is expected for models like GPT-J, but that raises the question of how this data should be ingested in ONNX Runtime. I'll take another look at this and report back! <|||||>Hi @shimoshida here's a summary of what I think is going on:

1. The additional files created during the export _are expected_ after all. In the `torch.onnx.export()` function ([docs](https://pytorch.org/docs/stable/onnx.html#functions)) you can see there's a `use_external_data_format` argument. This argument is `True` for GPT-J when using the `transformers.onnx` package, as you can see [here](https://github.com/huggingface/transformers/blob/13504dcbea231d2cae701d1ffdeb0810d62aff81/src/transformers/onnx/convert.py#L124).
2. On the ONNX side, I'm able to load the model and also check that it was exported correctly via

```python
import onnx

# Check we can load the model
onnx_model = onnx.load('model.onnx')

# Check the model
onnx.checker.check_model('model.onnx', full_check=True)
```

3. I was able to reproduce your error when loading the ONNX model in ONNX Runtime. I saw a similar issue was raised in the ONNX Runtime repo, and a solution was suggested [here](https://github.com/Microsoft/onnxruntime/issues/649#issuecomment-533640318). I haven't tried this yet, but it _might_ work for your case. It doesn't seem to be connected to the choice of `opset`, since `Einsum` has been [available](https://github.com/onnx/onnx/blob/master/docs/Operators.md) since `opset=12`.

I suggest opening an issue on the ONNX Runtime repo and seeing whether they can provide some further advice. <|||||>@lewtun Thank you for sharing the information!

> I suggest opening an issue on the ONNX Runtime repo and see whether they can provide some further advice.

Sure. I've asked a question and will wait for an answer. https://github.com/microsoft/onnxruntime/discussions/10121 <|||||>Hi @shimoshida it seems that the root cause of the problem was a mismatch in the `einsum` types: https://github.com/microsoft/onnxruntime/discussions/10121#discussioncomment-1948951

Does that proposal solve the issue for you? <|||||>@lewtun I'm sorry for the late reply. I have tested the proposal, but I encountered the following problem: https://github.com/microsoft/onnxruntime/discussions/10121#discussioncomment-1987845

However, since the problem does not seem relevant to transformers, I have closed this issue. Thank you for your help!<|||||>Thank you for the reply @shimoshida! It looks like a mismatch between the ops in the original and traced models at runtime, but you're right that the ONNX export itself seems to be OK.
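For anyone hitting the same multi-file export, a small sketch of how such a model can be consumed from ONNX Runtime (the paths, input shape and provider here are assumptions; the external-data files written during export must stay next to `gpt-j.onnx`). On the affected export this will still trip the `Einsum` type error discussed above until the dtype mismatch is resolved:

```python
import numpy as np
import onnxruntime as ort

# ONNX Runtime resolves the external-data files relative to the model path
sess = ort.InferenceSession("outputs/gpt-j.onnx", providers=["CPUExecutionProvider"])
print([o.name for o in sess.get_outputs()])
outputs = sess.run(None, {"input_ids": np.ones((1, 8), dtype=np.int64)})
```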
transformers
14,835
closed
Update CONTRIBUTING.md
fix cmd typo # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @LysandreJik
12-19-2021 16:57:03
12-19-2021 16:57:03
transformers
14,834
closed
eval_loss is nan for GPT2 trained with fp16 + deepseed on 8xA40s
Hi, I'm training GPT2 from scratch. After 5 epochs, the training loss decreased dramatically and the eval loss became nan. The last steps during training all produce the same logs:

```
[2021-12-19 16:54:45,098] [INFO] [unfused_optimizer.py:275:_update_scale] Grad overflow on iteration: 445299
[2021-12-19 16:54:45,098] [INFO] [unfused_optimizer.py:277:_update_scale] Reducing dynamic loss scale from 1 to 1
[2021-12-19 16:54:45,098] [INFO] [unfused_optimizer.py:202:step] [deepspeed] fp16 dynamic loss scale overflow! Skipping step. Attempted loss scale: 1, reducing to 1
```

Now I wonder whether GPT2 was trained using bf16 like T5, and whether I shouldn't have used fp16+deepspeed. Below you can find my command and deepspeed configuration:

```
deepspeed ./5.run_clm-post.py --model_name_or_path gpt2-large --train_file dataset.txt --tokenizer_name tokenizer --do_train --do_eval --output_dir output --evaluation_strategy steps --eval_steps 10000 --save_steps 10000 --num_train_epochs 12 --per_device_train_batch_size 28 --cache_dir .cache2 --fp16 --deepspeed ds_config_zero1.json --save_total_limit 2
```

ds_config_zero1.json:

```
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```

![imagen](https://user-images.githubusercontent.com/50919790/146681634-c6b89390-570c-4237-a927-6d936ae2069f.png)

Any help is appreciated. I assume I have to start the training over, am I right? Thanks.
12-19-2021 15:59:33
12-19-2021 15:59:33
Maybe of interest to @stas00 <|||||>Thank you, @LysandreJik, for the ping. The issue template does say to tag me on Deepspeed issues.

Yes, @aqred1, you're running into overflow, and yes, Deepspeed should assert there as I proposed here: https://github.com/microsoft/DeepSpeed/issues/1599. But it's not a bug per se, just not user-friendly.

Since you're on Ampere you can switch to bf16, which is currently at the PR stage since Deepspeed hasn't merged their side yet. Please see this PR: https://github.com/huggingface/transformers/pull/14569 and additionally you will need to use this Deepspeed branch: https://github.com/microsoft/DeepSpeed/pull/1453

A while back I started compiling info on how different models were pretrained, but I couldn't find any info on gpt2: https://discuss.huggingface.co/t/compiling-data-on-how-models-were-pre-trained-fp16-fp32-bf16/5671. I suspect that the original wasn't trained in bf16.

I very much doubt the overflow issue itself has anything to do with deepspeed. You will most likely have the same issue if you remove deepspeed. You need to tweak your hparams and see where you're doing something incorrect, e.g. your learning rate could be too big.<|||||>Thank you, I didn't know I had to tag you, sorry about that; I will read all the templates next time. I'll move to bf16+deepspeed and check the hyperparameters as well. As this is not really a bug or issue, feel free to close.
transformers
14,833
closed
Seq2SeqTrainer.evaluation_loop requires `labels` due to DataCollatorForSeq2Seq
## Environment info

- `transformers` version: 4.11.0.dev0
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes

- `transformers` version: 4.11.0.dev0
- Platform: CentOS 7
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0+cu102
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:

### Who can help

@affjljoo3581 @patrickvonplaten

## Information

I'm trying to run inference with a fine-tuned T5 model. I'm using the `run_summarization` script with some edits, and the problem occurs when the `predict_dataset` doesn't have `labels` (prediction time). The `__call__` function of the DataCollatorForSeq2Seq object fails with a `KeyError` because it expects the dataset to have a `labels` key:

```python
        # prepare decoder_input_ids
        if self.model is not None and hasattr(self.model, "prepare_decoder_input_ids_from_labels"):
            decoder_input_ids = self.model.prepare_decoder_input_ids_from_labels(labels=features["labels"])
            features["decoder_input_ids"] = decoder_input_ids
```

Model I am using (Bert, XLNet ...): T5 and BART

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The task I am working on is:
* [x] my own task or dataset: (give details below)

## Expected behavior

I should be able to run the script on prediction (`--do_predict`) without providing labels in the dataset.
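A minimal reproduction sketch of the failure described above (the `t5-small` checkpoint and the input string are just placeholders):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
collator = DataCollatorForSeq2Seq(tokenizer, model=model)

# prediction-time features: tokenized inputs only, no "labels" key
features = [tokenizer("summarize: some document text")]
batch = collator(features)  # raises KeyError: 'labels' on affected versions
```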
12-19-2021 14:21:57
12-19-2021 14:21:57
Good catch! The `DataCollatorForSeq2Seq` should check for `None` `labels` before computing `decoder_input_ids`. Would you like to open a PR to fix this? Happy to help with it, thanks!<|||||>I would have opened a PR, but there seems to be more to it. Modifying `DataCollatorForSeq2Seq` solved the issue for a BART model, but not for a T5 model. For T5, when I now try to use `trainer.predict` (as in the `run_summarization.py` script) over a dataset that only includes `input_ids` and `attention_mask` features but no `labels`, it fails to prepare the decoder:

```
Caught ValueError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1612, in forward
    decoder_outputs = self.decoder(
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 902, in forward
    raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds

  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 179, in prediction_step
    outputs = model(**inputs)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/transformers/trainer.py", line 2323, in evaluation_loop
    loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/transformers/trainer.py", line 2223, in predict
    output = eval_loop(
  File "/home/nlp/kleinay/miniconda3/envs/seq2seq-qasrl/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 117, in predict
    return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
  File "/home/nlp/kleinay/Parsing/Seq2Seq_QASRL_Parsing/qasrl_bart/run_summarization.py", line 936, in main
    predict_results = trainer.predict(
```

Looking at the full stack trace, it seems that something in the logic of `Seq2SeqTrainer.predict` is problematic: it calls `Trainer.evaluation_loop`, which is promised in the docstring to work "both with or without labels", but that in turn calls `Seq2SeqTrainer.prediction_step`, which seems to expect `labels` in the `inputs` dict, at least for T5 models. So I still couldn't get `trainer.predict` to work for T5. <|||||>OK, I've figured out what I was doing wrong. As the docs say:

> Note that T5 uses the pad_token_id as the decoder_start_token_id, so when doing generation without using generate(), make sure you start it with the pad_token_id.

So for T5 models, I need to have a dummy `labels` feature in the predict dataset initialized with just `[tokenizer.pad_token_id]`. Still, I think the logic and documentation issues I pointed out in the previous comment stand: it was hard to understand where the problem was when `Trainer.evaluation_loop` is promised to work without labels. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
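To make the pad_token_id note above concrete, here is a small sketch of running a T5 forward pass at prediction time without labels by passing `decoder_input_ids` explicitly (checkpoint and input text are placeholders):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer("summarize: some long document ...", return_tensors="pt")
# T5 starts decoding from pad_token_id (its decoder_start_token_id), so no labels are required
start = model.config.decoder_start_token_id
decoder_input_ids = torch.full((enc.input_ids.shape[0], 1), start, dtype=torch.long)
outputs = model(**enc, decoder_input_ids=decoder_input_ids)
```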
transformers
14,832
closed
Fix the wrong attention mask in TransformerXL by shifting `mask_shift_len` with 1
# What does this PR do?

In the `forward` of TransformerXL, the attention mask `dec_attn_mask` is set up by:

```
dec_attn_mask = (torch.triu(all_ones, 1 + mlen) + torch.tril(all_ones, -mask_shift_len))[:, :, None]  # -1
```

But I noticed that if `mask_shift_len` is zero (which happens when the configured memory length equals the current memory length, i.e. the most common case), it makes `dec_attn_mask[0, :, :]` all 1, which prevents the model from getting any attention from the first memory embedding. I fix this problem by shifting `mask_shift_len` by 1.

@patrickvonplaten
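A toy sketch of the arithmetic described above for the `same_length` masking path. The sizes and the surrounding `mask_len`/`mask_shift_len` setup are assumptions chosen so that `mask_shift_len == 0`; in this masking convention a value of 1 means the position is not attended:

```python
import torch

# toy sizes: memory buffer is full, so mask_shift_len ends up at 0
qlen = mlen = mem_len = 4
klen = qlen + mlen
all_ones = torch.ones(qlen, klen, dtype=torch.uint8)

mask_len = klen - mem_len                                    # = qlen here
mask_shift_len = qlen - mask_len if mask_len > 0 else qlen   # = 0
mask = torch.triu(all_ones, 1 + mlen) + torch.tril(all_ones, -mask_shift_len)
print(mask[:, 0])   # all ones: no query position can ever attend to the oldest memory slot

fixed = torch.triu(all_ones, 1 + mlen) + torch.tril(all_ones, -(mask_shift_len + 1))
print(fixed[:, 0])  # first entry is now 0: with the extra shift of 1 the slot stays reachable
```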
12-19-2021 08:25:43
12-19-2021 08:25:43
Thanks a lot for your PR @tanchihpin0517. @TevenLeScao - could you maybe take a look here? I think you added this part of the code a while back.<|||||>Hey @tanchihpin0517, if this is correct, I am surprised that it didn't show up in generation before. Have you noticed erroneous generation with TransformerXL in this setting?<|||||>No, it didn't produce an obvious error. It only makes the model ignore the first memory embedding, and in most cases the memory is much longer (more than 64 or 128), so it only has a tiny effect on the results.<|||||>Hi @patrickvonplaten @TevenLeScao, is there any update here? Just a gentle reminder. Thanks.<|||||>@TevenLeScao - could you take a look when you have a moment? Otherwise let me know and I'll try to allocate time.<|||||>Sorry, I won't have the time in the coming weeks, can you take it? I don't remember the code much better than you do anyway.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
14,831
closed
FlaxVisionEncoderDecoderModel has no attribute 'from_encoder_decoder_pretrained' ?
Transformers version: 4.14.1

`FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch32-224-in21k', 'gpt2')` throws the following error:

`AttributeError: type object 'FlaxVisionEncoderDecoderModel' has no attribute 'from_encoder_decoder_pretrained'`

But the documentation below suggests it's available: https://huggingface.co/docs/transformers/model_doc/visionencoderdecoder

What am I missing?
12-19-2021 06:47:45
12-19-2021 06:47:45
I think this is because `flax` is not installed; for Flax models you need to install `jax`, `jaxlib` and `flax`. When flax is not installed, the import returns a dummy `FlaxVisionEncoderDecoderModel` which has no methods. You can see that it works in this colab: https://colab.research.google.com/drive/1g6oU-AYFpzVNWgn1qT5NKlEqor3tLCeI?usp=sharing<|||||>Thanks @patil-suraj
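A quick way to check which situation you are in (a sketch; the install command is just one possible option):

```python
from transformers.file_utils import is_flax_available

print(is_flax_available())  # False -> the import above only gives you a dummy class with no methods
# one way to get the real class, e.g.: pip install "jax[cpu]" flax
```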
transformers
14,830
closed
Adafactor lacking min_dim_size_to_factor
The Adafactor implementation in mesh_tensorflow has a `min_dim_size_to_factor` flag where tensors with size smaller than this will not be factored. https://github.com/tensorflow/mesh/blob/57ed4018e6a173952501b074daabad32b6449f3d/mesh_tensorflow/optimize.py#L220 However, this flag is lacking in HF's version. And there are indeed parameters that are smaller than this, for example, T5's `relative_attention_bias`. First of all, this causes an implementation mismatch. Secondly, this prevents the loading of pretrained optimizer states into HF.
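For illustration, a rough sketch of the kind of size gate described above. This is not a transcription of the mesh_tensorflow code: the default threshold value and exactly which dimension is compared against it are assumptions here.

```python
# hypothetical helper mimicking a min_dim_size_to_factor-style gate
def use_factored_second_moment(shape, min_dim_size_to_factor=128):
    if len(shape) < 2:
        return False
    second_largest = sorted(shape, reverse=True)[1]
    return second_largest >= min_dim_size_to_factor

print(use_factored_second_moment((32, 12)))     # small tensor such as a relative_attention_bias -> not factored
print(use_factored_second_moment((768, 3072)))  # large projection weight -> factored
```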
12-19-2021 04:32:29
12-19-2021 04:32:29
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,829
closed
[Wav2Vec2 Phoneme] Let phonemizer lang default to tokenizer's settings
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> When calling `tokenizer.phonemize(...)` the language should default to the tokenizer's one. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-19-2021 00:59:06
12-19-2021 00:59:06
transformers
14,828
closed
Unable to save pretrained model after finetuning : trainer.save_pretrained(modeldir) AttributeError: 'Trainer' object has no attribute 'save_pretrained'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8 - Platform: - Python version: 3.9 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): - Using GPU in script?: yes (I called the used of gpu via a slurm sbatch script in Jean zay) - Using distributed or parallel set-up in script?: ### Who can help @LysandreJik, @stas00, @sgugger <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @LysandreJik, @stas00, @sgugger Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Flaubert model The problem arises when using: * [x] the official example scripts: (give details below) I tried to save best model after training and I get that error. The tasks I am working on is: * [x] my own task or dataset: (give details below) My own dataset a dataframe with one column text and other column label with 0,1 or 2 ## To reproduce Steps to reproduce the behavior: 1. I used the officiel notebook , I changed just the model name by Flaubert, 2. I define my own trainings arguments and trainer 3. I loop inside a directory to load my dataset, call trainer and then train , evalaute and save (this is where the error appears) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

```
tokenizer = FlaubertTokenizer.from_pretrained(model_name, do_lowercase=True)

model = FlauBertForSequenceClassification(config=mdl.config, num_labels=num_labels, freeze_encoder=False)

train_dataset = dataset.map(
    lambda x: tokenizer(x['verbatim'], padding="max_length", truncation=True, max_length=512),
    batched=True)
train_dataset.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])

training_args = TrainingArguments(
    output_dir=output_dir,              # output directory
    num_train_epochs=1.0,               # total number of training epochs
    per_device_train_batch_size=8,      # batch size per device during training, can increase if memory allows
    per_device_eval_batch_size=8,       # batch size for evaluation, can increase if memory allows
    save_steps=50,                      # number of update steps before checkpoint saves
    save_total_limit=2,                 # limit the total amount of checkpoints and delete the older checkpoints
    logging_first_step=True,
    evaluation_strategy='epoch',        # evaluation strategy to adopt during training
    eval_steps=10,                      # number of update steps before evaluation
    #warmup_steps=50,                   # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                  # strength of weight decay
    logging_dir=logging_dir,            # directory for storing logs
    logging_steps=10,
    learning_rate=5e-5,
    load_best_model_at_end=True
    #save_strategy='no'
)

trainer = Trainer(
    model=model,                        # the instantiated 🤗 Transformers model to be trained
    args=training_args,                 # training arguments, defined above
    train_dataset=train_dataset,        # training dataset
    eval_dataset=val_dataset,           # evaluation dataset
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    #callbacks=[EarlyStoppingCallback(3, 0.0)]  # early stopping if results dont improve after 3 epochs
)

modeldir = './path_to_save_model'

trainer.train()
trainer.save_pretrained(modeldir)   # this is the line that raises the AttributeError
tokenizer.save_pretrained(modeldir)
```

For the experiments I used Jean Zay.

## Expected behavior

<!-- A clear and concise description of what you would expect to happen. -->
12-18-2021 23:02:55
12-18-2021 23:02:55
Not sure where you took that code from, but indeed the `Trainer` doesn't have such a method. What you want is:

```
model.save_pretrained(modeldir)
```
<|||||>Or `trainer.save_model(modeldir)`, which will call the method Stas mentioned.<|||||>Hey folks, I couldn't get [`save_model`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.save_model) to work.

<img width="800" alt="Screen Shot 2023-01-15 at 11 25 55 AM" src="https://user-images.githubusercontent.com/84933469/212563151-9741d80e-0677-4114-b49d-8ed45a31fb76.png">

I am guessing `push_to_hub` isn't the only option we've got, right? My trainer is

```python
from setfit import SetFitModel, SetFitTrainer
from sentence_transformers.losses import CosineSimilarityLoss

# Load a SetFit model from Hub
model_id = "sentence-transformers/all-mpnet-base-v2"
model = SetFitModel.from_pretrained(model_id)

# Create trainer
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    loss_class=CosineSimilarityLoss,
    metric="accuracy",
    batch_size=64,
    num_iterations=20,
    num_epochs=1,
)

# Train and evaluate
trainer.train()
metrics = trainer.evaluate()
```<|||||>You're calling `save_pretrained` on the wrong object, please see my comment: https://github.com/huggingface/transformers/issues/14828#issuecomment-997433709 (edit: I see you edited your post to remove this failure).

Wrt `save_model`, we don't know what `SetFitTrainer` is. If you use the normal `Trainer` object it has `save_model`:
https://github.com/huggingface/transformers/blob/5db9abde439bc02c3791da2a4fefee80d94d5b96/src/transformers/trainer.py#L2608 <|||||>Sorry, I updated my comment. I am using `save_model` (not `save_pretrained`); my `trainer` is as shown below. My assumption was that `SetFitTrainer` is fundamentally of type `Trainer`? I could be wrong.

```
<setfit.trainer.SetFitTrainer at 0x7fe8b748e710>
```<|||||>In my last comment I showed you that `transformers.Trainer` has `save_model`. And I repeat, I have no idea what `setfit.trainer.SetFitTrainer` is - it's not a `transformers` class.<|||||>Sorry, I should've referred here to [SetFit](https://github.com/huggingface/setfit). I'll log an issue there. Thanks<|||||>It is not a subclass of `transformers.Trainer` as far as I can see: https://github.com/huggingface/setfit/blob/f777c2c60b270604dae0dc1db4eea815e8c9019d/src/setfit/trainer.py#L28

I suppose it looks like `transformers.Trainer`, but it's a totally independent implementation. So you will have to ask for this feature at that other project, or use `model.save_pretrained(modeldir)`, which always works, since it's a feature of the `transformers` models. <|||||>Yeah, I realized that it wasn't a subclass. <|||||>So, I guess exports to [openvino and onnx](https://github.com/huggingface/setfit/tree/f777c2c60b270604dae0dc1db4eea815e8c9019d/src/setfit/exporters) are supported for now. The only other ways I could think of are `joblib` or `pickle`. This worked -

```python
import joblib

joblib.dump(trainer, './model/cstom-setfit-model.joblib')

trainer = joblib.load('./model/cstom-setfit-model.joblib')
trainer.model.predict(["text", "text"])
```

P.S.: `model.save_pretrained` works too.<|||||>`save_pretrained` isn't just for saving/restoring objects on resume - its primary use is to save just the parts that are needed to use `from_pretrained` and/or share the results with others. But otherwise your method with `joblib` for resumes is just fine.
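Pulling the thread together, a short sketch that continues the training snippet from the issue (the `trainer`, `tokenizer` and `modeldir` names are reused from above):

```python
# either call persists the fine-tuned weights for later from_pretrained loading
trainer.save_model(modeldir)                 # Trainer API
trainer.model.save_pretrained(modeldir)      # equivalent, via the underlying model
tokenizer.save_pretrained(modeldir)
```

For third-party trainers such as setfit's, only the model-level `save_pretrained` path is provided by `transformers` itself, as noted above.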
transformers
14,827
closed
as_target_tokenizer is not an attribute of PreTrainedTokenizerBase and Bart/RobertaTokenizer
### Environment

Instance: Paperspace Gradient Cloud Instance
transformers: 3.5.1 (same issue on 4.14.1)
Python: 3.6.9

### Information

When creating an instance of the Roberta/Bart tokenizer, the method as_target_tokenizer is not recognized. The code is almost entirely the same as in the summarization example of Huggingface (https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py). The tokenizer is instantiated in the following way:

```python
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
```

and the executed code is:

```python
with tokenizer.as_target_tokenizer():
    labels = tokenizer(targets, max_length=MAX_TARGET_LENGTH, padding=USES_PADDING, truncation=True)
```

The error that occurs is: AttributeError: 'RobertaTokenizer' object has no attribute 'as_target_tokenizer'

The same thing happens when instantiating just PreTrainedTokenizerBase, which according to the documentation has this function (https://huggingface.co/docs/transformers/internal/tokenization_utils).
12-18-2021 19:17:46
12-18-2021 19:17:46
Hello! The `as_target_tokenizer` method is specifically for seq2seq models, in order to separate the source and target tokenizers. Why do you need it for RoBERTa? cc @patrickvonplaten @sgugger <|||||>I would double-check on v4.14.1, as every tokenizer has this method (it just doesn't do anything for models that are not seq2seq). It didn't exist in v3.5.1, so it's expected that you get an error on that version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
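On a recent version the original snippet therefore runs as-is; a minimal sketch (assuming transformers v4.14+ and the `roberta-base` checkpoint, per the comment above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
with tokenizer.as_target_tokenizer():  # effectively a no-op for tokenizers that are not seq2seq-specific
    labels = tokenizer(["a simplified target sentence"], max_length=64, truncation=True)
print(labels["input_ids"])
```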
transformers
14,826
closed
Add support for TPU training with CodeParrot!
# What does this PR do? This PR adds support for training CodeParrot-style models on TPUs! It also fixes a small issue in the model initialization script. You can verify it works using this colab notebook: https://colab.research.google.com/drive/143qMz_0yf1Irb1ZrsYIWu39nqBYh9OTd?usp=sharing ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @lvwerra
12-18-2021 15:35:46
12-18-2021 15:35:46
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@ncoop57 let me know when you had a chance to test this and I'll reopen the PR.
transformers
14,825
closed
Flax/Roberta - Tokenizer
## Environment info

transformers version: 4.14.0.dev0
Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
Python version: 3.8.10
Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
Jax version: 0.2.17
JaxLib version: 0.1.68

### Who can help

@patrickvonplaten, @patil-suraj, @LysandreJik

## Information

I am reporting this as very strange behaviour when converting a tokenizer from Flax to PyTorch. My initial tests indicate that it can also mess up the Flax training. In any case I do not think it can be intentional.

## To reproduce

Use the procedure explained [here](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) to create a RoBERTa BPE tokenizer. The final step saves the tokenizer.json, which can be used for training a Flax model.

Then use the following procedure to load and save a tokenizer from pretrained:

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(".")
tokenizer.save_pretrained(".")
('./tokenizer_config.json', './special_tokens_map.json', './vocab.json', './merges.txt', './added_tokens.json', './tokenizer.json')
```

Apart from creating the extra files, this also overwrites the original tokenizer.json. The result should be an identical file; however, it is not. In the settings for the mask token, it changes lstrip=false to lstrip=true:

```json
{"id":4,"special":true,"content":"<mask>","single_word":false,"lstrip":false,"rstrip":false,"normalized":false}
{"id":4,"special":true,"content":"<mask>","single_word":false,"lstrip":true,"rstrip":false,"normalized":false}
```

Using the "Flax-version" of tokenizer.json messes up the results in the HuggingFace widget. My initial tests also indicate that I am getting better results training the Flax model using the settings from the "RoBERTa-version" of tokenizer.json, though I have not really been able to verify these results yet.
12-18-2021 12:02:26
12-18-2021 12:02:26
Note that tokenizers are independent of the framework, so flax/pytorch shouldn't make any difference. > My initial tests indicates that it also can mess up the Flax training What do you mean by it can mess up the flax training ? Is there any difference any tokenization when you use `AutoTokenizer`, could you maybe check for a few text examples if the two tokenizers give different results and if so post the code snippet here? Thanks!<|||||>Thanks @patil-suraj. I think this code snippet below shows pretty well what is going on. This initially reads a tokenizer.json that is created by using the script referred above. In the original json the setting for the ```<mask>``` is ```lstrip```is set to ```False```. This seem to be respected when it is read by AutoTokenizer, however, using ```tokenizer.save_pretrained``` it overrides this value and sets it to ```True```. ```python >>> tokenizer = AutoTokenizer.from_pretrained(".") >>> tokenizer.encode("My <mask>.") [15447, 225, 4, 18] >>> tokenizer.tokenize("My <mask>.") ['My', 'Ġ', '<mask>', '.'] >>> tokenizer.save_pretrained(".") ('./tokenizer_config.json', './special_tokens_map.json', './vocab.json', './merges.txt', './added_tokens.json', './tokenizer.json') >>> tokenizer = AutoTokenizer.from_pretrained(".") >>> tokenizer.encode("My <mask>.") [15447, 4, 18] >>> tokenizer.tokenize("My <mask>.") ['My', ' <mask>', '.'] ``` I have tried training two models on this. One using ```lstrip=False``` and the other ```lstrip=False```. Using a 1GB corpus and training for 200k. Loss/MLM accuracy is roughly the same but for some (strange) reason, but I am getting different results on the downstream tasks. For the downstream tasks I am evaluating the pytorch model and here lstrip=True (This also needs to be set to "True" for the HuggingFace widget to work). As you know, the variance when finetuning on downstream tasks is often huge, and the ```<mask>```-token itself is not used directly in the finetuning. On average I am getting better results on the latter model. I am not super-confident on this finding though. I have not done any test where I initiated from the same weights. <|||||>Hi @patil-suraj, Did you have a chance to verify that my example code snippet shows there is a bug here? Or am I missing something?<|||||>@peregilk do you get the same errors when you use `RobertaTokenizer` instead of `AutoTokenizer`? I think what might happen here is the second call is in fact a Fast tokenizer while the first is a slow one<|||||>@patrickvonplaten It does not look like this is the case. Inspecting the tokenizers, loading with AutoTokenizer always seems to load a fast tokenizer, while loading with RobertaTokenizer always seem to load a slow one. This is valid both before and after saving the tokenizer. I tried this on the tokenizer from your model: ```bash wget https://huggingface.co/patrickvonplaten/norwegian-roberta-base/raw/main/tokenizer.json wget https://huggingface.co/patrickvonplaten/norwegian-roberta-base/raw/main/config.json ``` Then I load them: ```python from transformers import RobertaTokenizer, AutoTokenizer tokenizerA = AutoTokenizer.from_pretrained(".") tokenizerB = RobertaTokenizer.from_pretrained(".") # Inspecting tokenizerA shows it is fast and tokenizerB is slow. 
They do give different results: tokenizerA.tokenize("My <mask>.") ['My', 'Ġ', '<mask>', '.'] tokenizerB.tokenize("My <mask>.") ['My', ' <mask>', '.'] # Then save any of the tokenizers (Saving tokenizerA and tokenizerB seem to give the same result) tokenizerA.save_pretrained(".") ``` Now try and load the tokenizer ```python tokenizerA = AutoTokenizer.from_pretrained(".") tokenizerB = RobertaTokenizer.from_pretrained(".") # Inspecting these tokenizers also shows one is fast and the other slow. However, they now give identical results. tokenizerA.tokenize("My <mask>.") ['My', ' <mask>', '.'] tokenizerB.tokenize("My <mask>.") ['My', '<mask>', '.'] ``` The main problem is that it seems like training a model on the tokenizer that has not been loaded and saved, will result in a sub-optimal model. This might be because the training script is using AutoTokenizer when tokenizing (and then finish the training by saving the tokenizer).<|||||>Thanks for checking, yes this definitely looks like a bug. I'll leave it on my todo list, but probably will only have time in 2 weeks to look into it. If @SaulLu or @LysandreJik have more time by any chance, feel free to go ahead!<|||||>Great! The workaround here is of course very easy - just load and save the tokenizer before starting any training. However, not doing this might hurt the training. To me it seems like switching back to the "unsaved" tokenizer then improves the model, but not as much as if it was trained from scratch with the "saved" tokenizer. But these things are really hard to evaluate since there always is variance in the benchmarks. Also correcting a small error in my previous post. I am writing that the last output is **identical** for both tokenizers. If you look very closely this is not correct. TokenizerA gives the token ' \<mask\>', while tokenizerB gives '\<mask\>' (without the leading space). There are therefore 3 different ways this can be tokenized: ['My', 'Ġ', '<mask>', '.'], ['My', ' <mask>', '.'] and ['My', '<mask>', '.']. <|||||>It looks really weird! On my side, I know that I won't be able to look at this issue until next week at the earliest.<|||||>Any chance anyone can take a look at this? Though the workaround is pretty easy, it did mess up several of our models before we noticed it. It is likely that it also will cause problems for others. <|||||>Looking into it tomorrow<|||||>@patrickvonplaten Where you able to reproduce this errors?<|||||>Just took a look. ```python from transformers import RobertaTokenizer, AutoTokenizer tokenizerA = AutoTokenizer.from_pretrained(".") tokenizerB = RobertaTokenizer.from_pretrained(".") # Inspecting tokenizerA shows it is fast and tokenizerB is slow. They do give different results: tokenizerA.tokenize("My <mask>.") ['My', 'Ġ', '<mask>', '.'] tokenizerB.tokenize("My <mask>.") ['My', ' <mask>', '.'] # Then save any of the tokenizers (Saving tokenizerA and tokenizerB seem to give the same result) tokenizerA.save_pretrained(".") ``` doesn't work for me as loading the "non-fast" tokenizer fails with: ``` OSError: Can't load tokenizer for '.'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '.' is the correct path to a directory containing all relevant files for a RobertaTokenizer tokenizer. ``` In general, the tokenizer of this model was trained purely to be used with the fast tokenizer class - e.g. it's not an "official" tokenizer. 
It doesn't surprise me that much that the tokenizers give different results here.

@peregilk - I think we should try to just use the fast tokenizer here? Nevertheless, the issue does seem to uncover another mismatch between the fast and slow architectures. cc @SaulLu <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
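For reference, the workaround mentioned earlier in the thread boils down to a single round-trip before training (a sketch; "." stands for the directory holding the freshly trained tokenizer.json):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")
tokenizer.save_pretrained(".")  # round-trip once so training and inference agree on the <mask> handling
```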
transformers
14,824
closed
preprocessing_num_workers coredump
## Environment info - `transformers` version: 4.14.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.17 - JaxLib version: 0.1.68 ### Who can help @patrickvonplaten @patil-suraj ## Information I am training RoBERTa on Flax, and since I have a huge local dataset, I need to set the preprocessing_num_workers. This seems to result in coredumps. In this example I am setting preprocessing_num_workers=64, and two of the cpu cores crashes. Fortunately the proecess continues, but most likely the output from these two cpus are lost. I have made some debugging tests. The dataset is in json-format. I did reduce the length of each line, and also made a 1GB dummy dataset consisting of "this is a test"-only. For ruling out errors related to the dataset itself. I do however get the exact same error. Same error happens running the default run_mlm_flax.py-script. Please see the error below: ``` #33: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [01:41<00:00, 12.66s/ba] #12: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [01:42<00:00, 12.77s/ba] #32: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [01:43<00:00, 12.98s/ba] #2: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [01:47<00:00, 13.38s/ba] https://symbolize.stripped_domain/r/?trace=5f5fdb,7f90de96b20f&map= ████████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 7/8 [01:40<00:09, 9.92s/ba] https://symbolize.stripped_domain/r/?trace=*** SIGTERM received by PID 1288846 (TID 1288846) on cpu 7 from PID 1288427; stack trace: ***████████████████████████████████████▊ | 7/8 [01:46<00:10, 10.91s/ba] 7f90de9153f4,7f90de96b20f,7f&map= ████████████████████████████████████████████████████████████████████████████████████████████▋ | 5/8 [01:27<00:42, 14.07s/ba] *** SIGTERM received by PID 1288892 (TID 1288892) on cpu 31 from PID 1288427; stack trace: ***████████████████████████████████▋ | 5/8 [01:31<00:46, 15.39s/ba] PC: @ 0x5f5fdb (unknown) _PyFunction_Vectorcall█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 7/8 [01:31<00:10, 10.55s/ba] @ 0x7f8fe0058800 976 (unknown)██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 7/8 [01:24<00:09, 9.27s/ba] @ 0x7f90de96b210 (unknown) (unknown)████████████████████████████████████████████████████████████████████████████████▋ | 5/8 [01:33<00:47, 15.78s/ba] https://symbolize.stripped_domain/r/?trace=5f5fdb,7f8fe00587ff,7f90de96b20f&map=2a762cd764e70bc90ae4c7f9747c08d7:7f8fd3116000-7f8fe0397280 | 5/8 [01:32<01:03, 21.10s/ba] E1216 18:18:13.622459 1288846 coredump_hook.cc:250] RAW: Remote crash gathering disabled for SIGTERM.███████████████████████████████████████████████████████████████████████▉ | 7/8 [01:37<00:12, 12.51s/ba] PC: @ 0x7f90de9153f4 
(unknown) do_futex_wait.constprop.0██████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 7/8 [01:42<00:08, 8.45s/ba] @ 0x7f8fe0058800 976 (unknown)███████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 6/8 [01:34<00:22, 11.36s/ba] @ 0x7f90de96b210 592243856 (unknown)██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 7/8 [01:37<00:08, 8.44s/ba] @ 0x80 (unknown) (unknown)██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 7/8 [01:38<00:08, 8.47s/ba] https://symbolize.stripped_domain/r/?trace=7f90de9153f4,7f8fe00587ff,7f90de96b20f,7f&map=2a762cd764e70bc90ae4c7f9747c08d7:7f8fd3116000-7f8fe0397280 | 4/8 [01:31<01:29, 22.30s/ba] E1216 18:18:13.628332 1288892 coredump_hook.cc:250] RAW: Remote crash gathering disabled for SIGTERM.███████████████████████████████████████████████████████████████████████▉ | 7/8 [01:34<00:12, 12.52s/ba] E1216 18:18:13.628836 1288846 process_state.cc:771] RAW: Raising signal 15 with default behavior████████████████████████████████████████████████████████████████████████████▉ | 7/8 [01:38<00:10, 10.98s/ba] E1216 18:18:13.643199 1288892 process_state.cc:771] RAW: Raising signal 15 with default behavior████████████████████████████████████████████████████████████████████████████▉ | 7/8 [01:32<00:14, 14.25s/ba] #50: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [01:00<00:00, 7.61s/ba] #19: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ ```
12-18-2021 11:45:19
12-18-2021 11:45:19
Is this related to the flax script, or does the core dump also occur when you just process the dataset? Could you maybe write a script that just processes the dataset and see if it gives a core dump, so we can pinpoint the issue? Thanks!<|||||>@patil-suraj Thanks for the feedback. I did not have time to create a separate script for this today, but I did run a quick check.

The error is happening within the `datasets.map` function:

https://github.com/huggingface/transformers/blob/cd583bdaa543318785cc2a74abb195546d972a25/examples/flax/language-modeling/run_mlm_flax.py#L509-L514

I also verified that the example text looks OK, and I ran it with a different tokenize_function to verify that the error is not happening within this function.

The error is reproduced by running the run_mlm_flax.py script. It does, however, only appear if the data is loaded locally; running it against a HuggingFace dataset, I do not see any errors. Let me know if a standalone script would still be useful. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
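If it helps, a standalone preprocessing script along the lines requested above might look like this (the file name and the `text` column name are placeholders for the local JSON dataset):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

raw = load_dataset("json", data_files={"train": "local_dataset.json"})
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], return_special_tokens_mask=True),
    batched=True,
    num_proc=64,           # same worker count as in the report above
    remove_columns=["text"],
)
```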
transformers
14,823
closed
How to use Roberta as the Encoder and a randomly initialized TransformerDecoder as the Decoder?
Hello! I want to use Roberta as an Encoder and a randomly initialized TransformerDecoder as a Decoder. I know EncoderDecoderModel can make any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. But I just want to use 4 or 6 TransformerDecoder layers as my decoder. Maybe I should use torch.nn.TransformerDecoder to build my model. But is there any easier way or example when I use Huggingface Transformers? Thank you! @patrickvonplaten @NielsRogge
12-18-2021 08:56:36
12-18-2021 08:56:36
Hey @Captainr22, I would recommend doing the following:

- 1) Create a randomly initialized `RobertaModel` as your transformer decoder:

```python
from transformers import RobertaModel, RobertaConfig

decoder = RobertaModel(RobertaConfig(num_hidden_layers=4))  # 4 decoder layers
decoder.save_pretrained("./temp")
```

- 2) Now load an encoder/decoder model as follows:

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "./temp")
```
<|||||>Thank you for your recommendation!
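For completeness, one possible way to exercise the combined model afterwards (a sketch; the choice of start/pad tokens is an assumption, not part of the original recommendation):

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# generation for EncoderDecoderModel needs a decoder start token; CLS is a common choice for RoBERTa
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("Some source text to condition on.", return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```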
transformers
14,822
closed
fp16 flag silently fails
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0.dev0 - Platform: Linux-5.10.68+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.1 (True) - Tensorflow version (GPU?): 2.6.2 (True) - Flax version (CPU?/GPU?/TPU?): 0.3.6 (gpu) - Jax version: 0.2.25 - JaxLib version: 0.1.70 - Using GPU in script?: Y - Using distributed or parallel set-up in script?: N Models: 1. mT5-small 2. mT5-base both have the same behavior Library: - Trainer: @sgugger ## Information The problem arises when using: * my own modified scripts: https://github.com/rumeshmadhusanka/mt5-simplification/blob/main/finetune.py derived from and very much identical to the official translation example The tasks I am working on is: * text simplification * translation ## To reproduce Steps to reproduce the behavior: 1. Run a translation/simplification task task turning fp16 flag on My params(for simplification): `model="google/mt5-base" !python mt5-simplification/finetune.py \ --model_name_or_path $model \ --do_train \ --fp16 \ --do_eval \ --adafactor \ --source_lang com \ --target_lang sim \ --source_prefix "com-sim: " \ --train_file train.json \ --validation_file valid.json \ --output_dir mt5-simplification \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --save_total_limit=1 \ --adam_epsilon=1e-6 \ --learning_rate=3e-5 \ --save_strategy=epoch \ --report_to="wandb" \ --max_steps=1200 \ --warmup_steps=250 \ --overwrite_output_dir \ --log_level debug \ --output_dir saved \ --predict_with_generate ` Some of the output logs:<br> > 0%| | 0/1200 [00:00<?, ?it/s]/kaggle/working/transformers/src/transformers/trainer.py:1366: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior. args.max_grad_norm, 14%|█████▌ | 166/1200 [00:39<03:58, 4.33it/s]/opt/conda/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. 
See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning) {'loss': 0.0, 'learning_rate': 2.7347368421052632e-05, 'epoch': 0.02} 83%|████████████████████████████████▌ | 1000/1200 [04:10<10:45, 3.23s/it]{'loss': 0.0, 'learning_rate': 1.1557894736842106e-05, 'epoch': 0.03} 100%|███████████████████████████████████████| 1200/1200 [04:56<00:00, 4.39it/s][INFO|trainer.py:2033] 2021-12-18 02:11:18,316 >> Saving model checkpoint to saved/checkpoint-1200 > ***** train metrics ***** epoch = 0.04 train_loss = 0.0 train_runtime = 0:05:11.02 train_samples = 120000 train_samples_per_second = 15.433 train_steps_per_second = 3.858 [INFO|trainer.py:2281] 2021-12-18 02:11:46,330 >> ***** Running Evaluation ***** [INFO|trainer.py:2283] 2021-12-18 02:11:46,330 >> Num examples = 2000 [INFO|trainer.py:2286] 2021-12-18 02:11:46,330 >> Batch size = 4 100%|█████████████████████████████████████████| 500/500 [01:56<00:00, 4.31it/s] ***** eval metrics ***** epoch = 0.04 eval_bleu = 0.0126 eval_gen_len = 8.16 eval_loss = nan eval_runtime = 0:01:56.24 eval_samples = 2000 eval_samples_per_second = 17.205 eval_steps_per_second = 4.301 When I run a translation task on Kaggle's GPU(Tesla P100-PCIE) or AWS's T4 GPU the training loss is always zero. This has been tried out multiple times with different training params. ## Expected behavior Loss not to be zero while training<br> Throw an error message if the GPU doesn't support fp16 <br> <!-- A clear and concise description of what you would expect to happen. -->
12-18-2021 02:17:29
12-18-2021 02:17:29
It's not an issue of the GPU not supporting fp16. It's an issue of models that were trained in bf16 being used with fp16 and overflowing due to the incompatible numerical range: bf16-pretrained models use much bigger weight values than fp16 can accommodate, so they overflow. Please see https://github.com/huggingface/transformers/pull/10956 for various workarounds. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
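To make the numerical-range point concrete, here is a small illustrative snippet (not from the original thread) showing how a magnitude that bf16 represents comfortably overflows once cast to fp16:

```python
import torch

# bf16 keeps float32's exponent range, so large activations/weights stay finite
x_bf16 = torch.tensor(70000.0, dtype=torch.bfloat16)
print(x_bf16)   # tensor(70144., dtype=torch.bfloat16): finite, just coarsely rounded

# fp16 tops out around 65504, so the same value becomes inf
x_fp16 = x_bf16.to(torch.float16)
print(x_fp16)   # tensor(inf, dtype=torch.float16): downstream losses turn into nan
```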
transformers
14,821
closed
Add 'with torch.no_grad()' to DeBERTa integration test forward pass
# What does this PR do? As proposed in #14642, this encapsulates the forward pass in the DeBERTa integration test with "with torch.no_grad():". This way, no unnecessary gradients are computed during inference. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge
12-18-2021 00:34:37
12-18-2021 00:34:37
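For illustration, a minimal sketch of the pattern this PR applies (not the actual DeBERTa test code; the checkpoint below is just a stand-in):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModel.from_pretrained("microsoft/deberta-base")
model.eval()

inputs = tokenizer("Integration test input", return_tensors="pt")

# wrapping the forward pass disables gradient tracking, so no autograd graph
# is built and memory stays low during the assertion-only test
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)
```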
transformers
14,820
closed
Add 'with torch.no_grad()' to BERT integration test forward pass
# What does this PR do? As proposed in #14642, this encapsulates the forward pass in the BERT integration test with "with torch.no_grad():". This way, no unnecessary gradients are computed during inference. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge
12-18-2021 00:05:32
12-18-2021 00:05:32
transformers
14,819
closed
RFC: Integrating bitsandbytes 8-bit optimizer / adding Embedding Norm
# 🚀 Feature request 1. **BNB** AdamW Optimizer: https://github.com/facebookresearch/bitsandbytes created by @TimDettmers uses 8-bit quantization technique, which allows to reduce memory usage for the AdamW optimizer from 8 bytes to 2 bytes, which is a huge memory saving and I think our users will benefit a lot from it. 2. Additionally, we discovered that one of BNB's components, **Embedding Norm**, on its own made a huge improvement to the training stability of large models @BigScience. Therefore this is a 2-features in one request. ## Performance We did experiments at BigScience for 104B model and while we didn't have a chance to run it through a full training to the end, BNB was performing on par with the normal AdamW quality-wise. I'm currently also running a full 1.3B model training with embed norm to compare scaling laws with the same training w/o embed norm. Should be finished in a few days. ## Tech This technology comes in 2 components. 1. 8-bit quantization optimizer 2. required Embedding Norm The optimizer itself is a drop-in replacement for Adam: ``` import bitsandbytes as bnb optim = bnb.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995), optim_bits=8) ``` but there is an important requirement of using Embed norm, which is needed to ensure training stability, which we currently don't have. In fact for BigScience we discovered that adding Embed norm on its own and w/o BNB made a huge difference to training stability and we are most likely going to enable it in the 200B gpt model training, as the current 104B gpt model results are the best when embed norm is enabled. So once we release the 200B model most likely we want the Embed norm in transformers for the custom architecture of that model. Embedding norm currently appears to be a new default for google and openai models according to Tim. BNB comes with `StableEmbedding` which replaces `nn.Embedding` So the only integration that is needed on the HF side (other than adding `--optim=adamw_bnb` to HF Trainer) is to add an embed norm and config option to have it enabled or not. It also wants xavier_uniform init, but that's a minor detail. ## Finetuning For existing pre-trained transformers models one could use them as is and use 8-bit optimizers for all weights, but 32-bit optimizers for the embedding layer. This will improve stability for fine-tuning. Tim shared that for GLUE fine-tuning, it is fine to have 8-bit optimizers for the embedding layer, but in general 32-bit should be more stable. ## Pretraining For pretraining it would make sense to implement the full stable embedding layer. i.e. add a configurable embed norm at the end of `Embedding.forward`. Here we would want to implement it ourselves rather than re-use `StableEmbedding` from BNB, so that we can easily load any model from the hub without depending on BNB, after it was trained with BNB. We obviously can't make this a default for all our models, but perhaps we can consider starting enabling this for some models where we know it makes a huge difference - or at least to recommend to. @TimDettmers, please let me know if I missed something or you'd like to add anything to my summary here. Thank you! Comments are welcome. @patrickvonplaten, @patil-suraj, @LysandreJik, @sgugger
12-17-2021 22:14:19
12-17-2021 22:14:19
Regarding the optimizers for `Trainer`, I think we can have a "small" breaking change in general and completely remove our implementation of AdamW and instead make use of `torch's` native `AdamW` implementation. I think it's a good idea to add a `--optim` arg to Trainer<|||||>Regarding the StableEmbedding, this has to be handled in each model file respectively IMO and should be done in a second PR (if necessary)<|||||>I'm very torn about adding an option for the `StableEmbedding` in the config of some (all?) models so I feel I need more information. Specifically, let's say we had that option to GPT-2 models: - can a current checkpoint for GPT-2 (like `gpt2`) be used with that option enabled in the config and produce good results, or would it need to be retrained? - can a checkpoint trained with StableEmbedding be used with a regular Embedding instead if someone disables the config option? I'm trying to see if it's something like enable gradient checkpointing for instance, which you can use for training without changing anything if the user that ends with your model doesn't want it, or if it impacts the checkpoints in any way. Depending on the answer to that, we will see if we need new model files or not.<|||||>Excellent questions, @sgugger gradient checkpointing doesn't change the math. layer norm does (that's how it makes the training more stable). The layer norm has 2 weights which are trained. > * can a current checkpoint for GPT-2 (like `gpt2`) be used with that option enabled in the config and produce good results, or would it need to be retrained? Because layernorm will change the hidden representation that is seen by the next stage almost certainly some finetuning will be needed. Not sure how much. > * can a checkpoint trained with StableEmbedding be used with a regular Embedding instead if someone disables the config option? same answer as above, removing this transform will impact the hidden representation. I have only used it in training from scratch so far. But perhaps @TimDettmers has some suggestions. I know he is on the road, so let's perhaps wait for him to follow up.<|||||>Thanks Stas! The current version of bnb also features the normal [32-bit embedding layer _without_ layer norm](https://github.com/facebookresearch/bitsandbytes/blob/main/bitsandbytes/nn/modules.py#L46) for the very reason of compatibility. What I would do is use this 32-bit optimizer version as the default when using bnb optimizers and have an option for the stable embedding layer if one wants to pretrain a model. From my experience the difference between 8-bit/32-bit optimizers for embedding layer and layer norm are as follows: - 8-bit: unstable training and poor performance for pretraining; successful finetuning on GLUE; finetuning on more complicated objectives (seq2seq) _might_ be unstable - 32-bit: stable training for all objectives for models below 1.5B parameters; - 32-bit + layer norm: stable training for all models and all objectives and improved performance. > * can a current checkpoint for GPT-2 (like `gpt2`) be used with that option enabled in the config and produce good results, or would it need to be retrained? I experimented with this. For pretrained checkpoints adding a layer norm after loading the model makes training difficult and leads to poor performance. I tinkered a bit with low learning rates to adapt the layer norm first before regular finetuning, but that did not work well and is a mess. 
So for pretrained models, the best would be to use the 32-bit optimized embedding layer (`bnb.nn.Embedding`) and no layer norm if the pretrained model was not trained with a layer norm. > * can a checkpoint trained with StableEmbedding be used with a regular Embedding instead if someone disables the config option? This is basically the same as above. If a StableEmbedding has been used for pretraining it needs to be used for fine-tuning. Removing/adding a layer after pretraining makes finetuning difficult. The performance is usually a bit better with StableEmbedding layer (using fairseq for language modeling, masked language modeling, machine translation, multi-lingual machine translation). Pretraining is usually also easier with the layer norm. That is why it is standard for Google/OpenAI models. > I'm trying to see if it's something like enable gradient checkpointing for instance, which you can use for training without changing anything if the user that ends with your model doesn't want it, or if it impacts the checkpoints in any way. Depending on the answer to that, we will see if we need new model files or not. Like Stas said, the optimizer should not have any effect on gradient checkpointing with or without the layer norm. It just requires consistency between pretrained/finetuned checkpoints. Let me know if there are any more questions! <|||||>Thank you very much for this detailed answer, Tim! So to use Adam8bit with any normally pre-trained model we can do: 1. load optimizer ``` import bitsandbytes as bnb optim = bnb.optim.Adam8bit ``` 2. fixup the model architecture - extend the `nn.Embedding` class with `bnb.nn.Embedding.__init__` (which will do embedding optim in 32-bit, while the rest of the model will be optimized in 8-bit) - must do that before loading the model! since we can't miss the init: https://github.com/facebookresearch/bitsandbytes/blob/4e60e7dc62c50b6ba9b6becf6e779a1d48906be2/bitsandbytes/nn/modules.py#L51 Perhaps something like: ``` import torch from transformers import GPTNeoForCausalLM from bitsandbytes.optim import GlobalOptimManager torch.nn.modules.sparse.Embedding.orig__init__ = torch.nn.modules.sparse.Embedding.__init__ def bnb_embed_init(self, *args, **kwargs): torch.nn.modules.sparse.Embedding.orig__init__(self, *args, **kwargs) GlobalOptimManager.get_instance().register_module_override(self, 'weight', {'optim_bits': 32}) torch.nn.modules.sparse.Embedding.__init__ = bnb_embed_init ``` which of course can be made into a wrapper and won't be an eye sore. There are also neater way to do it with `functools.wraps` ``` import functools import torch from bitsandbytes.optim import GlobalOptimManager def run_after(f): @functools.wraps(f) def wrapper(module, *args, **kwargs): f(module, *args, **kwargs) GlobalOptimManager.get_instance().register_module_override(module, 'weight', {'optim_bits': 32}) return wrapper cls = torch.nn.modules.sparse.Embedding cls._old_init = cls.__init__ cls.__init__ = run_after(cls.__init__) ``` 3. load as normal: ``` model = GPTNeoForCausalLM.from_pretrained(...) - load as normal. 
``` --------------------------- or may be it's easier to first load the model and then traverse it and tell Adam8bit to run embed layers in fp32: ``` import torch import bitsandbytes as bnb from transformers import GPTNeoForCausalLM from bitsandbytes.optim import GlobalOptimManager def set_optim_to_run_embedding_in_fp32(model): for module in model.modules(): if isinstance(module, torch.nn.Embedding): GlobalOptimManager.get_instance().register_module_override(module, 'weight', {'optim_bits': 32}) mname = "EleutherAI/gpt-neo-125M" model = GPTNeoForCausalLM.from_pretrained(mname) set_optim_to_run_embedding_in_fp32(model) ``` This does look simpler. @TimDettmers, if this is useful, perhaps `bnb_embedding_in_fp32` this can be part of BNB API, but it probably then should take an optional `embed_class=torch.nn.Embedding` should the user have a custom Embedding class. I suppose it is ok to run `register_module_override` after the model was fully loaded. Do we need to import `bnb.optim.Adam8bit`, first by any chance? If we add support for `StableEmbedding` in `transformers` archs, then for new trainings `bnb.optim.Adam8bit` could be used directly. Once we merge https://github.com/huggingface/transformers/pull/14744 we can add `--optim adam_bnb_8bit` to HF Trainer and give it a try.<|||||>The difficult question would be how would HF Trainer know when to push in fp32-embed-optim and when not to. The model will need to have a way to tell the user that info.<|||||>@TimDettmers, I'm curious whether you have done experiments with using [AdaNorm](https://github.com/lancopku/AdaNorm) as part of `StableEmbedding` for those cases where the model wasn't pretrained with `StableEmbedding`. If I understand correctly AdaNorm doesn't have the LayerNorm's normal gain+bias trainable params and uses a fixed hparam instead and the paper shows very close and better at times performance in several studies done in the paper. https://arxiv.org/abs/1911.07013 If it worked, then instead of doing embeddings in fp32, perhaps using AdaNorm could be a simpler solution and further save memory. So the user will then just have to swap `nn.Embedding` -> `bnb.nn.StableEmbeddingAdaNorm` and supply an additional hparam (no idea how easy it might be to get it right though, so perhaps it's not that easy).<|||||>Mmm, I don't think we will "fixup the model architectures". The test for keeping a current architecture and adding support for Embedding norm as a config argument does not pass, so we will need new architectures with the proper embedding layers IMO, and only those will support training/fine-tuning with `--optim adam_bnb_8bit`<|||||>> Mmm, I don't think we will "fixup the model architectures". The test for keeping a current architecture and adding support for Embedding norm as a config argument does not pass, so we will need new architectures with the proper embedding layers IMO, and only those will support training/fine-tuning with `--optim adam_bnb_8bit` How about only using the `bnb.nn.Embedding` that does not use embedding layer norm? If using a different class is problematic, the 32-bit optimizers can also be configured by passing the weight attribute of the respective class to the bitsandbytes library, like so: ```python GlobalOptimManager.get_instance().register_module_override(emb_module, 'weight', {'optim_bits': 32}) ``` No architecture reconfiguration should be needed with this option, and the embedding will run with 32-bit optimizers. 
This is indeed the configuration that I use to fine-tune models in the 8-bit optimizer paper -- no embedding norm required! Do you think this could make more sense?<|||||>I think so, Tim. Thank you for your practical suggestion! We should try it out and see how it fares. @manuelciosici, you were asking earlier if there is something else to work on. Would you like to try to work on adding `--optim adamw_bnb` to HF Trainer? If yes please read this whole thread, to attempt to wrap your head around the requirements of needing an embedding with layernorm, which we don't have, but Tim has proposed a workaround above that should be a middle-ground memory-saving-wise. So basically it'd have 3/4 memory saved for all params in optimizer except the embedding, where there will be no saving at all. Additionally I hope that in the future we will have model archs with embed norm, and we will need to figure out how to activate the bnb optim for those archs. But we can discuss that in the PR. If you're busy it's no problem I then hope to be able to try this some time soon. Thank you!<|||||>@stas00 Thank you for the opportunity. I will read the thread today and add `--optim adamw_bnb`.<|||||>Thank you **github-actions**. I plan to work on this issue's PR this weekend.
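As a rough illustration of the embedding-norm idea discussed in this thread, here is a minimal sketch of a "stable embedding": an `nn.Embedding` with Xavier-uniform init plus a layer norm applied at the end of `forward`. This is a simplification for readers, not bitsandbytes' actual `StableEmbedding` implementation:

```python
import torch
import torch.nn as nn

class SketchStableEmbedding(nn.Embedding):
    """Embedding + layer norm, roughly as described in the thread."""

    def __init__(self, num_embeddings: int, embedding_dim: int, **kwargs):
        super().__init__(num_embeddings, embedding_dim, **kwargs)
        nn.init.xavier_uniform_(self.weight)
        self.norm = nn.LayerNorm(embedding_dim)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # normalizing the embedding output is what stabilizes training
        return self.norm(super().forward(input_ids))

emb = SketchStableEmbedding(50257, 768)
print(emb(torch.tensor([[1, 2, 3]])).shape)  # torch.Size([1, 3, 768])
```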
transformers
14,818
closed
a
null
12-17-2021 21:35:14
12-17-2021 21:35:14
transformers
14,817
closed
Roberta Classification Head
Why does the RoBERTa classification head use 2 linear layers? https://github.com/huggingface/transformers/blob/84ea427f460ffc8d2ddc08a341ccda076c24fc1f/src/transformers/models/roberta/modeling_roberta.py#L1443 The BERT classification head uses only one, which makes sense since all the work is done by the Transformer layers.
12-17-2021 21:32:23
12-17-2021 21:32:23
Hi! We try to stick as much as possible to the original implementation. See fairseq's original implementation here: https://github.com/pytorch/fairseq/blob/a54021305d6b3c4c5959ac9395135f63202db8f1/fairseq/models/roberta/model.py#L394-L429<|||||>Thank you so much.
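For readers who want to see the structure without following the links, here is a condensed sketch of the two-layer head being discussed (paraphrased; exact dropout handling differs slightly across versions):

```python
import torch
import torch.nn as nn

class RobertaStyleClassificationHead(nn.Module):
    """Two linear layers with a tanh in between, mirroring fairseq's RoBERTa head."""

    def __init__(self, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)    # extra "pooler-like" projection
        self.dropout = nn.Dropout(dropout)
        self.out_proj = nn.Linear(hidden_size, num_labels)  # actual classifier

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        x = features[:, 0, :]           # take the <s> token (RoBERTa's [CLS] equivalent)
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)

head = RobertaStyleClassificationHead(hidden_size=768, num_labels=2)
print(head(torch.randn(4, 128, 768)).shape)  # torch.Size([4, 2])
```

By contrast, `BertForSequenceClassification` puts a single `nn.Linear` on top of the pooled output, but BERT's pooler itself already applies a dense + tanh to the [CLS] token, so the effective depth of the two heads ends up similar.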
transformers
14,816
closed
[examples/summarization] deal with None in data records
When trying to use https://huggingface.co/datasets/wikihow with `run_summarization.py` I run into incomplete records in the manually downloaded dataset (the data is not on the hub and requires a user to download it manually): ``` [...] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1990, in decorated result = f(decorated_item, *args, **kwargs) File "examples/pytorch/summarization/run_summarization.py", line 450, in preprocess_function inputs = [prefix + inp for inp in inputs] File "examples/pytorch/summarization/run_summarization.py", line 450, in <listcomp> inputs = [prefix + inp for inp in inputs] TypeError: can only concatenate str (not "NoneType") to str ``` This PR is fixing that by filtering out incomplete records. Now it's possible to run: ``` python examples/pytorch/summarization/run_summarization.py --model_name_or_path \ google/pegasus-wikihow --max_source_length 512 --max_target_length 256 --do_eval \ --per_device_eval_batch_size 8 --predict_with_generate --num_beams 8 --overwrite_output_dir \ --output_dir output_dir --validation_file data/wikihowAll.csv --text_column text --summary_column \ headline --max_eval_samples 10 ``` For context: I was trying to deal with this issue https://github.com/huggingface/transformers/issues/14804 when I run into this problem. And this fix was needed for me to be able to reproduce the issue. In other words this wasn't me just randomly trying some random dataset for the heck of it, I was trying to deal with a bug report. And this dataset is not random, since we report a performance score on it for https://huggingface.co/google/pegasus-wikihow which was originally reported here: https://github.com/huggingface/transformers/issues/6844 and if we can't use our own tools to reproduce a report made by us, then I don't know how to move forward here. @sgugger
12-17-2021 19:22:59
12-17-2021 19:22:59
As I said before, the examples are not supposed to work out of the box on every dataset and we shouldn't strive for that. Adding more complexity should be on the user's side when they want to deal with another dataset. Cf the second paragraph of the examples README: > While we strive to present as many use cases as possible, the scripts in this folder are just examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data. This way, you can easily tweak them. cc @LysandreJik @patrickvonplaten @patil-suraj if you have a different opinion, we can evolve our philosophy.<|||||>Oh, sorry, I think I have misinterpreted your comment on Slack. I thought you were agreeing that this fix should go in. In this particular situation ideally `datasets` should import that data into its online storage for https://huggingface.co/datasets/wikihow (as it's not there at the moment) and it could do the fixing as part of the import script. But on the other hand this is just defensive programming since the example code takes random csv files and can't expect them to be without problems. So I think as an example this is a good demonstration of data sanitizing, am I wrong? I do hear you saying that every additional code makes the examples more complex. I'm not disagreeing with that.<|||||>It's true that this one is borderline and generally useful, so I'm curious of other people's opinion. <|||||>Don't have a strong opinion, but I'm more in favor of it than against it. It's quite easy to understand as a reader what's going on there IMO. I would slightly favor to **not** use `map(...)` and `zip(...)` however, but rather just two list comprehensions `[x for x in batch[...] if x[...] is not None]` for improved readability.<|||||>> I would slightly favor to **not** use `map(...)` and `zip(...)` however, but rather just two list comprehensions `[x for x in batch[...] if x[...] is not None]` for improved readability. The reason it's complex is because unless I misunderstood your proposal I don't think it would work. This is because we have 2 parallel arrays thus you need to filter them together to keep the alignment between the pairs. Here is a sample code to understand what's going on: ``` x = {"a":[1,None,3,4], "b":[5,6,None,7]} a, b = map(list, zip(*([x["a"][i], x["b"][i]] for i in range(len(x["a"])) if x["a"][i] is not None and x["b"][i] is not None))) ``` If you filter them out separately you will end up with mismatching pairs. Here is a simpler to understand version, but it's slower of course. ``` x = {"a":[1,None,3,4], "b":[5,6,None,7]} a, b = [], [] for i in range(len(x["a"])): if x["a"][i] is not None and x["b"][i] is not None: a.append(x["a"][i]) b.append(x["b"][i]) ``` But by all means I'd be happy to use a simpler code if you can think of one.<|||||>> > I would slightly favor to **not** use `map(...)` and `zip(...)` however, but rather just two list comprehensions `[x for x in batch[...] if x[...] is not None]` for improved readability. > > The reason it's complex is because unless I misunderstood your proposal I don't think it would work. This is because we have 2 parallel arrays thus you need to filter them together to keep the alignment between the pairs. 
> > Here is a sample code to understand what's going on: > > ``` > x = {"a":[1,None,3,4], "b":[5,6,None,7]} > a, b = map(list, zip(*([x["a"][i], x["b"][i]] for i in range(len(x["a"])) if x["a"][i] is not None and x["b"][i] is not None))) > ``` > > If you filter them out separately you will end up with mismatching pairs. > > Here is a simpler to understand version, but it's slower of course. > > ``` > x = {"a":[1,None,3,4], "b":[5,6,None,7]} > a, b = [], [] > for i in range(len(x["a"])): > if x["a"][i] is not None and x["b"][i] is not None: > a.append(x["a"][i]) > b.append(x["b"][i]) > ``` > > But by all means I'd be happy to use a simpler code if you can think of one. Ah I see - thanks for explaining it in more detail! Your proposal ```python x = {"a":[1,None,3,4], "b":[5,6,None,7]} a, b = [], [] for i in range(len(x["a"])): if x["a"][i] is not None and x["b"][i] is not None: a.append(x["a"][i]) b.append(x["b"][i]) ``` looks very nice. I don't think speed is really relevant here <|||||>Pushed the slower, but easier to read version as suggested by Patrick.<|||||>Thanks a lot!
transformers
14,815
closed
[Generate] Correct input_ids detection
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> A test whether the input is `input_ids` is incorrect. Good that our RAG tests are pretty aggressive to spot such small differences ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-17-2021 14:20:55
12-17-2021 14:20:55
Thanks for fixing!
transformers
14,814
open
Support on Mixture of expert models
Hi, there is a growing body of NLP work on **Mixture of Experts** (MoE) based models, such as Google's Switch Transformer. However, I can't find any such mixture-of-experts models in Hugging Face Transformers. Do you have plans to support them? Thanks!
12-17-2021 14:05:20
12-17-2021 14:05:20
Hi, It's true, but as long as there are no pretrained weights, the chances are small that models get added. There are some open-source implementations of MoE available, including:

- DeepSpeed: https://www.deepspeed.ai/tutorials/mixture-of-experts/
- Fairseq.

Once there are some pretrained weights available somewhere, be sure to let us know!<|||||>Hi, I found a very recent implementation of MoE with code and pretrained weights, for your reference: https://github.com/pytorch/fairseq/tree/main/examples/moe_lm<|||||>Indeed, hot off the press! Let's add them!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Any progress on this front? Thanks...<|||||>cc'ing @patil-suraj here<|||||>Hey @cerisara! We have planned to add `moe_lm` to `Transformers`, but I don't have much bandwidth to work on it. If you or anyone else in the community is interested in adding it, I would be more than happy to help :) <|||||>Hi @patil-suraj, thanks, I'm interested in these models and would like to contribute, but I'm afraid my bandwidth is too small as well, at least for now, sorry ;-)
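For readers unfamiliar with the architecture being requested, here is a deliberately simplified sketch of Switch-Transformer-style top-1 routing (an illustration only, unrelated to any of the implementations linked above):

```python
import torch
import torch.nn as nn

class Top1MoELayer(nn.Module):
    """Minimal Switch-style layer: a router picks one expert FFN per token."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq, d_model = x.shape
        tokens = x.reshape(-1, d_model)               # (batch * seq, d_model)
        probs = self.router(tokens).softmax(dim=-1)   # routing probabilities
        top_prob, top_idx = probs.max(dim=-1)         # top-1 expert per token
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # scale by the gate probability so routing stays differentiable
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape(batch, seq, d_model)

layer = Top1MoELayer(d_model=64, d_ff=256, num_experts=4)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```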
transformers
14,813
closed
[Perceiver] Skip multi-gpu tests for now
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Multi-GPU for perceiver don't work due to deeply rooted problem in PyTorch's multi-gpu. For now I think we should skip the test. For the near future I think we should switch multi-gpu forward tests to DDP ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-17-2021 13:21:54
12-17-2021 13:21:54
Merging now as it's blocking the failing tests a bit. In a follow-up PR we should enable DDP as well
transformers
14,812
closed
Enable ONNX export for `VisionEncoderDecoderModel`
# 🚀 Feature request

<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->

As discussed on the [forums](https://discuss.huggingface.co/t/can-i-export-a-visionencoderdecoder-checkpoint-to-onnx/12885), it would be nice if one could export `VisionEncoderDecoderModel` classes using the `transformers.onnx` package.

## Motivation

<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->

It is currently not possible to export `VisionEncoderDecoderModel` classes (or vision models more generally), unless the end user is willing to write their own `OnnxConfig`. It might make sense to first explore what is involved in exporting ViT before tackling this more complex example. Looking at this a bit more closely, I can see that `transformers.onnx` currently has a tight integration with tokenizers (e.g. to generate dummy inputs), so some refactoring will be necessary to support other modalities.
12-17-2021 12:01:58
12-17-2021 12:01:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I believe progress on this ticket will be welcome by the communitiy.<|||||>Hey @albertoandreottiATgmail thanks for the interest! This is next up on my list now that we've just merged the ViT export in #15658 :) <|||||>> Hey @albertoandreottiATgmail thanks for the interest! This is next up on my list now that we've just merged the ViT export in #15658 :) Hi @lewtun is there any way (or script) to export trocr to onnx ?<|||||>Hi all, any updates on this? I've also tried to convert TrOCR to ONNX without success. Thanks in advance :) cc @lewtun <|||||>Re-opening this issue as it hasn't been resolved yet<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Adding another feature request for this: https://github.com/NielsRogge/Transformers-Tutorials/issues/183<|||||>> Adding another feature request for this: [NielsRogge/Transformers-Tutorials#183](https://github.com/NielsRogge/Transformers-Tutorials/issues/183) Any updates on this one? Thank you<|||||>Hi folks, @mht-sharma will be tackling this issue - stay tuned :)<|||||>@mht-sharma Any updates? Thank you.<|||||>@BakingBrains you can see the PR from @mht-sharma linked above - we should be able to get this merged pretty soon!<|||||>@lewtun and @mht-sharma Thank you.
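Until native support lands, one rough workaround (an illustration only, not the eventual `transformers.onnx` API) is to export just the vision encoder with `torch.onnx.export`, since it only needs `pixel_values` as a dummy input; the text decoder and the generation loop still have to be handled separately. The checkpoint and the 384x384 input size below are assumptions:

```python
import torch
from transformers import VisionEncoderDecoderModel

class EncoderWrapper(torch.nn.Module):
    """Returns a plain tensor so ONNX tracing does not see a ModelOutput dict."""

    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, pixel_values):
        return self.encoder(pixel_values).last_hidden_state

model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
model.eval()

dummy_pixel_values = torch.randn(1, 3, 384, 384)  # assumed input resolution

torch.onnx.export(
    EncoderWrapper(model.encoder),
    (dummy_pixel_values,),
    "trocr_encoder.onnx",
    input_names=["pixel_values"],
    output_names=["last_hidden_state"],
    dynamic_axes={"pixel_values": {0: "batch"}, "last_hidden_state": {0: "batch"}},
    opset_version=12,
)
```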
transformers
14,811
closed
[WavLM] Layerdrop is not allowed for first layer
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes flaky CI. WavLM makes use of relative position encoding that are **always** computed in the first attention layer and then passed down to subsequent layers. Hence the first layer **cannot** be skipped when using layerdrop. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-17-2021 11:00:37
12-17-2021 11:00:37
Thanks for fixing!
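To illustrate the fix described above, here is a schematic sketch of the encoder loop (not the actual WavLM code): since the relative position bias is produced by the first layer and reused by the later ones, layerdrop may only ever skip layers with index > 0.

```python
import torch
import torch.nn as nn

class DummyLayer(nn.Module):
    """Stand-in for a WavLM encoder layer: returns hidden states plus a
    position bias, computing the bias only if it was not passed in."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, hidden_states, position_bias=None):
        if position_bias is None:
            # in the real model this is the relative position embedding,
            # computed once in the first layer and reused afterwards
            position_bias = torch.zeros(hidden_states.shape[1], hidden_states.shape[1])
        return self.proj(hidden_states), position_bias

def encode(hidden_states, layers, layerdrop=0.1, training=True):
    position_bias = None
    for i, layer in enumerate(layers):
        # layer 0 may never be dropped: it produces the shared position bias
        if training and i > 0 and torch.rand(1).item() < layerdrop:
            continue
        hidden_states, position_bias = layer(hidden_states, position_bias)
    return hidden_states

layers = nn.ModuleList(DummyLayer(32) for _ in range(6))
print(encode(torch.randn(2, 50, 32), layers).shape)  # torch.Size([2, 50, 32])
```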
transformers
14,810
closed
Fix Perceiver multi GPU test
# What does this PR do? This PR fixes the failing test on CI for Perceiver on multi-GPU. Note: I can't run this test locally, unfortunately, so I hope it'll pass.
12-17-2021 09:00:10
12-17-2021 09:00:10
transformers
14,809
closed
How does decoder's weight shared with input embeddings?
As the title says, I haven't found where the input embeddings get shared with `self.decoder.weight`. Can someone help me point it out?

```python
class BertLMPredictionHead(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.transform = BertPredictionHeadTransform(config)

        # The output weights are the same as the input embeddings, but there is
        # an output-only bias for each token.
        self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

        self.bias = nn.Parameter(torch.zeros(config.vocab_size))

        # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
        self.decoder.bias = self.bias

    def forward(self, hidden_states):
        hidden_states = self.transform(hidden_states)
        hidden_states = self.decoder(hidden_states)
        return hidden_states
```
12-17-2021 03:45:02
12-17-2021 03:45:02
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
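Since the thread went stale without an answer, a note for readers: the sharing does not happen inside `BertLMPredictionHead` itself. `PreTrainedModel.tie_weights()`, called when the model is built or loaded, points the decoder's weight at the input embedding matrix whenever `config.tie_word_embeddings` is true (the default). A quick, illustrative way to verify it:

```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")

input_embeddings = model.bert.embeddings.word_embeddings
output_decoder = model.cls.predictions.decoder

# tie_weights() made these the very same Parameter object, so they share storage
print(output_decoder.weight is input_embeddings.weight)                         # True
print(output_decoder.weight.data_ptr() == input_embeddings.weight.data_ptr())   # True
```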
transformers
14,808
closed
Add 'with torch.no_grad()' to ALBERT integration test forward pass
# What does this PR do? As proposed in #14642, this encapsulates the forward pass in the ALBERT integration test with "with torch.no_grad():". ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge
12-16-2021 22:34:36
12-16-2021 22:34:36
transformers
14,807
closed
[WIP] Get started docs
# What does this PR do? This PR contains the first draft of the Get Started docs as discussed for the new Transformers IA. Some additional thoughts: - I think adding better signposts on the landing page will help users quickly get to the content they are looking for. For inspiration, maybe we can have cards like [these](https://stripe.com/docs/payments?payments=popular) with links to the pages (cc @mishig25). From the analytics, it looks like people view fine-tuning a pretrained model, fine-tuning for common downstream tasks, installation, and the BERT docs a lot. The first two pages definitely look valuable for getting users up and running with their tasks, but I would love to hear what other pages you think we can add to help guide users down the right path. - On the Quick tour page, it'd be neat if we can embed a widget there (cc @mishig25) for sentiment analysis so users can play with it directly.
12-16-2021 22:28:25
12-16-2021 22:28:25
Hey Steven, I believe there was an issue in your merge/rebase and GitHub is having a hard time understanding what happened. Could you close and reopen a new PR without touching at your branch so that we may see the actual differences? Thank you.
transformers
14,806
closed
Convert rst to mdx bert
Convert the BERT model file to mdx.
12-16-2021 21:40:29
12-16-2021 21:40:29
transformers
14,805
closed
[WavLM] Correct position bias computation
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes small issue with position bias ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-16-2021 21:33:30
12-16-2021 21:33:30
transformers
14,804
closed
[Benchmark] google/pegasus-wikihow
# 🖥 Benchmarking `transformers`

## Benchmark

Which part of `transformers` did you benchmark?

[`google/pegasus-wikihow`](https://huggingface.co/google/pegasus-wikihow)

## Set-up

What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?

The command below was run with transformers v4.13.0 on a single GPU. I tried aligning the input parameters to the paper's setup.

```bash
python run_summarization.py \
  --model_name_or_path google/pegasus-wikihow \
  --dataset_name wikihow \
  --dataset_config all \
  --dataset_dir /data/dataset/wikihow \
  --max_source_length 512 \
  --max_target_length 256 \
  --do_eval \
  --per_device_eval_batch_size 8 \
  --predict_with_generate \
  --num_beams 8 \
  --overwrite_output_dir \
  --run_name $RUNID \
  --output_dir $OUTDIR
```

## Results

Scores reported on the [Model Card](https://huggingface.co/google/pegasus-wikihow):

| dataset | C4 | HugeNews | Mixed & Stochastic |
| ---- | ---- | ---- | ---- |
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 * |

According to issue #6844: 46.85/23.64/28.73. There was a footnote in the issue, so I wonder if any customization is needed.

(* authors' footnote) the numbers of the wikihow and big_patent datasets are not comparable because of a change in tokenization and data.

My results: "eval_rouge1": 33.99, "eval_rouge2": 13.0781, "eval_rougeL": 26.5329

@stas00, @patil-suraj @sshleifer appreciate your pointers!
12-16-2021 20:11:39
12-16-2021 20:11:39
I'm not sure what section of the dataset it was eval'ed on so it's hard to tell how to compare the scores, especially if the dataset has grown since it was eval'ed on a year ago. So first I had to do the following as the dataset contains missing fields: ``` diff --git a/examples/pytorch/summarization/run_summarization.py b/examples/pytorch/summarization/run_summarization.py index 658c24114..60da701e6 100755 --- a/examples/pytorch/summarization/run_summarization.py +++ b/examples/pytorch/summarization/run_summarization.py @@ -436,8 +436,19 @@ def main(): ) def preprocess_function(examples): - inputs = examples[text_column] - targets = examples[summary_column] + + # remove pairs where at least one record is None + inputs, targets = map( + list, + zip( + *( + [examples[text_column][i], examples[summary_column][i]] + for i in range(len(examples[text_column])) + if examples[text_column][i] is not None and examples[summary_column][i] is not None + ) + ), + ) + inputs = [prefix + inp for inp in inputs] model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True) ``` so that takes care of dropping incomplete records. Now I can run the script normally after manually downloading the csv file with just first 10 records: ``` python examples/pytorch/summarization/run_summarization.py --model_name_or_path \ google/pegasus-wikihow --max_source_length 512 --max_target_length 256 --do_eval \ --per_device_eval_batch_size 8 --predict_with_generate --num_beams 8 --overwrite_output_dir \ --output_dir output_dir --validation_file data/wikihowAll.csv --text_column text --summary_column \ headline --max_eval_samples 10 ``` we get: ``` ***** eval metrics ***** eval_gen_len = 62.2 eval_loss = 3.153 eval_rouge1 = 53.0496 eval_rouge2 = 30.0482 eval_rougeL = 45.322 eval_rougeLsum = 45.3855 eval_runtime = 0:00:07.14 eval_samples = 10 eval_samples_per_second = 1.399 eval_steps_per_second = 0.14 ``` So the score is good. But of course, we want more samples and the right samples. The question is which eval samples did the authors use - you have to use the same samples and then you will be comparing apples to apples. Until then the results don't lend themselves to a fair comparison, other than knowing that it does summarize as the numbers are relatively high. Does it make sense? p.s. Alternatively you could checkout a revision of `transformers` from when the results were published, and run the script on the same subset and compare the current code with a year-old one - if you get the same score then you know there was no regression in the code over past year. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,803
closed
Add a main_input_name attribute to all models
# What does this PR do? This PR adds a new attribute to all models called `main_input_name` which allows us to know whether the model expects `input_ids`, `pixel_values`, `input_values` or `input_features` as a first argument. A test is also added to check the correct value is set for all models, by inspecting the signature. We could also add some magic property to do what's tested, but I feel an explicit attribute is more customizable for future use cases.
12-16-2021 19:50:58
12-16-2021 19:50:58
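As a quick illustration of how the new attribute might be consulted (a sketch assuming the attribute is exposed as a class attribute on the model classes, as described above):

```python
from transformers import BertModel, ViTModel, Wav2Vec2Model

for model_cls in (BertModel, ViTModel, Wav2Vec2Model):
    print(model_cls.__name__, "->", model_cls.main_input_name)

# expected (per this PR): input_ids, pixel_values, input_values
```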
transformers
14,802
closed
[Seq2SeqTrainer] Remove model input name hack
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> As discussed in #14784 this removes the model input name hack in the Seq2SeqTrainer. Should I add a new tests somewhere? The hack was added in https://github.com/huggingface/transformers/pull/14139/files. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-16-2021 18:19:25
12-16-2021 18:19:25
transformers
14,801
closed
[ImageGPT] Deprecate pixel_values input name to input_ids
# What does this PR do? As discussed in https://github.com/huggingface/transformers/pull/14784 this deprecates `pixel_values` to `input_ids`.
12-16-2021 17:32:02
12-16-2021 17:32:02
> LGTM. Can you verify the slow integration test is passing in `test_modeling_imagegpt.py`? All tests are passing
transformers
14,800
closed
Update CONTRIBUTING.md
fix pip installation cmd @LysandreJik
12-16-2021 15:34:54
12-16-2021 15:34:54
transformers
14,799
closed
Update CONTRIBUTING.md
typo correction @sgugger
12-16-2021 14:39:29
12-16-2021 14:39:29
transformers
14,798
closed
CUDA error: device-side assert triggered while training Marian MT
## Environment info - `transformers` version: transformers-4.13.0.dev0 - Platform: linux - Python version: 3.8 - PyTorch version (GPU?): torch==1.11.0a0+b6df043 GPU - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: one node multigpu ### Who can help @patrickvonplaten @sgugger @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * --> NMT/smancha5/transformers/examples/pytorch/translation/run_translation.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) Training marianMT on EMEA custom dataset ## To reproduce Steps to reproduce the behavior: 1. Clone the latest transformer repo 2. /opt/conda/bin/python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 /data/atc_tenant/NMT/smancha5/transformers/examples/pytorch/translation/run_translation.py --train_file /data/atc_tenant/NMT/smancha5/EMEA.en-es.train.json --model_name_or_path Helsinki-NLP/opus-mt-en-es --do_train --source_lang=en --target_lang=es --output_dir=/data/atc_tenant/NMT/model1/ --per_device_train_batch_size=8 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --cache_dir=/data/atc_tenant/NMT/cache/ 3. Error: /opt/pytorch/pytorch/aten/src/ATen/native/cuda/Indexing.cu:698: indexSelectLargeIndex: block: [194,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/pytorch/pytorch/aten/src/ATen/native/cuda/Indexing.cu:698: indexSelectLargeIndex: block: [194,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
Traceback (most recent call last): File "/data/atc_tenant/NMT/smancha5/transformers/examples/pytorch/translation/run_translation.py", line 621, in <module> main() File "/data/atc_tenant/NMT/smancha5/transformers/examples/pytorch/translation/run_translation.py", line 538, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1471, in train self._total_loss_scalar += tr_loss.item() RuntimeError: CUDA error: device-side assert triggered terminate called after throwing an instance of 'c10::CUDAError' what(): CUDA error: device-side assert triggered Exception raised from query at /opt/pytorch/pytorch/aten/src/ATen/cuda/CUDAEvent.h:95 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7fe03f1d3e1c in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so) frame #1: c10d::ProcessGroupNCCL::WorkNCCL::finishedGPUExecutionInternal() const + 0x125 (0x7fe042e6d345 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so) frame #2: c10d::ProcessGroupNCCL::WorkNCCL::isCompleted() + 0x78 (0x7fe042e704e8 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so) frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x158 (0x7fe042e71df8 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so) frame #4: <unknown function> + 0xcc9d4 (0x7fe0d47a29d4 in /opt/conda/bin/../lib/libstdc++.so.6) frame #5: <unknown function> + 0x9609 (0x7fe0d6295609 in /usr/lib/x86_64-linux-gnu/libpthread.so.0) frame #6: clone + 0x43 (0x7fe0d6055293 in /usr/lib/x86_64-linux-gnu/libc.so.6) Debugging Logs: print(inputs['labels'].shape) : torch.Size([8, 94]) print(inputs['input_ids'].shape) : torch.Size([8, 70]) print(inputs['decoder_input_ids'].shape) : torch.Size([8, 94]) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Training of model complete
12-16-2021 14:31:22
12-16-2021 14:31:22
It's sadly impossible for us to reproduce this error give the message above. From the error message, I'm quite sure that you are using a sequence length which is too long. Could you make sure you cut the input sequences to the maximum length of Marian?<|||||>@patrickvonplaten Thanks for the reply. From the logs, i tried to print the length of the input_ids from the batch the training fails on : and it prints print(inputs['labels'].shape) : torch.Size([8, 94]) print(inputs['input_ids'].shape) : torch.Size([8, 70]) print(inputs['decoder_input_ids'].shape) : torch.Size([8, 94]) The max length in the config of this model is 512. Could you recommend if there is any flag to make sure of this length or should i preprocess my data to have a certain length ? Thanks again for the help :) <|||||>Could you try to simply add: ``` --max_source_length 512 ``` to your command for this input: https://github.com/huggingface/transformers/blob/48463ebb33c4a3f4035dbdaf55dc43778304f318/examples/pytorch/translation/run_translation.py#L136 It is set to 1024 by default<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
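As a rough illustration of the fix discussed in the comments above (the CUDA indexing assert is typically caused by sequences longer than Marian's 512 positions), the same truncation can also be applied directly with the tokenizer; the checkpoint name follows the issue, everything else is a hedged sketch rather than the exact training setup:
```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")

# Truncate both source and target to Marian's 512 positions, mirroring --max_source_length 512
sources = ["a very long English sentence ..."]
targets = ["una oración en español muy larga ..."]
model_inputs = tokenizer(sources, truncation=True, max_length=512)
with tokenizer.as_target_tokenizer():
    labels = tokenizer(targets, truncation=True, max_length=512)
model_inputs["labels"] = labels["input_ids"]
```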
transformers
14,797
closed
Unable so save quantized model for mobile using Speech2Text
## Environment info - Platform: Ubuntu 20.04 - Python version: 3.9 - PyTorch version (GPU?): 1.10.0 (yes) ### Who can help @patrickvonplaten @anton-l ## Information I am trying to save a quantized model for speech recognition. Nothing fancy, I'm just trying to explore 🤗 for this topic hoping I can get some models for mobile out of it. ```python model_path = "facebook/s2t-small-librispeech-asr" labels = None # Initialize the model model = Speech2TextForConditionalGeneration.from_pretrained(model_path) model = model.eval() model.gradient_checkpointing_disable() # Apply quantization / script / optimize for mobile quantized_model = torch.quantization.quantize_dynamic(model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8) scripted_model = torch.jit.script(quantized_model) ``` but it appears that there are some nodes which cannot be scripted as the error suggests: ```none File "/home/sfalk/miniconda3/envs/speech/lib/python3.9/site-packages/torch/jit/frontend.py", line 702, in build_Call args = [build_expr(ctx, py_arg) for py_arg in expr.args] File "/home/sfalk/miniconda3/envs/speech/lib/python3.9/site-packages/torch/jit/frontend.py", line 702, in <listcomp> args = [build_expr(ctx, py_arg) for py_arg in expr.args] File "/home/sfalk/miniconda3/envs/speech/lib/python3.9/site-packages/torch/jit/frontend.py", line 286, in __call__ raise UnsupportedNodeError(ctx, node) torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren't supported: File "/home/sfalk/miniconda3/envs/speech/lib/python3.9/site-packages/transformers/modeling_utils.py", line 987 activations". """ return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules()) ~ <--- HERE ``` ## Expected behavior Well, it would be great if it simply worked but unfortunately there's not really a tutorial or an example out there. I've already raised this issue in https://github.com/huggingface/transformers/issues/14523 - it's not 100% related but I hope that there will be some examples on how we can make these model ready for mobile.
12-16-2021 14:25:28
12-16-2021 14:25:28
We don't fully support torch quantization in `transformers`. Did you try out the optimum library instead: https://github.com/huggingface/optimum . Also gently pinging @michaelbenayoun @echarlaix @lewtun here<|||||>Hi @stefan-falk! As @patrickvonplaten mentioned, we don't support native torch quantization. However we're working on enabling ONNX export for all speech models, so that they can be supported by [optimum](https://github.com/huggingface/optimum) in quantization mode. Will keep you updated on that :) <|||||>Hi and thanks for the reply! If you could update me on that matter that would be great, thank you 👍 If I may I'd like to ask this kind of directly: So.. I just want to be able to train (basically) any speech recognition model from scratch and use it on mobile. I've tried out the Wav2Vec2 Android demo app, which works fine, but the problem here is that training from scratch is really non-trival. I am not sure what I can do here if I am aiming at other languages than English. My question here is basically: Is there any guideline which I can follow in order to accomplish this or am I making this too complicated and should look into something else? 😆 I realize that there is a cross-language Wav2Vec model but this would be too large for mobile deployment I assume.<|||||>Speech-recognition from scratch is really not easy. It also defeats a bit the purpose of Wav2Vec2 which is a *pretrained* speech recognition model that can be *fine-tuned* extremely easily. I strongly recommend fine-tuning pretrained speech recognition models instead of training from scratch. We have a lot of pretrained checkpoints on the Hub in different sizes for different tasks that I would strongly recommend to leverage: - Wav2Vec2: https://huggingface.co/models?other=wav2vec2 : A bunch of wav2vec2 models for speech-recognition and others - Robust Wav2Vec2: https://huggingface.co/models?arxiv=arxiv:2104.01027 - wav2vec2's robust version. This should work well on more "real-world" data - XLS-R: https://huggingface.co/models?other=xls_r_pretrained: very good pretrained checkpoints for multi-lingual - SEW: https://huggingface.co/models?other=sew & https://huggingface.co/models?other=sew-d for low-resource speech recognition - WavLM: https://huggingface.co/models?other=walm Microsoft's SOTA speech model - UniSpeech: https://huggingface.co/models?other=unispeech and https://huggingface.co/models?other=unispeech_sat for speaker related tasks <|||||>@patrickvonplaten thank you, this is very helpful. I'll follow you advice and try to stay on the Wav2Vec route then. Thanks a lot :)<|||||>DistilHubert: https://huggingface.co/ntu-spml/distilhubert could also be interesting for mobile <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,796
closed
Train step fix
This should (hopefully!) clean up the last of the issues with the modified train_step in TF. Keras metrics are still a bit shaky, but that'll be solved soon!
12-16-2021 14:23:50
12-16-2021 14:23:50
transformers
14,795
closed
Remove `require_datasets` testing utility
The datasets library is a testing requirement so there is no need for a specific `require_datasets` testing utility.
12-16-2021 13:48:48
12-16-2021 13:48:48
transformers
14,794
closed
Some errors when I run the PyTorch wav2vec2 example run_common_voice.py
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/facebookav2vec2-base/revision/main When I open https://huggingface.co in a browser, I find that the page https://huggingface.co/api/ also returns 404. Has the structure of https://huggingface.co changed, and what should I do?
12-16-2021 13:25:29
12-16-2021 13:25:29
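Judging from the 404 URL in the issue above, the model identifier seems to have lost the `/w` of `facebook/wav2vec2-base` (it was resolved as `facebookav2vec2-base`). A minimal check, assuming that was the intended checkpoint:
```python
from transformers import Wav2Vec2Model

# the canonical Hub identifier keeps the slash between the organization and the model name
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
print(model.config.model_type)  # "wav2vec2"
```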
transformers
14,793
closed
Can I print the MLM and NSP losses instead of their sum while training the BertForPreTraining model?
The loss only contains the sum of the NSP and MLM losses, but I want to know their respective values. Is there a way to do that?
12-16-2021 12:58:25
12-16-2021 12:58:25
Hey @dancingpipi! There's no way to get the individual losses out of the box, but you could use the returned logits with your labels to calculate the two losses out of the model. The loss is computed as such: ``` if labels is not None and next_sentence_label is not None: loss_fct = CrossEntropyLoss() masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1)) total_loss = masked_lm_loss + next_sentence_loss ``` and the output contains the `prediction_logits` (named `prediction_scores` above) and `seq_relationship_logits` (named `seq_relationship_score` above)<|||||>@LysandreJik thanks for your reply! I referred to the loss computation, but the problem still exists: how do I print them or add them to TensorBoard in distributed training? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
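To make the suggestion in the comment above concrete, here is a minimal, hedged sketch that recomputes the two losses from the returned logits so they can be logged separately; the example sentences and the dummy `next_sentence_label` are placeholders, not part of the original thread:
```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import BertForPreTraining, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
next_sentence_label = torch.tensor([0])  # dummy NSP label for illustration only

outputs = model(**inputs, labels=labels, next_sentence_label=next_sentence_label)

loss_fct = CrossEntropyLoss()
mlm_loss = loss_fct(outputs.prediction_logits.view(-1, model.config.vocab_size), labels.view(-1))
nsp_loss = loss_fct(outputs.seq_relationship_logits.view(-1, 2), next_sentence_label.view(-1))
print(mlm_loss.item(), nsp_loss.item())  # log these two separately (e.g. to TensorBoard) instead of only their sum
```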
transformers
14,792
closed
Add Speech Seq2Seq Training script
# What does this PR do? An explanation of this new training script is given in the README.md. Two successful training runs can be seen here: https://huggingface.co/models?other=asr_seq2esq
12-16-2021 11:46:34
12-16-2021 11:46:34
Hmmm, not that easy the training it seems...will run some more examples next week with Wav2Vec2 - BERT<|||||>Getting good results now for wav2vec2 - bart: https://huggingface.co/patrickvonplaten/wav2vec2-2-bart-base<|||||>Results are good: https://huggingface.co/models?other=asr_seq2esq Cleaning up the PR and we can merge in 1,2 days<|||||>Merging this now. Very much agree on the naming issue @sgugger and thanks for reminding me again. Will open another PR for this later today.
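For context on how the warm-started models linked above are typically assembled, a hedged sketch of combining a Wav2Vec2 encoder with a BART decoder; the checkpoints and the manually set config values are illustrative, not necessarily the ones used in the linked runs:
```python
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

encoder_id, decoder_id = "facebook/wav2vec2-base", "facebook/bart-base"

feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)  # prepares raw audio for the encoder
tokenizer = AutoTokenizer.from_pretrained(decoder_id)                 # handles the target transcriptions
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)

# generation-related config has to be set by hand for a freshly combined model
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id
```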
transformers
14,791
closed
Eval rouge = 100.0 on gem/wiki_lingua_english_en
Hey everyone! I just tried the example code `transformers/examples/pytorch/summarization/run_summarization_no_trainer.py` but got 100.0 for eval rouge1, rouge2, and rougeL on a GEM dataset. Seems like there is something wrong here. Am I missing something? (Tagging @patil-suraj as it involves summarization) My script ``` accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name "gem" \ --dataset_config "wiki_lingua_english_en" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` Training log ``` 12/14/2021 18:51:04 - INFO - __main__ - ***** Running training ***** 12/14/2021 18:51:04 - INFO - __main__ - Num examples = 99020 12/14/2021 18:51:04 - INFO - __main__ - Num Epochs = 3 12/14/2021 18:51:04 - INFO - __main__ - Instantaneous batch size per device = 32 12/14/2021 18:51:04 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 128 12/14/2021 18:51:04 - INFO - __main__ - Gradient Accumulation steps = 1 12/14/2021 18:51:04 - INFO - __main__ - Total optimization steps = 2322 0%| | 0/2322 [00:00<?, ?it/s] 0%| | 1/2322 [00:00<19:52, 1.95it/s]12/14/2021 18:51:05 - INFO - root - Reducer buckets have been rebuilt in this iteration. 12/14/2021 18:51:05 - INFO - root - Reducer buckets have been rebuilt in this iteration. 12/14/2021 18:51:05 - INFO - root - Reducer buckets have been rebuilt in this iteration. 12/14/2021 18:51:05 - INFO - root - Reducer buckets have been rebuilt in this iteration. __main__ - {'rouge1': 100.0, 'rouge2': 100.0, 'rougeL': 100.0, 'rougeLsum': 100.0} Configuration saved in ./output_dir/config.json Model weights saved in ./output_dir/pytorch_model.bin tokenizer config file saved in ./output_dir/tokenizer_config.json Special tokens file saved in ./output_dir/special_tokens_map.json Copy vocab file to ./output_dir/spiece.model 100%|██████████| 2322/2322 [07:28<00:00, 5.18it/s] ```
12-16-2021 05:39:32
12-16-2021 05:39:32
Hello, thanks for opening an issue! For an additional chance of getting your question answered, could you ask your question on the [forum](https://discuss.huggingface.co) as well, to involve the broader community? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,790
closed
Post sphinx-clean up and contributing guide updates
# What does this PR do? This PR finishes cleaning up some old references to Sphinx in the setup and Makefile, and updates the contributing guide/docs README to explain to users how to build the docs with our new tool and how to write them. Fixes #14762
12-15-2021 20:31:09
12-15-2021 20:31:09
transformers
14,789
closed
Add tqdm to pipeline
# 🚀 Feature request Pipeline can process a list of inputs but doesn't print out progress. If the input list is large, it's difficult to tell whether the pipeline is running fine or gets stuck. Add tqdm to the generation loop to show progress. For example, if I add tqdm to /src/transformers/pipelines/base.py line 1086, then I can see text generation progress. But I don't know how to add such a feature to all paths. **from tqdm import tqdm** .... if self.framework == "pt": final_iterator = self.get_iterator( inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params ) outputs = [output for output in **tqdm**(final_iterator)] return outputs Generation progress would show in stdout: Process 0, Generate starts: Wed Dec 15 12:27:42 2021 2%|█▊ | 2/92 [00:17<13:01, 8.68s/it]
12-15-2021 20:22:44
12-15-2021 20:22:44
Yeah that would be quite nice! cc @Narsil <|||||>Hi @dunalduck0 , If you use a `Dataset` instead of a list, it will work. ```python dataset = MyDataset() for out in tqdm.tqdm(pipe(dataset)): print(out) ``` And converting a raw list to a dataset is relatively easy ```python class ListDataset(Dataset): def __init__(self, original_list): self.original_list = original_list def __len__(self): return len(self.original_list) def __getitem__(self, i): return self.original_list[i] ``` This helper could live in transformers pipelines utils for sure. The problem with doing what you suggest is: - tqdm needs a special command when running in notebooks (not something we should handle IMO) - `List` existed previously so we need to return a `list` for backward compatibility reasons. That means the pipeline needs to consume and recreate the list, meaning adding tqdm inside is not desirable IMHO (or supporting it by adding special arguments). My personal take would be to not break backward compatibility for lists, but encourage instead the use of an iterator, enabling user code to decide if they want something like `tqdm` or any other solution (also tweaking all the arguments of `tqdm`, like the name of the loop, and so on and so forth). The helper to transform the `list` into a `Dataset` is already something (and might cover other frameworks too, which is something we have in mind) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
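Putting the snippets from the comment above together, a runnable sketch; the gpt2 checkpoint and the dummy prompts are just placeholders:
```python
from torch.utils.data import Dataset
from tqdm import tqdm
from transformers import pipeline

class ListDataset(Dataset):
    """Wrap a raw list so the pipeline streams items lazily instead of consuming a list."""

    def __init__(self, original_list):
        self.original_list = original_list

    def __len__(self):
        return len(self.original_list)

    def __getitem__(self, i):
        return self.original_list[i]

pipe = pipeline("text-generation", model="gpt2")  # any pipeline works the same way
dataset = ListDataset(["Hello world", "The quick brown fox"] * 4)

# tqdm lives in user code, so its arguments (description, notebook mode, ...) stay configurable
for out in tqdm(pipe(dataset)):
    pass  # each `out` is produced one input at a time, so the progress bar reflects real progress
```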
transformers
14,788
closed
Fix the build documentation job
# What does this PR do? The last two releases were not automatically documented. That's fine for v4.14.0 since we yanked it, and I did a manual update for v4.14.1, but in the future it would be nice to not have to do this! The first fix is to make sure Transformers is installed from the current checked out branch and not the content of the master branch. The second fix is to run that job on branches with the pattern vxxx-release
12-15-2021 19:26:48
12-15-2021 19:26:48
transformers
14,787
closed
Move import to avoid circular import
# What does this PR do? Urgent fix to make the TensorFlow side of the library work again when ONNX is installed :-)
12-15-2021 18:33:37
12-15-2021 18:33:37
transformers
14,786
closed
Improve Perceiver docs
# What does this PR do? Some last-minute changes to beautify the Perceiver docs.
12-15-2021 16:53:50
12-15-2021 16:53:50
transformers
14,785
closed
Share custom pipelines on Huggingface Hub
# 🚀 Feature request Publish and share custom pipelines on Huggingface Hub. ## Motivation It seems that the models section on Huggingface Hub is currently useful only for use cases supported out-of-the-box by the huggingface/transformers library, e.g., for sequence classification, masked pretraining, and sequence-to-sequence classification (e.g., translation). However, if one has a different use case, such as target-dependent sentiment classification (TSC), it seems the Hub is not the right platform to publish the model, since neither the Inference API nor AutoModel.from_pretrained("somehuggingfacehuburltoamodel") will work out-of-the-box. For example, in the case of TSC, the input consists of two parts: a sentence and a target phrase within it, where the latter is typically expressed as two char-based indexes. For example, an input might be ("I like Bert but I hate Robert", 7, 11), and the model should then identify the sentiment of this sentence towards the target "Bert" (defined by the indexes 7 and 11). It seems to me that [pipelines](https://huggingface.co/docs/transformers/master/add_new_pipeline) are a solution to this, where one could define how to preprocess such inputs and then correspondingly postprocess the model's outputs. However, I could not find a way to publish one's pipelines. This in turn makes Huggingface Hub currently not really usable for any use case different from those supported by the Huggingface Transformers library. ## Your contribution This issue seems to be more related to the huggingface hub, so it would be interesting to see your thoughts first. If changes need to be made in the transformers library, I'm glad to talk about the necessary changes and how I can help :)
12-15-2021 16:23:11
12-15-2021 16:23:11
cc @osanseviero and @LysandreJik!<|||||>Hey @fhamborg! That would be a super cool complement to @sgugger's great work on making models/tokenizers/configurations accessible from the hub!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
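To make the TSC example in the issue above more concrete, a hypothetical sketch of what such a custom pipeline could look like with the `add_new_pipeline` API referenced there; the class name, the way the (sentence, target) pair is encoded, and the sentiment checkpoint are all assumptions, and the base-class plumbing may differ slightly between versions:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Pipeline

class TargetSentimentPipeline(Pipeline):
    """Hypothetical target-dependent sentiment pipeline following the add_new_pipeline docs linked above."""

    def _sanitize_parameters(self, **kwargs):
        return {}, {}, {}

    def preprocess(self, inputs):
        sentence, start, end = inputs
        target = sentence[start:end]
        # feeding the (sentence, target) pair as a text pair is an assumption of this sketch
        return self.tokenizer(sentence, target, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        probs = model_outputs.logits.softmax(-1)[0]
        idx = int(probs.argmax())
        return {"label": self.model.config.id2label[idx], "score": float(probs[idx])}

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder, not an actual TSC model
pipe = TargetSentimentPipeline(
    model=AutoModelForSequenceClassification.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint),
    framework="pt",
)
print(pipe(("I like Bert but I hate Robert", 7, 11)))
```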
transformers
14,784
closed
[Generate] Make generate multi-modal
# What does this PR do? This PR makes `generate()` multi-modal by no longer assuming that the input to the encoder is of type `input_ids`. While doing so, some refactoring is done to make `generate()` cleaner. To review the changes in detail, it's probably best to take a look at the new `generate()` function as well as the diff.
12-15-2021 16:16:34
12-15-2021 16:16:34
Also tested on slow tests of: - T5 - BART - GPT2 - RAG<|||||>@Narsil it would be amazing if you could take a closer look here<|||||>Also cc @ydshieh for info<|||||>Can we also take care of the 2 hacks that were included: - remove `attention_mask` from `modeling_vit.py` and `modeling_deit.py` (this was added to make `VisionEncoderDecoderModel` work with `generate`) - remove the hack in `Seq2SeqTrainer` [here](https://github.com/huggingface/transformers/blob/8010fda9bfbac3f4860e15bdb476c63c6cf2ce81/src/transformers/trainer_seq2seq.py#L164)<|||||>> Can we also take care of the 2 hacks that were included: > > * remove `attention_mask` from `modeling_vit.py` and `modeling_deit.py` (this was added to make `VisionEncoderDecoderModel` work with `generate`) > * remove the hack in `Seq2SeqTrainer` [here](https://github.com/huggingface/transformers/blob/8010fda9bfbac3f4860e15bdb476c63c6cf2ce81/src/transformers/trainer_seq2seq.py#L164) We should tackle them in a follow-up PR: - remove `attention_mask` is completely orthogonal to this PR - the `Seq2SeqTrainer` hack can be removed after this PR, but I don't want to touch both `generate` and `Trainer` here. EDIT: BTW the remove `attention_mask` will be made trivial once we add `model_input_names` to the model architectures. So another reason to add those @sgugger ;-)<|||||>Merging this PR now! I will open two new PRs: 1.) Deprecate `imageGPT` input arg to `input_ids` 2.) Remove the Trainer hack Once the `model_input_names` are added as a class variable to all model architectures (@sgugger - happy to take over this PR next week), we can remove the `attention_mask` hack in the image models as well
transformers
14,783
closed
Update Perceiver code examples
# What does this PR do? This PR: - fixes the code examples of `PerceiverModel` and `PerceiverForMaskedLM` - adds a link to the Perceiver blog post. Fixes #14775
12-15-2021 15:18:58
12-15-2021 15:18:58
transformers
14,782
closed
Support Tensorflow tensors as input in tokenizers - [Ongoing]
# 🚀 Feature request Currently, huggingface tokenizers support as input either a list of strings or strings. It would be great if the tokenizers could support `Tensorflow tensor of strings` or `Keras.layers.Input` as input also. ## Motivation When working with end-to-end TensorFlow model, and especially with TensorFlow Reusable SavedModels, I need to be able to bundle the tokenizer and model in one model/class. The data the model is passed is in a TensorFlow format similar to the following example https://www.tensorflow.org/text/tutorials/classify_text_with_bert#define_your_model and my models are defined in a similar format. My current problem is that I need to be able to both trace the model and pass a tensor as input. ## Your contribution I would love to know if it is at all feasible to add support for tokenizers to take tf.tensor as input. If so, where those features should be added? I would gladly help to contribute to this feature in that case. Or if it is not feasible, if there could be possible to map existing huggingface tokenizers to TensorFlow-text tokenizers, would that work, and if so, would that be something that would be interesting to have in huggingface? @Rocketknight1
12-15-2021 14:32:10
12-15-2021 14:32:10
Hi @MarkusSagen, this is an interesting issue! Our fast tokenizers are implemented in Rust, and tracing that will be impossible. However, there is usually a slower pure-Python version as well. It may be possible to trace these as a Tensorflow graph, but I've never done it myself. The tokenizers are designed for cross-framework compatibility, so I don't think we'd be able to support a special TF-compiled version of all of them because it's quite niche (even most TF users don't need it!) In addition, many of our models have their own unique tokenizers and quirks, and I don't think there'll be a clean pure-TF solution that works for them all. Still, I believe it should be possible to do this for a specific model you're interested in, and in particular it should be possible for BERT and BERT-like models, because of the [tf.text.BertTokenizer](https://www.tensorflow.org/text/api_docs/python/text/BertTokenizer) class. You could try extracting the vocabulary from one of our tokenizers, creating a `tf.text.BertTokenizer`, and then checking to see if you got the same output from it as you get from our tokenizers on some sample strings. If so, then you could create a model that uses the `BertTokenizer` as the first layer and then one of our models as the subsequent layer. If you attempt it, feel free to ping me here with any issues you run into. We probably can't rewrite our tokenizers to support this out of the box, but if you get it working we think it'd make a great addition to our [tutorial notebooks](https://huggingface.co/docs/transformers/notebooks)!<|||||>Thank you for getting back so quickly on this! Good to know, I assumed that that might be the only way to get the tokenizer to except tensors as input. I will test it out and get back to you if there are any issues<|||||>Hi again @Rocketknight1! I started working on this issue today and seems very promising so far, at least for BERT tokenizers. I'll keep you updated on the progress and add a notebook when everything works well for the BERT tokenizer and have been tested https://github.com/Hugging-Face-Supporter/TFTokenizers<|||||>The problem isn't so much tracing but that the input can't be a Tensorflow Tensor. Even though the tokenizers can output TF Tensors. I would be happy if I could map the tokenizer over a tf.data.Dataset before feeding it to a model. The tokenizer is deterministic at this point anyway. EDIT: Eh, HuggingFace Datasets actually reads parquet files. That's so much of an advantage that I'm not terribly concerned with it. It would be nice though since the batch padding would then happen in the model instead of in the conversion to TF Dataset.<|||||>hey hi any update on this
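Building on the suggestion in the first comment above, a hedged sketch of pairing a Hugging Face vocabulary with `tf.text.BertTokenizer` so tokenization can accept TensorFlow string tensors inside a graph; note it does not add special tokens or padding, and the equivalence with the fast tokenizer should be verified on sample strings, as suggested:
```python
import tensorflow as tf
import tensorflow_text as tf_text
from transformers import BertTokenizerFast

hf_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
vocab_file = hf_tokenizer.save_vocabulary(".")[0]  # writes vocab.txt and returns its path

tf_tokenizer = tf_text.BertTokenizer(vocab_file, lower_case=True)
token_ids = tf_tokenizer.tokenize(tf.constant(["The capital of France is Paris."]))
token_ids = token_ids.merge_dims(-2, -1)  # collapse the word / word-piece dimensions

# compare against the fast tokenizer (ignoring [CLS]/[SEP], which the TF tokenizer does not add)
print(token_ids.to_list())
print(hf_tokenizer("The capital of France is Paris.", add_special_tokens=False)["input_ids"])
```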
transformers
14,781
closed
Removes images to put them in a dataset
This PR removes all images to put them in a dataset, and completes the instructions to do so.
12-15-2021 14:13:22
12-15-2021 14:13:22
transformers
14,780
closed
Fix the value error typo of AdamW's betas' valid values checking
# What does this PR do? Fixes the value error typo of AdamW's betas' valid values checking(`raise ValueError()`) `"should be in [0.0, 1.0["` -> `"should be in [0.0, 1.0]"`
12-15-2021 11:47:57
12-15-2021 11:47:57
> The test asserts that betas[0] < 1.0 so it seems like the current error is correct. Thanks for your reply. I mean, that there is a incorrect format of the interval indicates, as shown below: > if I initialize a AdamW optimizer with error betas' values like [0.1, 2], there would raise a ValueError like this: > ![image](https://user-images.githubusercontent.com/16441055/146630371-b295583f-36d6-417b-b700-b1b481dc9002.png) It seems like a incorrect format of the interval indicates. The correct format may should be `Invalid beta parameter: 2 - should be in [0.0, 1.0]`, but the current version is `Invalid beta parameter: 2 - should be in [0.0, 1.0[`. Is my understanding correct? And after fixed, the error is shown as follows: > ![image](https://user-images.githubusercontent.com/16441055/146630632-c9c4c9a5-7168-49f6-be29-55e64104ea28.png) <|||||>No, both beta1 and beta2 are tests to be `< 1.0`, not `<= 1.0`, so the upper bound needs to be excluded. If you put 1.0 as a value, you will get the error.<|||||>> No, both beta1 and beta2 are tests to be `< 1.0`, not `<= 1.0`, so the upper bound needs to be excluded. If you put 1.0 as a value, you will get the error. Yeah you are right. I made a mistake on this. Besides, my question is the error message `should be in [0.0, 1.0[` is a correct expression form of `[0.0, 1.0)` or a spelling mistake? I see that the expression form in original paper of Adam is: ![image](https://user-images.githubusercontent.com/16441055/146893762-7fb15fd7-262d-4b18-a2d0-0f29ed7b1d81.png) If it's a spelling mistake, I submit a new commit, please have a look, thanks a lot.<|||||>It's the European (or at least French, not sure about all the neighbors!) way of writing an interval without the upper bound, yours is the American way ;-) It's more inline with the paper so let's take yours.
transformers
14,779
closed
Add custom `stopping_criteria` and `logits_processor` to `generate`
# What does this PR do? This PR continues the work and discussions from #12219 with a fresh start. It integrates the custom `stopping_criteria` and `logits_processor` with the following logic: - the default `stopping_criteria`/`logits_processor` are created from the arguments and model's config - if additional, custom `stopping_criteria` and `logits_processor` are passed to `generate`, they are compared with the default list - if there is an overlap between the two lists (e.g. a `MaxLengthCriteria` in both lists) an error is thrown - if there is no overlap the two lists are merged Fixes #12118
12-15-2021 11:28:36
12-15-2021 11:28:36
@Narsil thanks for your feedback and catching that copy-paste error! I added something to the docstring and fixed the error. @patrickvonplaten any comments?
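A minimal sketch of what the merging behaviour described in the PR above enables; the processor and criterion chosen here deliberately avoid overlapping with the defaults (per the PR, duplicating a default such as a second `MaxLengthCriteria` raises an error), and the gpt2 checkpoint and prompt are only placeholders:
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    MaxTimeCriteria,
    NoRepeatNGramLogitsProcessor,
    StoppingCriteriaList,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# the custom lists below are merged with the defaults built from the arguments and the model config
output_ids = model.generate(
    **inputs,
    logits_processor=LogitsProcessorList([NoRepeatNGramLogitsProcessor(2)]),
    stopping_criteria=StoppingCriteriaList([MaxTimeCriteria(max_time=10.0)]),
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```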
transformers
14,778
closed
Nan when training LayoutLM_V2 Model
## Environment info - `transformers` version: 4.13.0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU) : 1.10.0+cu111 - Tensorflow version (GPU): 2.7.0 - Flax version: not installed - Jax version: not installed - JaxLib version: not installed ### Who can help @NielsRogge ## Information The model used is LayoutLMv2: The problem arises when using: * [x] my own modified scripts: The tasks I am working on is: * [x] Document Classification ## To reproduce Steps to reproduce the behavior: 1. Access the Colab notebook created to train LayoutLMv2 (https://colab.research.google.com/drive/1u4xfrP2tWqMpgnoh8ciuUHeW56ppguse?usp=sharing) 2. Execute every cell in order 3. In the training loop, the accuracy, loss, and output will be printed, and at some point the output, accuracy, and loss become NaN. ## Expected behavior The model trains and, whether or not it accomplishes its task, the training loop ends without any NaN.
12-15-2021 10:50:16
12-15-2021 10:50:16
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,777
closed
Added forward pass of test_inference_image_classification_head
# What does this PR do? Added forward pass of test_inference_image_classification_head with torch.no_grad() Addresses : #14642
12-15-2021 08:02:31
12-15-2021 08:02:31
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,776
closed
Simplify T5 docs
Closes https://github.com/huggingface/transformers/issues/14731 @NielsRogge
12-15-2021 04:11:42
12-15-2021 04:11:42
transformers
14,775
closed
Errors in running Perceiver example with transformers-4.14.0.dev0
Python 3.8, torch-1.7.1, transformers-4.14.0.dev0. Errors in running the example on https://huggingface.co/docs/transformers/model_doc/perceiver

```python
from transformers import PerceiverTokenizer, PerceiverForMaskedLM
import torch

tokenizer = PerceiverTokenizer.from_pretrained('deepmind/language-perceiver')
model = PerceiverForMaskedLM.from_pretrained('deepmind/language-perceiver')

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
```

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/software/install/miniconda38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/software/install/miniconda38/lib/python3.8/site-packages/transformers/models/perceiver/modeling_perceiver.py", line 950, in forward
    masked_lm_loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
  File "/opt/software/install/miniconda38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/software/install/miniconda38/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 961, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/opt/software/install/miniconda38/lib/python3.8/site-packages/torch/nn/functional.py", line 2468, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/opt/software/install/miniconda38/lib/python3.8/site-packages/torch/nn/functional.py", line 2261, in nll_loss
    raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
ValueError: Expected input batch_size (2048) to match target batch_size (33).
```
12-15-2021 03:50:13
12-15-2021 03:50:13
Hi, Thanks for your interest in Perceiver! The reason you're getting an error is because the `logits` that come out of the model have a sequence length of 2048, as the decoder of `PerceiverForMaskedLM` defines a sequence length of 2048 which you can see [here](https://github.com/huggingface/transformers/blob/a94105f95fb66ee4129077c03e4e8a224f6a07fd/src/transformers/models/perceiver/modeling_perceiver.py#L888). It means that 2048 trainable position embeddings are used to decode the final hidden states of the latents into language modeling predictions. Perceiver was trained with a max sequence length of 2048 bytes, hence it's advised to follow the same regime:

```python
from transformers import PerceiverTokenizer, PerceiverForMaskedLM
import torch

tokenizer = PerceiverTokenizer.from_pretrained('deepmind/language-perceiver')
model = PerceiverForMaskedLM.from_pretrained('deepmind/language-perceiver')

inputs = tokenizer("The capital of France is [MASK].", padding="max_length", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", padding="max_length", return_tensors="pt")["input_ids"]
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
```

We'll update the code examples.
transformers
14,774
closed
Fix the doc_build_test job
# What does this PR do? This PR fixes the `build_doc_test` job by installing Transformers from the branch instead of master. This then triggers a problem if we do an editable install, because the `pull-request` environment of GitHub Actions is read-only, so a Python package cannot be installed there in editable mode. Thankfully just removing the `-e` solves the issue!
12-15-2021 03:07:31
12-15-2021 03:07:31
transformers
14,773
closed
Failed to import transformers.trainer
## Environment info
- `transformers` version: '4.13.0'
- Platform: Linux
- Python version: 3.8.12
- PyTorch version (GPU?): '1.10.0+cu113'
- I use a tensorrt docker `docker pull nvcr.io/nvidia/pytorch:21.11-py3`

### Who can help
@sgugger

## Information
Model I am using (Bert, XLNet ...):

The problem arises when using:
* the official example scripts: (give details below)

The tasks I am working on is:
* my own task or dataset: (give details below)

```py
from transformers import Trainer
```

## To reproduce
Steps to reproduce the behavior:
1. make python file
```py
from transformers import Trainer
```
2. run it

## Expected behavior
```
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2281, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/opt/conda/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 147, in <module>
    from apex import amp
  File "/opt/conda/lib/python3.8/site-packages/apex/__init__.py", line 20, in <module>
    from . import transformer
  File "/opt/conda/lib/python3.8/site-packages/apex/transformer/__init__.py", line 3, in <module>
    from apex.transformer import pipeline_parallel
  File "/opt/conda/lib/python3.8/site-packages/apex/transformer/pipeline_parallel/__init__.py", line 1, in <module>
    from apex.transformer.pipeline_parallel.schedules import get_forward_backward_func
  File "/opt/conda/lib/python3.8/site-packages/apex/transformer/pipeline_parallel/schedules/__init__.py", line 4, in <module>
    from apex.transformer.pipeline_parallel.utils import get_num_microbatches
  File "/opt/conda/lib/python3.8/site-packages/apex/transformer/pipeline_parallel/utils.py", line 23, in <module>
    import amp_C
ImportError: /opt/conda/lib/python3.8/site-packages/amp_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "trt_infer.py", line 2, in <module>
    from transformers import Trainer
  File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
  File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2271, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2283, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
/opt/conda/lib/python3.8/site-packages/amp_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
```
12-15-2021 02:18:57
12-15-2021 02:18:57
Seems like there is a problem in your docker container with the apex library, as the error actually comes when trying to do `from apex import amp`. The error will disappear if you uninstall `apex`, or fix the install if you wanted to use it.<|||||>solved! I made a docker like here: https://github.com/boostcampaitech2/final-project-level3-nlp-09/commit/f27b7a0dcaa1f6bb0de0944731f92b2e6e841527
transformers
14,772
closed
A bug in T5LayerNorm
## Environment info
- `transformers` version: 4.13.0
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:

### Who can help
@patrickvonplaten

## Information
Model I am using (Bert, XLNet ...): T5

There is a bug in the implementation of T5LayerNorm. The term that subtracts the mean of a tensor is omitted when calculating the variance of a tensor.

## To reproduce
Steps to reproduce the behavior:

Here's the current code: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py#L231

In the implementation of the forward method, the variance is calculated without subtracting the mean of a tensor.

```python
def forward(self, hidden_states):
    # layer norm should always be calculated in float32
    variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
```

## Expected behavior
The variance of a tensor should be calculated correctly.
12-15-2021 01:50:03
12-15-2021 01:50:03
Hey @jk-jung, The naming might be a bit off here, but assuming that the mean is 0 this is actually the correct formula ;-)
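For readers puzzled by the reply: T5's `T5LayerNorm` is an RMSNorm-style normalization that deliberately skips mean subtraction and has no bias, so the quantity named `variance` is really a mean of squares. A minimal sketch of what the layer computes (simplified; the real module also handles half-precision casting):

```python
import torch

def t5_style_layer_norm(hidden_states, weight, eps=1e-6):
    # Root-mean-square normalization: no mean subtraction and no bias term.
    variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
    hidden_states = hidden_states * torch.rsqrt(variance + eps)
    return weight * hidden_states

x = torch.randn(2, 4, 8)
w = torch.ones(8)
print(t5_style_layer_norm(x, w).shape)  # torch.Size([2, 4, 8])
```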
transformers
14,771
closed
Update Using AWS Inferentia to run HuggingFace TorchScript model
# What does this PR do? This PR appended a new section "Using HuggingFace TorchScript model in AWS Inf1 using Neuron SDK" to the TorchScript section of the HuggingFace transformer documentation. Link to issue: https://github.com/huggingface/transformers/issues/14425 ## Before submitting - [x] This PR fixes a typo or improves the docs. - [X] Did you read the contributor guideline, Pull Request section? - [X] Was this discussed/approved via a Github issue or the forum? - [X] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests? ## Who can review? @philschmid, @LysandreJik Anyone in the community is free to review the PR once the tests have passed.
12-15-2021 00:23:07
12-15-2021 00:23:07
This is a PR for updating TorchScript section with guidance on how to use AWS Inf1 instances to run HuggingFace transformer models. @philschmid, @LysandreJik I look forward to your reviews. <|||||>@kct22aws could you rebase your branch with the master so only your changes are shown? An in addition the `check_code_quality` CI is failing. Could you run `make style` to fix this? <|||||>> Hi @philschmid I committed again after a rebase, and also passed the style and format checks. <|||||>Hi @philschmid all your suggestions are implemented and commit passed checks. <|||||>Hi @philschmid @LysandreJik all reference links to BERT and BERT models are updated and should be consistent now. <|||||>Thank you @kct22aws. Could you rebase and resolve the conflicts one last time then we could merge it.<|||||>Hi Philipp, I rebased and checkout a new branch that is consistent with current main branch. My changes are now in serialization.mdx. A new PR is in https://github.com/huggingface/transformers/pull/14982 Hope this works. Regards, KC From: Philipp Schmid ***@***.***> Reply-To: huggingface/transformers ***@***.***> Date: Wednesday, December 29, 2021 at 3:29 AM To: huggingface/transformers ***@***.***> Cc: "Tung, KC" ***@***.***>, Mention ***@***.***> Subject: Re: [huggingface/transformers] Update Using AWS Inferentia to run HuggingFace TorchScript model (PR #14771) Thank you @kct22aws<https://github.com/kct22aws>. Could you rebase and resolve the conflicts one last time then we could merge it. — Reply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/pull/14771#issuecomment-1002480962>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AU3SAP7GP66DJNRYY63SWQLUTLIFNANCNFSM5KCJLPKQ>. Triage notifications on the go with GitHub Mobile for iOS<https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675> or Android<https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>. You are receiving this because you were mentioned.Message ID: ***@***.***> <|||||>* https://github.com/huggingface/transformers/pull/14982 is merged
transformers
14,770
closed
Adding new tokens to various models changes tokenization of adjacent elements in strings
## Environment info
- `transformers` version: 4.13.0
- Platform: Windows-10-10.0.19043-SP0
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help
@LysandreJik @SaulLu

## Information
Models I am using: DistilBERT, BERT, RoBERTa

The problem arises when using:
* my own modified scripts: (see below)

The tasks I am working on is:
* my own task or dataset: (see below)

## To reproduce
When adding a new token to various models (so far found with DistilBERT, BERT, and RoBERTa), adding a new token using the `add_tokens` function changes how adjacent parts of the string are tokenized in subtle ways (for DistilBERT and BERT, this might depend on `do_basic_tokenize` being set to `False` when creating the tokenizer, at least in the examples I've found). (This might be related to the issue reported in https://github.com/huggingface/transformers/issues/11531, but that one specifically mentions T5.) See the code below for details. This doesn't seem like intended behavior based on what I can tell from looking at the documentation, but it's possible I'm misunderstanding something about the right way to add new tokens to produce the behavior I'd like. (Currently, to get the expected behavior, I've had to manually modify the vocab (+ merges file for RoBERTa), using additional scripting, and load the tokenizer from the modified files. If it'd be of use, I could post the code for that workaround here, but I've left it out for now since it's a bit long and may not be relevant.)

Steps to reproduce the behavior:

(Distil)BERT:
```python
from transformers import DistilBertTokenizer, BertTokenizer

new_word = 'mynewword'

# BERT
bt = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize=False)
bt.tokenize('mynewword')  # verify the new word doesn't yet exist
# ['my', '##ne', '##w', '##word']
bt.tokenize('testing.')
# ['testing', '##.'] (note that the period is tokenized as '##.')

bt.add_tokens(new_word)
bt.tokenize('mynewword')  # verify the new token now exists
# ['mynewword']
bt.tokenize('mynewword.')
# ['mynewword', '.'] (note that the period is tokenized as '.' rather than the expected '##.')

# DistilBERT
dbt = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', do_basic_tokenize=False)
dbt.tokenize('mynewword')
# ['my', '##ne', '##w', '##word']
dbt.tokenize('testing.')
# ['testing', '##.']

dbt.add_tokens(new_word)
dbt.tokenize('mynewword')
# ['mynewword']
dbt.tokenize('mynewword.')
# ['mynewword', '.'] (expected: ['mynewword', '##.'])
```

RoBERTa:
```python
from transformers import RobertaTokenizer

new_word = 'mynewword'
rt = RobertaTokenizer.from_pretrained('roberta-base')
rt.tokenize('mynewword')  # verify the new word doesn't yet exist
# ['my', 'new', 'word']
rt.tokenize('A testing a')
# ['A', 'Ġtesting', 'Ġa'] (note that the final token includes a preceding 'Ġ')

rt.add_tokens(new_word)
rt.tokenize('mynewword')  # verify the new token was added
# ['mynewword']
rt.tokenize('A mynewword a')
# ['A', 'mynewword', 'a'] (note that the final token lacks a 'Ġ')
```

## Expected behavior
Adding a token to a tokenizer should not affect tokenization of adjacent elements (when these are not part of the added token).
12-14-2021 19:48:25
12-14-2021 19:48:25
## Environment
* Python version: 3.6.13
* Platform: Ubuntu 18.04.5 LTS
* `PyTorch` version (GPU?): 1.10.1 (True)
* `transformers` version: 3.3.1
* Flax version (CPU?/GPU?/TPU?): not installed (NA)
* Jax version: not installed
* JaxLib version: not installed
* Using GPU in script?: no
* Using distributed or parallel set-up in script?: no

Hello! I have noticed the same with transformers (v3.3.1) with the BartTokenizer. Tokenization behavior on some existing words changes after adding new tokens, and the Ġ prefix disappears as well.<|||||>Thank you very much for the detailed issue, unfortunately it seems to us that there is no simple way to add tokens in the way you describe. Currently the added tokens are not added to the vocabulary of the tokenization model - here WordPiece - but are preserved from the beginning of the tokenization - no matter which tokenization model is used afterwards. To put it simply, if you added the `'mynewword'` token, then the first thing your tokenizer will do when you ask it to tokenize the example `This is a example with mynewwords token inside` is pre-tokenize it like this: `["This is a example with ", "mynewword", "s token inside"]`, and then the tokenization model will be applied to `"This is a example with "` and `"s token inside"`. However, if you see how an easy solution could be implemented, we would be happy to discuss it!<|||||>Hi, I've encountered a similar problem specifically when adding a new token `t_new` which is a prefix of an existing token `t_old`. This is very counterintuitive as (1) `t_old` is in the vocabulary but is now tokenized into multiple sub-words and (2) these sub-words are actually not pieces, but are rather treated as completely different tokens (as you mentioned, it happens in the pre-tokenize phase). Putting aside a solution, maybe a warning message should be added in `tokenizer.add_tokens` when tokenization changes for some in-vocabulary word? This could be a simple and quite general way of letting the user know that something could go "weird" when tokenizing an in-vocabulary word. Example code:

```python
tokenizer = AutoTokenizer.from_pretrained(...)
new_tokens = [...]

vocab_before_add = list(tokenizer.vocab)
vocab_tokenization_before_add = [tuple(tokenizer.tokenize(w)) for w in vocab_before_add]

tokenizer.add_tokens(new_tokens)
vocab_tokenization_after_add = (tuple(tokenizer.tokenize(w)) for w in vocab_before_add)

in_vocab_tokens_changed = [
    (w, before, after)
    for w, before, after in zip(vocab_before_add, vocab_tokenization_before_add, vocab_tokenization_after_add)
    if before != after
]
```

Thanks!<|||||>cc @ArthurZucker <|||||>Hey! Thanks for reporting. I'll have a look when I can.<|||||>Regarding the original issue as well as the second issue, it appears that a specific parameter exists to prevent the tokenizer from matching the new token in the middle of words: `single_word: Whether this token must be a single word or can break words`. However, after testing a bit, this parameter does not seem to take effect when asked, which indeed greatly changes the behavior. Also, regarding the spaces before and after, `rstrip` and `lstrip` are also supposed to control the space before and after.
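For anyone landing here, a minimal sketch of how those flags are passed (assuming a recent transformers version; as the previous reply notes, their effect should be verified on your tokenizer and version):

```python
from transformers import AddedToken, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Ask for the new token to only match whole words and to leave surrounding
# spaces untouched. Per the discussion above, double-check the effect on your
# tokenizer -- it may not apply everywhere.
tokenizer.add_tokens([AddedToken("mynewword", single_word=True, lstrip=False, rstrip=False)])

print(tokenizer.tokenize("A mynewword a"))
print(tokenizer.tokenize("notmynewword"))  # with single_word=True the added token should not be split out here
```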
transformers
14,769
closed
Fix preprocess_function in run_summarization_flax.py
# What does this PR do? `run_summarization_flax.py` has https://github.com/huggingface/transformers/blob/e7ed7ffdcb66c78d3437ed4c3a63c3640f50f436/examples/flax/summarization/run_summarization_flax.py#L535-L537 Using `jnp.array` here will cause `preprocess_function` to hang forever when it is used by `datasets.Dataset.map()` with `num_proc > 1`, when this script is running on a TPU VM. I think it is related to #12719 and #12720 ## Who can review? @patil-suraj @patrickvonplaten
12-14-2021 17:17:18
12-14-2021 17:17:18
@patil-suraj - could you take this one? :-)
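For reference, a sketch of the kind of fix (not necessarily the exact patch that was merged): keep the label shifting in plain NumPy so that the `Dataset.map` worker processes never touch JAX/TPU state, and only convert to `jnp` arrays inside the jitted train step.

```python
import numpy as np

def shift_tokens_right(input_ids: np.ndarray, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
    """NumPy-only version of the shift, safe to call inside Dataset.map workers."""
    shifted = np.zeros_like(input_ids)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # replace any -100 label padding with the pad token
    return np.where(shifted == -100, pad_token_id, shifted)

labels = np.array([[5, 6, 7, 1], [8, 9, 1, 1]])
print(shift_tokens_right(labels, pad_token_id=1, decoder_start_token_id=0))
```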
transformers
14,768
closed
Length penalty for beam search
Hey everyone! I want to reproduce T5's result on the CNN/DM summarization task. As described in the paper, T5 uses beam search with a beam width of 4 and a length penalty of α = 0.6 ([Wu et al., 2016](https://arxiv.org/abs/1609.08144)). However, I couldn't find a specific argument to set the length penalty α (which is a scaling factor for brevity penalty). Is it available somewhere?
12-14-2021 15:45:30
12-14-2021 15:45:30
Tagging @patil-suraj as it involves summarization.<|||||>Hi @tuvuumass , there's the `length_penalty` argument in the `generate` method, which you can use for this. https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.generation_tf_utils.TFGenerationMixin.generate.length_penalty<|||||>@patil-suraj: Great, thanks Suraj! Is it also available for a pre-trained Pytorch model, e.g., BART?<|||||>it's in the `generate` method, which is available for all causal and seq2seq models, so yes for BART as well.<|||||>Great, thanks!
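A minimal sketch of how that would look for a seq2seq checkpoint (the model here is just an example; note that `length_penalty` in `generate` is applied as an exponent on the sequence length in the beam score, which is not exactly the brevity-penalty formula of Wu et al.):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

article = "summarize: The quick brown fox jumped over the lazy dog in a park near the river on a sunny afternoon."
inputs = tokenizer(article, return_tensors="pt")

summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,         # beam width used in the T5 paper
    length_penalty=0.6,  # length scaling of the beam score
    max_length=40,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

The same keyword arguments work for BART or any other seq2seq model through `model.generate`.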
transformers
14,767
closed
Is there a way to batch examples by max number of tokens using tokenizers and datasets?
I want to batch my dataset ("super_glue") by the number of tokens instead of the number of sentences. Does this library have any Sampler or Collater that can do this while using a huggingface tokenizer and a huggingface dataset?
12-14-2021 15:08:20
12-14-2021 15:08:20
Yes, see this answer on our forum: https://discuss.huggingface.co/t/are-dynamic-padding-and-smart-batching-in-the-library/10404/4<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for bumping this up -- I had the same question but I think the page linked here is answering a different question. Let me ask again differently: is there a [fairseq `--max-tokens` counterpart](https://github.com/facebookresearch/fairseq/issues/165) I can use in huggingface?
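For anyone looking for a fairseq-style `--max-tokens` batcher, here is a rough sketch (names like `tokenized_dataset` and `data_collator` are placeholders, not library APIs): sort indices by length, greedily pack them until the padded batch would exceed the token budget, and feed the resulting index lists to a `DataLoader` as a `batch_sampler`.

```python
from torch.utils.data import DataLoader

def max_token_batches(lengths, max_tokens=4096):
    # lengths[i] is the tokenized length of example i
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches, batch, longest = [], [], 0
    for idx in order:
        longest = max(longest, lengths[idx])
        # padded size would be longest * (current batch size + 1)
        if batch and longest * (len(batch) + 1) > max_tokens:
            batches.append(batch)
            batch, longest = [], lengths[idx]
        batch.append(idx)
    if batch:
        batches.append(batch)
    return batches

# Usage with a tokenized dataset and a data collator (both assumed to exist):
# batches = max_token_batches([len(x) for x in tokenized_dataset["input_ids"]])
# loader = DataLoader(tokenized_dataset, batch_sampler=batches, collate_fn=data_collator)
```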
transformers
14,766
closed
Nan when training LayoutLM_V2 Model
## Environment info
- `transformers` version: 4.13.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU): 1.10.0+cu111
- Tensorflow version (GPU): 2.7.0
- Flax version: not installed
- Jax version: not installed
- JaxLib version: not installed

### Who can help
@NielsRogge

## Information
The model used is LayoutLMv2.

The problem arises when using:
* [x] my own modified scripts

The task I am working on is:
* [x] document streaming segmentation

In my script, I try to determine when a new document starts, with the objective of dividing a stream of folders into segments so that each segment can be interpreted as an independent document.

## To reproduce
Steps to reproduce the behavior:
1. Access the colab notebook created to train LayoutLMv2 (https://colab.research.google.com/drive/1MsEkj_WlGYDOs3vFcm1JxmMNLWj_Se78?usp=sharing)
2. Execute every cell in order
3. In the training loop, accuracy, loss, and output are printed; at some point the output, accuracy and loss all become NaN.

## Expected behavior
The model trains and, whether or not it accomplishes its task, the training loop finishes without any NaN values.
12-14-2021 14:34:44
12-14-2021 14:34:44
transformers
14,765
closed
Adding documentation on how to overload `SacreBLEU` arguments
in the example # What does this PR do? Fixes #14758 ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
12-14-2021 13:47:27
12-14-2021 13:47:27
This seems like a very niche use case so let's not clutter the example README with it. We can add a comment in the actual script if we feel it's really necessary, but we already say everywhere that those examples are **just examples** and that users should customize them to their needs.<|||||>I am perfectly fine not adding that. I will drop that PR then. (People will most likely be referred to the original issue on google I guess)
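For reference, the gist of what this PR would have documented (a sketch; `preds` and `refs` below are placeholders): the `sacrebleu` metric loaded through `datasets` forwards keyword arguments such as `tokenize` or `smooth_method` to SacreBLEU, so they can be overridden at `compute` time instead of editing the example script.

```python
from datasets import load_metric

metric = load_metric("sacrebleu")

preds = ["the cat sat on the mat"]
refs = [["the cat is sitting on the mat"]]

# Override SacreBLEU arguments at compute time, e.g. use the international tokenizer:
result = metric.compute(predictions=preds, references=refs, tokenize="intl")
print(round(result["score"], 2))
```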
transformers
14,764
closed
finetune_wav2vec2_xlsr_turkish.sh can not find wav
Hi @patrickvonplaten, when using `transformers/examples/research_projects/wav2vec2/finetune_wav2vec2_xlsr_turkish.sh` I got this error:

```
0%| | 0/3478 [00:00<?, ?ex/s]formats: can't open input file `common_voice_tr_17346025.mp3': No such file or directory
0%| | 0/3478 [00:00<?, ?ex/s]
Traceback (most recent call last):
  File "run_common_voice.py", line 506, in <module>
    main()
  File "run_common_voice.py", line 394, in main
    train_dataset = train_dataset.map(
  File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py", line 2018, in map
    return self._map_single(
  File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py", line 518, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py", line 485, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/datasets/fingerprint.py", line 411, in wrapper
    out = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py", line 2368, in _map_single
    example = apply_function_on_filtered_inputs(example, i, offset=offset)
  File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py", line 1978, in decorated
    result = f(decorated_item, *args, **kwargs)
  File "run_common_voice.py", line 388, in speech_file_to_array_fn
    speech_array, sampling_rate = torchaudio.load(batch["path"])
  File "/usr/local/lib/python3.8/dist-packages/torchaudio/backend/sox_io_backend.py", line 152, in load
    return torch.ops.torchaudio.sox_io_load_audio_file(
RuntimeError: Error loading audio file: failed to open file.
```

Python version: 3.8

I find the cached common_voice `tr` data in the directory /root/.cache/huggingface/datasets/common_voice/tr/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1/

<img width="752" alt="image" src="https://user-images.githubusercontent.com/21211666/146008927-7ba5284a-920d-4418-912f-366b5009f3ac.png">

I think I should not have to download the mp3 files beforehand, right? And how can I fix this error? Thanks
12-14-2021 13:44:25
12-14-2021 13:44:25
Hi @Qoboty! Sorry for the inconvenience, `common_voice` recently got upgraded to use `datasets.features.Audio` with archive streaming, so the older scripts in `research_projects/wav2vec2/` that used string audio paths no longer support it. Try running an experiment with this example, which is actively maintained to support the latest features: https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition#single-gpu<|||||>@lhoestq should we maybe deprecate the use of `"path"` directly on the datasets side? We'll upgrade the older scripts of course, but I'm asking just in case there's a sure way to prevent this error from happening :) <|||||>Also we don't actively maintain the `research_projects` anymore. @Qoboty in case it's possible for you it would be great if you could move to https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py instead<|||||>> Try running an experiment with this example, which is actively maintained to support the latest features: https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition#single-gpu Cool, I will try it today, thanks! <|||||>> Also we don't actively maintain the `research_projects` anymore. @Qoboty in case it's possible for you it would be great if you could move to https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py instead Thanks, I got the research_projects link from the "Set-up Trainer" chapter of the "fine-tune-xlsr-wav2vec2" blog post https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 so we'd better update that link to point to https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py in case other guys run into the same error, thanks <img width="987" alt="image" src="https://user-images.githubusercontent.com/21211666/146108980-2dc33cdd-4950-4e4e-a6b3-ee8884d3a3f2.png"> <|||||>Hey @Qoboty, Yes you're 100% correct. Thanks a lot for spotting this!<|||||>https://github.com/huggingface/blog/pull/187<|||||>> @lhoestq should we maybe deprecate the use of "path" directly on the datasets side? We'll upgrade the older scripts of course, but I'm asking just in case there's a sure way to prevent this error from happening :) Yes indeed, we're discussing this in `datasets` (see first point at https://github.com/huggingface/datasets/pull/3430#issuecomment-994734828)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten @anton-l hi there. I still have this problem even with the new example. I got the same error that @Qoboty had. What should I do?<|||||>Hey @mehrdad78, Could you provide us with a code snippet to reproduce this error?<|||||>> Hey @mehrdad78, > > Could you provide us with a code snippet to reproduce this error? Hi Patrick: #18379
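For anyone hitting the same error, a minimal sketch of the `datasets.features.Audio` workflow that the maintained example relies on, so that no local mp3 paths or `torchaudio.load(batch["path"])` calls are needed (assumes a recent `datasets` version; the exact dataset script name may have changed since):

```python
from datasets import load_dataset, Audio

# Decoded samples come from the "audio" column instead of a string "path".
common_voice = load_dataset("common_voice", "tr", split="train[:10]")
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))

sample = common_voice[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)
```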
transformers
14,763
closed
Question about how to modify token embedding before sending to bert
Does anyone know how to modify the token embeddings before sending them to BERT when using Transformers? Should I inherit from `BertPreTrainedModel`? For example:

```python
# get token embedding
embedding_output = self.embeddings(
    input_ids=input_ids,
    position_ids=position_ids,
    token_type_ids=token_type_ids,
    inputs_embeds=inputs_embeds,
    past_key_values_length=past_key_values_length,
)

# some modification of embedding_output ==== This is what I want to add

# send to bert
encoder_outputs = self.encoder(
    embedding_output,
    attention_mask=extended_attention_mask,
    head_mask=head_mask,
    encoder_hidden_states=encoder_hidden_states,
    encoder_attention_mask=encoder_extended_attention_mask,
    past_key_values=past_key_values,
    use_cache=use_cache,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
    return_dict=return_dict,
)
```
12-14-2021 11:39:34
12-14-2021 11:39:34
You can do it as follows: ``` from transformers.models.bert.modeling_bert import BertPreTrainedModel , BertEmbeddings, BertEncoder, BertPooler class CustomBertModel(BertPreTrainedModel): def __init__(self, config, add_pooling_layer=True): super().__init__(config) self.config = config self.embeddings = BertEmbeddings(config) self.encoder = BertEncoder(config) self.pooler = BertPooler(config) if add_pooling_layer else None # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self): return self.embeddings.word_embeddings def set_input_embeddings(self, value): self.embeddings.word_embeddings = value def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") batch_size, seq_length = input_shape device = input_ids.device if input_ids is not None else inputs_embeds.device if attention_mask is None: attention_mask = torch.ones(((batch_size, seq_length)), device=device) if token_type_ids is None: if hasattr(self.embeddings, "token_type_ids"): buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) token_type_ids = buffered_token_type_ids_expanded else: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. 
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) # If a 2D or 3D attention mask is provided for the cross-attention # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] if self.config.is_decoder and encoder_hidden_states is not None: encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) if encoder_attention_mask is None: encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = None # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, past_key_values_length=past_key_values_length, ) # you can modify the embedding outputs here encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = encoder_outputs[0] pooled_output = self.pooler(sequence_output) if self.pooler is not None else None if not return_dict: return (sequence_output, pooled_output) + encoder_outputs[1:] return BaseModelOutputWithPoolingAndCrossAttentions( last_hidden_state=sequence_output, pooler_output=pooled_output, past_key_values=encoder_outputs.past_key_values, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, cross_attentions=encoder_outputs.cross_attentions, ) ```<|||||>@NielsRogge thank you for your reply! I also implement it by inherting `BertModel` and only overwrite its `forward` function, the amount of code can be less
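A lighter-weight alternative (a sketch, not from the original thread): compute the word embeddings yourself, modify them, and pass them back through `inputs_embeds`. Note this is not identical to editing `embedding_output` inside `BertModel`, since position and token-type embeddings plus LayerNorm/dropout are still applied internally; if you need to modify the final embedding output, subclassing as above is the way to go.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")

# Compute the word embeddings yourself, modify them, then hand them to the model.
word_embeds = model.get_input_embeddings()(inputs["input_ids"])
word_embeds = word_embeds + 0.01 * torch.randn_like(word_embeds)  # placeholder modification

outputs = model(inputs_embeds=word_embeds, attention_mask=inputs["attention_mask"])
print(outputs.last_hidden_state.shape)
```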
transformers
14,762
closed
make docs failing
https://github.com/huggingface/transformers/blob/2a606f9974feb0f7578e6a638c7e5b548523ecb4/CONTRIBUTING.md?plain=1#L211-L221

```bash
make docs
cd docs && make html SPHINXOPTS="-W -j 4"

Application error:
config directory doesn't contain a conf.py file (source)
make[1]: *** [html] Error 2
make: *** [docs] Error 2
```

transformers version: master branch

@sgugger
12-14-2021 11:34:58
12-14-2021 11:34:58
Yes we are not using sphinx anymore to build the doc. The contributing guide will be updated in the coming days.<|||||>Okay
transformers
14,761
closed
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
I am downloading the model **microsoft/Multilingual-MiniLM-L12-H384** from https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384/tree/main and then using it locally.

**Transformers version: 4.11.3**

The following are the files at that link:

<img width="277" alt="Screenshot 2021-12-14 at 3 29 09 PM" src="https://user-images.githubusercontent.com/11159549/145976626-afde48a9-224f-4298-817c-e56a912ec036.png">

When I use this line:

`tokenizer = tr.BertTokenizer.from_pretrained("/home/pchhapolika/minilm_model/")`

**Error: TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType**
12-14-2021 10:03:39
12-14-2021 10:03:39
How are you downloading the files, and what are the size of the files @pratikchhapolika ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
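One common cause of this particular `TypeError` (an assumption worth checking, not a confirmed diagnosis): the slow `BertTokenizer` expects a `vocab.txt` in the local folder, and if it is missing the resolved `vocab_file` ends up as `None`, so the internal `os.path.isfile(None)` check raises exactly this error. A quick check:

```python
import os

model_dir = "/home/pchhapolika/minilm_model/"  # path from the issue
print(sorted(os.listdir(model_dir)))
print("vocab.txt present:", os.path.isfile(os.path.join(model_dir, "vocab.txt")))

# If vocab.txt is absent, the checkpoint may simply use a different tokenizer class;
# check the model card / tokenizer_config.json for which tokenizer files it ships with.
```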
transformers
14,760
closed
Addition of Swin Transformer for Computer Vision
# 🌟 Addition Swin Transformer ## Model description Swin Transformer (the name Swin stands for Shifted window) is initially described in arxiv, which capably serves as a general-purpose backbone for computer vision. It is basically a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 masks AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on Val), surpassing previous models by a large margin. ## Open source status * [x] the model implementation is available: https://github.com/microsoft/Swin-Transformer * [x] the model weights are available: https://github.com/microsoft/Swin-Transformer * [x] who are the authors: [Swin Transformer](https://arxiv.org/pdf/2103.14030.pdf) ## Possible Task Support: Opensource Version Supports below tasks * Image Classification * Object Detection * Instance Segmentation * Semantic Segmentation * Video Recognition
12-14-2021 09:46:26
12-14-2021 09:46:26
Maybe of interest to @NielsRogge <|||||>Hello, I would like to work on adding Swin. I will put out a PR sometime soon. <|||||>Hey @novice03, thanks for your effort! I believe that @FrancescoSaverioZuppichini is in the process of adding the `Mask2Former` model which depends on Swin, so he's probably working on that too. I'll let him share more about his work.
transformers
14,759
closed
KeyError: 337 when training a hugging face model using pytorch
I am training a simple binary classification model using `Hugging face models` using `pytorch.` Bert PyTorch HuggingFace. Here is the code: import transformers from transformers import TFAutoModel, AutoTokenizer from tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors from transformers import AutoTokenizer from transformers import AdamW from transformers import get_linear_schedule_with_warmup from transformers import BertTokenizerFast as BertTokenizer, BertModel, AdamW, get_linear_schedule_with_warmup,BertConfig I am reading a text-data and classifying it as toxic or non-toxic. I have downloaded and saved model in path. BERT_MODEL_NAME = '/home/pch/conv-bert-base' MODEL_PATHS = {'conv-bert-base': '/home/pch/conv-bert-base/'} tokenizer = BertTokenizer.from_pretrained(BERT_MODEL_NAME) TRANSFORMERS = {"conv-bert-base": (BertModel, BertTokenizer, "conv-bert-base")} df=pd.read_excel('gold_data.xlsx', engine='openpyxl') df2=df[['text','labels','validation']] df3=df2[df2.labels.isin([0,1])] val_data=df2[df2.validation.isin([1])] class SEDataset(Dataset): """ Sexually Explicit dataset for the hate speech. """ def __init__(self, df,tokenizer: BertTokenizer, max_token_len: int = 512): """ Constructor Arguments: df {pandas dataframe} -- Dataframe where the data is. """ super().__init__() self.df = df self.tokenizer = tokenizer self.max_token_len = max_token_len try: self.y = df['toxic'].values except KeyError: # test data self.y = np.zeros(len(df)) def __len__(self): return len(self.df) def __getitem__(self, idx): data_row = self.df[idx] text_data = data_row['text'] encoding = tokenizer.encode_plus( text_data, add_special_tokens=True, max_length=512, return_token_type_ids=False, padding="max_length", return_attention_mask=True, return_tensors='pt',) self.word_ids = encoding["input_ids"] self.attention_mask=encoding["attention_mask"] return self.word_ids[idx], torch.tensor(self.y[idx]), self.attention_mask[idx] class Transformer(nn.Module): def __init__(self, model, num_classes=1): """ Constructor Arguments: model {string} -- Transformer to build the model on. Expects "conv-bert-base". 
num_classes {int} -- Number of classes (default: {1}) """ super().__init__() self.name = model model_class, tokenizer_class, pretrained_weights = TRANSFORMERS[model] bert_config = BertConfig.from_json_file(MODEL_PATHS[model] + 'config.json') bert_config.output_hidden_states = True self.transformer = BertModel(bert_config) self.nb_features = self.transformer.pooler.dense.out_features self.pooler = nn.Sequential( nn.Linear(self.nb_features, self.nb_features), nn.Tanh(), ) self.logit = nn.Linear(self.nb_features, num_classes) def forward(self, tokens): """ Usual torch forward function Arguments: tokens {torch tensor} -- Sentence tokens Returns: torch tensor -- Class logits """ _, _, hidden_states = self.transformer( tokens, attention_mask=(tokens > 0).long() ) hidden_states = hidden_states[-1][:, 0] # Use the representation of the first token of the last layer ft = self.pooler(hidden_states) return self.logit(ft) def fit(model, train_dataset, val_dataset, epochs=1, batch_size=32, warmup_prop=0, lr=5e-5): device = torch.device('cuda') model.to(device) train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False) optimizer = AdamW(model.parameters(), lr=lr) num_warmup_steps = int(warmup_prop * epochs * len(train_loader)) num_training_steps = epochs * len(train_loader) scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps) loss_fct = nn.BCEWithLogitsLoss(reduction='mean').to(device) for epoch in range(epochs): model.train() start_time = time.time() optimizer.zero_grad() avg_loss = 0 for step, (x, y_batch) in tqdm(enumerate(train_loader), total=len(train_loader)): y_pred = model(x.to(device)) loss = loss_fct(y_pred.view(-1).float(), y_batch.float().to(device)) loss.backward() avg_loss += loss.item() / len(train_loader) xm.optimizer_step(optimizer, barrier=True) scheduler.step() model.zero_grad() optimizer.zero_grad() model.eval() preds = [] truths = [] avg_val_loss = 0. with torch.no_grad(): for x, y_batch in val_loader: y_pred = model(x.to(device)) loss = loss_fct(y_pred.detach().view(-1).float(), y_batch.float().to(device)) avg_val_loss += loss.item() / len(val_loader) probs = torch.sigmoid(y_pred).detach().cpu().numpy() preds += list(probs.flatten()) truths += list(y_batch.numpy().flatten()) score = roc_auc_score(truths, preds) dt = time.time() - start_time lr = scheduler.get_last_lr()[0] print(f'Epoch {epoch + 1}/{epochs} \t lr={lr:.1e} \t t={dt:.0f}s \t loss={avg_loss:.4f} \t val_loss={avg_val_loss:.4f} \t val_auc={score:.4f}') model = Transformer("conv-bert-base") epochs = 1 # 1 epoch seems to be enough batch_size = 32 warmup_prop = 0.1 lr = 2e-5 # Important parameter to tweak train_dataset = SEDataset(df3,tokenizer) val_dataset = SEDataset(val_data,tokenizer) fit(model, train_dataset, val_dataset, epochs=epochs, batch_size=batch_size, warmup_prop=warmup_prop, lr=lr) I have attached all the codes above. **Error:** **0%| | 0/29 [00:00<?, ?it/s] KeyError: 337**
12-14-2021 06:37:40
12-14-2021 06:37:40
@patrickvonplaten any help on this, please !!<|||||>I think this is an issue with the ConvBERT tokenizer conversion cc @abhishekkrthakur <|||||>@pratikchhapolika where does this error occur? would you mind posting the full stacktrace? <|||||>@abhishekkrthakur This is the only error I get. The `KeyError: ***` keeps changing after I re-run the model. `Uploaded the notebook. Please change it to .ipynb` [20211213_se_model.pdf](https://github.com/huggingface/transformers/files/7755264/20211213_se_model.pdf) <|||||>@abhishekkrthakur any help?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> @pratikchhapolika where does this error occur? would you mind posting the full stacktrace? Any help please! <|||||>Thanks for the ping. I kinda lost it during christmas time. Unfortunately, im not able to see your pdf file. Could you please upload an ipynb version?<|||||>> Thanks for the ping. I kinda lost it during christmas time. Unfortunately, im not able to see your pdf file. Could you please upload an ipynb version? Just rename .pdf to .ipynb<|||||>I'm not sure what the error is but its not related to the model. Here is my code for imdb (since i don't have your dataset) that works just fine: ``` import pandas as pd import tez import torch import torch.nn as nn import transformers from sklearn import metrics, model_selection from transformers import AdamW, get_linear_schedule_with_warmup class BERTDataset: def __init__(self, review, target): self.review = review self.target = target self.tokenizer = transformers.AutoTokenizer.from_pretrained("YituTech/conv-bert-base") self.max_len = 64 def __len__(self): return len(self.review) def __getitem__(self, item): review = str(self.review[item]) review = " ".join(review.split()) inputs = self.tokenizer.encode_plus( review, None, add_special_tokens=True, max_length=self.max_len, padding="max_length", truncation=True, ) ids = inputs["input_ids"] mask = inputs["attention_mask"] token_type_ids = inputs["token_type_ids"] return { "ids": torch.tensor(ids, dtype=torch.long), "mask": torch.tensor(mask, dtype=torch.long), "token_type_ids": torch.tensor(token_type_ids, dtype=torch.long), "targets": torch.tensor(self.target[item], dtype=torch.float), } class BERTBaseUncased(tez.Model): def __init__(self, num_train_steps): super().__init__() config = transformers.AutoConfig.from_pretrained("YituTech/conv-bert-base") config.update( { "output_hidden_states": True, } ) self.tokenizer = transformers.AutoTokenizer.from_pretrained("YituTech/conv-bert-base") self.bert = transformers.AutoModel.from_pretrained("YituTech/conv-bert-base", config=config) self.bert_drop = nn.Dropout(0.3) self.out = nn.Linear(768, 1) self.num_train_steps = num_train_steps self.step_scheduler_after = "batch" def fetch_optimizer(self): param_optimizer = list(self.named_parameters()) no_decay = ["bias", "LayerNorm.bias"] optimizer_parameters = [ { "params": [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], "weight_decay": 0.001, }, { "params": [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] opt = AdamW(optimizer_parameters, lr=3e-5) return opt def fetch_scheduler(self): sch = get_linear_schedule_with_warmup( self.optimizer, num_warmup_steps=0, 
            num_training_steps=self.num_train_steps
        )
        return sch

    def loss(self, outputs, targets):
        if targets is None:
            return None
        return nn.BCEWithLogitsLoss()(outputs, targets.view(-1, 1))

    def monitor_metrics(self, outputs, targets):
        if targets is None:
            return {}
        outputs = torch.sigmoid(outputs).cpu().detach().numpy() >= 0.5
        targets = targets.cpu().detach().numpy()
        accuracy = metrics.accuracy_score(targets, outputs)
        return {"accuracy": accuracy}

    def forward(self, ids, mask, token_type_ids, targets=None):
        o_2 = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        pooled_output = torch.mean(o_2.last_hidden_state, dim=1)
        print(pooled_output.shape)
        b_o = self.bert_drop(pooled_output)
        output = self.out(b_o)
        loss = self.loss(output, targets)
        acc = self.monitor_metrics(output, targets)
        return output, loss, acc


if __name__ == "__main__":
    dfx = pd.read_csv("/home/abhishek/workspace/autoxgb/datasets/imdb.csv").fillna("none")
    dfx.sentiment = dfx.sentiment.apply(lambda x: 1 if x == "positive" else 0)
    df_train, df_valid = model_selection.train_test_split(
        dfx, test_size=0.1, random_state=42, stratify=dfx.sentiment.values
    )
    df_train = df_train.reset_index(drop=True)
    df_valid = df_valid.reset_index(drop=True)

    train_dataset = BERTDataset(review=df_train.review.values, target=df_train.sentiment.values)
    valid_dataset = BERTDataset(review=df_valid.review.values, target=df_valid.sentiment.values)

    n_train_steps = int(len(df_train) / 32 * 10)
    model = BERTBaseUncased(num_train_steps=n_train_steps)

    tb_logger = tez.callbacks.TensorBoardLogger(log_dir=".logs/")
    es = tez.callbacks.EarlyStopping(monitor="valid_loss", model_path="model.bin")
    model.fit(
        train_dataset,
        valid_dataset=valid_dataset,
        train_bs=32,
        device="cuda",
        epochs=50,
        callbacks=[tb_logger, es],
        fp16=True,
    )
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I got the same error when I used transformer to perform NER on Chinese text.
my code is:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "我的名字叫大头,男,生于1900年12月12日"

ner_results = nlp(example)
print(ner_results)
```

Then I got:

```
KeyError                                  Traceback (most recent call last)
<ipython-input-...> in <module>
      7 example = "我的名字叫大头,男,生于1900年12月12日"
      8
----> 9 ner_results = nlp(example)
     10 print(ner_results)

~/opt/anaconda3/lib/python3.8/site-packages/transformers/pipelines/token_classification.py in __call__(self, inputs, **kwargs)
    187             kwargs["offset_mapping"] = offset_mapping
    188
--> 189         return super().__call__(inputs, **kwargs)
    190
    191     def preprocess(self, sentence, offset_mapping=None):

~/opt/anaconda3/lib/python3.8/site-packages/transformers/pipelines/base.py in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
   1025             return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
   1026         else:
->  1027            return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
   1028
   1029     def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):

~/opt/anaconda3/lib/python3.8/site-packages/transformers/pipelines/base.py in run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
   1033         model_inputs = self.preprocess(inputs, **preprocess_params)
   1034         model_outputs = self.forward(model_inputs, **forward_params)
->  1035         outputs = self.postprocess(model_outputs, **postprocess_params)
   1036         return outputs
   1037

~/opt/anaconda3/lib/python3.8/site-packages/transformers/pipelines/token_classification.py in postprocess(self, model_outputs, aggregation_strategy, ignore_labels)
    240             sentence, input_ids, scores, offset_mapping, special_tokens_mask, aggregation_strategy
    241         )
--> 242         grouped_entities = self.aggregate(pre_entities, aggregation_strategy)
    243         # Filter anything that is in self.ignore_labels
    244         entities = [

~/opt/anaconda3/lib/python3.8/site-packages/transformers/pipelines/token_classification.py in aggregate(self, pre_entities, aggregation_strategy)
    319             score = pre_entity["scores"][entity_idx]
    320             entity = {
--> 321                 "entity": self.model.config.id2label[entity_idx],
    322                 "score": score,
    323                 "index": pre_entity["index"],

KeyError: 7357
```<|||||>I've actually found a solution for this and posted it on a [stackoverflow answer](https://stackoverflow.com/questions/73154063/sentencetransformers-throwing-keyerror-on-pandas-series/73154064#73154064)
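For readers hitting the same `KeyError` with a snippet like the one above: a minimal sketch of the likely fix, assuming the goal is NER. The lookup most likely fails because an `AutoModelForMaskedLM` head produces vocabulary-sized logits, so the predicted index is not in `config.id2label`. The checkpoint name below is a placeholder for any Chinese token-classification model from the Hub.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# placeholder checkpoint: substitute any NER-finetuned Chinese model from the Hub
checkpoint = "some-org/chinese-ner-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# a token-classification head keeps config.id2label consistent with the logits
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
print(nlp("我的名字叫大头,男,生于1900年12月12日"))
```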
transformers
14,758
closed
SacreBLEU uses incorrect tokenizer for Japanese
[SacreBLEU](https://github.com/mjpost/sacreBLEU) needs to be told to use a different tokenizer in order to properly evaluate Japanese text, but as far as I can tell the HuggingFace framework has no means with which to do this. I imagine other languages that need special tokenizers are affected as well, but I haven't tested them. ## Environment info - `transformers` version: 4.13.0.dev0 - Platform: Linux-5.4.0-84-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.10.0+cu113 (True) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - Text generation: @patrickvonplaten @narsil ## Information Model I am using: mBART-50 and mT5 The problem arises when using: * [ ] the official example scripts: /transformers/examples/pytorch/run_translation.py The tasks I am working on is: * [ ] my own task or dataset: [JESC](https://nlp.stanford.edu/projects/jesc/) ## To reproduce Steps to reproduce the behavior: 1. Run a translation model as described in /transformers/examples/pytorch/README.md, with **Japanese as the target language** 2. Use the --do_eval, --do_predict, and --predict_with_generate flags 3. Note the low BLEU score 4. Run the [SacreBLEU](https://github.com/mjpost/sacreBLEU) command on detokenized target vs. generated_predictions.txt with Japanese tokenizer (example: sacrebleu dev_ja_detokenized.txt -i generated_predictions.txt --tokenize ja-mecab -b) 5. Note the different, probably higher, BLEU score ## Expected behavior I would expect HuggingFace's SacreBLEU evaluation score to be the same as SacreBLEU proper. To give an example of how big the gap gets: After 3 epochs of fine-tuning mT5-large on JESC, HF evaluated the BLEU score as only **5.17**. Running the output through the SacreBLEU command, however, caused the score to jump to **11.90**, which is much more in-line with the ja-en score of 16.8.
12-14-2021 02:36:39
12-14-2021 02:36:39
Hi @ekoenitz, I am by no means an expert in this example. However, looking at it, it seems you are adding an argument to help `sacrebleu` figure out the Japanese tokenizer (`--tokenize ja-mecab`), right? If that's the case, the BLEU score there is also calculated by SacreBLEU, so you would need to pass that option to it as well: https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py#L515 Adding `metric.compute(..., tokenize="ja-mecab")` should behave the same as your command. Could that be the explanation? If that's it, this could probably be added to the README.md?<|||||>Ah, I didn't know you could send metric-specific arguments like that. That does solve the issue, yes, thank you. Along with readme updates, perhaps adding something like an eval_tokenizer argument would be a good idea? I was 50/50 on whether I should file this issue as a bug or a feature request.
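For reference, a minimal sketch of passing the tokenizer option through the metric object, assuming the `sacrebleu` metric from `datasets`, a `sacrebleu[ja]` installation, and that the extra keyword is forwarded to SacreBLEU (the `ja-mecab` value mirrors the CLI command in the issue):

```python
from datasets import load_metric

metric = load_metric("sacrebleu")

preds = ["これはテストです。"]
refs = [["これはテストです。"]]  # sacrebleu expects a list of reference lists

# the extra keyword is forwarded to sacrebleu's corpus_bleu, selecting the MeCab tokenizer
result = metric.compute(predictions=preds, references=refs, tokenize="ja-mecab")
print(result["score"])
```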
transformers
14,757
closed
[doc] performance: groups of operations by compute-intensity
I found this really good summary of the three groups of operations in transformers, organized by their compute intensity, in https://arxiv.org/abs/2007.00072. I adapted it to be generic, since it was originally written specifically about data movement. I think it'd make a great addition to the performance doc. @sgugger
12-14-2021 00:37:16
12-14-2021 00:37:16
transformers
14,756
closed
Add an argument to set bucket_cap_mb for PyTorch DDP
# What does this PR do? `bucket_cap_mb` determines the bucket size that PyTorch's `DistributedDataParallel` will group gradient parameters in. In some occasions, tuning this parameter will have significant impact on the distributed training performance. This PR allows user to change the `bucket_cap_mb` parameter via a new training argument `--ddp_bucket_cap_mb`. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-13-2021 22:57:46
12-13-2021 22:57:46
There is just the formatting issue to deal with before we can merge. Can you run `make style` on your branch to fix it? Thanks!<|||||>Done - thanks for the review!<|||||>Mmm, are you sure you installed the proper versions of the formatting tools with `pip install transformers[dev]` or `pip install transformers[quality]` ? The failure is still there. <|||||>My bad - I forgot to push the latest commit. Should be working now?<|||||>Yes, the failure is unrelated. Thanks again for your PR!
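As a quick usage sketch of the argument added here (a sketch only; it has an effect only under a multi-GPU DDP launch, e.g. via `torchrun`), the bucket size can be set on the command line with `--ddp_bucket_cap_mb` or directly on `TrainingArguments`:

```python
from transformers import TrainingArguments

# value in MB, forwarded to DistributedDataParallel(bucket_cap_mb=...);
# PyTorch's default is 25, and the best value is workload-dependent
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    ddp_bucket_cap_mb=100,
)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # model/dataset are placeholders
```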
transformers
14,755
closed
Update Table of Contents
# What does this PR do? This PR adds the missing pages in the table of contents, found thanks to [doc-builder#54](https://github.com/huggingface/doc-builder/pull/54).
12-13-2021 22:04:26
12-13-2021 22:04:26
Merging since it will fix the doc on master and is fairly straightforward.
transformers
14,754
closed
PoC for conserving old links
# What does this PR do?

The PR evolved into having a small, compact section for old links; see this comment for a demo: https://github.com/huggingface/transformers/pull/14754#issuecomment-995067459

----------------

This follows up on a [#14753 comment](https://github.com/huggingface/transformers/pull/14753#discussion_r767974738) to deal with old links without leaving lots of empty subsections. It simply replaces each removed subsection with an anchor, so the old link no longer 404s, and the text just above tells the user to check out the DeepSpeed page, which contains the same information. I'll treat the other links the same way if this is accepted.
12-13-2021 19:28:39
12-13-2021 19:28:39
so where can I see it rendered? in the past it was possible to see the doc as it was built on CI - I don't seem to be able to find the same in this new incarnation. <|||||>Tested it manually by saving the html and editing it as you proposed. It's probably safe to assume everybody uses modern browsers these days. So you propose to break the anchor links and the only backward link support is to point a user to a new document where all sections got moved to and let them hunt down the actual section they were after, is that correct? <|||||>As stated in the internal slack, the ability so see the doc rendered in a PR is still a work in progress, you can see where the link would point on the Markdown [here](https://github.com/huggingface/transformers/blob/79817836add2334e6b1105be2c4cb774a3390362/docs/source/main_classes/trainer.mdx#deepspeed-installation). > So you propose to break the anchor links I don't see which link is broken, could you tell me? > the only backward link support is to point a user to a new document where all sections got moved to and let them hunt down the actual section they were after, is that correct Yes, the user will have something to do in all cases, and the [doc](https://huggingface.co/docs/transformers/main_classes/deepspeed) clearly show the Table of Contents on the right, so I don't think it will be too hard. We don't even know if there are such links with anchor in the wild, so I think this is enough support.<|||||>> As stated in the internal slack, the ability so see the doc rendered in a PR is still a work in progress, you can see where the link would point on the Markdown [here](https://github.com/huggingface/transformers/blob/79817836add2334e6b1105be2c4cb774a3390362/docs/source/main_classes/trainer.mdx#deepspeed-installation). Right! Thank you! > > So you propose to break the anchor links > > I don't see which link is broken, could you tell me? all the links from the right section in the old document to that exact section in the new document. > > the only backward link support is to point a user to a new document where all sections got moved to and let them hunt down the actual section they were after, is that correct > > Yes, the user will have something to do in all cases, and the [doc](https://huggingface.co/docs/transformers/main_classes/deepspeed) clearly show the Table of Contents on the right, so I don't think it will be too hard. We don't even know if there are such links with anchor in the wild, so I think this is enough support. I know we have such links in the wild and not only in the wild but in our own issues since I used them in various announcements and in issues. The right hand TOC misses all deeper level sub-sections than h2. (or is it h3?) So you won't find many sub-sections there. Here is an example: https://huggingface.co/docs/transformers/master/main_classes/deepspeed#zero2-config - it's not in the right-hand menu - and many others items aren't there either. Same for `performance.html` Again, if as a body we agree that we don't care for such things then your proposed change is perfectly fine. <|||||>> all the links from the right section in the old document to that exact section in the new document. I'm sorry but I don't know what you mean. 
Can you give me an explicit link that is broken by this change (e.g., will result in a 404) As for the actual use cases you mention, I don't think the "job" required by the user stumbling on that dead link to scroll or use a search is too much, but let's see what the others think.<|||||>I meant broken as in soft 404. Where I wrote something like: > to solve this problem see this section https://huggingface.co/docs/transformers/master/main_classes/trainer.html#zero2-config and the user, reading this comment later can't get to where I pointed to, because the moved-to section has been removed.<|||||>> https://huggingface.co/docs/transformers/master/main_classes/trainer.html#zero2-config unless i'm mistaken, this link was never valid, because the only doc that was under `/docs/` urls is the new doc and never had any .html suffixes<|||||>(you might mean the same URL without the /docs ?)<|||||>yes, sorry, I have no way of seeing how the old url was other than searching for old style links in issues, so the original would probably be: https://huggingface.co/transformers/master/main_classes/trainer.html#zero2-config thank you for the fix, @julien-c <|||||>I've a possible alternative suggestion of doing the same, while keeping things neat and not losing the redirects, will post one soon.<|||||>OK, here is what I propose. At the end of each doc file we will have a neat pile of moved links if any and the original anchors that will take the user to this section. The trainer doc will have be the biggest number of these moved links at the moment. ![snapshot_70](https://user-images.githubusercontent.com/10676103/146245623-31bd10a9-1827-42dd-afc8-adf3faced816.png) but other files may have small ones here and there as the docs evolve. So this allows us to keep backward compatibility not only for the code but for user browsing experience as well. I purposefully removed line breaks to keep it compact and still allow the user to quickly find what they are after. 
If it resonates here is the html for this file: ``` Sections that were moved: [ <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-trainer-integration">DeepSpeed</a><a id="deepspeed-trainer-integration"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-installation">Installation</a><a id="deepspeed-installation"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-multi-gpu">Deployment with multiple GPUs</a><a id="deepspeed-multi-gpu"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-one-gpu">Deployment with one GPU</a><a id="deepspeed-one-gpu"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-notebook">Deployment in Notebooks</a><a id="deepspeed-notebook"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-config">Configuration</a><a id="deepspeed-config"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-config-passing">Passing Configuration</a><a id="deepspeed-config-passing"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-config-shared">Shared Configuration</a><a id="deepspeed-config-shared"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero">ZeRO</a><a id="deepspeed-zero"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero2-config">ZeRO-2 Config</a><a id="deepspeed-zero2-config"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero3-config">ZeRO-3 Config</a><a id="deepspeed-zero3-config"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-nvme">NVMe Support</a><a id="deepspeed-nvme"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero2-zero3-performance">ZeRO-2 vs ZeRO-3 Performance</a><a id="deepspeed-zero2-zero3-performance"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero2-example">ZeRO-2 Example</a><a id="deepspeed-zero2-example"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero3-example">ZeRO-3 Example</a><a id="deepspeed-zero3-example"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-optimizer">Optimizer</a><a id="deepspeed-optimizer"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-scheduler">Scheduler</a><a id="deepspeed-scheduler"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-fp32">fp32 Precision</a><a id="deepspeed-fp32"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-amp">Automatic Mixed Precision</a><a id="deepspeed-amp"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-bs">Batch Size</a><a id="deepspeed-bs"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-grad-acc">Gradient Accumulation</a><a id="deepspeed-grad-acc"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-grad-clip">Gradient Clipping</a><a id="deepspeed-grad-clip"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-weight-extraction">Getting The Model Weights Out</a><a 
id="deepspeed-weight-extraction"></a> ] ``` could use relatively links as well if you prefer. ``` Sections that were moved: [ <a href="./deepspeed#deepspeed-trainer-integration">DeepSpeed</a><a id="deepspeed-trainer-integration"></a> | <a href="./deepspeed#deepspeed-installation">Installation</a><a id="deepspeed-installation"></a> | <a href="./deepspeed#deepspeed-multi-gpu">Deployment with multiple GPUs</a><a id="deepspeed-multi-gpu"></a> | <a href="./deepspeed#deepspeed-one-gpu">Deployment with one GPU</a><a id="deepspeed-one-gpu"></a> | <a href="./deepspeed#deepspeed-notebook">Deployment in Notebooks</a><a id="deepspeed-notebook"></a> | <a href="./deepspeed#deepspeed-config">Configuration</a><a id="deepspeed-config"></a> | <a href="./deepspeed#deepspeed-config-passing">Passing Configuration</a><a id="deepspeed-config-passing"></a> | <a href="./deepspeed#deepspeed-config-shared">Shared Configuration</a><a id="deepspeed-config-shared"></a> | <a href="./deepspeed#deepspeed-zero">ZeRO</a><a id="deepspeed-zero"></a> | <a href="./deepspeed#deepspeed-zero2-config">ZeRO-2 Config</a><a id="deepspeed-zero2-config"></a> | <a href="./deepspeed#deepspeed-zero3-config">ZeRO-3 Config</a><a id="deepspeed-zero3-config"></a> | <a href="./deepspeed#deepspeed-nvme">NVMe Support</a><a id="deepspeed-nvme"></a> | <a href="./deepspeed#deepspeed-zero2-zero3-performance">ZeRO-2 vs ZeRO-3 Performance</a><a id="deepspeed-zero2-zero3-performance"></a> | <a href="./deepspeed#deepspeed-zero2-example">ZeRO-2 Example</a><a id="deepspeed-zero2-example"></a> | <a href="./deepspeed#deepspeed-zero3-example">ZeRO-3 Example</a><a id="deepspeed-zero3-example"></a> | <a href="./deepspeed#deepspeed-optimizer">Optimizer</a><a id="deepspeed-optimizer"></a> | <a href="./deepspeed#deepspeed-scheduler">Scheduler</a><a id="deepspeed-scheduler"></a> | <a href="./deepspeed#deepspeed-fp32">fp32 Precision</a><a id="deepspeed-fp32"></a> | <a href="./deepspeed#deepspeed-amp">Automatic Mixed Precision</a><a id="deepspeed-amp"></a> | <a href="./deepspeed#deepspeed-bs">Batch Size</a><a id="deepspeed-bs"></a> | <a href="./deepspeed#deepspeed-grad-acc">Gradient Accumulation</a><a id="deepspeed-grad-acc"></a> | <a href="./deepspeed#deepspeed-grad-clip">Gradient Clipping</a><a id="deepspeed-grad-clip"></a> | <a href="./deepspeed#deepspeed-weight-extraction">Getting The Model Weights Out</a><a id="deepspeed-weight-extraction"></a> ] ``` Except I think the local anchors are wrong here. Let me know if you like it and then I will rewrite those. Yes, the anchors your proposed Sylvain are wrong, since you used the target anchors instead of the local header anchors. e.g. all of them but one should not start with deepspeed i.e. "Deployment in Notebooks" should be anchored locally to "deployment-in-notebooks" and not "deepspeed-notebook", i.e. ``` "<a id="deployment-in-notebooks"></a>" is the correct local anchor. 
```<|||||>Here is one with local anchors; ``` Sections that were moved: [ <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-trainer-integration">DeepSpeed</a><a id="deepspeed"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-installation">Installation</a><a id="installation"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-multi-gpu">Deployment with multiple GPUs</a><a id="deployment-with-multiple-gpus"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-one-gpu">Deployment with one GPU</a><a id="deployment-with-one-gpu"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-notebook">Deployment in Notebooks</a><a id="deployment-in-notebooks"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-config">Configuration</a><a id="configuration"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-config-passing">Passing Configuration</a><a id="passing-configuration"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-config-shared">Shared Configuration</a><a id="shared-configuration"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero">ZeRO</a><a id="zero"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero2-config">ZeRO-2 Config</a><a id="zero-2-config"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero3-config">ZeRO-3 Config</a><a id="zero-3-config"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-nvme">NVMe Support</a><a id="nvme-support"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero2-zero3-performance">ZeRO-2 vs ZeRO-3 Performance</a><a id="zero-2-vs-zero-3-performance"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero2-example">ZeRO-2 Example</a><a id="zero-2-example"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-zero3-example">ZeRO-3 Example</a><a id="zero-3-example"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-optimizer">Optimizer</a><a id="optimizer"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-scheduler">Scheduler</a><a id="scheduler"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-fp32">fp32 Precision</a><a id="fp32-precision"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-amp">Automatic Mixed Precision</a><a id="automatic-mixed-precision"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-bs">Batch Size</a><a id="batch-size"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-grad-acc">Gradient Accumulation</a><a id="gradient-accumulation"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-grad-clip">Gradient Clipping</a><a id="gradient-clipping"></a> | <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-weight-extraction">Getting The Model Weights Out</a><a id="getting-the-model-weights-out"></a> ] ``` note myself: converted with: ``` perl -pi -e 'BEGIN { sub convert { 
$_=lc shift; s/ /-/g; return $_ }} s{\| (.*?)<a id=.(.*?).></a>}{qq[| <a href="https://huggingface.co/docs/transformers/main_classes/deepspeed#$2">$1</a><a id="].convert($1).qq["></a>]}e' fix.html ```<|||||>This looks like a nice solution. Feel free to take over my branch and use it! I think you should use local anchors as it would work across versions of the documentation. And yes I copied the anchors from the deepspeed doc but I see they were different in the Trainer doc, so some of them do need to be updated.<|||||>done. and made them relative - good call! Question: how will rst files be converted? e.g. currently we have: ``` .. _deepspeed-zero2-config: ZeRO-2 Config ``` will it become: a. ``` <h3>ZeRO-2 Config</h3><a id="zero-2-config"></a>| ``` or b.: ``` <h3>ZeRO-2 Config</h3><a id="deepspeed-zero-2-config"></a>| ``` i.e. are you going to respect the original rst anchors or switch to md archors which is just based on the header name? <|||||>You get both anchors. For instance the section "Deployment with one GPU" has two anchors in the current doc that you can test: [https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-one-gpu](https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-one-gpu) and [https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-one-gpu](https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-one-gpu). The first one is generated automatically by the front for each section header and the second is a ref generated by the conversion.<|||||>I added a little instruction section for other future renames and moves. Please let me know if it's kosher.
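For anyone adapting this later, a rough Python equivalent of the perl `convert` helper above (function names are illustrative; it only encodes the lowercase/space-to-dash rule used for the local header anchors, with the target href taken verbatim from the moved-sections list):

```python
def header_to_anchor(title: str) -> str:
    # same rule as the perl `convert` sub: lowercase, spaces become dashes
    return title.lower().replace(" ", "-")


def moved_section_entry(title: str, target_href: str) -> str:
    # one "Sections that were moved" entry: a link to the new location plus a
    # local anchor so old `#<local-anchor>` links still land on this line
    return f'<a href="{target_href}">{title}</a><a id="{header_to_anchor(title)}"></a>'


print(moved_section_entry("Deployment in Notebooks", "./deepspeed#deepspeed-notebook"))
```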
transformers
14,753
closed
Convert Trainer doc page to MarkDown
# What does this PR do? The `doc-builder` only resolves `ref` when they point to sections in the same page, so this PR fixes #14730 by converting the doc page to MarkDown and fixing all links. In passing, it removes mention of the `TFTrainer` which is now deprecated.
12-13-2021 17:32:00
12-13-2021 17:32:00
transformers
14,752
closed
update the arguments `add_prefix_space` and `trim_offsets` in `backend_tokenizer.post_processor` of `RobertaTokenizerFast`
# What does this PR do? Roberta's tokenizer fast has `add_prefix_space` and `trim_offsets` as arguments. It seems to me that these last 2 arguments should be updated accordingly in the post processor `RobertaProcessing` of its `backend_tokenizer`. This PR proposes to automate this update with a strategy similar to the update of the `pre_tokenizer` done inside the `__init__` of `GPT2TokenizerFast` (from which `RobertaTokenizerFast` inherits). It is the issue #14305 that allowed me to notice that this update was not done when changing `add_prefix_space` or `trim_offsets`. I would ideally like to wait until I have an opinion on [this issue](https://github.com/huggingface/tokenizers/issues/843) on the Tokenizers library before merging this PR. Edit: So I had the confirmation that there was a bug on the Tokenizer side, which was solved in [this PR](https://github.com/huggingface/tokenizers/pull/844) which confirms my understanding of the `RobertaTokenizerFast` component. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Maybe @LysandreJik, @sgugger what do you think of this change? (and in particular of the fact that the behavior will not be entirely satisfactory before the fix on the tokenizers side is released) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-13-2021 17:20:01
12-13-2021 17:20:01
Is this leveraging PR https://github.com/huggingface/tokenizers/pull/844 that was just merged in `tokenizers` ? I believe it isn't part of a version yet, right? Or is this meant as a patch on top of the current tokenizers version?<|||||>@LysandreJik, you do raise an important point. This PR only proposes to reflect the choices of the user on the arguments `add_prefix_space` and `trim_offsets` on the `backend_tokenizer`. Before this modification the `add_prefix_space` and `trim_offsets` arguments were not changed in the post_processors while they are in the pre_tokenizer (in the `__init__` of the GPT2 tokenizer), which causes problems. Nevertheless, the investigation of this problem allowed us to identify a problem on the side of the Tokenizers library that has been solved in the PR you mention. As long as there is no new version of Tokenizers, this problem will also affect the transformers library. To sum up, this PR just aligns the arguments chosen between the pre_tokenizer and the post_processor components of the tokenizer.<|||||>To complete this PR, I created [a new branch with a new test that shows the behavior that will remain incorrect](https://github.com/SaulLu/transformers/pull/1) until we can use a new version of `Tokenizers` that includes [this commit](https://github.com/huggingface/tokenizers/pull/844). cc @LysandreJik and @sgugger for visibility
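A small repro sketch of the user-facing behaviour discussed here (assuming the `roberta-base` checkpoint): before the kwargs are propagated to the backend post-processor, they are accepted but have no effect, so the offsets below stay trimmed.

```python
from transformers import RobertaTokenizerFast

tok = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True, trim_offsets=False)

enc = tok(" Hello world", return_offsets_mapping=True)
# once trim_offsets=False reaches the backend RobertaProcessing post-processor,
# the offsets should keep the leading space instead of being trimmed away
print(enc["offset_mapping"])
```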
transformers
14,751
closed
Small fixes for the doc
# What does this PR do? This removes the index.rst (left by mistake) and changes the branch for the installation of doc-builder (needs to be merged after the corresponding PR is merged on doc-builder).
12-13-2021 16:12:14
12-13-2021 16:12:14
Got approval from Lysandre offline so merging.
transformers
14,750
closed
Improve perceiver
# What does this PR do?

This PR removes the need for the hard-coded `d_model` attribute of the Perceiver. Instead, the dimensionality (i.e. the number of channels) of the inputs is calculated from the `num_channels` property of the preprocessor. Note that `d_model` is still relevant for users 1) when embedding text using `PerceiverTextPreprocessor` and 2) when no preprocessor is provided.
12-13-2021 15:47:55
12-13-2021 15:47:55
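A toy sketch of the logic described in the PR body above (not the actual modeling code; names are illustrative):

```python
def infer_input_dim(config, input_preprocessor=None):
    # prefer the preprocessor's channel count; fall back to d_model when no preprocessor is given
    if input_preprocessor is not None:
        return input_preprocessor.num_channels
    return config.d_model
```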
transformers
14,749
closed
Feature/fix slow test in mluke
# What does this PR do?

Fix slow tests in MLuke/Luke (https://github.com/huggingface/transformers/pull/14690).

* Replace the tokenizers with dummy tokenizers in `MLukeTokenizerTest`/`LukeTokenizerTest`.
* Add a missing entry in the toctree.

## Who can review?

@sgugger @NielsRogge
12-13-2021 14:29:36
12-13-2021 14:29:36
Merging this :)
transformers
14,748
closed
Fix the perceiver docs
Fix a typo in the perceiver docs
12-13-2021 14:11:12
12-13-2021 14:11:12
transformers
14,747
closed
Update docs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
12-13-2021 13:42:39
12-13-2021 13:42:39
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,746
closed
Change how to load config of XLNetLMHeadModel
# What does this PR do?

Fix how the config is loaded in `XLNetLMHeadModel`.

Fixes #14736

## Who can review?

@patrickvonplaten
12-13-2021 13:17:41
12-13-2021 13:17:41
transformers
14,745
closed
Skip Perceiver tests
Skip tests for Perceiver while working on https://github.com/huggingface/transformers/pull/14739
12-13-2021 13:08:12
12-13-2021 13:08:12
transformers
14,744
closed
Deprecates AdamW and adds `--optim`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #14539 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @stas00 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Misc @stas00 * I added `FusedAdam` based on your comment in #14539 * Since both `--optim adafactor` and `--adafactor` server the same purpose, I marked `--adafactor` as deprecated. I copy-pasted a deprecation warning that mentions `transformers` version 5. Let me know if the deprecation warning should say something else. * Let me know if I missed anything else.
12-13-2021 12:32:01
12-13-2021 12:32:01
@sgugger Thank you for the suggestions. I think an enum makes sense. I will add that and fix the missing `optimizer_cls` line.<|||||>@manuelciosici. have we lost you or are you just on an extended vacation? Let's finish this up, so we can test out various optimizers. I will sync the recent changes in doc formatting. I was just told that `apex.optimizers.FusedAdam` is an even faster fused optimizer than torch's - we added it already but going to benchmark it.<|||||>update: see updated benchmarks here: 1. [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1005219385) 2. [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005220263) ------------ I'm working on a [neat HF Trainer benchmarking tool](https://github.com/huggingface/transformers/pull/14934), so here it is applied to the changes introduced by this PR: | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_hf | 117.544 | 32 | 2.19851 | | --optim adamw_torch | 112.951 | 27 | 2.19829 | | --optim adafactor | 89.194 | 0 | 2.20484 | | --optim apex_fused_adam | 126.232 | 42 | 2.19832 | So torch's AdamW appears to be even slower than ours. So clearly apex's AdamW is the way to go speed-wise. Note, that the absolute and relative results will be different on a different GPU and a different finetuning setup, but most likely the current fastest optimizer will remain fastest, etc. Reproducibility and other info: ``` Datetime : 2021-12-28 20:56:42 Software: transformers: 4.16.0.dev0 torch : 1.10.1 cuda : 11.3 python : 3.8.11 Hardware: 1 GPUs : NVIDIA GeForce RTX 3090, 23.70GB The benchmark command line was: CUDA_VISIBLE_DEVICES=0 python \ /hf/transformers-trainer-benchmark/scripts/benchmark/trainer-benchmark.py \ --base-cmd \ ' \ examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \ --do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 16 \ --max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \ --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \ --source_prefix "translate English to Romanian: " --warmup_steps 50 \ --max_train_samples 10000 --dataloader_num_workers 2 \ ' \ --target-metric-key train_samples_per_second --repeat-times 1 --variations \ '--optim adamw_hf|--optim adamw_torch|--optim adafactor|--optim apex_fused_adam' \ --table-format github --report-metric-keys train_loss ```<|||||>@stas00 You have not lost me. I've been on vacation and before that, I accidentally messed up my GitHub notification settings, so I didn't know you had also reviewed the code. I applied all the suggestions from you. I also changed to an `Enum` for optimizer string values as @sgugger suggested. Let me know if I should change anything else.<|||||>@stas00 Let me know if I should squash all the commits to a single commit so they don't pollute `master`'s commit log.<|||||>> @stas00 You have not lost me. [...] Great! I will review your edits today and we can finalize it then. > @stas00 Let me know if I should squash all the commits to a single commit so they don't pollute `master`'s commit log. No need to. We always squash on merge<|||||>I implemented @stas00 's suggestions (except for the `warnings.warn` bit, see the discussion above). 
I also remembered that @sgugger suggested adding a unit test for the optimizer selection, so I added that too.<|||||>@stas00 I managed to parameterize the tests for AdamW (HF and torch) and Adafactor and refactored the testing code. For testing `apex.optimizers.FusedAdam`, since `Trainer.get_optimizer_cls_and_kwargs` does not use `FusedAdam` but only returns a class called `apex.optimizers.FusedAdam`, I thought I could mock `FusedAdam`'s presence or absence independent of an `apex` installation. In `test_fused_adam`, I mock `apex.optimizers.FusedAdam` and check that `Trainer.get_optimizer_cls_and_kwargs` returns the object mocking `apex.optimizers.FusedAdam`. In `test_fused_adam_no_apex`, I simulate a missing `apex.optimizers.FusedAdam` by setting the `apex` namespace to `None` so `Trainer.get_optimizer_cls_and_kwargs` cannot import `apex.optimizers.FusedAdam` even if `apex` is installed. Does this testing approach make sense? Or should I write `apex`-dependent tests and use `is_apex_available`? <|||||>> @stas00 I managed to parameterize the tests for AdamW (HF and torch) and Adafactor and refactored the testing code. Awesome! much better! > For testing `apex.optimizers.FusedAdam`, since `Trainer.get_optimizer_cls_and_kwargs` does not use `FusedAdam` but only returns a class called `apex.optimizers.FusedAdam`, I thought I could mock `FusedAdam`'s presence or absence independent of an `apex` installation. Perhaps we are looking at different code? `Trainer.get_optimizer_cls_and_kwargs` branches are almost identical for torch or apex. What do you mean it only returns a class. It returns kwargs too. > In `test_fused_adam`, I mock `apex.optimizers.FusedAdam` and check that `Trainer.get_optimizer_cls_and_kwargs` returns the object mocking `apex.optimizers.FusedAdam`. > > In `test_fused_adam_no_apex`, I simulate a missing `apex.optimizers.FusedAdam` by setting the `apex` namespace to `None` so `Trainer.get_optimizer_cls_and_kwargs` cannot import `apex.optimizers.FusedAdam` even if `apex` is installed. > > Does this testing approach make sense? Or should I write `apex`-dependent tests and use `is_apex_available`? Your testing is fabulous, Manuel. But you're not testing the real thing when it's available. What if apex does a breaking change and we are still mocking what it should be doing and not what it does. Hence, I'm suggesting that there should be an identical test to torch's except need to inject a skip in that parameterized test if `testing apex and not is_apex_available()`. Would it'd be easier if I wrote it? <|||||>@stas00 > What do you mean it only returns a class. It returns kwargs too. You are right, it returns a class and the keyword arguments. I didn't express myself properly. I added a new test that uses `FusedAdam` from `apex` if `is_apex_available` is `True`. I couldn't figure out how to test using a parameterization of `test_supported_optim` since I couldn't figure out how to add the `FusedAdam` class to `test_params` without causing the code to throw a `ModuleNotFoundError` in environments without `apex`. In other words, I can't figure out how to conditionally add to `test_params` so that `@parameterized.expand` still works. If there is a way, can you show me?<|||||>Done. Also added an actual `train()` in the test, which is the ultimate test. We typically don't test API at such low level when it gets functionally tested via a higher end API. The functional `train()` test does it all by the fact that it should succeed and all the testing is already done there automatically. 
Note that I get the last test fail: ``` tests/test_trainer.py .F ___________________________________________________ TrainerOptimizerChoiceTest.test_fused_adam_no_apex ___________________________________________________ self = <tests.test_trainer.TrainerOptimizerChoiceTest testMethod=test_fused_adam_no_apex> def test_fused_adam_no_apex(self): args = TrainingArguments(optim=OptimizerNames.ADAM_APEX_FUSED.value, output_dir="None") # Pretend that apex does not exist, even if installed. By setting apex to None, importing # apex will fail even if apex is installed. with patch.dict("sys.modules", {"apex": None}): with self.assertRaises(ValueError): > Trainer.get_optimizer_cls_and_kwargs(args) E AssertionError: ValueError not raised tests/test_trainer.py:1795: AssertionError tests/test_trainer.py ..... ``` I'm not an expert on Mock, so not sure why it doesn't override like it should. I have apex installed when I run this test. Can you reproduce it? update: fixed it. Needed `with patch.dict("sys.modules", {"apex.optimizers": None}):` since it's importing from `apex.optimizers` and not apex.<|||||>@stas00 Thank you for the code cleanup, figuring out the mocking issue, and for your patience. This PR has been an educational experience (I didn't know about `parameterized`). I'm looking forward to figuring out what my next contribution should be. Let me know if you have any suggestions.<|||||>Since clearly you're interesting in easier testing, you may find some useful tidbits in this extensive doc: https://huggingface.co/docs/transformers/testing, e.g. for different parameterization situations https://huggingface.co/docs/transformers/testing#parametrization optimizers-wise I think the next interesting but challenging thing to add is BNB 8bit optimizer https://github.com/huggingface/transformers/issues/14819 but we are still discussing how it'd work. The other thing to potentially experiment with is https://www.deepspeed.ai/tutorials/onebit-adam/ but I haven't had a chance to understand it so I have no idea whether it can be used outside of Deepspeed or just with deepspeed.<|||||>I think the only other remaining item that I didn't hear you weigh on, @sgugger, is whether we should call these `--adam_foo` or `--adamw_foo` since the class names are `AdamW` (except apex).<|||||>Oh sorry, I didn't catch that. It should adamw everywhere IMO.<|||||>> Oh sorry, I didn't catch that. It should adamw everywhere IMO. Thank you for validating that, @sgugger. @manuelciosici, so one more tiny change please `s|--adam_|--adamw_|` Thank you! <|||||>I removed `.value` from everywhere and ensured that the tests pass. I have also changed optimizer name strings as @stas00 asked. Let me know if I should change anything else.<|||||>but the rest needs to updated to match, e.g. currently many torch tests fail with: ``` E ValueError: Trainer cannot instantiate unsupported optimizer: adamw_hf ```<|||||>@stas00 I just saw that. I'm trying to figure out what I misunderstood.<|||||>@stas00 I surprised that fixes it. On my end, I just fixed it by adding `self.optim = OptimizerNames(self.optim)` in post-init. I also had to remove a now redundant unit test.<|||||>It's the same as: ``` elif args.optim == OptimizerNames.ADAMW_HF: ``` I'm not sure if you can see the failing tests from CI, so I thought it'd be faster to push in the fix as I saw where it was failing. Are you still planning to push a change? You said removing a unit test. Let us know when you're done. 
<|||||>@stas00 Commit e73249ce130bc81c392f5b9a3224f5ba64a2214d makes the unit tests pass, but it doesn't work when `--optim` is explicitly set on the command line. `TrainingArguments` does not automatically convert strings to enums. For a parallel, see https://github.com/huggingface/transformers/blob/9a94bb8e218033cffa1ef380010b528410ba3ca7/src/transformers/training_args.py#L755 So, calling a script on the command line (for example, `examples/pytorch/language-modeling/run_clm.py`) with any explicit `--optim` (even if only with `--optim adamw_hf`), throws an error from `Trainer.get_optimizer_cls_and_kwargs` since `get_optimizer_cls_and_kwargs` receives a string that does not match any of the if branches which test for `Enum` values. I added another test for specifying the optimizer name as a string. Also, I removed `test_optim_unsupported` since, with these commits, `--optim` no longer accepts strings that are not in `OptimizerNames`. I also changed the default value from `OptimizerNames.ADAMW_HF` to `OptimizerNames.ADAMW_HF.value`. With `default=OptimizerNames.ADAMW_HF` calling `--help` on the CLI gives: ``` --optim {adamw_hf,adamw_torch,adamw_apex_fused,adafactor} The optimizer to use. (default: OptimizerNames.ADAMW_HF) ``` While `default=OptimizerNames.ADAMW_HF.value` gives ``` --optim {adamw_hf,adamw_torch,adamw_apex_fused,adafactor} The optimizer to use. (default: adamw_hf) ``` The first one leaks the internal object name, while the second indicates the string we want users to pass. Finally, I make `OptimizerNames` inherit `ExplicitEnum` instead of `Enum` because I saw `SchedulerType` do the same and it seems more elegant. https://github.com/huggingface/transformers/blob/9a94bb8e218033cffa1ef380010b528410ba3ca7/src/transformers/trainer_utils.py#L281 <|||||>I second that - thank you, @manuelciosici!<|||||>@stas00 @sgugger Thank you for guiding me!
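For readers landing here later, a minimal usage sketch of the argument this PR adds, with the value names as finally settled in the thread (`adamw_hf`, `adamw_torch`, `adamw_apex_fused`, `adafactor`):

```python
from transformers import TrainingArguments

# equivalent to passing `--optim adamw_torch` on the command line
args = TrainingArguments(output_dir="out", optim="adamw_torch")
print(args.optim)
```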
transformers
14,743
closed
Batch size affecting output when using GPT2Model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: 4.12.5 - Python version: Python 3.8.12 - PyTorch version (GPU?): 1.10.0 (GPU) - Tensorflow version (GPU?): X - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` import torch from transformers import AutoModel, AutoTokenizer def get_device_from_arg(device_id): if (device_id is not None and torch.cuda.is_available() and 0 <= device_id < torch.cuda.device_count()): return torch.device(f'cuda:{device_id}') else: return CPU_DEVICE def get_model(model_name, tokenizer, device_id): device = get_device_from_arg(device_id) model = AutoModel.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id).to(device) model = model.eval() return model def get_tokenizer(model_name='gpt2'): tokenizer = AutoTokenizer.from_pretrained(model_name) return tokenizer TOKENIZER = get_tokenizer('gpt2-large') MODEL = get_model('gpt2-large', TOKENIZER, 0) human_texts = ["Hello World!", "What is huggingface?"] tokenized_texts = [ TOKENIZER.encode(sen, return_tensors='pt', truncation=True, max_length=1024) for sen in human_texts ] device = next(MODEL.parameters()).device padded_chunk = torch.nn.utils.rnn.pad_sequence([t.view(-1) for t in tokenized_texts], batch_first=True, padding_value=0).to(device) attention_mask = torch.nn.utils.rnn.pad_sequence( [torch.ones(len(t.view(-1))).long() for t in tokenized_texts], batch_first=True, padding_value=0).to(device) outs = MODEL(input_ids=padded_chunk, attention_mask=attention_mask, past_key_values=None, output_hidden_states=True, return_dict=True, output_attentions=True) outs2 = MODEL(input_ids=padded_chunk[:1], attention_mask=attention_mask[:1], past_key_values=None, output_hidden_states=True, return_dict=True, output_attentions=True) print(outs.hidden_states[0][0] - outs2.hidden_states[0][0]) print(outs.hidden_states[-1][0] - outs2.hidden_states[-1][0]) ``` ``` tensor([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], device='cuda:0', grad_fn=<SubBackward0>) tensor([[ 0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [-2.9385e-04, -1.7121e-05, -3.2863e-04, ..., -1.3408e-04, -1.4349e-04, -9.2506e-05], [ 9.9063e-05, -3.7980e-04, 2.1064e-04, ..., 5.2011e-04, 1.3547e-04, -4.0713e-04], [-1.8436e-04, 4.5538e-05, -7.6592e-06, ..., 1.5700e-04, -4.7076e-05, -2.0326e-04], [-2.0707e-04, -6.7145e-05, -1.3128e-04, ..., 6.8665e-05, -2.5548e-04, -1.2420e-04]], device='cuda:0', grad_fn=<SubBackward0>) ``` The value of hidden states at first is same between two outputs, however the difference gets slightly bigger at last. https://github.com/huggingface/transformers/issues/2401 also tackled same issue, however it isn't resolved. ## Expected behavior The model outputs should be exactly same.
12-13-2021 10:35:51
12-13-2021 10:35:51
Hello! I ran your code sample on CPU and got the following results:

```
tensor([[0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.]], grad_fn=<SubBackward0>)
tensor([[0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.]], grad_fn=<SubBackward0>)
```

Do you also get the same when running on CPU?<|||||>I got the results below when I reran it on CPU. It seems the error gets smaller!

```
tensor([[0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.]], grad_fn=<SubBackward0>)
tensor([[ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,  0.0000e+00,  0.0000e+00],
        [-3.5763e-07, -9.2387e-07, -2.9802e-07,  ..., -1.1921e-07, -3.0920e-07,  0.0000e+00],
        [ 0.0000e+00, -5.3644e-07, -7.7486e-07,  ...,  3.5763e-07, -2.0117e-07,  2.6450e-07],
        [-2.3842e-07,  8.9407e-08, -5.9605e-08,  ...,  5.9605e-07, -7.8231e-08,  1.4901e-08],
        [-5.9605e-08,  2.0862e-07, -1.9073e-06,  ...,  1.1921e-06,  5.1036e-07, -8.7544e-08]], grad_fn=<SubBackward0>)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
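For context on why small differences like the ones above are expected at all: batched and unbatched runs can execute their reductions in a different order, and float32 arithmetic is not associative, so tiny rounding differences accumulate layer by layer. Below is a standalone sketch of the effect (plain PyTorch, unrelated to GPT-2's actual weights) together with the tolerance-based comparison that is usually used instead of exact equality:

```python
import torch

torch.manual_seed(0)
x = torch.randn(4096, dtype=torch.float32)

# The same numbers summed in a different order can disagree in the last
# bits of float32 precision, because floating-point addition is not
# associative.
s_forward = x.sum()
s_reversed = x.flip(0).sum()
print((s_forward - s_reversed).item())  # usually a tiny non-zero value

# Such rounding differences accumulate through the layers of a deep model,
# so outputs are normally compared with a tolerance instead of `==`.
a = torch.randn(5, 1280)
b = a + 1e-6 * torch.randn(5, 1280)  # stand-in for batched vs. unbatched outputs
print(torch.equal(a, b))                           # False
print(torch.allclose(a, b, rtol=1e-3, atol=1e-4))  # True
```

On GPU the effect is typically larger than on CPU because cuBLAS may select different kernels depending on the batch shape, which matches the observation above that the CPU differences are smaller.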
transformers
14,742
closed
Swap TF and PT code inside two blocks
# What does this PR do?

Fixes #14741

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@sgugger
12-13-2021 09:43:05
12-13-2021 09:43:05
transformers
14,741
closed
TF and PT code confusion in the documentation
Hi, @sgugger, at [quicktour.mdx#L337](https://github.com/huggingface/transformers/blame/5eca742f6c0e513d8c7d8085fd14b7a754ea96f7/docs/source/quicktour.mdx#L337) and [quicktour.mdx#L345](https://github.com/huggingface/transformers/blame/5eca742f6c0e513d8c7d8085fd14b7a754ea96f7/docs/source/quicktour.mdx#L345), TF code was written in the PT block, and PT code in the TF block.
12-13-2021 09:35:57
12-13-2021 09:35:57
Thanks again for fixing it!
transformers
14,740
closed
Add support for DistilBertLMHeadModel
# What does this PR do?

Fixes #14737

Goal: Add support for `DistilBertLMHeadModel`.

Changes:
1. Modified [modeling_distilbert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/distilbert/modeling_distilbert.py): changed `MultiHeadSelfAttention`, `Transformer`, and `TransformerBlock` to model cross-attention and accept the arguments `encoder_hidden_states` & `encoder_attention_mask`, and added the new class `DistilBertLMHeadModel`.
2. Exposed the new class and registered it everywhere needed.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

@VictorSanh @thomwolf @patil-suraj
12-13-2021 09:32:07
12-13-2021 09:32:07
Thanks for the PR! There's a similar PR here #11085, which is almost complete, so let's wait for the author to see if he's still interested in finishing it :) <|||||>> Thanks for the PR! There's a similar PR here #11085, which is almost complete, so let's wait for the author to see if he's still interested in finishing it :)

Thanks for your attention! OK, let me also have a look at what is going on there and see if there is anything I can help with :).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
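For anyone who needs this behaviour before a `DistilBertLMHeadModel` lands (via this PR or #11085): `transformers` already ships the analogous decoder-style class for BERT, `BertLMHeadModel`, which accepts the same `encoder_hidden_states` / `encoder_attention_mask` arguments this PR adds for DistilBERT. A rough usage sketch follows — the checkpoint name and tensor shapes are only illustrative, and the cross-attention weights are freshly initialized rather than pretrained:

```python
import torch
from transformers import AutoTokenizer, BertConfig, BertLMHeadModel

# Configure BERT as a decoder with cross-attention, the same role the
# proposed DistilBertLMHeadModel is meant to play for DistilBERT.
config = BertConfig.from_pretrained("bert-base-uncased")
config.is_decoder = True
config.add_cross_attention = True

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertLMHeadModel.from_pretrained("bert-base-uncased", config=config)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# encoder_hidden_states would normally come from a separate encoder;
# random values are used here only to show the expected shapes.
encoder_hidden_states = torch.randn(1, 10, config.hidden_size)
encoder_attention_mask = torch.ones(1, 10, dtype=torch.long)

outputs = model(
    **inputs,
    encoder_hidden_states=encoder_hidden_states,
    encoder_attention_mask=encoder_attention_mask,
    labels=inputs["input_ids"],
)
print(outputs.loss, outputs.logits.shape)
```

A `DistilBertLMHeadModel` built along the lines of this PR would expose the same arguments on top of DistilBERT's lighter architecture.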