repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 11,320 | closed | Irregular VRAM usage with gpt-neo inference with sequences longer than 250 tokens | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1 / HEAD
- Platform: Linux/Colab Pro
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1 (CUDA 11.0)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes, NVIDIA P100
- Using distributed or parallel set-up in script?:
### Who can help
@patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): EleutherAI/gpt-neo-2.7B
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install transformers in a Colab Pro notebook
2. Run this script to log peak memory usage for inference with increasing sequence length: https://gist.github.com/finetuneanon/7ce0ed5090a27a383abffbbbc0433a29
3. Wait for it to crash with an OOM error in the attention matmul somewhere above sequence length 1850
Output:
```
1870 5436434432
ok 6535669248
1871 5436434432
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-f2aeed4489bd> in <module>()
21 return_dict_in_generate=True,
22 repetition_penalty=1.2,
---> 23 pad_token_id=tokenizer.eos_token_id
24 )
25 del ids
13 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in _attn(self, query, key, value, causal_mask, masked_bias, attn_dropout, attention_mask, head_mask)
238 key = key.to(torch.float32)
239
--> 240 attn_weights = torch.matmul(query, key.transpose(-1, -2))
241 attn_weights = torch.where(causal_mask, attn_weights, masked_bias.to(attn_weights.dtype))
242
RuntimeError: CUDA out of memory. Tried to allocate 4.59 GiB (GPU 0; 15.90 GiB total capacity; 9.75 GiB already allocated; 4.60 GiB free; 10.42 GiB reserved in total by PyTorch)
```
The full output can be found here: https://gist.github.com/finetuneanon/c7292ea676f57f5bb63803685d80bf5b
The output has the format:
```
sequence_length occupied_cuda_memory_before_inference
ok peak_occupied_cuda_memory_during_inference
```
Doing inference with real text has the same issue.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expected memory usage to increase steadily instead of jumping around wildly, but I am not sure if this might actually be the correct behaviour. If it is correct, reliably doing inference on long sequences on 16GB of VRAM seems to be impossible, but sometimes it works.
I have also plotted the peak memory allocation during inference:

The green line is peak memory allocation, the brown line is the amount of memory in use before running inference. | 04-19-2021 16:22:05 | 04-19-2021 16:22:05 | After asking about this on the EleutherAI discord, it was pointed out to me that 256 tokens corresponds to the local attention span of the model. Looking at the plot above, the first allocation peaks appear after about 256 tokens. After about 512 tokens, another shorter set of spikes occurs, with more and shorter spikes being added every 256. This could indicate that there is an issue related to the implementation of local attention.<|||||>One more comment with additional information. During another run of the test script, I added some [logging](https://gist.github.com/finetuneanon/5b8b5cdaf4c27836ebbbf1ed0d238c5b) to modeling_gpt_neo.py in an exception handler. For another run where an OOM crash occurred with sequence length 1871, before converting query and key to float32, 5270MB are used. Afterwards, 9984MB are in use. query.shape is [1, 1871, 20, 257, 128] and key.shape is [1, 1871, 20, 1, 128]. The transpose makes no additional allocation, but the matmul attempts to allocate another 4.6GB. The main culprit seems to be the dimension of size 257, which greatly increases the size of the tensor.<|||||>Hi @finetuneanon
Thanks for the detailed issue!
So what is happening here is that the way local attention is designed is a bit weird (not the implementation itself), in that it splits the `seq_length` dim into `(num_blocks, block_length)`, but here `block_length` is actually dynamic.
It's equal to `window_size` by default, which is 256. But when the `seq_length` is not evenly divisible by `block_length`, it's adjusted as follows:
```python
def _get_block_length_and_num_blocks(seq_length, window_size):
    """
    Computes ``block_length`` and ``num_blocks`` such that ``seq_length`` becomes evenly divisible by
    ``block_length``.
    """
    block_length = window_size
    while seq_length % block_length != 0:
        block_length -= 1
    num_blocks = seq_length // block_length
    return block_length, num_blocks
```
such that the `seq_length` becomes evenly divisible by `block_length`.
So the shape of `query` becomes `(batch, num_blocks, block_length, hidden_dim)`
and then the `keys` and `values` are padded and the `seq_length` dim is split such that their shape becomes
`(batch, num_blocks, window_size + block_length, hidden_dim)`.
Here's a simple function to get the shape of `query` and `key` for given `seq_length`
```python
def get_query_key_shape(seq_len, window_size, hidden_dim):
    block_length, num_blocks = _get_block_length_and_num_blocks(seq_len, window_size)
    query_shape = (1, num_blocks, block_length, hidden_dim)
    key_shape = (1, num_blocks, window_size + block_length, hidden_dim)
    return query_shape, key_shape
```
Let's print the shapes for a few lengths:
```python
window_size = 256
hidden_dim = 2560
for seq_len in range(256, 266):
    query_shape, key_shape = get_query_key_shape(seq_len, window_size, hidden_dim)
    print(f"seq_len: {seq_len}, query_shape: {query_shape}, key_shape: {key_shape}")
```
which gives
```
seq_len: 256, query_shape: (1, 1, 256, 2560), key_shape: (1, 1, 512, 2560)
seq_len: 257, query_shape: (1, 257, 1, 2560), key_shape: (1, 257, 257, 2560)
seq_len: 258, query_shape: (1, 2, 129, 2560), key_shape: (1, 2, 385, 2560)
seq_len: 259, query_shape: (1, 7, 37, 2560), key_shape: (1, 7, 293, 2560)
seq_len: 260, query_shape: (1, 2, 130, 2560), key_shape: (1, 2, 386, 2560)
seq_len: 261, query_shape: (1, 3, 87, 2560), key_shape: (1, 3, 343, 2560)
seq_len: 262, query_shape: (1, 2, 131, 2560), key_shape: (1, 2, 387, 2560)
seq_len: 263, query_shape: (1, 263, 1, 2560), key_shape: (1, 263, 257, 2560)
seq_len: 264, query_shape: (1, 2, 132, 2560), key_shape: (1, 2, 388, 2560)
seq_len: 265, query_shape: (1, 5, 53, 2560), key_shape: (1, 5, 309, 2560)
```
As you can see, because of the dynamic `block_length`, the dimensions are very different for different `seq_length`, which explains the irregular VRAM usage.
If you set the `seq_length` to 1871, you'll get
```
seq_len: 1871, query_shape: (1, 1871, 1, 2560), key_shape: (1, 1871, 257, 2560)
```
as you posted above.
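As a rough sanity check of where that allocation goes (the 20 heads and 128 head dim are the 2.7B model's values, taken from the shapes reported earlier in the thread; float32 is assumed):
```python
# back-of-the-envelope size of the padded key tensor for seq_len 1871 (float32, i.e. 4 bytes/element)
num_blocks, window_plus_block, num_heads, head_dim = 1871, 257, 20, 128
size_gib = num_blocks * window_plus_block * num_heads * head_dim * 4 / 1024**3
print(round(size_gib, 2))  # ~4.59, matching the "Tried to allocate 4.59 GiB" in the traceback
```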
So I wouldn't say this is an implementation issue, that's how the local attention algorithm is designed in mesh-tf.<|||||>Thank you for taking the time and walking me through the calculations. It makes sense and certainly explains the irregular pattern. However, I wonder if it is possible to reach the same end result in a way that is less memory intensive. A bit earlier I was looking for more information about local self-attention and I found this [implementation](https://github.com/lucidrains/local-attention/blob/master/local_attention/local_attention.py). Running it for a [1, 1871, 2560] tensor results in a peak allocation of just about 253MB:
```
>>> import torch
>>> q = torch.rand(1,1871,128*20).to(torch.float32).cuda()
>>> k = torch.rand(1,1871,128*20).to(torch.float32).cuda()
>>> v = torch.rand(1,1871,128*20).to(torch.float32).cuda()
>>> from local_attention import LocalAttention
>>> local_attention = LocalAttention(256, causal=True, look_forward=0, autopad=True, dim=2560).cuda()
>>> torch.cuda.memory_allocated(), torch.cuda.max_memory_allocated()
(57482240, 57482240)
>>> result = local_attention(q, k, v)
>>> torch.cuda.memory_allocated(), torch.cuda.max_memory_allocated()
(78453760, 252826624)
```
Simply running this implementation and GPTNeoLocalSelfAttention on the same input does seem to give different results however, so I think there may also be some difference between the algorithms.
Edit: Experimenting with it a bit, I think my best bet is to just limit the sequence length to 1750. The padding approach of that implementation is very different.<|||||>I have thought more about it and think I have found a solution to reduce memory use.
```
--- src/transformers/models/gpt_neo/modeling_gpt_neo.py.backup 2021-04-07 22:28:43.049493417 +0200
+++ src/transformers/models/gpt_neo/modeling_gpt_neo.py 2021-04-22 10:53:41.274276535 +0200
@@ -413,4 +413,18 @@
         batch_size, seq_length = hidden_states.shape[:2]
         full_seq_length = seq_length + past_length
+
+        padding = None
+        if layer_past is None and full_seq_length % self.window_size != 0 and full_seq_length > self.window_size:
+            padding = self.window_size - (full_seq_length % self.window_size)
+            if attention_mask is None:
+                attention_mask = torch.zeros(query.shape[0], query.shape[1] + padding).to(query.device)
+                attention_mask[:, padding:] = 1
+            else:
+                attention_mask = torch.cat([torch.zeros(attention_mask.shape[0], padding).to(attention_mask.device), attention_mask], axis=1)
+            pad = lambda x: torch.cat([torch.zeros(x.shape[0], padding, x.shape[2]).to(x.device), x], axis=1)
+            query, key, value = map(pad, (query, key, value))
+            seq_length += padding
+            full_seq_length += padding
+
         block_length, num_blocks = self._get_block_length_and_num_blocks(full_seq_length, self.window_size)
@@ -454,5 +468,9 @@
         attn_output = attn_output.reshape(batch_size, seq_length, self.embed_dim)
-        attn_output = self.out_proj(attn_output)
+        if padding is not None:
+            attn_output = attn_output[:, padding:]
+            attn_weights = attn_weights[:, padding:]
+
+        attn_output = self.out_proj(attn_output.to(hidden_states.dtype))
         attn_output = self.resid_dropout(attn_output)
```
By padding q, k and v and adding a mask to mask out the padding, it becomes unnecessary to split things in a way that leads to a very large dimension. From how I see it, this should not change the result of the _attn function due to masking. For some reason I needed to add an extra .to at the end for running the model in fp16. First results of doing inference with this change look okay, but I am still testing it more. It's not updated with cfd2eaa8cf82da8581825c6592b66d2789c5bc53 yet.
The purple line here is a run with the patch applied:

<|||||>EricHallahan from EleutherAI was so kind as to run the lambada evaluation task with the patch applied and found no degradation in accuracy and negligible differences in speed over multiple runs.<|||||>(It is worth noting that LAMBADA task contexts are significantly shorter than 256 tokens. I believe Eric is currently running QA4MRE, which has much longer contexts)<|||||>Hi @finetuneanon
Thanks a lot for working on this. Let me run a few experiments to verify this and get back to you.<|||||>Great, I ran a small test and it seems to be working! (sorry about the earlier comment). Here's the script
```python
import torch
from torch import nn
from transformers.models.gpt_neo.modeling_gpt_neo import GPTNeoAttentionMixin
from transformers import GPTNeoConfig


class GPTNeoLocalSelfAttention(nn.Module, GPTNeoAttentionMixin):
    def __init__(self, config):
        super().__init__()

        self.register_buffer("masked_bias", torch.tensor(-1e9))

        self.attn_dropout = nn.Dropout(config.attention_dropout)
        self.resid_dropout = nn.Dropout(config.resid_dropout)

        self.embed_dim = config.hidden_size
        self.num_heads = config.num_heads
        self.head_dim = self.embed_dim // self.num_heads
        if self.head_dim * self.num_heads != self.embed_dim:
            raise ValueError(
                f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`: {self.num_heads})."
            )

        self.k_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False)
        self.v_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False)
        self.q_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False)
        self.out_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=True)

        self.window_size = config.window_size

    def forward(
        self,
        hidden_states,
        attention_mask=None,
        layer_past=None,
        head_mask=None,
        use_cache=False,
        output_attentions=False,
        pad_qkv=False,
    ):
        query = self.q_proj(hidden_states)

        if layer_past is not None:
            past = layer_past[0]
            key_value_hidden_states = torch.cat([past, hidden_states], dim=1)
            past_length = past.size()[1]
        else:
            key_value_hidden_states = hidden_states
            past_length = 0

        key = self.k_proj(key_value_hidden_states)
        value = self.v_proj(key_value_hidden_states)

        # compute block length and num_blocks
        batch_size, seq_length = hidden_states.shape[:2]
        full_seq_length = seq_length + past_length

        padding = None
        if pad_qkv:
            if layer_past is None and full_seq_length % self.window_size != 0 and full_seq_length > self.window_size:
                padding = self.window_size - (full_seq_length % self.window_size)
                if attention_mask is None:
                    attention_mask = torch.zeros(query.shape[0], query.shape[1] + padding).to(query.device)
                    attention_mask[:, padding:] = 1
                else:
                    attention_mask = torch.cat([torch.zeros(attention_mask.shape[0], padding).to(attention_mask.device), attention_mask], axis=1)
                pad = lambda x: torch.cat([torch.zeros(x.shape[0], padding, x.shape[2]).to(x.device), x], axis=1)
                query, key, value = map(pad, (query, key, value))
                seq_length += padding
                full_seq_length += padding

        block_length, num_blocks = self._get_block_length_and_num_blocks(full_seq_length, self.window_size)

        # create buckets
        if layer_past is not None:
            # we just need 1 block with block_length 1 when caching is enabled
            query = self._split_seq_length_dim_to(query, 1, 1)
        else:
            query = self._split_seq_length_dim_to(query, num_blocks, block_length)

        key = self._look_back(key, block_length, self.window_size)
        value = self._look_back(value, block_length, self.window_size)

        # select key/value vectors only for the last block
        if layer_past is not None:
            key = key[:, -1:, ...]
            value = value[:, -1:, ...]

        query = self._split_heads(query, self.num_heads, self.head_dim)
        key = self._split_heads(key, self.num_heads, self.head_dim)
        value = self._split_heads(value, self.num_heads, self.head_dim)

        attention_mask = GPTNeoAttentionMixin.create_local_attention_mask(
            batch_size, full_seq_length, self.window_size, "cpu", attention_mask
        )

        if layer_past is not None:
            # only take the mask for the last block
            attention_mask = attention_mask[:, -1:, :, -1:, :]

        # attn
        attn_output, attn_weights = self._attn(
            query,
            key,
            value,
            causal_mask=attention_mask,
            masked_bias=self.masked_bias,
            attn_dropout=self.attn_dropout,
            head_mask=head_mask,
        )

        attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim)
        attn_output = attn_output.reshape(batch_size, seq_length, self.embed_dim)
        if padding is not None:
            attn_output = attn_output[:, padding:]
            attn_weights = attn_weights[:, padding:]

        attn_output = self.out_proj(attn_output)
        attn_output = self.resid_dropout(attn_output)

        outputs = (attn_output,)
        if output_attentions:
            outputs += (attn_weights,)
        return outputs  # a, (attentions)


config = GPTNeoConfig(hidden_size=16, num_heads=4)
attn_layer = GPTNeoLocalSelfAttention(config).eval()

matched = []
with torch.no_grad():
    for seq_len in range(1, 2049):
        hidden_states = torch.randn(1, seq_len, 16)
        out = attn_layer(hidden_states)[0]
        out_with_padding = attn_layer(hidden_states, pad_qkv=True)[0]
        matched.append(torch.allclose(out, out_with_padding, atol=1e-5))

all(matched)
# True
```
I will run a few tests with the actual model and will let you know. If it works, feel free to open a PR :)
<|||||>Thanks for testing. If it works with the actual model, how should cfd2eaa8cf82da8581825c6592b66d2789c5bc53 be handled? I tried adapting the patch, but the attention mask seems to work in a very different way and I haven't been able to figure it out yet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,319 | closed | Error in loading model tokenizer ('Helsinki-NLP/opus-mt-en-fr' actually loads 'Helsinki-NLP/opus-mt-en-de') | ## Environment info
- `transformers` version: 4.3.3
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.1
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not at this stage in the code
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten as per https://huggingface.co/transformers/model_doc/marian.html specifications
Models:
- marian : @patrickvonplaten
Library:
- tokenizers: @LysandreJik
Documentation: @sgugger
## Information
Model I am using : MarianMT, 'Helsinki-NLP/opus-mt-en-fr'
The problem arises when using:
[X] the official example scripts
The tasks I am working on is:
not relevant
## To reproduce
Steps to reproduce the behavior:
```
import os
import urllib.request

from transformers import MarianMTModel, MarianTokenizer

MT_model_name = 'Helsinki-NLP/opus-mt-en-fr'
MT_tokenizer = MarianTokenizer.from_pretrained(MT_model_name)

def download_vocab_files_for_tokenizer(tokenizer, model_type, output_path='/vocab'):
    vocab_files_map = tokenizer.pretrained_vocab_files_map
    vocab_files = {}
    for resource in vocab_files_map.keys():
        print(vocab_files_map[resource])
        download_location = vocab_files_map[resource][model_type]
        f_path = os.path.join(output_path, os.path.basename(download_location))
        urllib.request.urlretrieve(download_location, f_path)
        vocab_files[resource] = f_path
    return vocab_files

vocab_files = download_vocab_files_for_tokenizer(tokenizer=MT_tokenizer, model_type=MT_model_name, output_path="/vocab")
```
> {'Helsinki-NLP/opus-mt-en-de': 'https://cdn.huggingface.co/Helsinki-NLP/opus-mt-en-de/source.spm'}
>
> ---------------------------------------------------------------------------
> KeyError Traceback (most recent call last)
> <ipython-input-6-9d4f64132d23> in <module>
> ----> 1 process_datasets(source_datasets_paths, dataset_dir, test_mode=True, clean_sentences=True, translate=False)
>
> ~\...py in process_datasets(source_datasets_paths, dataset_dir, test_mode, clean_sentences, translate, max_sample, negative_sampling)
> 171 MT_model_name = 'Helsinki-NLP/opus-mt-en-fr'
> 172 MT_tokenizer = MarianTokenizer.from_pretrained(MT_model_name)
> --> 173 vocab_files = download_vocab_files_for_tokenizer(tokenizer=MT_tokenizer, model_type=MT_model_name, output_path="/vocab")
> 174
> 175 print (vocab_files)
>
> ~...py in download_vocab_files_for_tokenizer(tokenizer, model_type, output_path)
> 29 print (vocab_files_map[resource])
> 30 print (model_type)
> ---> 31 print (vocab_files_map[resource][model_type])
> 32 download_location = vocab_files_map[resource][model_type]
> 33 f_path = os.path.join(output_path, os.path.basename(download_location))
>
> KeyError: 'Helsinki-NLP/opus-mt-en-fr'
## Expected behavior
> {'Helsinki-NLP/opus-mt-en-fr': 'https://cdn.huggingface.co/Helsinki-NLP/opus-mt-en-fr/source.spm'}
Then the function outputs a dictionary containing the 'Helsinki-NLP/opus-mt-en-fr' key, with the corresponding value being a path to the file found remotely at https://cdn.huggingface.co/Helsinki-NLP/opus-mt-en-fr/source.spm (checked, the link works), downloaded to the local folder ./vocab
| 04-19-2021 15:51:08 | 04-19-2021 15:51:08 | You should upgrade to the last version of transformers, which fully relies on the repository of a pretrained model instead of using special files like here.<|||||>Thanks! |
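On recent versions of transformers the same files can also be fetched without touching `pretrained_vocab_files_map` at all. A minimal sketch (the output directory name is just an example):
```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
tokenizer.save_pretrained("./vocab")  # writes source.spm, target.spm, vocab.json, ... into ./vocab
```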
transformers | 11,318 | closed | Load checkpoint without re-creating the model | # What does this PR do?
This PR avoids recreating the model when loading a checkpoint in the Trainer. As mentioned in #11294, the current loading messes up the model a user passed when some weights are frozen, which can lead to unexpected OOM errors.
A test is also added to check that the frozen parameters are kept frozen.
Fixes #11294 and probably #11317 | 04-19-2021 14:42:54 | 04-19-2021 14:42:54 | |
transformers | 11,317 | closed | large memory usage when resuming training from a checkpoint | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.5
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger @patrickvonplaten, @patil-suraj
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger @patrickvonplaten, @patil-suraj
## Information
Hi
I am training a t5-base model on the MNLI dataset with batch size = 128. Training works fine, but the moment I want to resume from a checkpoint, I get a memory issue: I observe large memory usage while the training is resuming.
## Expected behavior
Resuming training from a checkpoint should take the same amount of memory as the original training run.
## Error Stack
```
Traceback (most recent call last):
File "run_seq2seq.py", line 671, in <module>
main()
File "run_seq2seq.py", line 629, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/users/dara/dev/codes/seq2seq/third_party/trainers/trainer.py", line 329, in train
tr_loss += self.training_step(model, inputs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/transformers/trainer.py", line 1486, in training_step
loss = self.compute_loss(model, inputs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/transformers/trainer.py", line 1518, in compute_loss
outputs = model(**inputs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 1762, in forward
lang=lang
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 1115, in forward
task=task
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 752, in forward
output_attentions=output_attentions,
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 653, in forward
output_attentions=output_attentions,
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 518, in forward
hidden_states, self.k, key_value_states, past_key_value[0] if past_key_value is not None else None
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 501, in project
hidden_states = shape(proj_layer(key_value_states))
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 23.70 GiB total capacity; 21.38 GiB already allocated; 41.69 MiB free; 22.18 GiB reserved in total by PyTorch)
0%|
```
Thanks for your help and suggestions. | 04-19-2021 12:53:16 | 04-19-2021 12:53:16 | Similar issue to https://github.com/huggingface/transformers/issues/11294<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,316 | closed | Added BERT pretraining example running on Graphcore IPUs to research projects | # What does this PR do?
Adds BERT pretraining example running on Graphcore IPUs to research projects
## Before submitting
- This was discussed in the HuggingFace/Graphcore meeting
## Who can review?
@LysandreJik
| 04-19-2021 12:26:32 | 04-19-2021 12:26:32 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,315 | closed | T5Model crashes when trained with multiple GPUs | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-1041-azure-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten, @patil-suraj
## Information
I'm training a T5 translation model. It works on a CPU or single GPU, but when I try to run it with multiple GPUs I get the following error:
Traceback (most recent call last):
File "train2.py", line 43, in <module>
model.train_model(train_df, eval_data=eval_df)
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 206, in train_model
**kwargs,
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 605, in train
**kwargs,
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 705, in eval_model
result = self.evaluate(eval_dataset, output_dir, verbose=verbose, silent=silent, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 763, in evaluate
outputs = model(**inputs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1506, in forward
return_dict=return_dict,
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 881, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 158, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/functional.py", line 1916, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device
## To reproduce
Steps to reproduce the behavior:
Here is the code (from https://towardsdatascience.com/how-to-train-an-mt5-model-for-translation-with-simple-transformers-30ba5fa66c5f):
```python
import logging
import pandas as pd
from simpletransformers.t5 import T5Model, T5Args
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
train_df = pd.read_csv("data2/train.tsv", sep="\t").astype(str)
eval_df = pd.read_csv("data2/eval.tsv", sep="\t").astype(str)
train_df["prefix"] = ""
eval_df["prefix"] = ""
model_args = T5Args()
model_args.max_seq_length = 25
model_args.train_batch_size = 20
model_args.eval_batch_size = 20
model_args.num_train_epochs = 20
model_args.evaluate_during_training = True
model_args.evaluate_during_training_steps = 30000
model_args.use_multiprocessing = False
model_args.fp16 = True
model_args.save_steps = -1
model_args.save_eval_checkpoints = False
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.preprocess_inputs = True
model_args.num_return_sequences = 1
model_args.n_gpu=4
model_args.is_model_parallel = True
model = T5Model("mt5", "google/mt5-base", args=model_args)
model.train_model(train_df, eval_data=eval_df)
```
## Expected behavior
A model should be generated. | 04-19-2021 12:20:10 | 04-19-2021 12:20:10 | This code seems to be using `simpletransformers`, sadly we won't dive into that. You could use the `run_translation.py` script [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq) which supports multi-gpu training. See this [doc](https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision) for distributed training using `Trainer`.
And with `Trainer` you could also leverage `deepspeed` to get more efficiency, see this [doc](https://huggingface.co/transformers/main_classes/trainer.html#trainer-integrations) for `deepspeed` integration <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,314 | closed | Removed `max_length` from being mandatory within `generate`. | # What does this PR do?
- Moving on to fully using `StoppingCriteria` for `greedy` and `sample`
modes.
- `max_length` still used for `beam_search` and `group_beam_search`
(Follow up PR)
- Fixes a bug with MaxLengthStoppingCriteria (we should stop as soon as
we hit the max_length, the comparison needs to be or equal, that affects
the tests).
- Added options to use `logits_processor` and `stopping_criteria`
directly within `generate` function (so some users can define their own
`logits_processor` and `stopping_criteria`; see the sketch after this list).
- Modified the backward compat tests to make sure we issue a warning.
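A rough sketch of the intended usage (the model name and criterion value are illustrative only):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers.generation_stopping_criteria import MaxLengthCriteria, StoppingCriteriaList

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids
# instead of relying on max_length, pass an explicit stopping criterion to generate()
output_ids = model.generate(input_ids, stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=20)]))
print(tokenizer.decode(output_ids[0]))
```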
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 04-19-2021 09:57:19 | 04-19-2021 09:57:19 | All tests pass, the max_length was actually a bug hidden by `while cur_len < max_length` that was still in there.
The bart tests (at least) caught it automatically and enabled me to change it to the correct comparison ! <|||||>> All tests pass, the max_length was actually a bug hidden by `while cur_len < max_length` that was still in there.
> The bart tests (at least) caught it automatically and enabled me to change it to the correct comparison !
Great! Feel free to merge then! |
transformers | 11,313 | open | [WIP] Add PiT | # What does this PR do?
Adds `PoolingTransformer` for image classification. https://github.com/naver-ai/pit
Todos:
- [ ] Fix tests
- [ ] Add doc
- [ ] port and push all `PiT` checkpoints | 04-19-2021 09:56:10 | 04-19-2021 09:56:10 | |
transformers | 11,312 | closed | The output of IBERT is float32. Am I doing wrong? | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.8.0-49-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: DDP (pytorch-lightning)
### Who can help
@LysandreJik, @patil-suraj, @patrickvonplaten
## Information
I'm trying IBert. The first output of the model is `float32` so I'm curious why it happens. I set `quant_mode=True`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I'm using MSMARCO (IR dataset)
## To reproduce
Steps to reproduce the behavior:
1. Initialize a model with the command `AutoModel.from_pretrained('kssteven/ibert-roberta-base', quant_mode=True, add_pooling_layer=False)`
2. Check the `dtype` of the model output.
## Expected behavior
The output `dtype` should be `int8`, but I see `float32`
| 04-19-2021 09:40:39 | 04-19-2021 09:40:39 | The I-BERT framework allows for easy fine-tuning in PyTorch to find the optimal parameters. Once those are found, the model can be deployed in a setup using int-8 capable operations such as TensorRT.
@kssteven418 will be able to explain better than me.<|||||>Yes, the current I-BERT implementation (both in HF and Fairseq in my personal repo) only searches for the optimal int8 parameters through quantization-aware training and leaves out the actual model deployment. That is to say, it simulates int8 inference using floating-point representations and operations. One reason we are not supporting int8 execution is that, PyTorch only supports int8 inference via its own quantization APIs. Therefore, the optimal parameters found in the I-BERT framework must be then exported to different frameworks that can support int8 deployment (TensorRT, TVM, etc. are some popular frameworks). We haven't yet open-sourced the code for model deployment.<|||||>@kssteven418 Thanks for the answer. What I want is the final output in `int8`. Then will it be ok to just cast the output into `int8` even with pytorch? The whole execution doesn't have to be run in integer mode.<|||||>Yes, if you look at the quantization modules, e.g., QuantLinear, there are additional attributes such as `weight_integer` that represent the int8 model parameters in the torch.float. You can cast those numbers to torch.int8, but just make sure that you don't round down the numbers - they must be rounded.<|||||>> Yes, if you look at the quantization modules, e.g., QuantLinear, there are additional attributes such as `weight_integer` that represent the int8 model parameters in the torch.float. You can cast those numbers to torch.int8, but just make sure that you don't round down the numbers - they must be rounded.
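A minimal sketch of that casting step (the variable `layer` stands for one of I-BERT's `QuantLinear` modules; the names are mine, not an official API):
```python
import torch

# round first, then cast; truncating instead of rounding would bias the quantized weights
weight_int8 = torch.round(layer.weight_integer).to(torch.int8)
```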
Thank you very much. This will be the last question I expect :)
The following is how I defined my network and how the input flows through it. Briefly, I want the output of `IBertModel` to pass through one more `QuantLinear` layer to get the final representation, like this.
1. input -> `IBertModel`
1. `IBertModel` -> `QuantAct`
1. `QuantAct` -> `QuantLinear` -> `final_representation`
But the `QuantLinear` module outputs two tensors: `quant_x` and `scaling_factor`.
**Do I have to deal with both, or can I just use `quant_x` as the final representation?**
```python
return (
    F.linear(x_int, weight=self.weight_integer, bias=self.bias_integer) * bias_scaling_factor,  # use only this?
    bias_scaling_factor,  # or do I also need this?
)
```
This is my code.
```python
# Define network
self.pre_linear = QuantAct(self.act_bit, quant_mode=self.quant_mode)
self.linear = QuantLinear(
    self.input_size,
    self.n,
    quant_mode=self.quant_mode,
)
...
# Generate output
x, pre_scaling_factor = self.pre_linear(x)
x, scaling_factor = self.linear(x, pre_scaling_factor)
# x = x * scaling_factor?
```<|||||>As you have also noticed, all the quant modules including QuantLinear return two tensors: `quant_x` and `scaling_factor`. Here, `quant_x` / `scaling_factor` represents the quantized (integer) value for the activation - in other words, `quant_x` is the dequantized value. Therefore, you do not have to multiply it with the `scaling_factor`. <|||||>Hi!
I would like to deploy IBERT on a framework like TensorRT. I am a complete beginner in that field and I was wondering if someone could give me some tips on the main steps of how to quantize IBERT?
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,311 | closed | Update hf_argparser.py | Dictionary type should be annotated with `Dict`.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes type annotation for dict.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-19-2021 08:32:28 | 04-19-2021 08:32:28 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,310 | closed | [Benchmark] GPT2LMHeadModel (gpt2-medium) forward pass inference became 9% slower compared to 2.8.0 release | # 🖥 Benchmarking `GPT2LMHeadModel`
## Benchmark
GPT2LMHeadModel model call (and model.generate() too)
## Set-up
gpu: gtx 1080
pytorch 1.4.0
transformers 2.8.0, 3.5.1, 4.5.1 releases and latest master branch
Code to reproduce
```python
import timeit
import numpy as np
import torch
from transformers import __version__ as trans_version
from transformers import (
GPT2LMHeadModel,
)
print("transformers:", trans_version)
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
print(model.__class__)
model.to("cuda")
model.eval()
rounding = 3
timed_result = timeit.repeat(stmt="""model.generate(input_ids=inp_t,
max_length=1024,
min_length=1024,
do_sample=False,
early_stopping=False, pad_token_id=50256, eos_token_id=50256)""",
setup="""inp = np.random.randint(low=1, high=50255, size=1014);inp_t = torch.LongTensor(inp).unsqueeze(0).to("cuda")""",
repeat=30, number=1, globals=globals())
timed_model_result = timeit.repeat(stmt="""with torch.no_grad():
    model(input_ids=inp_t)""",
setup="""inp = np.random.randint(low=1, high=50255, size=1024);inp_t = torch.LongTensor(inp).unsqueeze(0).to("cuda")""",
repeat=30, number=10, globals=globals())
print('GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std):',
str(np.round(np.mean(timed_result), rounding)) + '±' + str(np.round(3 * np.std(timed_result), rounding)))
print('GPT2LMmedium model call, 1024 input 10 times (mean ± 3std):',
str(np.round(np.mean(timed_model_result), rounding)) + '±' + str(np.round(3 * np.std(timed_model_result), rounding)))
```
## Results
While the `model.generate()` code has improved and now runs faster, the model forward pass used in a direct model call became 9% slower.
transformers: **2.8.0**
<class 'transformers.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.557±0.037
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): **1.821**±0.017
transformers: **3.5.1**
<class 'transformers.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.37±0.003
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): 1.849±0.012
transformers: **4.5.1**
<class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.36±0.003
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): 1.823±0.013
transformers: **4.6.0.dev0**
<class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.367±0.004
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): **1.991**±0.013
| 04-19-2021 07:54:34 | 04-19-2021 07:54:34 | @patil-suraj Can you please check if this speed decrease of GPT2LMHeadModel model call is not caused by your PR #11225?<|||||>Hi @LSinev
Thank you for posting the detailed issue. I will take a look.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,309 | closed | Vit deit fixes | # What does this PR do?
* Some small documentation improvements of ViT + DeiT.
* Adds a cats image to the `fixtures/test_samples` folder, which is used in the integration tests of both ViT and DeiT.
* Adds a community notebook, illustrating how to fine-tune the Vision Transformer on CIFAR-10.
(there's something weird going on with the .gitignore within the test_samples folder , see files changed). | 04-19-2021 07:41:45 | 04-19-2021 07:41:45 | The notebook looks amazing!
Out of curiosity, have you tried using `Trainer` to fine-tune `ViT`?<|||||>Added a notebook that uses the Trainer. Includes a nice confusion matrix at the end :) |
transformers | 11,308 | closed | RAG with RAY implementation: Ray workers memory slowly increase over time. | I fine-tune the RAG with RAY workers from 50 000 steps. When I checked the MEM% with the top command, I can see the memory consumption keep growing slowly. Usually, it should use around 20GB. After 50000 steps it raises up to 24 GB.
This could eventually crash the system with an OOM error. I did a background check and found that the Redis server usually keeps increasing its memory consumption.
So is it ok to set a value for **object_store_memory**?
I get something similar to this... (found this from the [issue](https://github.com/ray-project/ray/issues/10431) )

@lhoestq @amogkam | 04-19-2021 06:53:13 | 04-19-2021 06:53:13 | I think I solved the memory leakage issue. In my case, I simply wanted to update the index parameters in **self.retriever object**. So I used **def set_index function**. But I observe for some reason ray works completely cannot flush out the old index and its related objects.
So now when I want to update it, I delete the self.retriever object and re-initialize it.
```
class RayRetriever:
    def __init__(self):
        self.initialized = False

    def create_rag_retriever(self, config, question_encoder_tokenizer, ctx_encoder_tokenizer, generator_tokenizer, index):
        if not self.initialized:
            self.retriever = RagRetriever(
                config,
                question_encoder_tokenizer=question_encoder_tokenizer,
                ctx_encoder_tokenizer=ctx_encoder_tokenizer,
                generator_tokenizer=generator_tokenizer,
                index=index,
                init_retrieval=False,
            )
            self.initialized = True

    def init_retrieval(self):
        self.retriever.index.init_index()

    def set_index(self, index):
        self.retriever.index = index  # with this new index, all the parameters in the HFIndex class get updated

    def clear_object(self):  # we can call this first to delete everything, then call create_rag_retriever again
        del self.retriever
        self.initialized = False

    def retrieve(self, question_hidden_states, n_docs):
        doc_ids, retrieved_doc_embeds = self.retriever._main_retrieve(question_hidden_states, n_docs)
        doc_dicts = self.retriever.index.get_doc_dicts(doc_ids)
        return doc_ids, retrieved_doc_embeds, doc_dicts
```
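For reference, this is roughly how the workaround above can be driven (a sketch only: `retrieval_workers` is assumed to be a list of Ray actor handles wrapping `RayRetriever`, and the config, tokenizer and index variables are whatever was used to build the retriever initially):
```python
import ray

# drop the old retriever object on every worker, then rebuild it with the new index
ray.get([worker.clear_object.remote() for worker in retrieval_workers])
ray.get([
    worker.create_rag_retriever.remote(
        config, question_encoder_tokenizer, ctx_encoder_tokenizer, generator_tokenizer, new_index
    )
    for worker in retrieval_workers
])
ray.get([worker.init_retrieval.remote() for worker in retrieval_workers])
```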
transformers | 11,307 | open | Getting time offsets of beginning and end of each word in Wav2Vec2 | # 🚀 Feature request
Hello, I was thinking it would be of great help if I could get the time offsets of the start and end of each word.
## Motivation
I was going through the Google Speech-to-Text documentation and found this [feature](https://cloud.google.com/speech-to-text/docs/async-time-offsets), and thought it would be really amazing if I could have something similar here.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I can really use some help in this task and would love to implement something similar.
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 04-19-2021 03:57:57 | 04-19-2021 03:57:57 | @patrickvonplaten @patil-suraj @sgugger <|||||>This sounds like a nice feature, but I sadly won't have time to work on it - let's see if someone in the community could be interested :-)<|||||>There is something like this which may help : https://github.com/lumaku/espnet/blob/espnet2_ctc_segmentation/espnet2/bin/asr_align.py
I need some help in integrating it to wav2vec2 in hugging face. <|||||>@theainerd are you working on this feature?<|||||>I would also really like to see this feature.
@theainerd I'd be happy to help in any way I can although I'm not too familiar with the Wav2Vec transformer.
@patrickvonplaten do you think you could write out a brief outline of what you think the steps required would be?<|||||>Hi there!
I'm very very new to collaborating on open-source projects as well as on using huggingface/transformers in general therefore I'm not confident I can come up with a solution for this issue -- however I did some poking around with tutorials surrounding Wav2Vec2 and I was thinking of ways on how this might be implemented:
It seems like the Wav2Vec2FeatureExtractor does most of the heavy lifting of converting the raw audio array to suitable input values
-> These input values are then fed into the model to obtain the logits (the output dimension is observed to shrink considerably here)
-> After applying argmax to obtain the IDs, these IDs are then fed back into the Wav2Vec2CTCTokenizer decode/batch_decode function to obtain the transcription.
Perhaps information about the sampling rate should be stored within the Tokenizer class, such that during decode it's able to make use of this information to determine the timestamp? Or it might be possible to store it within the Wav2Vec2Processor class and have some wrapper functions take care of determining the timestamp and including it during the decode step.
A relation of how the input values' dimensions are mapped to the output logits' dimensions would be needed for this, which I don't have the expertise to figure out at the moment.
CC:
@theainerd
@MerryOscar
@patrickvonplaten
sources I've been referring to --
https://www.kdnuggets.com/2021/03/speech-text-wav2vec.html (I realise this is outdated with the old tokenizer class, which seems to perform feature extraction as well)
https://huggingface.co/blog/fine-tune-wav2vec2-english
<|||||>+1 on this, i'd really appreciate timestamped words as well. the datasets like timit, etc. seem to have this info, but i guess that's part of their test set, not an output from the model itself. <|||||>Here's what i've found so far:
if speech length is - 480,000
input_values lenth - 480,000
logits length - 1499
this was for a 30s audio file.
`
model = Wav2Vec2ForCTC
processor = Wav2Vec2Processor
input_values = processor(speech, return_tensors="pt").input_values
logits = model(input_values).logits
`<|||||>> Here's what i've found so far:
> if speech length is - 480,000
> input_values lenth - 480,000
> logits length - 1499
>
> this was for a 30s audio file.
> `
> model = Wav2Vec2ForCTC
> processor = Wav2Vec2Processor
>
> ```
> input_values = processor(speech, return_tensors="pt").input_values
> logits = model(input_values).logits
> ```
>
> `
Thanks for investigating on this -- while I think it may be possible to just use the ratio and sampling rate to derive the timestamp, what I'm afraid of is that this ratio might just be a "magic number" and might differ if there are variations in the configuration of the Wav2Vec2 model
Current ratio from input_values size to logits seem to be around **320**
e.g.:
Does the ratio change if the [hyperparameters](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2Config) of the model are changed?
Is this ratio constant for varying size of audio? (Experiment with different size WAV clips and check the ratio)
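One way to sanity-check this (a minimal sketch, assuming the stock convolutional feature extractor and 16 kHz audio): the ratio should simply be the product of the strides in `conv_stride`, so it depends on the model config but not on the audio length (up to rounding at the edges).

```python
# Minimal sketch (assumptions: default conv feature extractor, 16 kHz sampling rate).
import numpy as np
from transformers import Wav2Vec2Config

config = Wav2Vec2Config.from_pretrained("facebook/wav2vec2-base-960h")
downsample = int(np.prod(config.conv_stride))  # 5*2*2*2*2*2*2 = 320 for the default config
seconds_per_frame = downsample / 16_000        # ~0.02 s of audio per logit frame
print(downsample, seconds_per_frame)
```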
<|||||>> > Here's what i've found so far:
> > if speech length is - 480,000
> > input_values lenth - 480,000
> > logits length - 1499
> > this was for a 30s audio file.
> > `
> > model = Wav2Vec2ForCTC
> > processor = Wav2Vec2Processor
> > ```
> > input_values = processor(speech, return_tensors="pt").input_values
> > logits = model(input_values).logits
> > ```
> >
> >
> > `
>
> Thanks for investigating on this -- while I think it may be possible to just use the ratio and sampling rate to derive the timestamp, what I'm afraid of is that this ratio might just be a "magic number" and might differ if there are variations in the configuration of the Wav2Vec2 model
>
> Current ratio from input_values size to logits seem to be around **320**
>
> e.g.:
> Does the ratio change if the [hyperparameters](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2Config) of the model are changed?
>
> Is this ratio constant for varying size of audio? (Experiment with different size WAV clips and check the ratio)
Maybe @patrickvonplaten could shed some light on whether we are going in the right direction about this (if it's not too much trouble) 😓 🙏 <|||||>> > Here's what i've found so far:
> > if speech length is - 480,000
> > input_values lenth - 480,000
> > logits length - 1499
> > this was for a 30s audio file.
> > `
> > model = Wav2Vec2ForCTC
> > processor = Wav2Vec2Processor
> > ```
> > input_values = processor(speech, return_tensors="pt").input_values
> > logits = model(input_values).logits
> > ```
> >
> >
> > `
>
> Thanks for investigating on this -- while I think it may be possible to just use the ratio and sampling rate to derive the timestamp, what I'm afraid of is that this ratio might just be a "magic number" and might differ if there are variations in the configuration of the Wav2Vec2 model
>
> Current ratio from input_values size to logits seem to be around **320**
>
> e.g.:
> Does the ratio change if the [hyperparameters](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2Config) of the model are changed?
>
> Is this ratio constant for varying size of audio? (Experiment with different size WAV clips and check the ratio)
hey @yushao2, what ratio are you referring to here ? sorry, not too familiar with audio processing<|||||>@patrickvonplaten @yushao2 following up on this<|||||>> @patrickvonplaten @yushao2 following up on this
Hi there! Sorry for not being responsive here.
The ratio here refers to the number you get when you divide the size of ``input_values`` to the size of ``logits``
in this case, you mentioned
>input_values lenth - 480,000
>logits length - 1499
the ratio would be 480000/1499 which is approximately 320<|||||>Hello all,
There is something I have found which may serve as a good starting point. Basically, this returns the time offsets and the textual data as well.
https://github.com/lumaku/ctc-segmentation
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from ctc_segmentation import ctc_segmentation
from ctc_segmentation import CtcSegmentationParameters
from ctc_segmentation import determine_utterance_segments
from ctc_segmentation import prepare_text
# Get the Wav2Vec2 model and the predicted text
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
input_values = processor(test_dataset["speech"][0], return_tensors="pt").input_values # Batch size 1
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0])
softmax = torch.nn.Softmax(dim = -1)
# apply configuration
config = CtcSegmentationParameters()
with torch.no_grad():
# Apply ctc layer to obtain log character probabilities
lpz = softmax(logits)[0].cpu().numpy()
char_dict = {"न": 0, "च": 1, "थ": 2, "ी": 3, "ऐ": 4, "ृ": 5, "ध": 6, "य": 7, "ह": 8, "ऊ": 9, "म": 10, "ण": 11, "ै": 13, "ौ": 14, "ा": 15, "ल": 16, "त": 17, "इ": 18, "ढ़": 19, "ष": 20, "भ": 21, "ग़": 22, "ख": 23, "ड़": 24, "ए": 25, "व": 26, "ु": 27, "ओ": 28, "र": 29, "श": 30, "औ": 31, "ट": 32, "आ": 33, "ो": 34, "ढ": 35, "झ": 36, "ग": 37, "ज़": 38, "अ": 39, "े": 40, "प": 41, "घ": 42, "द": 43, "ई": 44, "फ़": 45, "ब": 46, "ड": 47, "ँ": 48, "छ": 49, "ू": 50, "फ": 51, "ि": 52, "स": 53, "्": 54, "क": 55, "उ": 56, "ठ": 57, "ं": 58, "़": 59, "ज": 60, "क़": 61, "|": 12, "[UNK]": 62, "[PAD]": 63}
char_list = list(char_dict.keys())
# Prepare the text for aligning
ground_truth_mat, utt_begin_indices = prepare_text(config, transcription,char_list)
# Align using CTC segmentation
timings, char_probs, state_list = ctc_segmentation(config, lpz, ground_truth_mat)
# Obtain list of utterances with time intervals and confidence score
segments = determine_utterance_segments(config, utt_begin_indices, char_probs, timings, transcription)
# Sample Output : 0.26 1.73 -0.0154 THE SALE OF THE HOTELS * An example picked up from the ctc_segmentation
```
Now I have the time offsets, but how do I get them for each and every word rather than for the segments? _Please don't take this as an absolute solution_, as I am not sure this is a good direction to go, but still, something is better than nothing. Please share your thoughts.
<|||||>Hi everyone, here is some sample code which I have created to get the word-level start and end timestamps.
It's surely a bit hacky, and I could imagine there being some special cases where it might break, but for the cases I have tried it with it worked great.
```python
from itertools import groupby
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
##############
# load model & audio and run audio through model
##############
model_name = 'facebook/wav2vec2-large-960h-lv60-self'
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name).cuda()
audio_filepath = ''
speech, sample_rate = sf.read(audio_filepath)
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.cuda()
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()
##############
# this is where the logic starts to get the start and end timestamp for each word
##############
words = [w for w in transcription.split(' ') if len(w) > 0]
predicted_ids = predicted_ids[0].tolist()
duration_sec = input_values.shape[1] / sample_rate
ids_w_time = [(i / len(predicted_ids) * duration_sec, _id) for i, _id in enumerate(predicted_ids)]
# remove entries which are just "padding" (i.e. no characers are recognized)
ids_w_time = [i for i in ids_w_time if i[1] != processor.tokenizer.pad_token_id]
# now split the ids into groups of ids where each group represents a word
split_ids_w_time = [list(group) for k, group
in groupby(ids_w_time, lambda x: x[1] == processor.tokenizer.word_delimiter_token_id)
if not k]
assert len(split_ids_w_time) == len(words) # make sure that there are the same number of id-groups as words. Otherwise something is wrong
word_start_times = []
word_end_times = []
for cur_ids_w_time, cur_word in zip(split_ids_w_time, words):
_times = [_time for _time, _id in cur_ids_w_time]
word_start_times.append(min(_times))
word_end_times.append(max(_times))
words, word_start_times, word_end_times
```<|||||>@KB-g
Congrats!
Is there a chance to also extract the "per word probability"?<|||||>@KB-g
The `assert len() == len()` triggers.
This audio: [assert.zip](https://github.com/huggingface/transformers/files/6721402/assert.zip)
Testcase:
````python
from itertools import groupby
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
model_name = 'DewiBrynJones/wav2vec2-large-xlsr-welsh'
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
audio_filepath = '/tmp/assert.wav'
speech, sample_rate = sf.read(audio_filepath)
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()
##############
# this is where the logic starts to get the start and end timestamp for each word
##############
words = [w for w in transcription.split(' ') if len(w) > 0]
predicted_ids = predicted_ids[0].tolist()
duration_sec = input_values.shape[1] / sample_rate
ids_w_time = [(i / len(predicted_ids) * duration_sec, _id) for i, _id in enumerate(predicted_ids)]
ids_w_time = [i for i in ids_w_time if i[1] != processor.tokenizer.pad_token_id]
split_ids_w_time = [list(group) for k, group
in groupby(ids_w_time, lambda x: x[1] == processor.tokenizer.word_delimiter_token_id)
if not k]
# make sure that there are the same number of id-groups as words. Otherwise something is wrong
assert len(split_ids_w_time) == len(words), (len(split_ids_w_time), len(words))
````<|||||>> @KB-g Congrats! Is there a chance to also extract the "per word probability"?
Hey @KB-g
Any success on this?<|||||>Hi @doublex , @abhirooptalasila,
I haven't tried to get the per-word probability. If you come up with a solution, it would be great if you could let me know. I'd also be interested in a solution :)<|||||>Hi @KB-g, @doublex and @abhirooptalasila,
maybe this [tutorial](https://pytorch.org/audio/main/tutorials/forced_alignment_tutorial.html) helps to find a way to calculate a "per-word probability". In the function `merge_words`, the author calculates scores for each word based on token probabilities and their durations. <|||||>We need to document the time stamp retrieval a bit better here I think<|||||>@KB-g Thanks for the code snippet, really useful. Made a small addition (no_grad) for inference, would help people facing OOM error(s):
```python
from itertools import groupby
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
##############
# load model & audio and run audio through model
##############
model_name = 'facebook/wav2vec2-large-960h-lv60-self'
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name).cuda()
audio_filepath = ''
speech, sample_rate = sf.read(audio_filepath)
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.cuda()
with torch.no_grad():
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()
##############
# this is where the logic starts to get the start and end timestamp for each word
##############
words = [w for w in transcription.split(' ') if len(w) > 0]
predicted_ids = predicted_ids[0].tolist()
duration_sec = input_values.shape[1] / sample_rate
ids_w_time = [(i / len(predicted_ids) * duration_sec, _id) for i, _id in enumerate(predicted_ids)]
# remove entries which are just "padding" (i.e. no characers are recognized)
ids_w_time = [i for i in ids_w_time if i[1] != processor.tokenizer.pad_token_id]
# now split the ids into groups of ids where each group represents a word
split_ids_w_time = [list(group) for k, group
in groupby(ids_w_time, lambda x: x[1] == processor.tokenizer.word_delimiter_token_id)
if not k]
assert len(split_ids_w_time) == len(words) # make sure that there are the same number of id-groups as words. Otherwise something is wrong
word_start_times = []
word_end_times = []
for cur_ids_w_time, cur_word in zip(split_ids_w_time, words):
_times = [_time for _time, _id in cur_ids_w_time]
word_start_times.append(min(_times))
word_end_times.append(max(_times))
words, word_start_times, word_end_times
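
# --- Illustrative extra (not part of the original snippet): a rough per-word
# "probability", taken here as the mean softmax probability of the frames that
# make up each word. Untested sketch; the time -> frame-index inversion below is
# an assumption of this illustration.
probs = torch.softmax(logits, dim=-1)[0]            # (num_frames, vocab_size)
frame_duration = duration_sec / len(predicted_ids)  # seconds per CTC frame
word_scores = []
for cur_ids_w_time in split_ids_w_time:
    frame_probs = []
    for _time, _id in cur_ids_w_time:
        frame_idx = int(round(_time / frame_duration))  # invert the time mapping above
        frame_probs.append(probs[frame_idx, _id].item())
    word_scores.append(sum(frame_probs) / len(frame_probs))

words, word_start_times, word_end_times, word_scores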
```<|||||>@Ap1075, thank you for the example you provided above. I'm having a hard time figuring out where/how to pass in transcribed text so it can be aligned with the audio. Is passing in pre-transcribed text possible, or am I misunderstanding how it works?<|||||>I'm trying to get word timing for karaoke I have the lyrics... Would this be possible? 🤔 |
transformers | 11,306 | closed | Wav2Vec2 Pretraining | # What does this PR do?
Fixes #11246
This adds `Wav2Vec2ForPreTraining`, which allows pre-training Wav2Vec 2.0 on unlabeled audio with a self-supervised vector quantization task.
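As a rough illustration of the intended usage (a sketch only; the checkpoint name and the output attribute names here are assumptions, not the finalized API of this PR):

```python
# Hypothetical usage sketch for Wav2Vec2ForPreTraining; checkpoint and output
# attribute names are assumptions, not guaranteed by this PR.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

raw_audio = torch.randn(16_000).numpy()  # 1 s of dummy 16 kHz audio, just for shape checks
inputs = feature_extractor(raw_audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.input_values)

# context-network states projected into the quantizer space, and the quantized targets
print(outputs.projected_states.shape, outputs.projected_quantized_states.shape)
```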
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
## Implementation checklist
- [x] Run a successful forward pass on `Wav2Vec2ForPreTraining` with mostly copy-pasted code from `fairseq`
- [x] Make sure the intermediate and output logit tensors match between `Wav2Vec2ForPreTraining` and `fairseq.models.wav2vec.Wav2Vec2Model`
- [x] Sync with @patrickvonplaten regarding class decomposition, network layers placement and potentially breaking changes
- [x] Run the model with a padded variable-length batch (not just a single sample)
- [x] Run the model in training mode, make sure that the contrastive loss and code perplexity decrease
- [x] Write integration tests to check fairseq's tensor reproducibility
- [x] Write smoke tests for `GumbelVectorQuantizer` and vector sampling
- [x] Refactor copied code (e.g. `GumbelVectorQuantizer` and `sample_negatives`) to follow the code style of the rest of the module
- [x] Add sensible defaults for config variables
- [x] Add docstrings for every module and comments where necessary
- [x] Update model documentation
Bonus round:
- [ ] Finetune the model on a subset of CommonVoice
- [ ] Check that the pooled vectors of audio samples converge into neat clusters as a result of quantization
- [x] Check that Pretraining works with Deepspeed | 04-18-2021 21:44:30 | 04-18-2021 21:44:30 | I will take a deeper look tomorrow and comment here - looks great already :-)<|||||>Looks great to me so far @anton-l - I see no breaking changes, the modularization looks good to me and the parameter naming is fine as well -> don't think `self.quantizer.quantizer` is too awkward => we also have `self.self.self_attention` somewhere in BERT ;-)<|||||>I think an important next step would be to verify that the pretraining works more or less :-) <|||||>Integration tests are now passing. Can be verified by running:
```python
#!/usr/bin/env python3
import datasets
import fairseq
import torch
import soundfile as sf
import sys
from fairseq.criterions.wav2vec_criterion import Wav2VecCriterionConfig, Wav2vecCriterion
from fairseq.tasks.audio_pretraining import AudioPretrainingConfig, AudioPretrainingTask
from transformers import Wav2Vec2ForPreTraining, Wav2Vec2FeatureExtractor
hf_path = str(sys.argv[1])
fairseq_wav2vec2_path = str(sys.argv[2])
model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fairseq_wav2vec2_path])
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(hf_path, do_normalize=False)
hf_model = Wav2Vec2ForPreTraining.from_pretrained(hf_path)
model = model[0]
model.eval()
dummy_speech_data = datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dummy_speech_data = dummy_speech_data.map(map_to_array, remove_columns=["file"])
inputs = feature_extractor(dummy_speech_data[:3]["speech"], return_tensors="pt", padding="longest", return_attention_mask=True)
input_values = inputs.input_values
attention_mask = inputs.attention_mask
audio_cfg = AudioPretrainingConfig(labels="ltr", data="./data")
task = AudioPretrainingTask.setup_task(audio_cfg)
criterion = Wav2vecCriterion(Wav2VecCriterionConfig(infonce=True, log_keys=["prob_perplexity", "code_perplexity", "temp"], loss_weights=[0.1, 10]), task)
sample = {
"net_input": {
"source": input_values,
"padding_mask": attention_mask.ne(1),
},
"id": torch.zeros((1,)),
}
torch.manual_seed(0)
result = model(**sample["net_input"])
torch.manual_seed(0)
hf_result = hf_model(input_values, attention_mask=attention_mask)
assert torch.allclose(hf_result.logits, result['x'], atol=1e-3), "wrong logits"
loss, sample_size, log = criterion(model, sample)
print("Loss diff %", 100 * (loss.detach().item() - hf_result.loss.detach().item()) / hf_result.loss.detach())
```
and using [this](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) as the fairseq checkpoint and [this](https://huggingface.co/patrickvonplaten/wav2vec2-base) model as the HF model. |
transformers | 11,305 | closed | invalid multinomial distribution (with replacement=False, not enough non-negative category to sample) | When using "sshleifer/distilbart-cnn-6-6" & do_sample the below code errors out, meanwhile the same code works for "sshleifer/distilbart-xsum-6-6". Am I missing something really obvious here? Thanks for any help!
Transformers: 4.5.1
````
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer
)
model_name = "sshleifer/distilbart-cnn-6-6"
#model_name = "sshleifer/distilbart-xsum-6-6"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "New York City (NYC), often simply called New York, is the most populous city in the United States"
input_ids = tokenizer.encode(text, return_tensors='pt')
sample_outputs = model.generate(input_ids,
num_beams=3,
do_sample=True
)
sample_outputs
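# (Illustrative aside, not part of the original report: `num_beams` together with
# `do_sample=True` triggers beam-sample generation; plain sampling such as the
# commented-out call below is a possible alternative to try, though whether it
# avoids this particular error is untested here.)
# sample_outputs = model.generate(input_ids, do_sample=True, top_k=50, top_p=0.95)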
```` | 04-18-2021 10:17:14 | 04-18-2021 10:17:14 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>i have the same exact problem when i use `do_sample=True` can you re-open this issue?<|||||>Maybe @gante has an idea!<|||||>Hi there @Muennighoff @zeke-john 👋
I've run the script above for both models on `v4.5.1` (and on `v4.22.dev0`) and it works with no problems -- you can see a colab [here](https://colab.research.google.com/drive/1bg7v0mxZbFxJTjj28AriYBeERsAN264E?usp=sharing).
A potential cause for errors may be GPU memory -- generation with `num_beams` is memory intensive. Let me know if you have more details about your problem :) |
transformers | 11,304 | closed | env about run longformer model downloaded from https://github.com/allenai/longformer | 1. Just used `conda install transformers`; the transformers version is 4.4.2, and the code can't run.
1.1. error:
```bash
can't import pipline
# then use tokenizer and model to get my feature vector,ERROR:
RuntimeError: Error(s) in loading state_dict for BartModel:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from
checkpoint, the shape in current model is torch.Size([1026, 768]).
```
2. Changed to `pip install transformers`; the transformers version is 3.3.0, and the code can't run.
2.1. error:
```bash
can't import name tokenizer
# then I found in an issue to use "pip install tokenizer"; it already existed, and the tokenizer version was 0.8.0rc2. I found another env that can work where the tokenizer version is 0.5.0, so I used pip to change its version from 0.8.0rc2 to 0.5.0. ERROR:
pip's dependency ..... which is incompatible
# however, the tokenizer version in the only env which can run is 0.5.0.
```
3. So the only env that worked is:
```bash
conda create -n dnabert python=3.6
# pytorch-transformers
pip install pytorch-transformers
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
# dnabert
git clone https://github.com/jerryji1993/DNABERT
cd DNABERT
python3 -m pip install --editable .
cd examples
python3 -m pip install -r requirements.txt
# allenai
conda install cudatoolkit=10.0
pip install git+https://github.com/allenai/longformer.git
# huggingface
pip install transformers
```
# What did I do wrong to end up in this situation? | 04-18-2021 08:44:10 | 04-18-2021 08:44:10 | I created an env that cannot be copied<|||||>Please do not create duplicates.
Duplicate of #11301 <|||||>sorry, i close it now |
transformers | 11,303 | closed | small bug in RAG model | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.27
- Python version: 3.8.3
- PyTorch version (GPU?): 1.8.1+cu111 (True)
### Who can help
@ola13
Models:
- rag
## Information
Model I am using (Bert, XLNet ...): RAG
The problem arises when using:
* [ ] the official example scripts: seems to be a bug in https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/modeling_rag.py#L306
To pass my own question encoder, the argument needs to be `question_encoder_model`, but the suffix ("model") gets trimmed because the length is taken from `question_question_encoder_`.
```python
kwargs_question_encoder = {
argument[len("question_question_encoder_") :]: value
for argument, value in kwargs.items()
if argument.startswith("question_encoder_")
}
```
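For clarity, the trim should presumably use the same prefix that is checked; a corrected sketch of the same comprehension would be:

```python
# presumable fix: strip the same prefix that is tested in the condition
kwargs_question_encoder = {
    argument[len("question_encoder_") :]: value
    for argument, value in kwargs.items()
    if argument.startswith("question_encoder_")
}
```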
## To reproduce
Steps to reproduce the behavior:
```python
question_encoder = AutoModel.from_pretrained("any model")
rag_model = model_class.from_pretrained_question_encoder_generator(
question_encoder_model=question_encoder, generator_pretrained_model_name_or_path=generator_name_or_path, config=rag_config
)
```
## Expected behavior
Be able to pass a question encoder **model**, and not just a config, to the RAG model.
| 04-18-2021 06:21:54 | 04-18-2021 06:21:54 | This indeed looks like a typo! Thanks for your issue :-)
The kwargs should not be trimmed by `"question_question_encoder"`, but by `"question_encoder"`.
Would you like to open a PR to fix it? Otherwise I can do it as well :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unping |
transformers | 11,302 | closed | Problems with webbased editing of model cards | When I open a model in the Hugging Face model repository - like here: https://huggingface.co/german-nlp-group/electra-base-german-uncased
And then click "Edit model card", the text in the web-based editor contains `\r` characters. When the web-based editor is then used to save the model card, these characters are saved and shown.
See screenshot:

This is a bug. | 04-18-2021 05:54:39 | 04-18-2021 05:54:39 | The problem only seems to happen on some models. This model here works ok: https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer<|||||>Tagging @Pierrci for visibility<|||||>Hi @PhilipMay, thank you for reporting this, we just deployed a fix for that, let us know if you still encounter the problem, otherwise feel free to close the issue :)<|||||>Works now. Thanks.
transformers | 11,301 | closed | Longformer model with weight(model.encoder.embed_positions.weight) error | ```bash
RuntimeError: Error(s) in loading state_dict for BartModel:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from checkpoint, the shape in current model is torch.Size([1026, 768]).
```
I use a Longformer model called longformer-encdec-base-16384, which is downloaded from https://github.com/allenai/longformer, and use huggingface to load the model. When the transformers version is 3.1.0 the code runs, but when it is 4.4.2 the error above happens.
Meanwhile, when I use the model to process pairs of sentences, I found that the returned token_type_ids values are just zeros, without any ones. However, in the model's special_tokens_map.json, cls_token and sep_token are defined.
Finally, I sincerely hope you will reply to me soon. Thanks! | 04-18-2021 01:56:52 | 04-18-2021 01:56:52 | What code are you running that leads to that error?<|||||># God, someone finally replied to me, thanks!
## code
```python
from transformers import AutoModel, AutoTokenizer, pipeline
import torch
model_name = 'pre-model/' + 'longformer-encdec-base-16384'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
classifier = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
# encoded_inputs = tokenizer(["ATGCATGCNACT"], ["ATGCATGCNACT"], return_token_type_ids=True, return_tensors='pt')
encoded_inputs = tokenizer(["ATGCATGCNACT", "ATGCATG", "ACTGGTCATGCAC"], return_tensors='pt',
padding=True)
print(encoded_inputs)
# feature = model(input_ids=encoded_inputs['input_ids'], attention_mask=encoded_inputs['attention_mask'],
# return_netsors='pt')
feature = model(**encoded_inputs,
return_netsors='pt')
print(feature[0])
print(type(feature[0]))
# feature = torch.as_tensor(feature)
# print(feature.shape)
print("***" * 48)
feature = classifier(["ATG", "ATGCATG", "ACTGGTCATGCAC"])
print(type(feature))
feature = torch.as_tensor(feature)
print(feature)
print(feature.shape)
print("***" * 48)
```
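(Side note, untested suggestion: in transformers >= 4.2 the longformer-encdec architecture is available as LED, so loading an equivalent checkpoint through the LED classes (rather than letting `AutoModel` fall back to `BartModel`) may avoid the position-embedding size mismatch. The hub checkpoint below is assumed to correspond to the allenai release.)

```python
# Untested sketch: load the LED port of the longformer-encdec checkpoint instead.
from transformers import LEDModel, LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDModel.from_pretrained("allenai/led-base-16384")
```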
## env info
### can work: env0
```bash
# Name Version Build Channel
_libgcc_mutex 0.1 main defaults
absl-py 0.12.0 pypi_0 pypi
astunparse 1.6.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
biopython 1.78 pypi_0 pypi
blas 1.0 mkl defaults
boto3 1.17.48 pypi_0 pypi
botocore 1.20.48 pypi_0 pypi
brotlipy 0.7.0 py36h27cfd23_1003 defaults
ca-certificates 2021.1.19 h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cachetools 4.2.1 pypi_0 pypi
certifi 2020.12.5 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cffi 1.14.5 py36h261ae71_0 defaults
chardet 4.0.0 py36h06a4308_1003 defaults
click 7.1.2 pyhd3eb1b0_0 defaults
cryptography 3.4.7 py36hd23ed53_0 defaults
cudatoolkit 10.0.130 0 defaults
dataclasses 0.8 pyh4f3eec9_6 defaults
dill 0.3.3 pypi_0 pypi
filelock 3.0.12 pyhd3eb1b0_1 defaults
freetype 2.10.4 h5ab3b9f_0 defaults
future 0.18.2 pypi_0 pypi
google-auth 1.28.1 pypi_0 pypi
google-auth-oauthlib 0.4.4 pypi_0 pypi
grpcio 1.37.0 pypi_0 pypi
idna 2.10 pyhd3eb1b0_0 defaults
imageio 2.9.0 pypi_0 pypi
importlib-metadata 3.10.0 pypi_0 pypi
intel-openmp 2020.2 254 defaults
jmespath 0.10.0 pypi_0 pypi
joblib 1.0.1 pyhd3eb1b0_0 defaults
jpeg 9b h024ee3a_2 defaults
lcms2 2.12 h3be6417_0 defaults
ld_impl_linux-64 2.33.1 h53a641e_7 defaults
libffi 3.3 he6710b0_2 defaults
libgcc-ng 9.1.0 hdf63c60_0 defaults
libpng 1.6.37 hbc83047_0 defaults
libprotobuf 3.14.0 h8c45485_0 defaults
libstdcxx-ng 9.1.0 hdf63c60_0 defaults
libtiff 4.1.0 h2733197_1 defaults
longformer 0.1 pypi_0 pypi
lz4-c 1.9.3 h2531618_0 defaults
markdown 3.3.4 pypi_0 pypi
mkl 2020.2 256 defaults
mkl-service 2.3.0 py36he8ac12f_0 defaults
mkl_fft 1.3.0 py36h54f3939_0 defaults
mkl_random 1.1.1 py36h0573a6f_0 defaults
ncurses 6.2 he6710b0_1 defaults
ninja 1.10.2 py36hff7bd54_0 defaults
nlp 0.4.0 pypi_0 pypi
nltk 3.6.1 pypi_0 pypi
numpy 1.19.5 pypi_0 pypi
numpy-base 1.19.2 py36hfa32c7d_0 defaults
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py36_0 defaults
openssl 1.1.1k h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
packaging 20.9 pyhd3eb1b0_0 defaults
pandas 1.1.5 pypi_0 pypi
patsy 0.5.1 pypi_0 pypi
pillow 8.2.0 py36he98fc37_0 defaults
pip 21.0.1 py36h06a4308_0 defaults
protobuf 3.15.8 pypi_0 pypi
pyahocorasick 1.4.2 pypi_0 pypi
pyarrow 3.0.0 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pybedtools 0.8.2 pypi_0 pypi
pycparser 2.20 py_2 defaults
pyopenssl 20.0.1 pyhd3eb1b0_1 defaults
pyparsing 2.4.7 pyhd3eb1b0_0 defaults
pysam 0.16.0.1 pypi_0 pypi
pysocks 1.7.1 py36h06a4308_0 defaults
python 3.6.13 hdb3f193_0 defaults
python-dateutil 2.8.1 pypi_0 pypi
python_abi 3.6 1_cp36m huggingface
pytorch-lightning 0.8.5 pypi_0 pypi
pytorch-transformers 1.2.0 pypi_0 pypi
pytz 2021.1 pypi_0 pypi
pyyaml 5.4.1 pypi_0 pypi
readline 8.1 h27cfd23_0 defaults
regex 2021.4.4 py36h27cfd23_0 defaults
requests 2.25.1 pyhd3eb1b0_0 defaults
requests-oauthlib 1.3.0 pypi_0 pypi
rouge-score 0.0.4 pypi_0 pypi
rsa 4.7.2 pypi_0 pypi
s3transfer 0.3.6 pypi_0 pypi
sacremoses 0.0.44 pypi_0 pypi
scikit-learn 0.24.1 pypi_0 pypi
scipy 1.5.4 pypi_0 pypi
sentencepiece 0.1.91 pypi_0 pypi
seqeval 1.2.2 pypi_0 pypi
setuptools 52.0.0 py36h06a4308_0 defaults
six 1.15.0 py36h06a4308_0 defaults
sqlite 3.35.4 hdfb4753_0 defaults
statsmodels 0.12.2 pypi_0 pypi
tensorboard 2.4.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.0 pypi_0 pypi
tensorboardx 2.2 pypi_0 pypi
test-tube 0.7.5 pypi_0 pypi
threadpoolctl 2.1.0 pypi_0 pypi
tk 8.6.10 hbc83047_0 defaults
tokenizers 0.5.0 pypi_0 pypi
torch 1.6.0 pypi_0 pypi
torchvision 0.5.0 py36_cu100 pytorch
tqdm 4.60.0 pypi_0 pypi
transformers 3.1.0 pypi_0 pypi
typing-extensions 3.7.4.3 pypi_0 pypi
urllib3 1.26.4 pyhd3eb1b0_0 defaults
werkzeug 1.0.1 pypi_0 pypi
wheel 0.36.2 pyhd3eb1b0_0 defaults
xxhash 2.0.2 pypi_0 pypi
xz 5.2.5 h7b6447c_0 defaults
zipp 3.4.1 pypi_0 pypi
zlib 1.2.11 h7b6447c_3 defaults
zstd 1.4.9 haebb681_0 defaults
```
### can not work
#### env1:tf2-pt-keras
```bash
# Name Version Build Channel
_libgcc_mutex 0.1 main https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
_tflow_select 2.1.0 gpu https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
absl-py 0.11.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
aiohttp 3.6.3 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
apex 0.1 pypi_0 pypi
argon2-cffi 20.1.0 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
astor 0.8.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
astunparse 1.6.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
async-timeout 3.0.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
async_generator 1.10 py36h28b3542_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
attrs 20.3.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
backcall 0.2.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
bert-serving-client 1.10.0 pypi_0 pypi
bert-serving-server 1.10.0 pypi_0 pypi
blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
bleach 3.2.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
blinker 1.4 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
brotlipy 0.7.0 py36h27cfd23_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
c-ares 1.16.1 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates 2021.4.13 h06a4308_1 defaults
cachetools 4.1.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
certifi 2020.12.5 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cffi 1.14.3 py36h261ae71_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
chardet 3.0.4 py36h06a4308_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
click 7.1.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cryptography 3.2.1 py36h3c74f83_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudatoolkit 10.1.243 h6bb024c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudnn 7.6.5 cuda10.1_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cupti 10.1.168 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cycler 0.10.0 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
dataclasses 0.7 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
dbus 1.13.18 hb2f20db_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
decorator 4.4.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
defusedxml 0.6.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
entrypoints 0.3 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
expat 2.2.10 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
filelock 3.0.12 pyhd3eb1b0_1 defaults
fontconfig 2.13.0 h9420a91_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
freetype 2.10.4 h5ab3b9f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gast 0.2.2 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
glib 2.66.1 h92f7085_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
google-auth 1.23.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
google-auth-oauthlib 0.4.2 pyhd3eb1b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
google-pasta 0.2.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gputil 1.4.0 pypi_0 pypi
grpcio 1.31.0 py36hf8bcb03_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gst-plugins-base 1.14.0 hbbd80ab_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gstreamer 1.14.0 hb31296c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
h5py 2.10.0 py36hd6299e0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
hdf5 1.10.6 hb1b8bf9_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
icu 58.2 he6710b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
idna 2.10 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
idna_ssl 1.1.0 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
importlib-metadata 2.0.0 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
importlib_metadata 2.0.0 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
intel-openmp 2020.2 254 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ipykernel 5.3.4 py36h5ca1d4c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ipython 7.12.0 py36h5ca1d4c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ipython_genutils 0.2.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ipywidgets 7.6.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jedi 0.10.2 py36_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
jinja2 2.11.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
joblib 0.17.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jpeg 9b h024ee3a_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jsonschema 3.2.0 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jupyter 1.0.0 py36_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jupyter_client 6.1.7 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jupyter_console 6.2.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jupyter_core 4.7.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jupyterlab_pygments 0.1.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
keras 2.3.1 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
keras-applications 1.0.8 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
keras-base 2.3.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
keras-preprocessing 1.1.0 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
kiwisolver 1.3.0 py36h2531618_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
krb5 1.18.2 h173b8e3_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
lcms2 2.11 h396b838_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ld_impl_linux-64 2.33.1 h53a641e_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libcurl 7.71.1 h20c2e04_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libedit 3.1.20191231 h14c3975_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libffi 3.3 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgcc-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgfortran-ng 7.3.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng 1.6.37 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libprotobuf 3.13.0.1 hd408876_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libsodium 1.0.18 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libssh2 1.9.0 h1ba5d50_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libstdcxx-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtiff 4.1.0 h2733197_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libuuid 1.0.3 h1bed415_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libuv 1.40.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libxcb 1.14 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libxml2 2.9.10 hb55368b_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
lz4-c 1.9.2 heb0550a_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
markdown 3.3.3 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
markupsafe 1.1.1 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
matplotlib 3.3.2 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
matplotlib-base 3.3.2 py36h817c723_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mistune 0.8.4 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl 2020.2 256 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl-service 2.3.0 py36he904b0f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_fft 1.2.0 py36h23d657b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_random 1.1.1 py36h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
multidict 4.7.6 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nbclient 0.5.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nbconvert 6.0.7 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nbformat 5.0.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ncurses 6.2 he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nest-asyncio 1.4.3 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ninja 1.10.1 py36hfd86e86_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
notebook 6.1.6 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy 1.19.2 py36h54aff64_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy-base 1.19.2 py36hfa32c7d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
oauthlib 3.1.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
olefile 0.46 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
openssl 1.1.1k h27cfd23_0 defaults
opt_einsum 3.1.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
packaging 20.8 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pandas 1.1.3 py36he6710b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pandoc 2.11 hb0f4dca_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pandocfilters 1.4.3 py36h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pcre 8.44 he6710b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pexpect 4.8.0 pyhd3eb1b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pickleshare 0.7.5 pyhd3eb1b0_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pillow 8.0.1 py36he98fc37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pip 20.2.4 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
prometheus_client 0.9.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
prompt-toolkit 3.0.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
prompt_toolkit 3.0.8 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
protobuf 3.13.0.1 py36he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ptyprocess 0.6.0 pyhd3eb1b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyasn1 0.4.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyasn1-modules 0.2.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pycparser 2.20 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pygments 2.7.3 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyjwt 1.7.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyopenssl 19.1.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyparsing 2.4.7 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyqt 5.9.2 py36h05f1152_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyrsistent 0.17.3 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pysocks 1.7.1 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python 3.6.12 hcff3b4d_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python-dateutil 2.8.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python_abi 3.6 1_cp36m https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pytorch 1.7.0 py3.6_cuda10.1.243_cudnn7.6.3_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
pytz 2020.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyyaml 5.3.1 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyzmq 20.0.0 py36h2531618_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
qt 5.9.7 h5867ecd_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
qtconsole 4.7.7 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
qtpy 1.9.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
readline 8.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
regex 2021.4.4 py36h27cfd23_0 defaults
requests 2.24.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
requests-oauthlib 1.3.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
rsa 4.6 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sacremoses 0.0.44 pypi_0 pypi
scikit-learn 0.23.2 py36h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
scipy 1.5.2 py36h0b6359f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
seaborn 0.11.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
send2trash 1.5.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
setuptools 50.3.1 py36h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sip 4.19.8 py36hf484d3e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
six 1.15.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sqlite 3.33.0 h62c20be_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorboard 2.3.0 pyh4dce500_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorboard-plugin-wit 1.6.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorflow 2.1.0 gpu_py36h2e5cdaa_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorflow-base 2.1.0 gpu_py36h6c5654b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorflow-estimator 2.1.0 pyhd54b08b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorflow-gpu 2.1.0 h0d30ee6_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
termcolor 1.1.0 py36_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
terminado 0.9.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
testpath 0.4.4 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
threadpoolctl 2.1.0 pyh5ca1d4c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tk 8.6.10 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tokenizers 0.10.2 pypi_0 pypi
torchaudio 0.7.0 py36 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
torchvision 0.1.8 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tornado 6.0.4 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tqdm 4.60.0 pypi_0 pypi
traitlets 4.3.3 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
transformers 4.4.2 py_0 huggingface
typing_extensions 3.7.4.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
urllib3 1.25.11 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
wcwidth 0.2.5 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
webencodings 0.5.1 py36_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
werkzeug 1.0.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
wheel 0.35.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
widgetsnbextension 3.5.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
wrapt 1.12.1 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
xz 5.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
yaml 0.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
yarl 1.6.2 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zeromq 4.3.3 he6710b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zipp 3.4.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zlib 1.2.11 h7b6447c_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zstd 1.4.5 h9ceee32_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
```
### env2: copied from env0 but did not work
```bash
# Name Version Build Channel
_libgcc_mutex 0.1 main https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
absl-py 0.12.0 pypi_0 pypi
astunparse 1.6.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
boto3 1.17.53 pypi_0 pypi
botocore 1.20.53 pypi_0 pypi
brotlipy 0.7.0 py36h27cfd23_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates 2021.4.13 h06a4308_1
cachetools 4.2.1 pypi_0 pypi
certifi 2020.12.5 py36h06a4308_0
cffi 1.14.5 py36h261ae71_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
chardet 4.0.0 py36h06a4308_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
click 7.1.2 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cryptography 3.4.7 py36hd23ed53_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudatoolkit 10.0.130 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
dataclasses 0.8 pyh4f3eec9_6 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
dill 0.3.3 pypi_0 pypi
filelock 3.0.12 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
freetype 2.10.4 h5ab3b9f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
future 0.18.2 pypi_0 pypi
google-auth 1.29.0 pypi_0 pypi
google-auth-oauthlib 0.4.4 pypi_0 pypi
grpcio 1.37.0 pypi_0 pypi
idna 2.10 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
imageio 2.9.0 pypi_0 pypi
importlib-metadata 2.0.0 py_1 anaconda
intel-openmp 2020.2 254 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jmespath 0.10.0 pypi_0 pypi
joblib 1.0.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jpeg 9b h024ee3a_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
lcms2 2.12 h3be6417_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ld_impl_linux-64 2.33.1 h53a641e_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libffi 3.3 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgcc-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng 1.6.37 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libprotobuf 3.14.0 h8c45485_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libstdcxx-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtiff 4.1.0 h2733197_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
longformer 0.1 pypi_0 pypi
lz4-c 1.9.3 h2531618_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
markdown 3.3.4 pypi_0 pypi
mkl 2020.2 256 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl-service 2.3.0 py36he8ac12f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_fft 1.3.0 py36h54f3939_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_random 1.1.1 py36h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ncurses 6.2 he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ninja 1.10.2 py36hff7bd54_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nlp 0.4.0 pypi_0 pypi
nltk 3.6.1 pypi_0 pypi
numpy 1.19.2 py36h54aff64_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy-base 1.19.2 py36hfa32c7d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
openssl 1.1.1k h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
packaging 20.9 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pandas 1.1.5 pypi_0 pypi
pillow 8.2.0 py36he98fc37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pip 21.0.1 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
protobuf 3.15.8 pypi_0 pypi
pyarrow 3.0.0 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycparser 2.20 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyopenssl 20.0.1 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyparsing 2.4.7 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pysocks 1.7.1 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python 3.6.13 hdb3f193_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python-dateutil 2.8.1 pypi_0 pypi
python_abi 3.6 1_cp36m huggingface
pytorch-lightning 0.8.5 pypi_0 pypi
pytorch-transformers 1.2.0 pypi_0 pypi
pytz 2021.1 pypi_0 pypi
pyyaml 5.4.1 pypi_0 pypi
readline 8.1 h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
regex 2021.4.4 py36h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
requests 2.25.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
requests-oauthlib 1.3.0 pypi_0 pypi
rouge-score 0.0.4 pypi_0 pypi
rsa 4.7.2 pypi_0 pypi
s3transfer 0.3.7 pypi_0 pypi
sacremoses 0.0.44 pypi_0 pypi
sentencepiece 0.1.95 pypi_0 pypi
setuptools 52.0.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
six 1.15.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sqlite 3.35.4 hdfb4753_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tensorboard 2.4.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.0 pypi_0 pypi
tensorboardx 2.2 pypi_0 pypi
test-tube 0.7.5 pypi_0 pypi
tk 8.6.10 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tokenizers 0.8.1rc2 pypi_0 pypi
torch 1.6.0 pypi_0 pypi
torchvision 0.5.0 py36_cu100 pytorch
tqdm 4.60.0 pypi_0 pypi
transformers 3.1.0 pypi_0 pypi
urllib3 1.26.4 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
werkzeug 1.0.1 pypi_0 pypi
wheel 0.36.2 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
xxhash 2.0.2 pypi_0 pypi
xz 5.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zipp 3.4.1 pyhd3eb1b0_0
zlib 1.2.11 h7b6447c_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zstd 1.4.9 haebb681_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
```<|||||>Next step, I want to use gene seqs to pretrain Longformer, but I seem to be stuck dead at step 0 ....<|||||>Can you please post the output of:
```
type(model)
```
of your working environment?
In case it is showing something with `....BartModel`, can you please show us the definition of the class BertEncoder? You can locate it in the directory of:
```
import transformers
print(transformers.__file__)
```
<|||||>> Can you please post the output of:
>
> ```
> type(model)
> ```
>
> of your working environment?
> In case it is showing something with `....BartModel`, can you please show us the definition of the class BertEncoder? You can locate it in the directory of:
>
> ```
> import transformers
> print(transformers.__file__)
> ```
# code
```python
from transformers import AutoModel, AutoTokenizer # , pipeline
import transformers
print(transformers.__file__)
model_name = 'pre-model/' + 'longformer-encdec-base-16384'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
# classifier = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
print(type(model))
```
# env0:
```bash
/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/__init__.py
Some weights of the model checkpoint at pre-model/longformer-encdec-base-16384 were not used when initializing BartModel: ['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 
'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']
- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BartModel were not initialized from the model checkpoint at pre-model/longformer-encdec-base-16384 and are newly initialized: ['model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.1.self_attn.k_proj.weight', 'model.encoder.layers.1.self_attn.k_proj.bias', 'model.encoder.layers.1.self_attn.v_proj.weight', 'model.encoder.layers.1.self_attn.v_proj.bias', 'model.encoder.layers.1.self_attn.q_proj.weight', 'model.encoder.layers.1.self_attn.q_proj.bias', 'model.encoder.layers.1.self_attn.out_proj.weight', 'model.encoder.layers.1.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.2.self_attn.k_proj.bias', 'model.encoder.layers.2.self_attn.v_proj.weight', 'model.encoder.layers.2.self_attn.v_proj.bias', 'model.encoder.layers.2.self_attn.q_proj.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.weight', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.k_proj.bias', 'model.encoder.layers.3.self_attn.v_proj.weight', 'model.encoder.layers.3.self_attn.v_proj.bias', 'model.encoder.layers.3.self_attn.q_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.self_attn.out_proj.weight', 'model.encoder.layers.3.self_attn.out_proj.bias', 'model.encoder.layers.4.self_attn.k_proj.weight', 'model.encoder.layers.4.self_attn.k_proj.bias', 'model.encoder.layers.4.self_attn.v_proj.weight', 'model.encoder.layers.4.self_attn.v_proj.bias', 'model.encoder.layers.4.self_attn.q_proj.weight', 'model.encoder.layers.4.self_attn.q_proj.bias', 'model.encoder.layers.4.self_attn.out_proj.weight', 'model.encoder.layers.4.self_attn.out_proj.bias', 'model.encoder.layers.5.self_attn.k_proj.weight', 'model.encoder.layers.5.self_attn.k_proj.bias', 'model.encoder.layers.5.self_attn.v_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.bias', 'model.encoder.layers.5.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.q_proj.bias', 'model.encoder.layers.5.self_attn.out_proj.weight', 'model.encoder.layers.5.self_attn.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
<class 'transformers.modeling_bart.BartModel'>
```
# env1:tf2-pt-keras
```bash
/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/transformers-4.4.2-py3.8.egg/transformers/__init__.py
Some weights of the model checkpoint at pre-model/longformer-encdec-base-16384 were not used when initializing BartModel: ['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 
'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']
- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BartModel were not initialized from the model checkpoint at pre-model/longformer-encdec-base-16384 and are newly initialized: ['model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.1.self_attn.k_proj.weight', 'model.encoder.layers.1.self_attn.k_proj.bias', 'model.encoder.layers.1.self_attn.v_proj.weight', 'model.encoder.layers.1.self_attn.v_proj.bias', 'model.encoder.layers.1.self_attn.q_proj.weight', 'model.encoder.layers.1.self_attn.q_proj.bias', 'model.encoder.layers.1.self_attn.out_proj.weight', 'model.encoder.layers.1.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.2.self_attn.k_proj.bias', 'model.encoder.layers.2.self_attn.v_proj.weight', 'model.encoder.layers.2.self_attn.v_proj.bias', 'model.encoder.layers.2.self_attn.q_proj.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.weight', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.k_proj.bias', 'model.encoder.layers.3.self_attn.v_proj.weight', 'model.encoder.layers.3.self_attn.v_proj.bias', 'model.encoder.layers.3.self_attn.q_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.self_attn.out_proj.weight', 'model.encoder.layers.3.self_attn.out_proj.bias', 'model.encoder.layers.4.self_attn.k_proj.weight', 'model.encoder.layers.4.self_attn.k_proj.bias', 'model.encoder.layers.4.self_attn.v_proj.weight', 'model.encoder.layers.4.self_attn.v_proj.bias', 'model.encoder.layers.4.self_attn.q_proj.weight', 'model.encoder.layers.4.self_attn.q_proj.bias', 'model.encoder.layers.4.self_attn.out_proj.weight', 'model.encoder.layers.4.self_attn.out_proj.bias', 'model.encoder.layers.5.self_attn.k_proj.weight', 'model.encoder.layers.5.self_attn.k_proj.bias', 'model.encoder.layers.5.self_attn.v_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.bias', 'model.encoder.layers.5.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.q_proj.bias', 'model.encoder.layers.5.self_attn.out_proj.weight', 'model.encoder.layers.5.self_attn.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-b1f8935f1cfa>", line 1, in <module>
runfile('/home/pbc/Documents/PycharmProjects/myEPI/src/github.py', wdir='/home/pbc/Documents/PycharmProjects/myEPI/src')
File "/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/202.7660.27/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/202.7660.27/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/pbc/Documents/PycharmProjects/myEPI/src/github.py", line 8, in <module>
model = AutoModel.from_pretrained(model_name)
File "/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/transformers-4.4.2-py3.8.egg/transformers/models/auto/modeling_auto.py", line 815, in from_pretrained
pretrained_model_name_or_path, *model_args, config=config, **kwargs
File "/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py", line 1183, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for BartModel:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from checkpoint, the shape in current model is torch.Size([1026, 768]).
```
# env2: copied from env0, but it did not work
```bash
home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/__init__.py
Some weights of the model checkpoint at pre-model/longformer-encdec-base-16384 were not used when initializing BartModel: ['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 
'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']
- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BartModel were not initialized from the model checkpoint at pre-model/longformer-encdec-base-16384 and are newly initialized: ['model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.1.self_attn.k_proj.weight', 'model.encoder.layers.1.self_attn.k_proj.bias', 'model.encoder.layers.1.self_attn.v_proj.weight', 'model.encoder.layers.1.self_attn.v_proj.bias', 'model.encoder.layers.1.self_attn.q_proj.weight', 'model.encoder.layers.1.self_attn.q_proj.bias', 'model.encoder.layers.1.self_attn.out_proj.weight', 'model.encoder.layers.1.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.2.self_attn.k_proj.bias', 'model.encoder.layers.2.self_attn.v_proj.weight', 'model.encoder.layers.2.self_attn.v_proj.bias', 'model.encoder.layers.2.self_attn.q_proj.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.weight', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.k_proj.bias', 'model.encoder.layers.3.self_attn.v_proj.weight', 'model.encoder.layers.3.self_attn.v_proj.bias', 'model.encoder.layers.3.self_attn.q_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.self_attn.out_proj.weight', 'model.encoder.layers.3.self_attn.out_proj.bias', 'model.encoder.layers.4.self_attn.k_proj.weight', 'model.encoder.layers.4.self_attn.k_proj.bias', 'model.encoder.layers.4.self_attn.v_proj.weight', 'model.encoder.layers.4.self_attn.v_proj.bias', 'model.encoder.layers.4.self_attn.q_proj.weight', 'model.encoder.layers.4.self_attn.q_proj.bias', 'model.encoder.layers.4.self_attn.out_proj.weight', 'model.encoder.layers.4.self_attn.out_proj.bias', 'model.encoder.layers.5.self_attn.k_proj.weight', 'model.encoder.layers.5.self_attn.k_proj.bias', 'model.encoder.layers.5.self_attn.v_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.bias', 'model.encoder.layers.5.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.q_proj.bias', 'model.encoder.layers.5.self_attn.out_proj.weight', 'model.encoder.layers.5.self_attn.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/203.7148.72/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/203.7148.72/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/pbc/PycharmProjects/bert/github.py", line 7, in <module>
model = AutoModel.from_pretrained(model_name)
File "/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/modeling_auto.py", line 523, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/modeling_utils.py", line 972, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for BartModel:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from checkpoint, the shape in current model is torch.Size([1026, 768]).
```
I found that `transformers.__file__` is different in each of these environments.
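For anyone comparing environments like this, a quick sanity check is to print which installation actually gets imported and what the checkpoint itself contains. The snippet below is only a suggestion; it assumes the checkpoint directory holds a standard `pytorch_model.bin` file (the key name comes from the size-mismatch error above):

```python
import torch
import transformers

# Which transformers installation and version does this environment actually import?
print(transformers.__version__, transformers.__file__)

# What shape does the checkpoint store for the embedding that triggers the size mismatch?
state_dict = torch.load(
    "pre-model/longformer-encdec-base-16384/pytorch_model.bin", map_location="cpu"
)
print(state_dict["model.encoder.embed_positions.weight"].shape)  # expected: torch.Size([16386, 768])
```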
<|||||>Now please check this directory `/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/` and locate the file called `modeling_bart.py`. Post the BertEncoder class definition here.
You should also pay attention to the weights that were not used from the pre-trained weights:
`['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 
'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']
`
Are you sure that this model (including config and weights) can be used with the transformers AutoModel class? Currently, it looks to me that someone has built his own model with the transformers library (which is not supposed to work with the AutoClasses). <|||||>> Now please check this directory `/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/` and locate the file called `modeling_bart.py`. Post the BertEncoder class definition here.
>
> You should also pay attention to the weights that were not used from the pre-trained weights:
> `['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 
'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias'] `
>
> Are you sure that this model (including config and weights) can be used with the transformers AutoModel class? Currently, it looks to me that someone has built his own model with the transformers library (which is not supposed to work with the AutoClasses).
I am not sure. When I first used a BERT model in the same way, it worked, so I tried the same approach with Longformer and then many errors occurred one after another.<|||||>That is a different thing. The `longformer-encdec-base-16384` checkpoint you have here is provided by a third party and is not supposed to work with the AutoClasses provided by Hugging Face. Please check that person's code and see what they did.
I think this is the repository you should check out: https://github.com/allenai/ms2
or maybe this code snippet: https://github.com/allenai/longformer/issues/154 <|||||>> That is a different thing. The `longformer-encdec-base-16384` checkpoint you have here is provided by a third party and is not supposed to work with the AutoClasses provided by Hugging Face. Please check that person's code and see what they did.
> I think this is the repository you should check out: https://github.com/allenai/ms2
> or maybe this code snippet: [allenai/longformer#154](https://github.com/allenai/longformer/issues/154)
Yeah, so you mean I should set up an environment with allenai/longformer rather than Hugging Face. At the start I read allenai/longformer's README, saw that it seemed to be based on Hugging Face, and did not look any further into how to load its Longformer model from Python code.
I have seen [allenai/longformer#154](https://github.com/allenai/longformer/issues/154), and I will try it by imitating that code.
And another question: if I want to use the Hugging Face environment to load a model, does that mean I should download it from https://huggingface.co/?
As for ms2, I will look at it soon. Thanks!
# Finally, thank you very much! You saved me! Thanks! ORZ<|||||>Yes, `allenai/longformer` is the framework you should use for `longformer-encdec-base-16384`.
> And another question: if I want to use the Hugging Face environment to load a model, does that mean I should download it from https://huggingface.co/?
Yes, you can check the pre-trained models here: https://huggingface.co/models<|||||>Okay, and I am curious how you found `allenai/longformer#154` and https://github.com/allenai/ms2. If I had that skill, I could save myself quickly, haha.<|||||>Use a search engine of your choice and look for `longformer-encdec-base-16384` ;-)<|||||>> longformer-encdec-base-16384
OK, thank you very much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,300 | closed | EncoderDecoderConfigs should not create new objects | # What does this PR do?
1. Removes the creation of separate config objects (before this PR there were three newly built ones: an `EncoderDecoderConfig` plus fresh encoder and decoder configs) and reuses the existing ones (the encoder and decoder configs are now part of the `EncoderDecoderConfig` directly).
2. Overrides `resize_token_embeddings` from the parent class, because the inherited implementation does not work for `EncoderDecoderModel` and currently throws an error (a rough illustrative sketch of such an override is shown below).
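The sketch referenced in point 2 could look roughly like this. It is illustrative only and is not the code added by this PR; in particular, it assumes the encoder and decoder should be resized to the same new vocabulary size:

```python
from transformers import EncoderDecoderModel

class SketchEncoderDecoderModel(EncoderDecoderModel):
    # Illustrative override, not the PR's implementation: resize both sub-models
    # and return the (encoder) input embeddings, mirroring PreTrainedModel's API.
    def resize_token_embeddings(self, new_num_tokens=None):
        self.encoder.resize_token_embeddings(new_num_tokens)
        self.decoder.resize_token_embeddings(new_num_tokens)
        return self.get_input_embeddings()
```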
Fixes #11285
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @patil-suraj | 04-17-2021 22:13:20 | 04-17-2021 22:13:20 | @patrickvonplaten @patil-suraj : Could you please help me with the failed test? The error message is not very informative:
> Run cat reports/tests_templates_failures_short.txt
> cat reports/tests_templates_failures_short.txt
> shell: /usr/bin/bash -e {0}
> env:
> pythonLocation: /opt/hostedtoolcache/Python/3.6.13/x64
> cat: reports/tests_templates_failures_short.txt: No such file or directory
> Error: Process completed with exit code 1.<|||||>> Instead of modifying the config, I think one alternate solution is to assign the shared config object to the encoder and decoder, after this line
>
> https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L345
>
> ```python
> encoder.config = config.encoder
> decoder.config = config.decoder
> ```
Please correct me if I am wrong but what happens, in that case, is the following:
1. Casting already existing config objects to dictionaries.
2. Recreating config objects from those dictionaries.
3. Initializing EncoderDecoderConfig with those new config objects.
4. Throwing away the newly generated config objects by assigning the ones that were already present before step 1.
I think this PR is a cleaner implementation because it avoids steps 1-3 and performs step 4 directly (the roundtrip is sketched below).
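To make those four steps concrete, here is roughly what the existing path plus the quoted assignment corresponds to. This is a sketch built from the public config API, not the library's exact internal code, and the `BertConfig` stand-ins are assumptions for illustration:

```python
from transformers import BertConfig, EncoderDecoderConfig

# Stand-ins for the configs of two already-instantiated sub-models.
encoder_config = BertConfig()
decoder_config = BertConfig(is_decoder=True, add_cross_attention=True)

# Steps 1-3: the existing configs are serialized to dicts and brand-new config
# objects are rebuilt from those dicts inside the EncoderDecoderConfig.
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)

# Step 4: the suggestion quoted above then discards the freshly built sub-configs
# again by re-assigning the shared config objects to the sub-models, e.g.
#     encoder.config = config.encoder
#     decoder.config = config.decoder
```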
<|||||>Good point!
But this breaks backward compatibility. With this change, none of the previously trained models will be able to load, because the config will now be incompatible. For example, if you try
```python
config = EncoderDecoderConfig.from_pretrained("google/bert2bert_L-24_wmt_de_en")
```
on this PR, it raises an exception, so loading the model fails.
In any case, backward compatibility is of utmost importance.<|||||>Hi @patil-suraj
I have pushed a new version that is now backward compatible and also covers a case I had previously overlooked. After checking the implementation of the parent classes `PreTrainedModel` and `PretrainedConfig`, I came to the conclusion that your suggestion is the best, because they all pass dictionaries as parameters rather than config objects.
We could of course implement a type check like:
```
if isinstance(encoder, dict):
    # ... rebuild / handle the serialized config dict here ...
```
but I think this makes the code less readable. Would be great if you could have a look again and thanks for the constructive review so far :+1:.<|||||>Thanks a lot for taking care of this @cronoik :-) It's a nice fix. It would be awesome if you could check out the suggestions and then we can merge this IMO.<|||||>Hi @patil-suraj @patrickvonplaten,
thanks for all the suggestions. I think I am done. Could you please have a look? |
transformers | 11,299 | closed | Pr2keep encoder decoder synced | # What does this PR do?
Fixes #11285 and adds an implementation for the `resize_token_embeddings` method (currently the parent class implementation is used, which throws an error).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten, @patil-suraj | 04-17-2021 21:40:01 | 04-17-2021 21:40:01 | |
transformers | 11,297 | closed | Fixing bug in generation | When `inputs_embeds` is passed and `input_ids` is left as `None`, the generation function fails because `input_ids` is created by the function even though it should not be.
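To make the failing scenario concrete, a minimal sketch could look like the following. It is illustrative only; the model, tokenizer and prompt are placeholders chosen for the example and are not taken from the PR:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("Hello world", return_tensors="pt").input_ids
inputs_embeds = model.get_input_embeddings()(input_ids)

# input_ids is intentionally not passed here; before this fix, generate() would
# still construct an input_ids tensor internally, which is what triggers the failure.
outputs = model.generate(inputs_embeds=inputs_embeds)
```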
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| 04-17-2021 17:39:44 | 04-17-2021 17:39:44 | Circle CI is unrelated - merging! Thanks a lot @nicola-decao |
transformers | 11,296 | closed | Cannot save GPT2 model with signature | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.4
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@VictorSanh @n1t0 @Pierrci
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I am trying to follow this [post](https://blog.tensorflow.org/2020/05/how-hugging-face-achieved-2x-performance-boost-question-answering.html) where @Pierrci illustrated how to convert a distilled BERT model into a TensorFlow SavedModel and, in the end, serve it with TensorFlow.js. I would like to do something similar with a distilgpt2 model.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2", pad_token_id=tokenizer.eos_token_id)
callable = tf.function(model.call)
concrete_function = callable.get_concrete_function([tf.TensorSpec([None, 384], tf.int32, name="input_ids"), tf.TensorSpec([None, 384], tf.int32, name="attention_mask")])
tf.saved_model.save(model, 'distilgpt2_sig', signatures=concrete_function)
```
and the error messages are as follows:
```
ValueError: Got a non-Tensor value (<tf.Tensor 'StatefulPartitionedCall:1' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:2' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:3' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:4' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:5' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:6' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:7' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:8' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:9' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:10' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:11' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:12' shape=(2, None, 12, 384, 64) dtype=float32>) for key 'past_key_values' in the output of the function __inference_call_90110 used to generate the SavedModel signature 'serving_default'. Outputs for functions used as signatures must be a single Tensor, a sequence of Tensors, or a dictionary from string to Tensor.
```
I can save the model if I don't specify `signatures`, but in that case the input shape defaults to [-1, 5].
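As a side note, one way to inspect which serving signature and input shapes actually ended up in an exported SavedModel is TensorFlow's standard `saved_model_cli` tool, for example:

```bash
saved_model_cli show --dir distilgpt2_sig --all
```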
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect the model to be saved without a problem.
| 04-17-2021 17:31:29 | 04-17-2021 17:31:29 | Tbh, I've never really worked with `get_concrete_function()`, etc.... @Rocketknight1 - do you have an idea by any chance?<|||||>> Tbh, I've never really worked with `get_concrete_function()`, etc.... @Rocketknight1 - do you have an idea by any chance?
I just realized I should tag the authors of the post I read about in the issue. I have edited the issue.<|||||>Hi, I'm the TF maintainer! There are two problems here. The first is that the first two arguments to `TFGPT2LMHeadModel` are not `input_ids` and `attention_mask`, they are `input_ids` and `past`, see [here](https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel). Also, `TFGPT2LMHeadModel` returns a tuple/dict of Tensors. Concrete functions do not support that - you need to pick which one you want. Try something like this, which should work (if you want an output other than "logits", you can just change that bit):
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
from transformers import GPT2Tokenizer
@tf.function
def call_model(input_ids, attention_mask):
return model(input_ids=input_ids, attention_mask=attention_mask)['logits']
tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2", pad_token_id=tokenizer.eos_token_id)
concrete_function = call_model.get_concrete_function(tf.TensorSpec([None, 384], tf.int32, name="input_ids"), tf.TensorSpec([None, 384], tf.int32, name="attention_mask"))
tf.saved_model.save(model, 'distilgpt2_sig', signatures=concrete_function)
```<|||||>> Hi, I'm the TF maintainer! There are two problems here. The first is that the first two arguments to `TFGPT2LMHeadModel` are not `input_ids` and `attention_mask`, they are `input_ids` and `past`, see [here](https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel). Also, `TFGPT2LMHeadModel` returns a tuple/dict of Tensors. Concrete functions do not support that - you need to pick which one you want. Try something like this, which should work (if you want an output other than "logits", you can just change that bit):
>
> ```
> import tensorflow as tf
> from transformers import TFGPT2LMHeadModel
> from transformers import GPT2Tokenizer
>
> @tf.function
> def call_model(input_ids, attention_mask):
> return model(input_ids=input_ids, attention_mask=attention_mask)['logits']
>
> tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
> model = TFGPT2LMHeadModel.from_pretrained("distilgpt2", pad_token_id=tokenizer.eos_token_id)
> concrete_function = call_model.get_concrete_function(tf.TensorSpec([None, 384], tf.int32, name="input_ids"), tf.TensorSpec([None, 384], tf.int32, name="attention_mask"))
> tf.saved_model.save(model, 'distilgpt2_sig', signatures=concrete_function)
> ```
That works! Thank you for the help. I am not familiar with TF especially things like `get_concrete_function`, I didn't know you can define a function outside the model and then save it. |
transformers | 11,295 | closed | Improve "infer_framework_from_model" func readability | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-17-2021 10:55:25 | 04-17-2021 10:55:25 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,294 | closed | serious bug with trainer.py when restarting the training from a checkpoint | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
trainer: @sgugger, @patil-suraj
## Information
Hi, I see a serious issue with the trainer.py class. Please consider the run_translation.py script [1]: after you define the model, freeze the encoder or wrap the model in a class, i.e. modify the model after this line https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/examples/seq2seq/run_translation.py#L331
Then, during training, one can stop and later want to resume from the point where it stopped. If you print the trainable parameters inside trainer.py, right before this line:
https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/src/transformers/trainer.py#L1062
like this
```
for n,p in model.named_parameters():
if p.requires_grad:
print(n)
```
what would we see? We see that all parameters are trainable, even the ones we froze. This is a serious bug: if the user modifies the model after creation, those modifications are not taken into account when restarting the training. Could you kindly have a look?
thanks
[1] https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_translation.py
## Expected behavior
The user should be able to resume training the modified model with the modifications preserved. | 04-17-2021 10:33:00 | 04-17-2021 10:33:00 | You can also consider the modification in finetune_trainer.py
https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/legacy/seq2seq/finetune_trainer.py#L233
If you freeze some parameters as done in the line above, that freezing is not preserved when you load the model and restart the training, this is really a serious issue, thanks for the help
<|||||>Here is the minimal code to generate this bug: we make a model, we freeze it, then we save it (as done for a trainer checkpoint), then we load it (as done in train() in trainer.py), and we check whether the frozen parameters are still frozen or not
```
from transformers import T5ForConditionalGeneration
from typing import Optional
import torch
import os
# This is copied from trainer.py
def _save(model, output_dir: Optional[str] = None):
os.makedirs(output_dir, exist_ok=True)
print(f"Saving model checkpoint to {output_dir}")
# Save a trained model and configuration using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
state_dict = model.state_dict()
model.save_pretrained(output_dir, state_dict=state_dict)
def print_num_parameters(model):
for n,p in model.named_parameters():
if (p.requires_grad):
print("n ", n)
def freeze_params(model):
for n,p in model.named_parameters():
p.requires_grad = False
model = T5ForConditionalGeneration.from_pretrained("t5-base")
freeze_params(model)
print("#### parameters before saving ####")
print_num_parameters(model)
_save(model, "temp_model")
# Now lets load the model as done in trainer from the checkpoint.
model = model.from_pretrained("temp_model")
# Now lets print the number of parameters
print("#### parameters after saving ####")
print_num_parameters(model)
```
surprisingly, no, the frozen parameters are not frozen anymore after loading the checkpoint:
```
#### parameters before saving ####
Saving model checkpoint to temp_model
#### parameters after saving ####
n shared.weight
n encoder.block.0.layer.0.SelfAttention.q.weight
n encoder.block.0.layer.0.SelfAttention.k.weight
n encoder.block.0.layer.0.SelfAttention.v.weight
n encoder.block.0.layer.0.SelfAttention.o.weight
n encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight
n encoder.block.0.layer.0.layer_norm.weight
n encoder.block.0.layer.1.DenseReluDense.wi.weight
n encoder.block.0.layer.1.DenseReluDense.wo.weight
n encoder.block.0.layer.1.layer_norm.weight
n encoder.block.1.layer.0.SelfAttention.q.weight
n encoder.block.1.layer.0.SelfAttention.k.weight
n encoder.block.1.layer.0.SelfAttention.v.weight
n encoder.block.1.layer.0.SelfAttention.o.weight
n encoder.block.1.layer.0.layer_norm.weight
n encoder.block.1.layer.1.DenseReluDense.wi.weight
n encoder.block.1.layer.1.DenseReluDense.wo.weight
n encoder.block.1.layer.1.layer_norm.weight
n encoder.block.2.layer.0.SelfAttention.q.weight
n encoder.block.2.layer.0.SelfAttention.k.weight
n encoder.block.2.layer.0.SelfAttention.v.weight
n encoder.block.2.layer.0.SelfAttention.o.weight
n encoder.block.2.layer.0.layer_norm.weight
n encoder.block.2.layer.1.DenseReluDense.wi.weight
n encoder.block.2.layer.1.DenseReluDense.wo.weight
n encoder.block.2.layer.1.layer_norm.weight
n encoder.block.3.layer.0.SelfAttention.q.weight
n encoder.block.3.layer.0.SelfAttention.k.weight
n encoder.block.3.layer.0.SelfAttention.v.weight
n encoder.block.3.layer.0.SelfAttention.o.weight
n encoder.block.3.layer.0.layer_norm.weight
n encoder.block.3.layer.1.DenseReluDense.wi.weight
n encoder.block.3.layer.1.DenseReluDense.wo.weight
n encoder.block.3.layer.1.layer_norm.weight
n encoder.block.4.layer.0.SelfAttention.q.weight
n encoder.block.4.layer.0.SelfAttention.k.weight
n encoder.block.4.layer.0.SelfAttention.v.weight
n encoder.block.4.layer.0.SelfAttention.o.weight
n encoder.block.4.layer.0.layer_norm.weight
n encoder.block.4.layer.1.DenseReluDense.wi.weight
n encoder.block.4.layer.1.DenseReluDense.wo.weight
n encoder.block.4.layer.1.layer_norm.weight
n encoder.block.5.layer.0.SelfAttention.q.weight
n encoder.block.5.layer.0.SelfAttention.k.weight
n encoder.block.5.layer.0.SelfAttention.v.weight
n encoder.block.5.layer.0.SelfAttention.o.weight
n encoder.block.5.layer.0.layer_norm.weight
n encoder.block.5.layer.1.DenseReluDense.wi.weight
n encoder.block.5.layer.1.DenseReluDense.wo.weight
n encoder.block.5.layer.1.layer_norm.weight
n encoder.block.6.layer.0.SelfAttention.q.weight
n encoder.block.6.layer.0.SelfAttention.k.weight
n encoder.block.6.layer.0.SelfAttention.v.weight
n encoder.block.6.layer.0.SelfAttention.o.weight
n encoder.block.6.layer.0.layer_norm.weight
n encoder.block.6.layer.1.DenseReluDense.wi.weight
n encoder.block.6.layer.1.DenseReluDense.wo.weight
n encoder.block.6.layer.1.layer_norm.weight
n encoder.block.7.layer.0.SelfAttention.q.weight
n encoder.block.7.layer.0.SelfAttention.k.weight
n encoder.block.7.layer.0.SelfAttention.v.weight
n encoder.block.7.layer.0.SelfAttention.o.weight
n encoder.block.7.layer.0.layer_norm.weight
n encoder.block.7.layer.1.DenseReluDense.wi.weight
n encoder.block.7.layer.1.DenseReluDense.wo.weight
n encoder.block.7.layer.1.layer_norm.weight
n encoder.block.8.layer.0.SelfAttention.q.weight
n encoder.block.8.layer.0.SelfAttention.k.weight
n encoder.block.8.layer.0.SelfAttention.v.weight
n encoder.block.8.layer.0.SelfAttention.o.weight
n encoder.block.8.layer.0.layer_norm.weight
n encoder.block.8.layer.1.DenseReluDense.wi.weight
n encoder.block.8.layer.1.DenseReluDense.wo.weight
n encoder.block.8.layer.1.layer_norm.weight
n encoder.block.9.layer.0.SelfAttention.q.weight
n encoder.block.9.layer.0.SelfAttention.k.weight
n encoder.block.9.layer.0.SelfAttention.v.weight
n encoder.block.9.layer.0.SelfAttention.o.weight
n encoder.block.9.layer.0.layer_norm.weight
n encoder.block.9.layer.1.DenseReluDense.wi.weight
n encoder.block.9.layer.1.DenseReluDense.wo.weight
n encoder.block.9.layer.1.layer_norm.weight
n encoder.block.10.layer.0.SelfAttention.q.weight
n encoder.block.10.layer.0.SelfAttention.k.weight
n encoder.block.10.layer.0.SelfAttention.v.weight
n encoder.block.10.layer.0.SelfAttention.o.weight
n encoder.block.10.layer.0.layer_norm.weight
n encoder.block.10.layer.1.DenseReluDense.wi.weight
n encoder.block.10.layer.1.DenseReluDense.wo.weight
n encoder.block.10.layer.1.layer_norm.weight
n encoder.block.11.layer.0.SelfAttention.q.weight
n encoder.block.11.layer.0.SelfAttention.k.weight
n encoder.block.11.layer.0.SelfAttention.v.weight
n encoder.block.11.layer.0.SelfAttention.o.weight
n encoder.block.11.layer.0.layer_norm.weight
n encoder.block.11.layer.1.DenseReluDense.wi.weight
n encoder.block.11.layer.1.DenseReluDense.wo.weight
n encoder.block.11.layer.1.layer_norm.weight
n encoder.final_layer_norm.weight
n decoder.block.0.layer.0.SelfAttention.q.weight
n decoder.block.0.layer.0.SelfAttention.k.weight
n decoder.block.0.layer.0.SelfAttention.v.weight
n decoder.block.0.layer.0.SelfAttention.o.weight
n decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight
n decoder.block.0.layer.0.layer_norm.weight
n decoder.block.0.layer.1.EncDecAttention.q.weight
n decoder.block.0.layer.1.EncDecAttention.k.weight
n decoder.block.0.layer.1.EncDecAttention.v.weight
n decoder.block.0.layer.1.EncDecAttention.o.weight
n decoder.block.0.layer.1.layer_norm.weight
n decoder.block.0.layer.2.DenseReluDense.wi.weight
n decoder.block.0.layer.2.DenseReluDense.wo.weight
n decoder.block.0.layer.2.layer_norm.weight
n decoder.block.1.layer.0.SelfAttention.q.weight
n decoder.block.1.layer.0.SelfAttention.k.weight
n decoder.block.1.layer.0.SelfAttention.v.weight
n decoder.block.1.layer.0.SelfAttention.o.weight
n decoder.block.1.layer.0.layer_norm.weight
n decoder.block.1.layer.1.EncDecAttention.q.weight
n decoder.block.1.layer.1.EncDecAttention.k.weight
n decoder.block.1.layer.1.EncDecAttention.v.weight
n decoder.block.1.layer.1.EncDecAttention.o.weight
n decoder.block.1.layer.1.layer_norm.weight
n decoder.block.1.layer.2.DenseReluDense.wi.weight
n decoder.block.1.layer.2.DenseReluDense.wo.weight
n decoder.block.1.layer.2.layer_norm.weight
n decoder.block.2.layer.0.SelfAttention.q.weight
n decoder.block.2.layer.0.SelfAttention.k.weight
n decoder.block.2.layer.0.SelfAttention.v.weight
n decoder.block.2.layer.0.SelfAttention.o.weight
n decoder.block.2.layer.0.layer_norm.weight
n decoder.block.2.layer.1.EncDecAttention.q.weight
n decoder.block.2.layer.1.EncDecAttention.k.weight
n decoder.block.2.layer.1.EncDecAttention.v.weight
n decoder.block.2.layer.1.EncDecAttention.o.weight
n decoder.block.2.layer.1.layer_norm.weight
n decoder.block.2.layer.2.DenseReluDense.wi.weight
n decoder.block.2.layer.2.DenseReluDense.wo.weight
n decoder.block.2.layer.2.layer_norm.weight
n decoder.block.3.layer.0.SelfAttention.q.weight
n decoder.block.3.layer.0.SelfAttention.k.weight
n decoder.block.3.layer.0.SelfAttention.v.weight
n decoder.block.3.layer.0.SelfAttention.o.weight
n decoder.block.3.layer.0.layer_norm.weight
n decoder.block.3.layer.1.EncDecAttention.q.weight
n decoder.block.3.layer.1.EncDecAttention.k.weight
n decoder.block.3.layer.1.EncDecAttention.v.weight
n decoder.block.3.layer.1.EncDecAttention.o.weight
n decoder.block.3.layer.1.layer_norm.weight
n decoder.block.3.layer.2.DenseReluDense.wi.weight
n decoder.block.3.layer.2.DenseReluDense.wo.weight
n decoder.block.3.layer.2.layer_norm.weight
n decoder.block.4.layer.0.SelfAttention.q.weight
n decoder.block.4.layer.0.SelfAttention.k.weight
n decoder.block.4.layer.0.SelfAttention.v.weight
n decoder.block.4.layer.0.SelfAttention.o.weight
n decoder.block.4.layer.0.layer_norm.weight
n decoder.block.4.layer.1.EncDecAttention.q.weight
n decoder.block.4.layer.1.EncDecAttention.k.weight
n decoder.block.4.layer.1.EncDecAttention.v.weight
n decoder.block.4.layer.1.EncDecAttention.o.weight
n decoder.block.4.layer.1.layer_norm.weight
n decoder.block.4.layer.2.DenseReluDense.wi.weight
n decoder.block.4.layer.2.DenseReluDense.wo.weight
n decoder.block.4.layer.2.layer_norm.weight
n decoder.block.5.layer.0.SelfAttention.q.weight
n decoder.block.5.layer.0.SelfAttention.k.weight
n decoder.block.5.layer.0.SelfAttention.v.weight
n decoder.block.5.layer.0.SelfAttention.o.weight
n decoder.block.5.layer.0.layer_norm.weight
n decoder.block.5.layer.1.EncDecAttention.q.weight
n decoder.block.5.layer.1.EncDecAttention.k.weight
n decoder.block.5.layer.1.EncDecAttention.v.weight
n decoder.block.5.layer.1.EncDecAttention.o.weight
n decoder.block.5.layer.1.layer_norm.weight
n decoder.block.5.layer.2.DenseReluDense.wi.weight
n decoder.block.5.layer.2.DenseReluDense.wo.weight
n decoder.block.5.layer.2.layer_norm.weight
n decoder.block.6.layer.0.SelfAttention.q.weight
n decoder.block.6.layer.0.SelfAttention.k.weight
n decoder.block.6.layer.0.SelfAttention.v.weight
n decoder.block.6.layer.0.SelfAttention.o.weight
n decoder.block.6.layer.0.layer_norm.weight
n decoder.block.6.layer.1.EncDecAttention.q.weight
n decoder.block.6.layer.1.EncDecAttention.k.weight
n decoder.block.6.layer.1.EncDecAttention.v.weight
n decoder.block.6.layer.1.EncDecAttention.o.weight
n decoder.block.6.layer.1.layer_norm.weight
n decoder.block.6.layer.2.DenseReluDense.wi.weight
n decoder.block.6.layer.2.DenseReluDense.wo.weight
n decoder.block.6.layer.2.layer_norm.weight
n decoder.block.7.layer.0.SelfAttention.q.weight
n decoder.block.7.layer.0.SelfAttention.k.weight
n decoder.block.7.layer.0.SelfAttention.v.weight
n decoder.block.7.layer.0.SelfAttention.o.weight
n decoder.block.7.layer.0.layer_norm.weight
n decoder.block.7.layer.1.EncDecAttention.q.weight
n decoder.block.7.layer.1.EncDecAttention.k.weight
n decoder.block.7.layer.1.EncDecAttention.v.weight
n decoder.block.7.layer.1.EncDecAttention.o.weight
n decoder.block.7.layer.1.layer_norm.weight
n decoder.block.7.layer.2.DenseReluDense.wi.weight
n decoder.block.7.layer.2.DenseReluDense.wo.weight
n decoder.block.7.layer.2.layer_norm.weight
n decoder.block.8.layer.0.SelfAttention.q.weight
n decoder.block.8.layer.0.SelfAttention.k.weight
n decoder.block.8.layer.0.SelfAttention.v.weight
n decoder.block.8.layer.0.SelfAttention.o.weight
n decoder.block.8.layer.0.layer_norm.weight
n decoder.block.8.layer.1.EncDecAttention.q.weight
n decoder.block.8.layer.1.EncDecAttention.k.weight
n decoder.block.8.layer.1.EncDecAttention.v.weight
n decoder.block.8.layer.1.EncDecAttention.o.weight
n decoder.block.8.layer.1.layer_norm.weight
n decoder.block.8.layer.2.DenseReluDense.wi.weight
n decoder.block.8.layer.2.DenseReluDense.wo.weight
n decoder.block.8.layer.2.layer_norm.weight
n decoder.block.9.layer.0.SelfAttention.q.weight
n decoder.block.9.layer.0.SelfAttention.k.weight
n decoder.block.9.layer.0.SelfAttention.v.weight
n decoder.block.9.layer.0.SelfAttention.o.weight
n decoder.block.9.layer.0.layer_norm.weight
n decoder.block.9.layer.1.EncDecAttention.q.weight
n decoder.block.9.layer.1.EncDecAttention.k.weight
n decoder.block.9.layer.1.EncDecAttention.v.weight
n decoder.block.9.layer.1.EncDecAttention.o.weight
n decoder.block.9.layer.1.layer_norm.weight
n decoder.block.9.layer.2.DenseReluDense.wi.weight
n decoder.block.9.layer.2.DenseReluDense.wo.weight
n decoder.block.9.layer.2.layer_norm.weight
n decoder.block.10.layer.0.SelfAttention.q.weight
n decoder.block.10.layer.0.SelfAttention.k.weight
n decoder.block.10.layer.0.SelfAttention.v.weight
n decoder.block.10.layer.0.SelfAttention.o.weight
n decoder.block.10.layer.0.layer_norm.weight
n decoder.block.10.layer.1.EncDecAttention.q.weight
n decoder.block.10.layer.1.EncDecAttention.k.weight
n decoder.block.10.layer.1.EncDecAttention.v.weight
n decoder.block.10.layer.1.EncDecAttention.o.weight
n decoder.block.10.layer.1.layer_norm.weight
n decoder.block.10.layer.2.DenseReluDense.wi.weight
n decoder.block.10.layer.2.DenseReluDense.wo.weight
n decoder.block.10.layer.2.layer_norm.weight
n decoder.block.11.layer.0.SelfAttention.q.weight
n decoder.block.11.layer.0.SelfAttention.k.weight
n decoder.block.11.layer.0.SelfAttention.v.weight
n decoder.block.11.layer.0.SelfAttention.o.weight
n decoder.block.11.layer.0.layer_norm.weight
n decoder.block.11.layer.1.EncDecAttention.q.weight
n decoder.block.11.layer.1.EncDecAttention.k.weight
n decoder.block.11.layer.1.EncDecAttention.v.weight
n decoder.block.11.layer.1.EncDecAttention.o.weight
n decoder.block.11.layer.1.layer_norm.weight
n decoder.block.11.layer.2.DenseReluDense.wi.weight
n decoder.block.11.layer.2.DenseReluDense.wo.weight
n decoder.block.11.layer.2.layer_norm.weight
n decoder.final_layer_norm.weight
```
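(A user-level workaround sketch, re-using the `freeze_params` and `print_num_parameters` helpers from the snippet above: since `from_pretrained` builds a fresh model whose parameters all default to `requires_grad=True`, the freeze has to be re-applied after reloading. This is not the Trainer-side fix discussed below, just a way to restore the intended state.)
```python
# Workaround sketch: re-apply the freeze to the reloaded model
model = T5ForConditionalGeneration.from_pretrained("temp_model")
freeze_params(model)          # helper defined in the snippet above
print_num_parameters(model)   # prints nothing again: no trainable parameters left
```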
<|||||>Your end example is not surprising: you are re-loading a new model so of course modifications done on the original model are erased.
We'll look at how we can fix this inside the trainer, to load the weights differently instead of using `from_pretrained`.<|||||>Hi @sgugger, thank you for the response. My intention with the example was to show the procedure that happens inside the trainer when the user resumes training from a modified model. Thank you. <|||||>@sgugger I also see some more issues with the trainer.py when I load the model from the checkpoint: the model which previously was training fine on the GPU gets an out-of-memory issue, so there must be a leakage of memory during the loading from a checkpoint. Shall I make a separate ticket for this issue? thanks <|||||>If it's because you added frozen parameters previously, that would explain the OOM error.<|||||>Hi @LysandreJik @sgugger
thanks for the response, but I do not see how this is relevant.
During training, I also load the model and then freeze some of the parameters, and this trains fine; only when loading from a checkpoint does it run into memory issues, even though the procedure remains the same (loading the model and then freezing params). I personally think there must be a bug somewhere in checkpoint loading, resulting in extra usage of memory. thanks for your help <|||||>One simple test to check this @sgugger would be to get a model, choose a batch size such that it just fits in memory but anything larger won't, then resume the training from a checkpoint; then I am sure you would also see the memory issue, even without any modification, just the baseline t5. I do see this issue with the huggingface code. thanks for your help <|||||>> We'll look at how we can fix this inside the trainer, to load the weights differently instead of using `from_pretrained`.
@sgugger, this is definitely related to what I need for deepspeed checkpoint resume. Currently we first load the model `from_pretrained` and then it gets dropped and replaced by the deepspeed checkpoint, which for huge models is a huge slowdown.
So let's coordinate this work.
My preliminary idea was to pass a new flag to `from_pretrained` which will do everything except actually loading the weights. I was planning to work on this this week.
(plus we need to stop random init the weights when they are replaced with pre-trained weights, so this is related too but not directly to this particular issue)<|||||>Thanks @stas00 for your attention to this issue, this would be really awesome to have this fixed, thanks a lot for the great work you do <|||||>You're in good hands, @dorooddorood606 - @sgugger is taking care of it already, I was just commenting that something similar needs to be done for deepspeed, so once @sgugger's PR goes in I will work on doing the same for deepspeed.<|||||>Dear @stas00
Thank you very much to both of you, @sgugger, for your great efforts and the great job you do. I also observe that vanilla t5-base checkpointing gets very different results after resume; I reported the bug here, and it may be related to this one:
https://github.com/huggingface/transformers/issues/11323
So it seems there is some randomness in the t5-base model which is not accounted for in the resume-from-checkpoint part of trainer.py. If you also think these bugs can be related, I would greatly appreciate it if you could also look at the losses when one resumes from a checkpoint.
I would like to thank you so much for your time and your efforts and the great and awesome job you do. |
transformers | 11,293 | closed | OSError: Unable to load weights from pytorch checkpoint file | I got this error with MT5 model. Can anyone help?
```
(base) notooth@Debian:~$ python
Python 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import MT5Model, T5Tokenizer
>>> model = MT5Model.from_pretrained("google/mt5-small")
Traceback (most recent call last):
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py", line 1062, in from_pretrained
File "/home/notooth/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 527, in load
with _open_zipfile_reader(f) as opened_zipfile:
File "/home/notooth/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 224, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /tmp/pip-req-build-66hwoyb6/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /tmp/pip-req-build-66hwoyb6/caffe2/serialize/inline_container.cc:132)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6d (0x7f2e92daa2ad in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x25db (0x7f2e8eba52bb in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x7b (0x7f2e8eba67cb in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x65d00e (0x7f2e91f1c00e in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x1375f9 (0x7f2e919f65f9 in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #32: __libc_start_main + 0xea (0x7f2ea2fd5d0a in /lib/x86_64-linux-gnu/libc.so.6)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py", line 1064, in from_pretrained
OSError: Unable to load weights from pytorch checkpoint file for 'google/mt5-small' at '/home/notooth/.cache/huggingface/transformers/8e7b2a80ddcb5611b27d8c89e1e8e33a947e105415051402a22b9c8d7d1caeb0.e22331f3a065b885b30ae3dd1ff11ccaf7fbc444485f6eb07ef5e0138bca8b70'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
``` | 04-17-2021 08:23:58 | 04-17-2021 08:23:58 | > Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old.
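(The quoted `RuntimeError` is the real cause: the checkpoint on the Hub was serialized with a newer PyTorch zip format than the installed torch can read. A minimal check/fix sketch, assuming a pip-managed environment:)
```python
import torch

print(torch.__version__)  # if this is an old release, the version-3 zip format above cannot be read
# upgrade from a shell with: pip install --upgrade torch
```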
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,292 | closed | move device statements outside if statements | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Move some device statements outside if statements.
There are three model classes (GPT2Model, GPTNeoModel, CTRLModel) that define the variable `device` inside an `if` statement in their `forward()` method. This may lead to some inconvenience for GPT2Model (#11179), and it is not consistent with the way it is written in other model classes.
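For illustration, the change follows this general pattern — a sketch only, with illustrative names rather than the exact diff: `device` is resolved once, unconditionally, instead of being assigned inside each branch.
```python
import torch

def resolve_inputs_sketch(input_ids=None, inputs_embeds=None):
    # "after" pattern: `device` is defined outside the `if` statement,
    # so it always exists no matter which input was provided
    if input_ids is not None:
        input_shape = input_ids.size()
    elif inputs_embeds is not None:
        input_shape = inputs_embeds.size()[:-1]
    else:
        raise ValueError("You have to specify either input_ids or inputs_embeds")
    device = input_ids.device if input_ids is not None else inputs_embeds.device
    return input_shape, device

print(resolve_inputs_sketch(input_ids=torch.ones(2, 5, dtype=torch.long)))
```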
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-17-2021 06:43:49 | 04-17-2021 06:43:49 | |
transformers | 11,290 | closed | Python crashes when loading Bert model from pretrained | Hello all, I am here because I am encountering an utterly obscure problem. Right when I begin by creating my model, my GPU usage spikes and then my Python code crashes. This only happens when I try to use any of the models' 'from_pretrained'; I haven't had issues with either TensorFlow or PyTorch by themselves (this behavior is only native to transformers)
For example:
The problem arises when running this line of code, right at the beginning of my script:
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
I get the following messages, which are pretty standard but as you can see in the bottom the code simply stops.
```
2021-04-16 16:16:35.330093: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-04-16 16:16:38.495667: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-04-16 16:16:38.519178: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1760] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.6705GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2021-04-16 16:16:38.519500: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-04-16 16:16:38.528695: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-04-16 16:16:38.528923: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-04-16 16:16:38.533582: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-04-16 16:16:38.535368: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-04-16 16:16:38.540093: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-04-16 16:16:38.543728: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-04-16 16:16:38.544662: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-04-16 16:16:38.544888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1898] Adding visible gpu devices: 0
2021-04-16 16:16:38.545436: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-04-16 16:16:38.546588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1760] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.6705GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2021-04-16 16:16:38.547283: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1898] Adding visible gpu devices: 0
2021-04-16 16:16:39.115250: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1300] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-16 16:16:39.115490: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0
2021-04-16 16:16:39.115592: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1319] 0: N
2021-04-16 16:16:39.115856: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1446] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4634 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-04-16 16:16:39.419407: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-04-16 16:16:39.709427: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
Process finished with exit code -1073741819 (0xC0000005)
```
Has anyone else seen this? Is there something I am missing here?
Thank you for your help.
Here are the details of my system.
- `transformers` version: Latest
- Platform: Windows
- Python version: 3.7
- PyTorch version (GPU?): Latest
- Tensorflow version (GPU?): Latest
- Using GPU in script?: Yes, GeForce GTX 1060 computeCapability: 6.1
- Using distributed or parallel set-up in script?: No
Models I encountered this error on:
- albert, bert, xlm:
Libraries that are related to this issue:
- text classification: @patrickvonplaten
- trainer: @sgugger
- pipelines: @LysandreJik
| 04-16-2021 22:00:59 | 04-16-2021 22:00:59 | Is it possible you're running out of RAM (not necessarily GPU RAM)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,289 | closed | google/pegasus-cnn_dailymail generates blank file | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.0 and 4.5.1
- Platform: linux
- Python version: 3.6
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (and I also try to not use distributed but problem exists)
### Who can help
@patrickvonplaten, @patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): google/pegasus-cnn_dailymail
The problem arises when using:
* [x] the official example scripts: run_distributed_eval.py from https://github.com/huggingface/transformers/tree/master/examples/legacy/seq2seq
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: summarization with ROUGE
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am trying to generate the summaries from Pegasus on CNN/DM and XSUM datasets. I use the same dataset shared by HuggingFace (from README.md in https://github.com/huggingface/transformers/tree/master/examples/legacy/seq2seq). My experiments are run on 3 V100 GPUs. I use ```google/pegasus-cnn_dailymail``` for CNN/DM and ```google/pegasus-xsum``` for XSUM.
1. The results on XSUM are perfect. I ran the following code and received the ROUGE scores: ```{'rouge1': 47.0271, 'rouge2': 24.4924, 'rougeL': 39.2529, 'n_obs': 11333, 'seconds_per_sample': 0.035, 'n_gpus': 3}```
```bash
python -m torch.distributed.launch --nproc_per_node=3 run_distributed_eval.py \
--model_name google/pegasus-xsum \
--save_dir $OUTPUT_DIR \
--data_dir $DATA_DIR \
--bs 64 \
--fp16
```
2. I was expecting similar SOTA performance on CNN/DM, so I ran the following code and received: ```{"n_gpus": 3, "n_obs": 11490, "rouge1": 0.1602, "rouge2": 0.084, "rougeL": 0.1134, "seconds_per_sample": 0.1282}```.
(Note: here the batch size is changed due to a memory limitation. Although the experiments are performed on the same devices, CNN/DM requires more memory given the characteristics of the dataset itself.)
```bash
python -m torch.distributed.launch --nproc_per_node=3 run_distributed_eval.py \
--model_name google/pegasus-cnn_dailymail \
--save_dir $OUTPUT_DIR \
--data_dir $DATA_DIR \
--bs 32 \
--fp16
```
3. I looked at the generated ```test_generations.txt``` file to try to figure out why ```google/pegasus-cnn_dailymail``` doesn't work, and found that most of the lines in ```test_generations.txt``` are blank. (Please see the attached image for an example.)
<img width="682" alt="image" src="https://user-images.githubusercontent.com/26696253/115087890-1b6cac80-9edd-11eb-8289-d45cbcf4f6dc.png">
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
It is so weird that ```google/pegasus-xsum``` works perfectly while ```google/pegasus-cnn_dailymail``` does not generate summaries successfully. I was confused, so I switched the transformers version (4.2.0 and 4.5.1) and re-ran the experiments on different GPUs; the problem persists. Could you please give me any suggestions? Thank you!
<!-- A clear and concise description of what you would expect to happen. -->
| 04-16-2021 21:58:41 | 04-16-2021 21:58:41 | Hi @chz816
I can reproduce the issue. This is because pegasus doesn't really work with `fp16` since it's trained with `bfloat16`, so in most cases it overflows and returns `nan` logits. The model works as expected in `fp32`, so if you run the above command without the `--fp16` arg, it should give the expected results.
cc @stas00 <|||||>Thank you @patil-suraj!
I have generated the summaries using ```pegasus-cnn_dailymail``` with the following performance: ```{'rouge1': 43.146, 'rouge2': 20.7292, 'rougeL': 30.4596, 'n_obs': 11490, 'seconds_per_sample': 0.2415, 'n_gpus': 3}```. It is lower than expected, but I think it can be explained by smaller batch size, which is caused by the memory limitation.
```bash
python -m torch.distributed.launch --nproc_per_node=3 run_distributed_eval.py \
--model_name google/pegasus-cnn_dailymail \
--save_dir $OUTPUT_DIR \
--data_dir $DATA_DIR \
--bs 16
```
Can you maybe explain why this problem does not exist for ```google/pegasus-xsum```? Thank you!<|||||>As @patil-suraj pointed out many models trained in `bfloat16` can't be run under mixed precision `fp16` (albeit pytorch are discussing bfloat16 mixed precision)
`pegasus-cnn_dailymail` has an issue of underflow under `fp16`:
let's take a single frame - Linear forward for `lm_head`:
fp32:
```
abs min abs max metadata
lm_head Linear
4.66e-10 1.13e+01 weight
6.29e-07 4.47e+00 input[0]
1.63e-07 3.00e+01 output
```
fp16:
```
lm_head Linear
0.00e+00 1.13e+01 weight
6.76e-07 5.38e+00 input[0]
0.00e+00 3.08e+01 output
```
As you can see `4.66e-10` under fp16 underflows into `0.0`.
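(A standalone illustration of that underflow in plain PyTorch, independent of the detector output above:)
```python
import torch

x = torch.tensor(4.66e-10)
print(x.to(torch.float16))               # tensor(0., dtype=torch.float16) -> flushed to zero
print(x.to(torch.bfloat16))              # stays a tiny non-zero value thanks to the wider exponent
print(torch.finfo(torch.float16).tiny)   # 6.1035e-05, smallest normal float16
```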
**edit:** well, actually this would be the case if we did `model.half()` (which is what deepspeed does, and that's where it'd immediately underflow on the very first use), so here it's probably something else. I will need some time to try to understand what's going on here.
This is from WIP PR https://github.com/huggingface/transformers/pull/11274 - still polishing some nuances but should be ready soon.
Let me check `google/pegasus-xsum`<|||||>Regarding the cnn_dailymail scores, please see this issue #6844<|||||>@chz816, meanwhile could you please give me a way to reproduce your case? Ideally with some public dataset and best with the current version of the examples (master or last release), which would be using `examples/seq2seq/run_summarization.py`
e.g.:
```
python examples/seq2seq/run_summarization.py --model_name_or_path google/pegasus-cnn_dailymail \
--do_train --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" --source_prefix \
"summarize: " --output_dir /tmp/tst-summarization --per_device_train_batch_size=1 \
--per_device_eval_batch_size=1 --overwrite_output_dir --predict_with_generate
```<|||||>Thank you, @patil-suraj.
Oh, this is the legacy script so it does do:
```
if fp16:
model = model.half()
```
<|||||>```
wget https://cdn-datasets.huggingface.co/summarization/pegasus_data/cnn_dailymail.tar.gz
tar -xvzf cnn_dailymail.tar.gz
python -m torch.distributed.launch --nproc_per_node=1 run_distributed_eval.py \
--model_name google/pegasus-cnn_dailymail --save_dir output_dir --data_dir cnn_dailymail \
--bs 8 --fp16
```
So the detection is quick (had to bolt it on manually, since this script isn't using the `Trainer`):
```
Detected inf/nan during batch_number=0
Last 10 forward frames:
abs min abs max metadata
model.encoder.layers.14.fc1 Linear
0.00e+00 1.88e+01 weight
2.73e-05 2.54e+00 bias
5.96e-08 9.05e+00 input[0]
0.00e+00 3.16e+02 output
model.encoder.layers.14.fc2 Linear
5.96e-08 3.29e+01 weight
5.40e-03 2.66e+01 bias
0.00e+00 1.03e+02 input[0]
0.00e+00 8.00e+03 output
model.encoder.layers.14 PegasusEncoderLayer
0.00e+00 6.45e+04 input[0]
0.00e+00 0.00e+00 input[1]
0.00e+00 6.45e+04 output[0]
model.encoder.layers.15.self_attn_layer_norm LayerNorm
5.63e-03 3.85e-01 weight
1.69e-05 2.49e-01 bias
0.00e+00 6.45e+04 input[0]
0.00e+00 1.50e+00 output
model.encoder.layers.15.self_attn.q_proj Linear
8.34e-07 2.95e+00 weight
0.00e+00 0.00e+00 bias
0.00e+00 1.50e+00 input[0]
5.96e-08 8.52e+00 output
model.encoder.layers.15.self_attn.k_proj Linear
2.38e-07 1.85e+00 weight
0.00e+00 0.00e+00 bias
0.00e+00 1.50e+00 input[0]
1.19e-07 9.30e+00 output
model.encoder.layers.15.self_attn.v_proj Linear
5.96e-08 4.03e+00 weight
0.00e+00 0.00e+00 bias
0.00e+00 1.50e+00 input[0]
6.56e-07 2.95e+01 output
model.encoder.layers.15.self_attn.out_proj Linear
5.96e-08 2.25e+01 weight
0.00e+00 0.00e+00 bias
5.96e-08 1.25e+01 input[0]
3.58e-07 1.29e+03 output
model.encoder.layers.15.self_attn PegasusAttention
3.58e-07 1.29e+03 output[0]
None output[1]
None output[2]
model.encoder.layers.15.final_layer_norm LayerNorm
7.32e-02 2.69e+00 weight
2.00e-05 1.02e+00 bias
0.00e+00 inf input[0]
nan nan output
```<|||||>I'm able to reproduce this with the "modern" version of the script:
```
rm -rf output_dir; USE_TF=0 PYTHONPATH=src python examples/seq2seq/run_summarization.py \
--model_name_or_path google/pegasus-cnn_dailymail --do_eval --dataset_name cnn_dailymail \
--dataset_config "3.0.0" --output_dir output_dir \
--per_device_eval_batch_size=16 --predict_with_generate --fp16_full_eval --max_val_samples 10
[...]
***** eval metrics *****
eval_gen_len = 9.0
eval_loss = nan
eval_mem_cpu_alloc_delta = -55MB
eval_mem_cpu_peaked_delta = 55MB
eval_mem_gpu_alloc_delta = 1089MB
eval_mem_gpu_peaked_delta = 7241MB
eval_rouge1 = 0.0
eval_rouge2 = 0.0
eval_rougeL = 0.0
eval_rougeLsum = 0.0
eval_runtime = 0:00:07.71
eval_samples = 10
eval_samples_per_second = 1.295
init_mem_cpu_alloc_delta = 0MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 0MB
init_mem_gpu_peaked_delta = 0MB
```<|||||>Thank you for your response @stas00 ! Yeah, I am able to resolve the issue without ```--fp16```, but I am still a little confused why ```google/pegasus-xsum``` works well with the ```--fp16``` argument, since they are from the same seq2seq model. Any ideas? Thank you!<|||||>For some reason I can't even run `google/pegasus-xsum` https://github.com/huggingface/transformers/issues/11344, so I'm not able to look inside.
I can only guess that perhaps `google/pegasus-xsum` was trained in mixed precision fp16? |
transformers | 11,288 | closed | Question about T5-11b model weights | Hi, where do the T5-11b model weights come from? Are they from the original paper or have they been trained on the community release version of C4 independently? | 04-16-2021 21:06:23 | 04-16-2021 21:06:23 | |
transformers | 11,287 | closed | Zero-shot pipeline feature extraction | Is it possible to extract the hidden states representation from the zero-shot pipeline? I have these two tasks: feature extraction and zero-shot classification. But I don't want to load the same model twice, since it is a major burden on GPU memory. Any suggestions to how I can do both tasks without having to load it twice? | 04-16-2021 18:38:38 | 04-16-2021 18:38:38 | The <|||||>> The
Yes?<|||||>Answering my question: pipeline returns the model as well. You just have to use it directly to extract the hidden states. |
transformers | 11,286 | closed | Trainer support for IterableDataset for evaluation and predict | # What does this PR do?
This PR rewrites the entirety of the evaluation loop to add support for `IterableDataset`. The main problem with the current training loop is that, in distributed settings, it expects the indices of the evaluation set to come like this:
- `[0, 1, 2, 3, 4, 5, 6, 7, ...., 99]` for process 0
- `[100, 101, 102, 103, 104, 105, 106, 107, ...., 199]` for process 1
(if we have 200 samples)
In an `IterableDataset` we don't know the length at the beginning of the process, so we can't cleanly cut the indices in half like that. Therefore, the indices will come like this (with a batch size of 4):
- `[0, 1, 2, 3, 8, 9, 10, 11, ...., 192, 193, 194, 195]` for process 0
- `[4, 5, 6, 7, 12, 13, 14, 15, ...., 196, 197, 198, 199]` for process 1
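One way to picture the interleaved pattern above (an illustrative sketch only — the helper below is made up for this description, not part of the Trainer API): each process takes whole batches round-robin from the stream.
```python
def shard_batches(num_samples, batch_size, process_index, num_processes):
    # Hypothetical helper: assign whole batches to processes round-robin,
    # which is all you can do when the total length is unknown upfront.
    indices = []
    for batch_idx, start in enumerate(range(0, num_samples, batch_size)):
        if batch_idx % num_processes == process_index:
            indices.extend(range(start, min(start + batch_size, num_samples)))
    return indices

print(shard_batches(200, 4, 0, 2)[:8])  # [0, 1, 2, 3, 8, 9, 10, 11]
print(shard_batches(200, 4, 1, 2)[:8])  # [4, 5, 6, 7, 12, 13, 14, 15]
```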
The rewrite of the evaluation loop is done to:
- change the sampling indices in a normal `Dataset` to be the same as an `IterableDataset`
- change the way predictions and labels are gathered accordingly
- avoid having one evaluation loop for `Dataset` and one for `IterableDataset`
To avoid any breaking change:
- the old evaluation loop is still there with the same name (for people who subclass Trainer) and can be used if one passes the flag `--use_legacy_prediction_loop`.
- the old `DistributedSequentialSampler` and `DistributedTensorGatherer` are left and deprecated | 04-16-2021 16:40:55 | 04-16-2021 16:40:55 | |
transformers | 11,285 | closed | `resize_token_embeddings` not taken into account in `save_pretrained` for `EncoderDecoderModel` | ## Environment info
- `transformers` version: 4.5.0
- Platform: Darwin-17.7.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
I am extending the embeddings of the decoder of an `EncoderDecoderModel` model. When I save it, the config does not reflect the new size. However, it works fine when I try doing the same for non `EncoderDecoderModel` models.
## To reproduce
```
In [1]: model = t.EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
In [2]: model.decoder.bert.embeddings.word_embeddings
Out[2]: Embedding(30522, 768, padding_idx=0)
In [3]: model.decoder.resize_token_embeddings(30522+100)
Out[3]: Embedding(30622, 768)
In [4]: model.save_pretrained('test-bert')
```
## Expected behavior
The updated embedding size should be saved in `config.json`
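One way to see the mismatch concretely — a small check against the `test-bert` directory saved above (illustrative; it assumes the nested encoder/decoder layout that `EncoderDecoderModel` writes to `config.json`):
```python
import json

with open("test-bert/config.json") as f:
    cfg = json.load(f)

# with the bug, this still reports the original 30522 even though the
# decoder embedding matrix was resized to 30622
print(cfg["decoder"]["vocab_size"])
```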
| 04-16-2021 15:24:52 | 04-16-2021 15:24:52 | This is caused by the EncoderDecoderConfig which initializes independent objects ([link](https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/src/transformers/models/encoder_decoder/configuration_encoder_decoder.py#L84)) instead of utilizing the already existing ones.
You can fix that for the moment by calling:
```
model.config.decoder = model.decoder.config
model.config.encoder = model.encoder.config
```
PR will follow. |
transformers | 11,284 | closed | Loading from checkpoint seems to hang indefinitely for Roberta | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: Linux-5.8.0-48-generic-x86_64-with-glibc2.29
- Python version: 3.8.7
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes - RTX 3090
- Using distributed or parallel set-up in script?: No
Models:
- albert, bert, xlm: @LysandreJik
- Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): Roberta
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset
## To reproduce
I'm trying to resume training my Roberta model from a checkpoint. When the training initialises it seems to pick up the last checkpoint:
```
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 0
Continuing training from global step 660000
Will skip the first 0 epochs then the first 660000 batches in the first epoch.
```
After that it just hangs, training does not start, no further logging and GPU utilisation is 0. I've left it for over 6 hours and still no progress.
I've tried both loading directly from a checkpoint and initialising the trainer with checkpoint=True: trainer.train("ml/models/araberto/checkpoint-660000") and trainer.train(checkpoint=True)
Code below:
```
from datasets import load_dataset
from datasets import ClassLabel, Value, Sequence
from pathlib import Path
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained(output_path)
dataset = load_dataset('text',
data_files={
"train":[str(x) for x in Path(f"{dataset_path}/train").glob("*.txt")],
"test": str(Path(f"{dataset_path}/test.txt"))
})
def encode(batch):
tokenized = tokenizer(batch.get("text", ""), padding="max_length", truncation=True, max_length=max_length)
return tokenized
dataset.set_transform(encode)
import torch
if not torch.cuda.is_available():
raise Exception("GPU not available")
from transformers import RobertaForMaskedLM
from transformers import RobertaConfig
from transformers import DataCollatorForLanguageModeling
config = RobertaConfig(
vocab_size=vocab_size,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=6,
type_vocab_size=1,
)
model = RobertaForMaskedLM(config=config)
model.num_parameters()
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir=output_path,
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=48,
save_steps=10_000,
save_total_limit=2,
remove_unused_columns=False,
fp16=True,
fp16_backend="amp"
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset["train"],
eval_dataset=dataset["test"]
)
%%time
torch.cuda.is_available()
torch.cuda.empty_cache()
trainer.train("ml/models/roberto/checkpoint-660000")
```
Debug logs:
```
Loading model from ml/models/roberto/checkpoint-660000).
loading configuration file ml/models/roberto/checkpoint-660000/config.json
Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 51000
}
loading weights file ml/models/roberto/checkpoint-660000/pytorch_model.bin
All model checkpoint weights were used when initializing RobertaForMaskedLM.
All the weights of RobertaForMaskedLM were initialized from the model checkpoint at ml/models/roberto/checkpoint-660000.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForMaskedLM for predictions without further training.
***** Running training *****
Num examples = 808405026
Num Epochs = 1
Instantaneous batch size per device = 48
Total train batch size (w. parallel, distributed & accumulation) = 48
Gradient Accumulation steps = 1
Total optimization steps = 16841772
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 0
Continuing training from global step 660000
Will skip the first 0 epochs then the first 660000 batches in the first epoch.
```
Iteration over the dataset seems fine:
```
x = dataset["train"][808405025]
print(x)
{'input_ids': [0, 14527, 606, 606, 503, 616, 13117, 1319, 7537, 93, 2506, 7712, 4897, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
## Expected behavior
Training to resume from the checkpoint | 04-16-2021 15:23:34 | 04-16-2021 15:23:34 | The script is not hanging, it is skipping the first 660,000 batches since you are resuming training from there which takes a lot of time. If you don't mind continuing training with the same data, you can use the option `ignore_data_skip=True` in your training arguments.<|||||>@eh-93 Do you remember how much time it took to train that checkpoint?
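For reference, a minimal sketch of passing the `ignore_data_skip` option suggested above (output directory, batch size and the checkpoint path are placeholders taken from this thread):
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="ml/models/roberto",   # placeholder path
    per_device_train_batch_size=48,
    ignore_data_skip=True,            # do not replay the first 660k batches on resume
)
trainer = Trainer(model=model, args=training_args, train_dataset=dataset["train"])
trainer.train("ml/models/roberto/checkpoint-660000")
```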
@sgugger How about we add a progress bar to make the trainer more user-friendly? <|||||>@sgugger got it, thanks
@cronoik - around 5 days to get to that checkpoint
After 28 hours the training resumed<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>That's not a good solution. A better solution would be to pass a boolean to the CustomDataset, telling it that we are now in a 'skip' mode, so that the CustomDataset can avoid expensive and unneeded steps during the skipping phase, such as tokenizing words. Is HF working on such a solution?
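A rough sketch of the idea in the last comment, with hypothetical class and attribute names (nothing like this exists in the library):
```python
from torch.utils.data import Dataset

class LazyTokenizingDataset(Dataset):
    """Hypothetical dataset that can skip expensive work while batches are being replayed."""

    def __init__(self, texts, tokenizer, skip_mode=False):
        self.texts = texts
        self.tokenizer = tokenizer
        self.skip_mode = skip_mode  # set True while the Trainer fast-forwards over already-seen batches

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        if self.skip_mode:
            # Cheap dummy example; the Trainer throws these batches away anyway.
            return {"input_ids": [self.tokenizer.pad_token_id], "attention_mask": [0]}
        return self.tokenizer(self.texts[idx], truncation=True, padding="max_length", max_length=512)
```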
transformers | 11,283 | closed | Beam search decoding and language model integration for Wav2Vec2ForCTC models | 1. AFAIK, `Wav2Vec2ForCTCTokenizer.decode` method only provides greedy decoding. Is there a Beamsearch implementation for CTC available yet?
2. Also, as it is a common norm in ASR modelling, language models are also generally added on top of the acoustic model. It would also be nice to have a possibility of appending a pretrained Language model which gets taken into consideration at the beamsearch decoding time. Not sure if there's an out-of-box solution implemented for that yet?
I'm also aware of efforts to integrate a language model in #10794 and have had a look at the notebook [here](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb). Although it is a nice, simple way to integrate an LM, it is suboptimal when considering CTC semantics. A more appropriate approach would be the one described in [this](https://arxiv.org/pdf/1408.2873.pdf) paper and explained in [this](https://distill.pub/2017/ctc/) distilpub blog. Would be great to have these features added (if they are already not there and I somehow missed them). | 04-16-2021 14:43:10 | 04-16-2021 14:43:10 | Hey @tanujjain,
We are very interested in adding beam search for Wav2Vec2 + LM support in general, but sadly don't find the time to do so at the moment. We would be really happy about a contribution if you want to give it a try.
As a start we could add the logic to `examples/research_projects/wav2vec2` and if it's clean then move to upstream to `src/transformers`<|||||>@patrickvonplaten Sure, I'll give it a go.<|||||>Hello @patrickvonplaten and @tanujjain,
I have already worked with prefix beam search decoding with language models for wav2vec2 and would like to implement it for huggingface, if you guys are okay with it.<|||||>PRs are very much welcome!<|||||>Any update on this? Specifically any transformer based lm that one can use with wav2vec 2.0?<|||||>As a quick solution, I used the code by original author of the algo which can be found [here](https://gist.github.com/awni/56369a90d03953e370f3964c826ed4b0).
``` python
import numpy as np
import math
import collections
NEG_INF = -float("inf")
def make_new_beam():
fn = lambda : (NEG_INF, NEG_INF)
return collections.defaultdict(fn)
def logsumexp(*args):
"""
Stable log sum exp.
"""
if all(a == NEG_INF for a in args):
return NEG_INF
a_max = max(args)
lsp = math.log(sum(math.exp(a - a_max)
for a in args))
return a_max + lsp
def decode(probs, beam_size=100, blank=0):
"""
Performs inference for the given output probabilities.
Arguments:
probs: The output probabilities (e.g. post-softmax) for each
time step. Should be an array of shape (time x output dim).
beam_size (int): Size of the beam to use during inference.
blank (int): Index of the CTC blank label.
Returns the output label sequence and the corresponding negative
log-likelihood estimated by the decoder.
"""
T, S = probs.shape
probs = np.log(probs)
# Elements in the beam are (prefix, (p_blank, p_no_blank))
# Initialize the beam with the empty sequence, a probability of
# 1 for ending in blank and zero for ending in non-blank
# (in log space).
beam = [(tuple(), (0.0, NEG_INF))]
for t in range(T): # Loop over time
next_beam = make_new_beam() # A default dictionary to store the next step candidates.
for s in range(S): # Loop over vocab
p = probs[t, s]
# The variables p_b and p_nb are respectively the
# probabilities for the prefix given that it ends in a
# blank and does not end in a blank at this time step.
for prefix, (p_b, p_nb) in beam: # Loop over beam
# If we propose a blank the prefix doesn't change.
# Only the probability of ending in blank gets updated
if s == blank:
n_p_b, n_p_nb = next_beam[prefix]
n_p_b = logsumexp(n_p_b, p_b + p, p_nb + p)
next_beam[prefix] = (n_p_b, n_p_nb)
continue
# Extend the prefix by the new character s and add it to
# the beam. Only the probability of not ending in blank
# gets updated.
end_t = prefix[-1] if prefix else None
n_prefix = prefix + (s,)
n_p_b, n_p_nb = next_beam[n_prefix]
if s != end_t:
n_p_nb = logsumexp(n_p_nb, p_b + p, p_nb + p)
else:
# We don't include the previous probability of not ending
# in blank (p_nb) if s is repeated at the end. The CTC
# algorithm merges characters not separated by a blank.
n_p_nb = logsumexp(n_p_nb, p_b + p)
# *NB* this would be a good place to include an LM score.
next_beam[n_prefix] = (n_p_b, n_p_nb) ## add lm here
# If s is repeated at the end we also update the unchanged
# prefix. This is the merging case.
if s == end_t:
n_p_b, n_p_nb = next_beam[prefix]
n_p_nb = logsumexp(n_p_nb, p_nb + p)
next_beam[prefix] = (n_p_b, n_p_nb)
# Sort and trim the beam before moving on to the
# next time-step.
beam = sorted(next_beam.items(),
key=lambda x : logsumexp(*x[1]),
reverse=True)
beam = beam[:beam_size]
best = beam[0]
return best[0], -logsumexp(*best[1])
# Try the algo on an example
time = 50
output_dim = 20
batch_size = 16
batch_probs = np.random.rand(batch_size, time, output_dim)
decoded_batch = []
for b in batch_probs:
norm_b = b/np.sum(b, axis=1, keepdims=True)
decoded_batch.append(decode(norm_b, beam_size=3)[0])
```
Trying to add a language model (for german) like so:
``` python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer_de = AutoTokenizer.from_pretrained("dbmdz/german-gpt2")
model_de = AutoModelWithLMHead.from_pretrained("dbmdz/german-gpt2", return_dict_in_generate=True)
def lm_prob(sentence):
last_word_token = tokenizer_de.encode(sentence.split(' ')[-1])
earlier_sentence = ' '.join(sentence.split(' ')[:-1])
input_ids_earlier_sent = tokenizer_de.encode(earlier_sentence, return_tensors="pt") # tokenize rest of the sentence
generated_outputs_lm = model_de.generate(input_ids_earlier_sent,
max_length=len(input_ids_earlier_sent[0]) + 1,
do_sample=True,
num_return_sequences=1,
output_scores=True)
sftmax_prob_lm = generated_outputs_lm.scores[0].softmax(-1)
prob = sftmax_prob_lm[0, last_word_token]
return prob
```
The LM snippet should give the probability of the last word in a beam given all the preceding characters, but the probabilities for the words I expect are almost always close to zero, so I'm still figuring out how best to use the LM. Hence, I haven't integrated the LM with the above snippet.
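One way the near-zero probabilities might be avoided (a sketch only, reusing `tokenizer_de` and `model_de` from above): score the candidate continuation with the LM's own logits instead of sampling a single next token.
```python
import torch

def lm_log_prob(sentence: str) -> float:
    ids = tokenizer_de.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model_de(ids).logits
    # log-probability of each actual token given its prefix, summed over the sentence
    log_probs = logits[:, :-1].log_softmax(-1)
    targets = ids[:, 1:]
    return log_probs.gather(-1, targets.unsqueeze(-1)).sum().item()
```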
As for a decent implementation for beamsearchforctc, I'm thinking on the lines of running the above algo (not the same code obviously) with each sequence in the batch running an independent beamsearch on a different thread/process.
**Anyone with less complex implementational ideas?**
Found another implementation [here](https://github.com/githubharald/CTCDecoder/blob/master/src/BeamSearch.py) (without consideration for batch inference).
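A rough sketch of the batching idea above, running the `decode` function defined earlier on each sequence of a batch in separate worker processes (beam size and worker count are arbitrary):
```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def decode_batch(batch_probs, beam_size=25, blank=0, workers=4):
    # batch_probs: iterable of (time, vocab) arrays of normalized probabilities
    decode_fn = partial(decode, beam_size=beam_size, blank=blank)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_fn, batch_probs))
```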
<|||||>> As for a decent implementation for beamsearchforctc, I'm thinking on the lines of running the above algo (not the same code obviously) with each sequence in the batch running an independent beamsearch on a different thread/process.
There you go: https://github.com/mozilla/DeepSpeech/blob/master/native_client/ctcdecode/ctc_beam_search_decoder.cpp#L287
I'd highly encourage to also consider returning the frames where the probability of the token spikes as it can be used for alignment. Mozilla did it in their implementation and it works quite nicely.
Is there any restriction on the programming language? The computational complexity of the algorithm is quite high and ctc beam search decoding often the bottleneck.<|||||>I think we can try to add a dependency to wav2letter: https://github.com/flashlight/wav2letter and add LM decoding as explained here on fairseq: https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md#evaluating-a-ctc-model . It would be awesome if we manage to create a nice `run_wav2vec2_eval_with_lm.py` script that people can use out of the box with every wav2vec2 model. We can also make a nice blog post out of this and publish it on our blog :-)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>ping<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>For future developers, you may find this implementation useful. I used the simplest code possible to develop it
https://github.com/farisalasmary/wav2vec2-kenlm
<|||||>I'm now working on this topic full time.
We will most likely foster a closer collaboration between [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and Transformers. [Here](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode) is a github repo that shows how to use `pyctcdecode` with Wav2Vec2 for LM supported decoding. It works quite well with KenLM. |
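An untested sketch of that combination (the checkpoint name and the KenLM path are placeholders; `pyctcdecode`'s `build_ctcdecoder` does the LM-fused beam search):
```python
import numpy as np
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# vocabulary sorted by token id, as the decoder expects
vocab = [tok for tok, _ in sorted(processor.tokenizer.get_vocab().items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(vocab, kenlm_model_path="path/to/lm.arpa")  # placeholder LM

def transcribe(waveform: np.ndarray) -> str:
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0].cpu().numpy()
    return decoder.decode(logits)
```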
transformers | 11,282 | closed | tf.function and half precision fails with Roberta models | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-71-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No (but also fails with GPU)
- Using distributed or parallel set-up in script?: No
### Who can help
As far as I can tell, this worked before #9788, so maybe @jplu can help. Also this is a TF issue so @Rocketknight1 .
## Information
Model I am using (Bert, XLNet ...): TFRoberta, this also happens with TFXLMRoberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```py3
import tensorflow as tf
from transformers.models.roberta import RobertaTokenizerFast, TFRobertaModel
@tf.function
def get_embeddings(
tokenizer: RobertaTokenizerFast, model: TFRobertaModel, text: str
) -> tf.Tensor:
return model(**tokenizer(text, return_tensors="tf")).last_hidden_state
if __name__ == "__main__":
tf.keras.mixed_precision.set_global_policy("float16")
name = "roberta-base"
tokenizer = RobertaTokenizerFast.from_pretrained(name)
model = TFRobertaModel.from_pretrained(name)
embeddings = get_embeddings(
tokenizer=tokenizer,
model=model,
text="tf.function and mixed precision",
)
print(embeddings)
```
Traceback:
```
File "roberta_bug.py", line 17, in <module>
embeddings = get_embeddings(
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 725, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3196, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
roberta_bug.py:9 get_embeddings *
return model(**tokenizer(text, return_tensors="tf")).last_hidden_state
/home/arthur/reinfer/env/lib/python3.8/site-packages/transformers/models/roberta/modeling_tf_roberta.py:744 call *
outputs = self.roberta(
/home/arthur/reinfer/env/lib/python3.8/site-packages/transformers/models/roberta/modeling_tf_roberta.py:544 call *
extended_attention_mask = tf.multiply(tf.subtract(1.0, extended_attention_mask), -10000.0)
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **
return target(*args, **kwargs)
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py:561 subtract
return gen_math_ops.sub(x, y, name)
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py:10316 sub
_, _, _op, _outputs = _op_def_library._apply_op_helper(
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py:555 _apply_op_helper
raise TypeError(
TypeError: Input 'y' of 'Sub' Op has type float16 that does not match type float32 of argument 'x'.
```
## Expected behavior
The model should calculate embeddings correctly. This is due to `tf.subtract(1.0, extended_attention_mask)` checking that `1.0` and `extended_attention_mask` have the same type, but in `float16` mode they do not. Reverting to `1.0 - extended_attention_mask` fixes the issue.
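For illustration, the dtype-consistent pattern looks like this outside the model (a standalone sketch with an arbitrary mask shape; the maintainers' in-model fix is quoted in the reply below):
```python
import tensorflow as tf

extended_attention_mask = tf.ones((1, 1, 1, 4), dtype=tf.float16)  # as under a float16 policy
one_cst = tf.constant(1.0, dtype=extended_attention_mask.dtype)
minus_10k = tf.constant(-10000.0, dtype=extended_attention_mask.dtype)
extended_attention_mask = tf.multiply(tf.subtract(one_cst, extended_attention_mask), minus_10k)
print(extended_attention_mask.dtype)  # float16
```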
| 04-16-2021 14:12:31 | 04-16-2021 14:12:31 | Thanks for sharing this issue!
The issue here indeed comes from 1.0 that is not from the same dtype than `extended_attention_mask`. A better fix would be to align this line such as the other models by replacing it with:
```
extended_attention_mask = tf.cast(extended_attention_mask, dtype=embedding_output.dtype)
one_cst = tf.constant(1.0, dtype=embedding_output.dtype)
ten_thousand_cst = tf.constant(-10000.0, dtype=embedding_output.dtype)
extended_attention_mask = tf.multiply(tf.subtract(one_cst, extended_attention_mask), ten_thousand_cst)
```
Extracted from BERT. I will do a fix.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,281 | closed | Adding and consequently removing tokens leads to incorrect number of input embeddings | ## Environment info
- `transformers` version: 4.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Using `gpt2-medium`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I am attempting to undo `add_tokens()` and `resize_token_embeddings()` for a given, fine-tuned gpt2-medium model. I deleted the token `del tokenizer.added_tokens_encoder[token]` and `model.resize_token_embeddings(len(tokenizer))`, but there remain too many embeddings in the model and consequently, the output is corrupted.
Steps to reproduce the behavior:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
model_path = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_path)
model = GPT2LMHeadModel.from_pretrained(model_path)
def speak(model, tokenizer, prefix):
input_ids = tokenizer.encode(prefix, return_tensors='pt')
output_ids = model.generate(input_ids, max_length=15, return_dict_in_generate=True, do_sample=False).sequences
print(tokenizer.decode(output_ids[0]))
print(len(tokenizer), len(tokenizer.encoder))
print(model.get_input_embeddings())
print('\n\n')
# out-of-the-box
speak(model, tokenizer, 'I like cheese')
# added token
tokenizer.add_tokens('cheese')
model.resize_token_embeddings(len(tokenizer))
speak(model, tokenizer, 'I like cheese')
# removed token
del tokenizer.added_tokens_encoder['cheese']
model.resize_token_embeddings(len(tokenizer))
speak(model, tokenizer, 'I like cheese')
```
This results in
```
I like cheese.<|endoftext|>
50258 50257
Embedding(50257, 1024)
I like cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese
50259 50257
Embedding(50259, 1024)
I like<|endoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|>
50258 50257
Embedding(50258, 1024)
```
## Expected behavior
`model.get_input_embeddings()` of the third output should be equal to the first output `Embedding(50257, 1024)`. Note that I used a fine-tuned version of `gpt2-medium` and I wasn't able to recreate the issue entirely with a pretrained model, but even the deterministic output of a pretrained model will change after deleting a previously added token.
Is this expected behavior?
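One defensive pattern that should make the add/remove round trip exact (a sketch only; it simply snapshots the embedding matrix, and whether the resize itself is the culprit is not established here):
```python
import torch

orig_vocab_size = model.get_input_embeddings().weight.shape[0]
orig_embeddings = model.get_input_embeddings().weight.detach().clone()

tokenizer.add_tokens('cheese')
model.resize_token_embeddings(len(tokenizer))
# ... use the extended model ...

del tokenizer.added_tokens_encoder['cheese']
model.resize_token_embeddings(orig_vocab_size)
with torch.no_grad():
    model.get_input_embeddings().weight.copy_(orig_embeddings)
```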
| 04-16-2021 13:47:20 | 04-16-2021 13:47:20 | Hey @doubleplusnice,
I can't really reproduce the bug.
When running your code the output I get is:
```
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
/home/patrick/python_bin/transformers/generation_utils.py:963: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
warnings.warn(
I like cheese, but I don't like cheese. I like cheese because
50257 50257
Embedding(50257, 768)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
I like cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese
50258 50257
Embedding(50258, 768)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
I like<|endoftext|>The first time I saw the new "The Walking Dead"
50257 50257
Embedding(50257, 768)
```
which seems correct to me.
Can you maybe try to update on master? Also, I'm testing with the `gpt2` checkpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,280 | closed | failed to import BertModel | # 📚 Migration
## Information
My torch version is 1.6.0. When I try to import BertModel from transformers, it raises an error: ModuleNotFoundError: No module named '_sentencepiece'
I first activated my env and installed the library with 'conda install transformers'.
Please help me, how can I address this problem? | 04-16-2021 13:10:46 | 04-16-2021 13:10:46 | I don't know what your transformers version is, but from your error message, you should install `sentencepiece`: `pip install sentencepiece` or `conda install sentencepiece` with the appropriate channel (probably `conda-forge`)<|||||>Thank u very much, I have solved this problem. Your work is really remarkable.
<|||||>Happy to help!<|||||>The
transformers | 11,279 | closed | fp16 compatibility | I am using an RTX 3090 with CUDA 11.0 on Ubuntu 18.04.
I am now running into the following problem and would appreciate advice on how to solve it.
Epoch: 0%| | 0/2 [00:00<?, ?it/s]
Iteration: 0%| | 0/10860 [00:00<?, ?it/s]
Iteration: 0%| | 0/10860 [00:03<?, ?it/s]
Epoch: 0%| | 0/2 [00:03<?, ?it/s]
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",)
Traceback (most recent call last):
File "./examples/run_cls.py", line 645, in <module>
main()
File "./examples/run_cls.py", line 533, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "./examples/run_cls.py", line 159, in train
outputs = model(**inputs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jitingyu/AwesomeMRC-master/transformer-mrc/transformers/modeling_albert.py", line 688, in forward
inputs_embeds=inputs_embeds
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jitingyu/AwesomeMRC-master/transformer-mrc/transformers/modeling_albert.py", line 524, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
StopIteration | 04-16-2021 12:43:08 | 04-16-2021 12:43:08 | Please fill in the issue template for us to help you. You seem to be on an older transformers version, this was fixed in recent versions.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,278 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | 04-16-2021 12:05:20 | 04-16-2021 12:05:20 | |
transformers | 11,277 | closed | We should make an eco freindly phone and it should be affordable for everyone | 04-16-2021 11:39:15 | 04-16-2021 11:39:15 | Seems out of scope. |
|
transformers | 11,276 | open | Running gpt-neo 2.7B with less than 13GB of system memory like Colab | # 🚀 Feature request
A way to conserve regular system memory while loading large models. On systems without much system memory, the process crashes because it tries to load both the weight checkpoint and the model into system memory. In the case of gpt-neo 2.7B, this is even worse because the checkpoint is offered only in float32 format, taking up twice as much space.
## Motivation
Free Colab often offers GPUs with 16GB of VRAM but only about 13GB of RAM. This increases the barrier of entry for people to play with these kinds of models. I have found a way to load models in this kind of situation, but it is not general or well integrated. By posting it here, I hope that a more general implementation can be built at some point.
This would also help #11271.
## Your contribution
It is possible to work around this by loading the checkpoint directly into VRAM, casting it to float16, instantiating the model in VRAM and only then applying the weights from the checkpoint.
To do this, first a patch has to be applied to src/transformers/models/gpt_neo/modeling_gpt_neo.py. This is based on the 4.5.1 release.
703c703
< self.h = nn.ModuleList([GPTNeoBlock(config, layer_id=i) for i in range(config.num_layers)])
---
> self.h = nn.ModuleList([GPTNeoBlock(config, layer_id=i).half().cuda() for i in range(config.num_layers)])
890,891c890,891
< self.transformer = GPTNeoModel(config)
< self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
---
> self.transformer = GPTNeoModel(config).half().cuda()
> self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False).half().cuda()
This causes space for the biggest part of the model to be allocated directly on the GPU, which has more space in the free Colab scenario. It also moves the other parts of the model to GPU. Now the model can be instantiated like this:
from transformers.file_utils import cached_path, WEIGHTS_NAME, hf_bucket_url
model_name = "EleutherAI/gpt-neo-2.7B"
archive_file = hf_bucket_url(model_name, filename=WEIGHTS_NAME)
resolved_archive_file = cached_path(archive_file)
checkpoint = torch.load(resolved_archive_file, map_location="cuda:0")
for k in checkpoint.keys():
checkpoint[k] = checkpoint[k].half()
model = GPTNeoForCausalLM.from_pretrained(model_name, state_dict=checkpoint).half().to("cuda")
for k in list(checkpoint.keys()):
del checkpoint[k] | 04-16-2021 10:41:54 | 04-16-2021 10:41:54 | |
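A possible usage sketch once the model is loaded this way (the prompt text and sampling settings are arbitrary; GPT-Neo reuses the GPT-2 tokenizer):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained(model_name)
ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(ids, max_length=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0]))
```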
transformers | 11,275 | closed | modify double considering special tokens in `language_modeling.py` | # What does this PR do?
In `class TextDatasetForNextSentencePrediction`, the special tokens are counted twice via `self.tokenizer.num_special_tokens_to_add(pair=True)`.
So I remove `self.block_size` and add a parameter to `def create_examples_from_document`, as `class LineByLineWithSOPTextDataset` does.
Fixes # (issue): special tokens were counted twice in `language_modeling.py`.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-16-2021 05:57:16 | 04-16-2021 05:57:16 | |
transformers | 11,274 | closed | [debug utils] activation/weights underflow/overflow detector | This PR came to be out of the overflow issue we have been dealing with in t5/mt5/gpt-neo due to bf16 pretrained models. This PR:
* adds a new file `debug_utils.py`
* adds new helper debug class `DebugUnderOverflow` and function `detect_overflow` for doing the same for any tensor variable (useful for detailed debug).
* extends `Trainer` to support `--debug underflow_overflow` which automatically activates this detector - no changes to the code required
* overloads the old `--debug` which for some reason was used for very specific tpu debug prints yet, so folding that feature into now multi-optional `--debug` (similar to `--sharded_ddp`). the old unqualified `--debug` is now specific `--debug tpu_metrics_debug`. I know it sort of breaks back-compat for `--debug`, but since it's debug it's hopefully OK.
* creates a new doc `debugging.rst` - will add some more useful debug recipes into it later
I'm open to suggestions of different namings to all of the new things...
@LysandreJik, @sgugger | 04-16-2021 02:29:31 | 04-16-2021 02:29:31 | > I haven't commented on each print statement, but they should use the logger maybe?
I started with it first and then replaced with print, because we need all the horizontal space and the really busy long pre-amble is just getting in the way, IMHO. I also don't see what useful information it'd contribute because the tool raises an exception when it detects the problem. Finally, what if someone disabled the logger - it'd not be able to do its work then. Please correct me if I'm missing something.
> Also I think the `DebugActivationOverflow` should be documented in our internals API doc (since we tell people to use it in their own Trainers). `internal/trainer_utils` is probably the place for that.
Will do. Thank you for suggesting where to put it.
Converted to .rst - found this new tool https://github.com/miyakogi/m2r that did it well, just needed to clean up a weird quirk.
<|||||>All links have been added and tweaked the doc some more to improve readability.<|||||>OK, the original functionality has been expanded to include a lot more useful information. Please see the updated copious documentation both in the docstring and the user docs.
The main changes are that:
1. we now print each frame separately and include inputs/outputs/weights
2. there is a tracing mode which can easily trace any number of batches at will
I wasn't sure how I could integrate the new features into the limited `--debug underflow_overflow` interface as it now has 3 optional parameters. So for now these can be activated directly from the script. If you can think how I could make these work with ``TrainingArguments`` I'm all ears.
As this has changed a lot inside I'd appreciate another look from @sgugger and @LysandreJik - no rush please. And thank you! |
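For reference, a minimal sketch of turning the detector on from `TrainingArguments` (the flag spelling follows this PR's description and may differ in the final API):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    debug="underflow_overflow",  # activates the under/overflow detector described above
)
```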
transformers | 11,273 | closed | update dependency_versions_table | missed this updating when bumped the version.
| 04-16-2021 02:03:23 | 04-16-2021 02:03:23 | |
transformers | 11,272 | closed | squad_convert_example_to_features is broken | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
Models:
NA
## Information
The `squad_convert_example_to_features` function requires a tokenizer, but there is no way to give it one, so you always get `NameError: name 'tokenizer' is not defined`.
## To reproduce
Call `squad_convert_example_to_features` with any input.
## Expected behavior
Convert a squad example to features.
| 04-16-2021 00:05:27 | 04-16-2021 00:05:27 | Please use `squad_convert_example_to_features_init(yourtokenizer)` to set the tokenizer. |
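A minimal sketch of that call order (argument names follow the 4.x signature; the checkpoint and example values are arbitrary):
```python
from transformers import AutoTokenizer
from transformers.data.processors.squad import (
    SquadExample,
    squad_convert_example_to_features,
    squad_convert_example_to_features_init,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
squad_convert_example_to_features_init(tokenizer)  # registers the module-level tokenizer

example = SquadExample(
    qas_id="1",
    question_text="Who wrote Hamlet?",
    context_text="Hamlet is a tragedy written by William Shakespeare.",
    answer_text="William Shakespeare",
    start_position_character=31,
    title="Hamlet",
)
features = squad_convert_example_to_features(
    example,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    padding_strategy="max_length",
    is_training=False,
)
```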
transformers | 11,271 | closed | gpt-neo 2.7 crashes, 1.3 runs fine | Loading the generator crashes Python
```
python3
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B', device=0)
[2]+ Killed python3
Killed
```
I have an A6000 with 48gb.
... it looks like I'm running on 16gb of ram?? maybe a stick is dead -- usually 32gb.
Is it a system ram issue?
## Environment info
- `transformers` version: 4.5.0
- Platform: Linux-5.4.0-70-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.0.dev20210217+cu112 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| 04-15-2021 23:02:35 | 04-15-2021 23:02:35 | I would guess that's a memory issue indeed! That should be the only difference between the two checkpoints.<|||||>Oof makes sense. I'll check back if that doesn't fix it. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,270 | closed | Workflow fixes | Fixes some workflow issues:
- Installs torch scatter in the CI with the appropriate pre-compiled version
- Removes DeepSpeed and Fairscale from the non-cuda-extension workflows
- Adds the forgotten reports for cuda-extension workflows
- Adds the result of the cuda-extension workflows to be sent to Slack
Also it updates the `deepspeed` dependency in the dependency table, because they seem mismatched on `master` (running `make fixup` fixed it for me) | 04-15-2021 22:04:34 | 04-15-2021 22:04:34 | That's one advantage of using the docker images! We control the exact CUDA and torch versions by controlling the images directly, so it won't break until we manually update it, at which point we should remember to be extra careful about this dependency. |
transformers | 11,268 | closed | DataCollatorForSOP marked as deprecated but DataCollatorForLanguageModeling does not offer the same functionality | The `DataCollatorForSOP` is marked as deprecated and it is recommended to use the `DataCollatorForLanguageModeling` instead. [Link to the data_collator.py](https://github.com/huggingface/transformers/blob/4bae96ec2bee265f938fc262201538819419089a/src/transformers/data/data_collator.py)
As far as I can tell, the labels for the sentence order prediction task (`sentence_order_label`) are not set in `DataCollatorForLanguageModeling`.
Will this be added to `DataCollatorForLanguageModeling` in a future release, or what is the correct procedure when a data collator is needed for both the mlm and sop tasks at the same time, as in ALBERT training?
| 04-15-2021 13:53:35 | 04-15-2021 13:53:35 | The `sentence_order_label` will be left as is and collated if your dataset provides them. This is tested [here](https://github.com/huggingface/transformers/blob/2550b41aa2ec34f05ddfd3ec5875ddb32ad78d58/tests/test_data_collator.py#L268) which is adapted from the old test of `DataCollatorForSOP`.<|||||>Oh, now I see it too. I was kind of under the assumption that the DataCollator could also be used on non-preprocessed data to tokenize and preprocess batches on demand. |
transformers | 11,267 | closed | inf/nan in generate (beam_sample) with small temperature values | ## Environment info
- transformers` version: transformers version: '4.6.0.dev0'
- Platform: Linux
- Python version: 3.6.9
- PyTorch version (GPU?): '1.8.0' (yes)
## Information
The `generate` function (`beam_sample`) throws an error when passing small temperature values.
## To reproduce
```
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer
)
model_name = "sshleifer/distilbart-xsum-12-3"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "New York City (NYC), often simply called New York, is the most populous city in the United States"
input_ids = tokenizer.encode(text, return_tensors='pt')
sample_outputs = model.generate(input_ids,
num_beams=3,
do_sample=True,
temperature=0.2
)
```
```
Traceback (most recent call last):
File "test.py", line 16, in <module>
temperature=0.2
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/generation_utils.py", line 1113, in generate
**model_kwargs,
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/generation_utils.py", line 2134, in beam_sample
next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
Another way to reproduce this error is using higher temperatures and more iterations (generate a longer output).
It looks like this error is caused by `next_token_scores` growing to -inf and `probs` becoming nan.
Apparently, large absolute values accumulate over iterations because `next_token_scores` are no longer normalized after adding unnormalized `beam_scores`.
`beam_scores` are calculated from the output of `logits_warper(input_ids, next_token_scores)`,
and can grow fast with low temperatures (warper does: `scores = scores / self.temperature`).
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Is the increase of unscaled values a desired behaviour and should one just implement their own `logits_warper` handling float overflow?
If not, a quick fix, just for demonstration, is scaling the values of `beam_scores` added to `next_token_scores` by replacing:
`next_token_scores = next_token_scores + beam_scores[:, None].expand_as(next_token_scores)`
with:
`beam_scores_softmax = F.softmax(beam_scores, dim=-1) `
`next_token_scores = next_token_scores + beam_scores_softmax[:, None].expand_as(next_token_scores)`
It works fine but changes absolute values of scores users may rely on.
| 04-15-2021 13:39:19 | 04-15-2021 13:39:19 | Hi @elsanns
Great catch, thanks for the detailed explanation!
Your observation is right and I can re-produce this error.
Re, your question
> should one just implement their own logits_warper handling float overflow?
Actually there's an `InfNanRemoveLogitsProcessor ` (#10769) which does just that, and can be enabled by passing `remove_invalid_values=True` to `generate`. But the problem is that it replaces the `inf` values by the maximum value for the current `dtype` which is still quite large and ends up becoming `inf` again after adding the `beam_scores`.
Also if you use `InfNanRemoveLogitsProcessor` as `logits_warper` (so that it gets applied after adding the `beam_scores`) then it no longer gives this error but seems to be shifting the distribution and the generated output doesn't make sense.
I tried your fix of normalizing `beam_scores` and it seems to be working.
One possible solution would be to add `normalize_beam_scores` argument and when it is `True`, `BeamScorer` would return the normalized `beam_scores`.
What do you think @patrickvonplaten?<|||||>Hi @patil-suraj,
Thanks for replying!
I think there are several ways of scaling `beam_scores`, e.g. using `beam_sample` with a custom `beam_scorer` scaling the input before processing. Pros: no changes to the code, cons: not available through `generate`.
Another approach would be applying logits processors and warper before softmax but it could be a breaking change for users writing custom processors.<|||||>Hey @elsanns,
Sorry for answering so late! My answer here: https://github.com/huggingface/transformers/issues/14993#issuecomment-1003945387 might also be of relevance.
In short, I think there are a couple of things here:
- `beam_sample()` is quite an edge-case because encoder-decoder models are usually evaluated with `beam_search` instead? Could I ask why you chose to use `beam_sample()` here? Did it give better results for you?
- Distilled models, like `distilbart` tend to have more extreme output logits as lots of knowledge is compressed into comparably little capacity
- Lastly, as said in the linked answer above, I don't know of an "official" beam sample algorithm which is the reason `transformers` `beam_sample()` algorithm is not implemented according to an official paper or any mathematically sound algorithm.
IMO a better solution than having the beam score normalize it's outputs would be to maybe add a `Normalizer` to the logits warper so that before the logits are sampled they are being normalized.
In case we see more and more of issues like this one or https://github.com/huggingface/transformers/issues/14993 we might also consider change the `beam_sample()` algorithm to follow the approach proposed in Algorithm 2 in https://github.com/huggingface/transformers/issues/14993 . This would however be a big breaking change and I am currently really not sure that it is worth it
<|||||>Hi @patrickvonplaten,
Thank you for a detailed answer.
I noticed this behaviour testing various decoding methods, and I don't recall seeing a significant advantage of `beam_sample` in any particular use case.
Since the new approach would be a breaking change, it seems a right solution to keep it the way it is for now.
Thanks again for your answer<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,266 | closed | chunk of words for input token | Hi I have some questions about using pretrained bert.
Can I put a chunk of words into one input token? For example, split "hi my name is Linda and today i will~" into "hi my name is Linda" and "and today i will", turn each split into one embedding vector (e.g. an average of word2vec vectors), and treat each split vector as one input token. Is it okay to apply this to the existing pre-trained models?
Actually, I'm forced to use phrase-wise tokens in my task, so the models for long sequences are not an option.
Thanks | 04-15-2021 11:27:31 | 04-15-2021 11:27:31 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,265 | closed | TensorFlow "predict" returns empty output with MirroredStrategy | I'm trying to use the `predict` method of the Keras TensorFlow API but it returns an empty output despite the input is being processed. Calling the model seems to work.
EDIT: the predict method works correctly if the model is loaded with single GPu strategy.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `4.5.1`
- Platform: Linux CentOS 8.1
- Python version: `3.7.10`
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): `2.3.2`(True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: multi-gpu on a single machine
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using: Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BertTokenizerFast, TFBertForSequenceClassification
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
#strategy = tf.distribute.OneDeviceStrategy("/gpu:0")
with strategy.scope():
tf_model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
inputs = tokenizer('This is a test', 'Esto es una prueba',
return_tensors='tf', max_length=200,
padding='max_length', truncation=True,
return_attention_mask=True,
return_token_type_ids=False)
print(tf_model.predict([inputs["input_ids"], inputs["attention_mask"]],
verbose=1))
print(tf_model([inputs["input_ids"], inputs["attention_mask"]]))
```
```
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
WARNING:tensorflow:From /venv/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Iterator.get_next_as_optional()` instead.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
1/1 [==============================] - 0s 241us/step
TFSequenceClassifierOutput(loss=None, logits=None, hidden_states=None, attentions=None)
TFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[-0.47814545, 0.35146457]], dtype=float32)>, hidden_states=None, attentions=None)
```
## Expected behavior
Output should be the same as when model is being called.
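One possible workaround sketch (untested against this particular bug; it simply wraps the model so that `predict()` returns a plain logits tensor instead of the model-output object):
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    hf_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
    input_ids = tf.keras.Input(shape=(200,), dtype=tf.int32, name="input_ids")
    attention_mask = tf.keras.Input(shape=(200,), dtype=tf.int32, name="attention_mask")
    logits = hf_model(input_ids, attention_mask=attention_mask).logits
    wrapper = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=logits)

# wrapper.predict([inputs["input_ids"], inputs["attention_mask"]]) should now return a plain array.
```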
| 04-15-2021 10:50:20 | 04-15-2021 10:50:20 | Tested on latest release and still present.<|||||>Pinging our TensorFlow expert, @Rocketknight1 <|||||>I've managed to reproduce this but I'm very confused about the cause, especially because I'm pretty sure I've used model.predict with MirroredStrategy in our codebase before.
I've tested your code snippet with a standard RNN instead of BERT and confirmed that it works fine, and I tried distilbert instead of BERT and the problem remained, so the problem does seem to be the combination of MirroredStrategy and our models.
I'm going to keep poking around at this, but if you discover anything else that might help me figure out what's going on, please let me know!<|||||>Update: This bug appears in our `run_text_classification.py` script too, again only when using predict(). I'm investigating.<|||||>Update 2: `fit()` and `evaluate()` seemed to work correctly in a MirroredStrategy context (which is good because I have a whole example that uses them). The issue is specific to `predict()`<|||||>Hi, just keeping this issue alive! I've traced the issue to the way we return our values from the `call()` methods - I think Keras doesn't like the thing we do with a subclassed OrderedDict. We're going to reach out to our contacts at Google in the next couple of days and figure out what the best approach is - whether we need to refactor that totally, or if there's an easy workaround.<|||||>Putting this here as a writeup of what we know so far:
The issue is not caused by returning an `OrderedDict`, but instead because we return a `TFBaseModelOutput`, which is a subclass of `OrderedDict` decorated with dataclass. Refer to the code [here](https://github.com/huggingface/transformers/blob/38a716cd41f22f6a7d5ff3dc081903090198803a/src/transformers/modeling_tf_outputs.py#L24-L46).
If we just return a dict, `OrderedDict` or `ModelOutput` (the parent class for `TFBaseModelOutput`, subclassed from `OrderedDict`), everything works okay. Therefore the central issue is this data class, which will probably need to be removed. We're looking at how we can do that now!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
Any updates about this issue? <|||||>definitely looking forward to a fix for this. how can we help @Rocketknight1?<|||||>@jmwoloso @ayalaall
Hey all! I'm going to reopen this issue, even though we're short on bandwidth for it right now. The current situation is that we know where the problem lies - it's in the fact that we're returning a `@dataclass` decorated object from our models, and that doesn't play nicely with Keras. We get away with it when we're not in a `Strategy` context, but it breaks inside of one, even though `fit()` still usually works correctly.
The problem is that even though the change needed is relatively small, it's finicky because we place a lot of value on maintaining a very consistent API for users, and changing the return class for every TF model on the entire hub is a big deal. So we need to find some way to make sure existing code is as unaffected as possible in the process, and that requires some engineering exploration.
The good news is the `@dataclass` decorator is really just for convenience rather than a critical part of the class - we just use it to ensure that certain keys are always present in the output dict, and set with default values, and it got ported over from the original PyTorch code. We could probably make some other subclass of `Dict` or `OrderedDict` and return that, and maybe that would play nicer with Keras, but I have a few other major things on my to do list, so I don't know if I'll be able to get to that for a month or two. If anyone wants to experiment and file a PR, feel free to ask any questions you want here. If not, I'll do my best to get to it as soon as I can.<|||||>Say no more @Rocketknight1! I'll take a look and get familiar with the components involved and see if I can devise a minimally-invasive solution. Thanks for re-opening!<|||||>You can assign this to me if you like as well.<|||||>@jmwoloso Sure, if you'd like! If you have any questions along the way, feel free to ask.<|||||>@ZJaume @ayalaall @Rocketknight1
An update for the group. I'm still doing some testing, but this is fixed in both `master` and `transformers==4.10.0`!
Using a single VM (4 V100 GPUs) with `MirroredStrategy` works out of the box. With `transformers==4.9.2` (the version I happened to be using) it does not work.
```
from transformers import TFDistilBertForSequenceClassification, DistilBertTokenizerFast
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
tf_model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
inputs = tokenizer('This is a test', 'Esto es una prueba',
return_tensors='tf', max_length=200,
padding='max_length', truncation=True,
return_attention_mask=True,
return_token_type_ids=False)
print(tf_model.predict([inputs["input_ids"], inputs["attention_mask"]], verbose=1))
print(tf_model([inputs["input_ids"], inputs["attention_mask"]]))
```
```
WARNING:tensorflow:Collective ops is not configured at program startup. Some performance features may not be enabled.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
Downloading: 100%|██████████| 483/483 [00:00<00:00, 551kB/s]
Downloading: 100%|██████████| 363M/363M [00:04<00:00, 79.6MB/s]
Some layers from the model checkpoint at distilbert-base-uncased were not used when initializing TFDistilBertForSequenceClassification: ['vocab_projector', 'vocab_layer_norm', 'vocab_transform', 'activation_13']
- This IS expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFDistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['dropout_19', 'classifier', 'pre_classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Downloading: 100%|██████████| 232k/232k [00:00<00:00, 1.03MB/s]
Downloading: 100%|██████████| 466k/466k [00:00<00:00, 1.52MB/s]
Downloading: 100%|██████████| 28.0/28.0 [00:00<00:00, 28.1kB/s]
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:From /databricks/python/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating:
The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
1/1 [==============================] - 10s 10s/step
TFSequenceClassifierOutput(loss=None, logits=array([[ 0.03777119, -0.12381434]], dtype=float32), hidden_states=None, attentions=None)
TFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[ 0.0377712 , -0.12381432]], dtype=float32)>, hidden_states=None, attentions=None)
```<|||||>@jmwoloso That's really fascinating! I didn't think I touched any relevant code between those releases, but possibly one of the other engineers did. Can you try a couple of other models, say BERT or RoBERTa, to see if you see the same pattern with both?<|||||>I tried with Roberta and DistilBert with the new version and it doesn't give empty output any more. Thank you!<|||||>Hi @jmwoloso @ZJaume this is great, thank you! Can you confirm it still works with an input array larger than the batch size? (to ensure the work is getting distributed to multiple GPUs and then merged correctly)<|||||>@Rocketknight1 yeah i'll take a look at doing that today and posting confirmation in here and then we can close this out!<|||||>Working with 1024 samples and 8 batch size per gpu.<|||||>I'm still trying to test it out but databricks is having issues spinning up gpu clusters today :roll_eyes:
I think we're good to close this out @Rocketknight1 unless there are other scenarios you want us to check out.<|||||>So I noticed I had the same problem when I do this with basic Tensorflow. I found that the Tokenizer() function from tensorflow.keras.preprocessing.text seems to be an empty when you load the model. Which is understandable because you are not loading any sort of data to the Tokenizer.
How I was able to solve it was
```
import pickle
# saving
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# loading
with open('tokenizer.pickle', 'rb') as handle:
tokenizer = pickle.load(handle)
```<|||||>@jmwoloso @ZJaume Thank you for all your help! I'm gonna mark this as resolved now, since the problem doesn't seem to have recurred.
@JithLord I think that's a separate problem, unless it's also unique to `MirroredStrategy` contexts, so I'm gonna close this issue anyway. If you think you've found a bugs in the repo, though, please feel free to file a separate issue (or file it with Tensorflow upstream if you think the bug is there). |
transformers | 11,264 | closed | Multi-Workers distributed training | Hi, does transformers support multi-workers distributed training for bert fine-tuning? | 04-15-2021 10:23:54 | 04-15-2021 10:23:54 | Hi, here are a few resources to get you started:
- [Examples](https://github.com/huggingface/transformers/tree/master/examples)
- [Docs on distributed training](https://huggingface.co/transformers/examples.html#distributed-training-and-mixed-precision)<|||||>> Hi, here are a few resources to get you started:
>
> * [Examples](https://github.com/huggingface/transformers/tree/master/examples)
> * [Docs on distributed training](https://huggingface.co/transformers/examples.html#distributed-training-and-mixed-precision)
Got it, after I get familiar with pytorch, this problem solved😀 |
transformers | 11,263 | closed | TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect | Probably due to assertion, traceback is lost and I cannot debug the code.
C:\Users\m00596504\.virtualenvs\porn_tr\lib\site-packages\transformers\modeling_utils.py:1759: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert all(
Process finished with exit code -1073741819 (0xC0000005) | 04-15-2021 07:44:42 | 04-15-2021 07:44:42 | Hi, please provide all the information required in the template so that we may help you. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,262 | closed | Failed to import transformers | I got this error when importing transformers. Please help.
My system is Debian 10, Anaconda3.
```
$ python
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import pipeline
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/__init__.py", line 2487, in __getattr__
return super().__getattr__(name)
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/file_utils.py", line 1699, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/__init__.py", line 2481, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/notooth/anaconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 24, in <module>
from ..modelcard import ModelCard
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/modelcard.py", line 31, in <module>
from .models.auto.configuration_auto import ALL_PRETRAINED_CONFIG_ARCHIVE_MAP
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module>
from .tokenization_layoutlm import LayoutLMTokenizer
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module>
from ..bert.tokenization_bert import BertTokenizer
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module>
from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 68, in <module>
from tokenizers import AddedToken
File "/home/notooth/anaconda3/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/notooth/anaconda3/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-x86_64-linux-gnu.so)
``` | 04-15-2021 06:16:06 | 04-15-2021 06:16:06 | I think this is the same issue as https://github.com/huggingface/tokenizers/issues/585<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am struggling with the same issue. Have you solved the problem?<|||||>>
>
> I am struggling with the same issue. Have you solved the problem?
use pip instead of conda:
```
conda uninstall tokenizers, transformers
pip install transformers
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Solved this issue by downgrading to python 3.6 and conda 4.6.14<|||||>Solved this by downgrading from python 3.8 to 3.7<|||||>Solved this by doing `pip install pytorch-transformers` and then reload the notebook/application. I keep my python version 3.7.<|||||>> Solved this by doing `pip install pytorch-transformers` and then reload the notebook/application. I keep my python version 3.7.
didn't work for me :(, details: https://github.com/huggingface/transformers/issues/15062<|||||>Maybe your numpy version is too low, try again after updating<|||||>> Maybe your numpy version is too low, try again after updating
pip install numpy==1.24.2 works |
transformers | 11,261 | closed | --sharded_ddp "zero_dp_3 offload" fails with AssertionError | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-1043-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: 8 x A100 (AWS p4d.24xlarge)
- Using distributed or parallel set-up in script?: python -m torch.distributed.launch
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
Library:
- deepspeed: @stas00
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): roberta-base
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I want to perform distributed training using the example `run_mlm.py` script on the wikitext dataset. Specifically, I'm trying to use sharded_ddp zero_dp_3 (i.e., fairscale) and **with offloading enabled**. When I run _without_ offloading, it works. But if I use the "offload" option, an AssertionError is thrown, as shown in the stack trace below.
Steps to reproduce the behavior:
1. Install fairscale
pip install fairscale==0.3.4
2. Run the example run_mlm.py as follows:
export OMP_NUM_THREADS=11;
export TOKENIZERS_PARALLELISM=true;
python -m torch.distributed.launch --nproc_per_node=8 run_mlm.py --model_name_or_path roberta-base \
--use_fast_tokenizer \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --do_eval --num_train_epochs 5 \
--output_dir ./experiments/wikitext --sharded_ddp "zero_dp_3 offload" --fp16
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Traceback (most recent call last):
File "run_mlm.py", line 492, in <module> main() File "run_mlm.py", line 458, in main train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1120, in train tr_loss += self.training_step(model, inputs)
File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1522, in training_step loss = self.compute_loss(model, inputs)
File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1556, in compute_loss outputs = model(**inputs)
File "/home/me/ve/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs)
File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 902, in forward self._lazy_init()
File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 739, in _lazy_init self._init_param_attributes(p)
File "/home/me/ve/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs)
File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 796, in _init_param_attributes assert p._fp32_shard.device == torch.device("cpu")
AssertionError
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It should proceed to train.
| 04-15-2021 05:43:16 | 04-15-2021 05:43:16 | As replied on the forums, you should rather use `--deepspeed` for Zero-offload. We will investigate this bug, but there is another one for the gradient scaler that will block you either way.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,260 | closed | About pre-trained model : facebook/wav2vec2-large-xlsr-53 & facebook/wav2vec2-base | Hi,
I am trying to load the pre-trained 'no fine-tuning' model of wav2vec2 to exact some features . The models, 'wav2vec2-base' and 'wav2vec2-large-xlsr-53', are not fine-tuned. Why are these files not exactly the same?
https://huggingface.co/facebook/wav2vec2-base/tree/main
https://huggingface.co/facebook/wav2vec2-large-xlsr-53/tree/main
'wav2vec2-base' can be loaded smoothly, but it doesn't work for 'wav2vec2-large-xlsr-53'
```python
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").to('cuda')
```
@patrickvonplaten
Thank you very much!
| 04-15-2021 02:39:05 | 04-15-2021 02:39:05 | The xlsr model has no vocab, you need to build the processor yourself<|||||>https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
You could check out this blog post; it is explained very well there<|||||>@flozi00 Thank you very much! Processor is built.
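(For reference, a minimal sketch of assembling such a processor by hand, following the blog post above; the `vocab.json` path and the special-token choices are illustrative assumptions, not something taken from this thread:)
```python
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor

# Hypothetical vocab file built from your own transcripts, as in the blog post
tokenizer = Wav2Vec2CTCTokenizer(
    "./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```
If the goal is only to extract hidden states (no transcription), the feature extractor alone is enough to prepare inputs for `Wav2Vec2Model`.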
Actually, I want to visualize the output of a hidden layer in the wav2vec model before fine-tuning. It seems the output of wav2vec2-base is normal, but the output of wav2vec2-large-xlsr-53 is not. The results are attached (x axis: time, y axis: hidden units).
The output of hidden layer using wav2vec2-base

The output of hidden layer using wav2vec2-large-xlsr-53

Could you explain it? Thank you!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,259 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | 04-15-2021 01:49:00 | 04-15-2021 01:49:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,258 | closed | Support for set_epoch in IterableDataset | # What does this PR do?
I merged #11254 a bit too fast and forgot to actually call the `set_epoch` method in the main training loop at the beginning of each epoch.
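As a rough illustration of the intended behavior (placeholder names, not the actual `Trainer` code), the loop now hands the epoch number to the dataset at the start of every epoch:
```python
# Sketch only: epoch-wise seeding for an iterable training dataset
for epoch in range(num_train_epochs):
    if hasattr(train_dataset, "set_epoch"):
        train_dataset.set_epoch(epoch)  # lets the dataset reshuffle/reshard per epoch
    for step, batch in enumerate(train_dataloader):
        ...  # usual training step
```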
Also, it looks like the Datasets library will deal internally with the RNG logic by having a `set_epoch` method, this PR allows support for that. | 04-14-2021 21:18:33 | 04-14-2021 21:18:33 | |
transformers | 11,257 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | 04-14-2021 21:16:19 | 04-14-2021 21:16:19 |
```<|||||>> # 🖥 Benchmarking `transformers`
>
> ## Benchmark
>
> Which part of `transformers` did you benchmark?
>
> ## Set-up
>
> What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
>
> ## Results
>
> Put your results here!
|
transformers | 11,256 | closed | Getting KeyError: 'loss' when fine-tuning model on a pre-trained MLM | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Depends - CPU for debugging
- Using distributed or parallel set-up in script?: False
### Who can help
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Longformer (custom upload on Model Hub)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I created a LM model and uploaded it to Huggingface Model Hub via [this colab notebook](https://colab.research.google.com/drive/153754DbFXRhKdHvjdSUUp9VSB5JqtZwX?usp=sharing)
But when fine-tuning the model on simple reproducible data, I get:-
```py
%%bash
pip install -q transformers
pip install -q datasets
import numpy as np
train_text = np.array(['a foxy', 'b ball', 'c cats r bad', 'as das', 'sagha','asdfsd','asd','ad','aets','hsdg','reya','arey','areyareh','yui','aEWY','DSH','ASUYH','ASFH','ASDFHG','OOO'], dtype='<U5280')
train_label = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
val_text = np.array(['a foxy', 'r c cats'], dtype='<U5280')
val_label = [1, 2]
from datasets import Dataset
train_dataset = Dataset.from_dict({'src':train_text, 'tgt':train_label})
val_dataset = Dataset.from_dict({'src':val_text, 'tgt':val_label})
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MalawiUniST/ISO6392.nya.ny", use_fast=True, truncation=True, padding=True, max_length=10) #try fast=False
def tok(example):
encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=10)
return encodings
train_encoded_dataset = train_dataset.map(tok, batched=True)
val_encoded_dataset = val_dataset.map(tok, batched=True)
from transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted',zero_division=1) #none gives score for each class
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
training_args = TrainingArguments(
output_dir='/content/results/', # output directory
overwrite_output_dir = True,
num_train_epochs=16, # total number of training epochs
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_steps=600, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='/content/logs', # directory for storing logs
logging_steps=10,
evaluation_strategy='epoch',
learning_rate=1e-6,
#fp16 = True,
load_best_model_at_end = True,
metric_for_best_model = 'eval_loss',
greater_is_better = False,
seed = 101,
save_total_limit=5,
)
model = AutoModelForSequenceClassification.from_pretrained("MalawiUniST/ISO6392.nya.ny", num_labels=20)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_encoded_dataset, # training dataset
eval_dataset=val_encoded_dataset, # evaluation dataset
compute_metrics=compute_metrics,
tokenizer=tokenizer
)
train_results = trainer.train()
```
This error:-
```
Some weights of the model checkpoint at MalawiUniST/ISO6392.nya.ny were not used when initializing LongformerForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
- This IS expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForSequenceClassification were not initialized from the model checkpoint at MalawiUniST/ISO6392.nya.ny and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-113-a2ff149dfd3d> in <module>()
46 )
47
---> 48 train_results = trainer.train()
3 frames
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1118 tr_loss += self.training_step(model, inputs)
1119 else:
-> 1120 tr_loss += self.training_step(model, inputs)
1121 self._total_flos += float(self.floating_point_ops(inputs))
1122
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
1522 loss = self.compute_loss(model, inputs)
1523 else:
-> 1524 loss = self.compute_loss(model, inputs)
1525
1526 if self.args.n_gpu > 1:
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1564 else:
1565 # We don't use .loss here since the model may return tuples instead of ModelOutput.
-> 1566 loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
1567
1568 return (loss, outputs) if return_outputs else loss
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k)
1614 if isinstance(k, str):
1615 inner_dict = {k: v for (k, v) in self.items()}
-> 1616 return inner_dict[k]
1617 else:
1618 return self.to_tuple()[k]
KeyError: 'loss'
```
From sgugger's reply [here on forums](https://discuss.huggingface.co/t/key-error-loss-while-fine-tuning-gpt-2-with-the-trainer-utility/2861/4?u=neel-gupta) it seems that one strong cause is when the labels aren't present (even though they certainly are upon printing it out)
This seems like a bug and the code is reproducible on colab. Any ideas to possible workarounds? | 04-14-2021 20:05:24 | 04-14-2021 20:05:24 | Hi @neel04. Regarding your code sample with simple reproducible data. I believe there are two errors here:
- First, you've created your labels as a range from 1 to 20. You've set the model's `num_labels` to 20, but that, unfortunately, means it has a range of `[0, 19]`, therefore unable to satisfy the label 20. I would start the labels at 0.
- Secondly, and more importantly, in your encoding method you're only tokenizing the source input, and doing nothing to your target input:
```py
def tok(example):
encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=10)
return encodings
train_encoded_dataset = train_dataset.map(tok, batched=True)
val_encoded_dataset = val_dataset.map(tok, batched=True)
```
Therefore, here, your `train_encoded_dataset` and `val_encoded_dataset` contain dictionaries with the following keys: `input_ids` and `attention_mask`. There are no labels.
You could manually add your labels to your encoding by tweaking your `tok` method:
```py
def tok(example):
encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=10)
encodings["labels"] = example["tgt"]
return encodings
train_encoded_dataset = train_dataset.map(tok, batched=True)
val_encoded_dataset = val_dataset.map(tok, batched=True)
```
Otherwise, instead of naming your variable `tgt` inside the dataset, you could name it `labels` so that it's adequately named in your `Dataset` right away.
I've been running your colab, but didn't run into an issue yet, I'm at step 2000. If I run into an issue, I'll try to see what's going on.
Hope that helps.<|||||>Thanx a ton for replying @LysandreJik !!! :hugs: :+1:
About the labels, I think you may be right - I totally missed that point.
secondly, are you not seeing `tgt` in the `train_encoded_dataset` inside the repro? I do see it when printing it out :thinking:
> I've been running your colab
do you mean you are re-training the LM? I already have it on model hub BTW - can you fine-tune that pre-trained model successfully? <|||||>I'm sorry, you are correct, the `dataset` has the following attributes: `['attention_mask', 'input_ids', 'src', 'tgt']`. However, the model only cares about the `attention_mask` and `input_ids`. It also cares about the `labels`, which are absent in this case, hence why your code was failing.
If you want to have a look at what inputs the model needs, I encourage you to take a look at the docs; you're using `LongformerForSequenceClassification`, see the parameters it acepts [here](https://huggingface.co/transformers/model_doc/longformer.html#transformers.LongformerForSequenceClassification.forward).
I did manage to run your code example, but I thought the colab would fail in a similar fashion. It seems it trained correctly, so that is not an issue.
Is there anything else you need help with?<|||||>Thanx for the help! I am surprised why we need to add a `labels` attribute since we specify it when constructing the `Dataset` object - so it must be easy for HF to guess the numerical value and use it as labels accordingly.
it does work for repro, so the issue does not remain now - but I would greatly appreciate if you can help me out! I am trying to train it on my normal data.
I have used the `train_test_split`-er to split into NumPy arrays and am trying to pass them in, but it still gives me the index error.
Repro:
```py
Dataset({
features: ['attention_mask', 'input_ids', 'labels', 'src', 'tgt'],
num_rows: 20
})
```
Main dataset:
```py
Dataset({
features: ['attention_mask', 'input_ids', 'labels', 'src', 'tgt'],
num_rows: 4572
})
```
There doesn't seem to be any surface difference; I checked the length of the mask and ids, and they are as expected. I also checked 'labels': it is numeric and doesn't exceed 20.
Clearly, there is some problem with my input data. Could you give an idea about what the error might indicate is wrong with my data?
My input data is basically documents - long strings. they are cleaned thoroughly and are purely text. the only thing is that they are pretty long (sometimes longer than 2000 tokens).
Any opinions on what the issue could be?<|||||>> I am surprised why we need to add a labels attribute since we specify it when constructing the Dataset object - so it must be easy for HF to guess the numerical value and use it as labels accordingly.
Are you referencing the fact that we're passing the `Dataset` a `tgt` value during initialization? If so, then yes those are labels but since the model looks for the field `labels`, it will not look at `tgt`. If you define it as `labels` right off the bat, it should work!
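A minimal sketch of that, reusing the arrays from the example above:
```python
from datasets import Dataset

# Naming the target column "labels" up front means nothing has to be renamed later
train_dataset = Dataset.from_dict({"src": train_text, "labels": train_label})
val_dataset = Dataset.from_dict({"src": val_text, "labels": val_label})
```
The tokenization `map` keeps existing columns, so the `labels` column is carried through to the `Trainer` automatically.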
Regarding your second question, I fear I'm out of the loop. If you have an example which fail, that would be great, with a minimal reproducible code example, that would be even better!
Do you have an idea of which document especially might cause an issue? Thank you.<|||||>I was able to repro the issue with this dummy dataset:
```py
import numpy as np
train_text = np.array(['lorem ipsum'*499]*20)
train_label = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
val_text = np.array(['lorem ipsum'*499]*2)
val_label = [1, 2]
```
What's your take on it? It looks like a bug for long strings, even though they do seem padded and truncated via `datasets`.
---
**EDIT:-** This is the full error, in case you want something to refer to, instead of running your own code
```py
Downloading: 100%
199M/199M [00:09<00:00, 21.2MB/s]
Some weights of the model checkpoint at MalawiUniST/ISO6392.nya.ny were not used when initializing LongformerForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
- This IS expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForSequenceClassification were not initialized from the model checkpoint at MalawiUniST/ISO6392.nya.ny and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-9-d3bd01a1a0a7> in <module>()
46 )
47
---> 48 train_results = trainer.train()
11 frames
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1118 tr_loss += self.training_step(model, inputs)
1119 else:
-> 1120 tr_loss += self.training_step(model, inputs)
1121 self._total_flos += float(self.floating_point_ops(inputs))
1122
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
1522 loss = self.compute_loss(model, inputs)
1523 else:
-> 1524 loss = self.compute_loss(model, inputs)
1525
1526 if self.args.n_gpu > 1:
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1554 else:
1555 labels = None
-> 1556 outputs = model(**inputs)
1557 # Save past state if it exists
1558 # TODO: this needs to be fixed and made cleaner later.
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, head_mask, token_type_ids, position_ids, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1855 output_attentions=output_attentions,
1856 output_hidden_states=output_hidden_states,
-> 1857 return_dict=return_dict,
1858 )
1859 sequence_output = outputs[0]
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, head_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict)
1662
1663 embedding_output = self.embeddings(
-> 1664 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
1665 )
1666
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
491 if inputs_embeds is None:
492 inputs_embeds = self.word_embeddings(input_ids)
--> 493 position_embeddings = self.position_embeddings(position_ids)
494 token_type_embeddings = self.token_type_embeddings(token_type_ids)
495
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
156 return F.embedding(
157 input, self.weight, self.padding_idx, self.max_norm,
--> 158 self.norm_type, self.scale_grad_by_freq, self.sparse)
159
160 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
IndexError: index out of range in self
```<|||||>Thank you for the clear reproducible example and the error stack trace.
That's curious, it does not break on my machine; running the following code, which is a concatenation of the sample you've just given me regarding the dataset and the training code of your initial issue description:
```py
import numpy as np
train_text = np.array(['lorem ipsum'*499]*20)
train_label = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
val_text = np.array(['lorem ipsum'*499]*2)
val_label = [1, 2]
from datasets import Dataset
train_dataset = Dataset.from_dict({'src': train_text, 'tgt': train_label})
val_dataset = Dataset.from_dict({'src': val_text, 'tgt': val_label})
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MalawiUniST/ISO6392.nya.ny", use_fast=True, truncation=True, padding=True,
max_length=10) # try fast=False
def tok(example):
encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=10)
encodings["labels"] = example["tgt"]
return encodings
train_encoded_dataset = train_dataset.map(tok, batched=True)
val_encoded_dataset = val_dataset.map(tok, batched=True)
print(train_encoded_dataset)
from transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted',zero_division=1) #none gives score for each class
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
training_args = TrainingArguments(
output_dir='content/results/', # output directory
overwrite_output_dir = True,
num_train_epochs=16, # total number of training epochs
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_steps=600, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='content/logs', # directory for storing logs
logging_steps=10,
evaluation_strategy='epoch',
learning_rate=1e-6,
#fp16 = True,
load_best_model_at_end = True,
metric_for_best_model = 'eval_loss',
greater_is_better = False,
seed = 101,
save_total_limit=5,
)
model = AutoModelForSequenceClassification.from_pretrained("MalawiUniST/ISO6392.nya.ny", num_labels=20)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_encoded_dataset, # training dataset
eval_dataset=val_encoded_dataset, # evaluation dataset
compute_metrics=compute_metrics,
tokenizer=tokenizer
)
train_results = trainer.train()
```
However, in your error, I understand that there's an `IndexError` happening with the position embeddings, the interesting line being this one:
```
--> 493 position_embeddings = self.position_embeddings(position_ids)
```
This is very frequently an issue with padding/truncation, as you have correctly identified. I can indeed reproduce if I remove the notion of padding/truncation from your tokenizer call in your `tok` method:
```diff
def tok(example):
- encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=10)
+ encodings = tokenizer(example['src'])
encodings["labels"] = example["tgt"]
return encodings
```<|||||>I think it is my fault :sweat_smile: I had changed `max_length=10` to ` max_length=2000` which is the appropriate length it was intended for and pre-trained on. Maybe that's why it ran on your machine, but failed on Colab?
About the padding/truncation indeed, I am using the way it's marked in red - and can confirm that the length of each `attention mask` is 2000, along with the `input_ids`. since that's the case for samples, the only conclusion is that I am indeed padding and truncating the sequences.
So the last point for the error is the `max_length` - which can't be 2000. in the LM (accessible via the colab link in OP, fully reproducible end-to-end example) the tokenizer construction is like this:-
```py
model_checkpoint = "allenai/longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True, max_length=2000)
```
which does specify 2000 to be the maximum length. I can probably try re-training the model, but it doesn't make sense If I can't find why (and how) the error originates and what changes to make.
Any suspicions?
<|||||>With a little trial and error, I got `max_length=500` to be the maximum I can use - losing out a lot of information :face_with_head_bandage: This seems like a very weird bug. Everything seems normal, but its always errors after crossing 500<|||||>Is it failing after crossing 500 or 512? It's possible that there's a rogue max length of 512 (which obviously shouldn't be here!)
Surprisingly I'm having issues reproducing the error with your maximum length of 2000, which doesn't crash on my side either (as it shouldn't!)
Do you have an example I can run locally which fails with length > 500?<|||||>yep, it def gets an error in Colab with CPU. for making sure repro changes, I will put the whole thing in here (with the dummy data):-
```py
import numpy as np
train_text = np.array(['lorem ipsum'*499]*20)
train_label = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
val_text = np.array(['lorem ipsum'*499]*2)
val_label = [1, 2]
MAX_LENGTH = 2000
!pip install -q transformers
!pip install -q datasets
import transformers
transformers.__version__
from datasets import Dataset
train_dataset = Dataset.from_dict({'src':train_text, 'tgt':train_label})
val_dataset = Dataset.from_dict({'src':val_text, 'tgt':val_label})
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MalawiUniST/ISO6392.nya.ny", use_fast=True, truncation=True, padding=True, max_length=MAX_LENGTH) #try fast=False
def tok(example):
encodings = tokenizer(example['src'], truncation=True, padding=True, max_length=MAX_LENGTH)
encodings["labels"] = example["tgt"] #Try removing this line
return encodings
train_encoded_dataset = train_dataset.map(tok, batched=True)
val_encoded_dataset = val_dataset.map(tok, batched=True)

len(train_encoded_dataset['attention_mask'][0])
from transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted',zero_division=1) #none gives score for each class
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
training_args = TrainingArguments(
output_dir='/content/results/', # output directory
overwrite_output_dir = True,
num_train_epochs=16, # total number of training epochs
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_steps=600, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='/content/logs', # directory for storing logs
logging_steps=10,
evaluation_strategy='epoch',
learning_rate=1e-6,
#fp16 = True,
load_best_model_at_end = True,
metric_for_best_model = 'eval_loss',
greater_is_better = False,
seed = 101,
save_total_limit=5,
)
model = AutoModelForSequenceClassification.from_pretrained("MalawiUniST/ISO6392.nya.ny", num_labels=20)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_encoded_dataset, # training dataset
eval_dataset=val_encoded_dataset, # evaluation dataset
compute_metrics=compute_metrics,
tokenizer=tokenizer
)
train_results = trainer.train()
```
should reproduce the error on pasting straightaway! :+1:
---
**EDIT:-** yep, you are right - using `max_length` as `513` gets an error, and 512 doesn't. I am using Longformer here - so the whole situation becomes tricky :thinking: By default, `Longformer-base-4096` should get `4096` as the max_length.
When pre-training the LM, this is the snippet for initializing the model from scratch:
```py
from transformers import LongformerForMaskedLM
from transformers import LongformerConfig
config = LongformerConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=2,
num_hidden_layers=1,
type_vocab_size=1,
)
model = LongformerForMaskedLM(config=config)
```
tokenizer too is gotten properly:
```py
model_checkpoint = "allenai/longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True, max_length=2000)
```
Very strange.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,255 | closed | Big Bird generate() "local variable 'next_tokens' referenced before assignment" | I am facing this problem when doing text summarization. I am using google/bigbird-roberta-base and I get the following error when calling model.generate(input, max_length = 4096, num_beams=4, early_stopping=True, length_penalty = 0.8):
```
Input length of input_ids is 4096, but ``max_length`` is set to 4096.This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``.
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-13-90a633800ba7> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', ' \ni = 0\nsize = 1\nout = []\nend = False\nprint_iters = 100\nsave_iters = 5\n \nwhile True:\n if (i+size) >= n:\n last = n\n end = True\n else:\n last = i + size \n \n result = make_gen( model_sum, tokens[i:last, :].detach().clone() )\n \n for j in range(result.shape[0]):\n out.append(result[j])\n \n if last % (print_iters*size) == 0:\n print(last)\n gc.collect()\n torch.cuda.empty_cache()\n torch.cuda.synchronize()\n if last % (print_iters*size*save_iters) == 0:\n with open(path_output + name + ".pkl", \'wb\') as f:\n pickle.dump(out, f)\n print("Saved to disk")\n \n if end:\n break\n i = last')
6 frames
<decorator-gen-53> in time(self, line, cell, local_ns)
<timed exec> in <module>()
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in beam_search(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs)
1808
1809 sequence_outputs = beam_scorer.finalize(
-> 1810 input_ids, beam_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id
1811 )
1812
UnboundLocalError: local variable 'next_tokens' referenced before assignment
```
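A hedged reading of the warning at the top of the log: the prompt already uses up the whole `max_length` budget, so the beam-search loop never runs and `next_tokens` is never assigned before `finalize` is called. A minimal sketch of a call that at least leaves room for new tokens (values are illustrative):
```python
# Illustrative only: keep max_length strictly larger than the prompt length
summary_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 256,
    num_beams=4,
    early_stopping=True,
    length_penalty=0.8,
)
```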
| 04-14-2021 18:10:21 | 04-14-2021 18:10:21 | cc @vasudevgupta7 <|||||>@OscarGarciaF, i will need more details before i could see your issue. Are you using google/bigbird-roberta-base with EncoderDecoderModel for summarization? It would be great if you can share your code. <|||||>@vasudevgupta7 I was using AutoModelForSeq2SeqLM (this is what you use for summarization right?)
I have now changed to EncoderDecoderModel but now I face a new error
```
1 input = tokens[0:1, :].to(device)
----> 2 generated = model_sum.generate(input, decoder_start_token_id = model_sum.config.decoder.pad_token_id, max_length = 512, num_beams = 4, early_stopping = True)
10 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_big_bird.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
293
294 position_embeddings = self.position_embeddings(position_ids)
--> 295 embeddings += position_embeddings
296
297 embeddings = self.dropout(embeddings)
RuntimeError: output with shape [4, 1, 768] doesn't match the broadcast shape [4, 0, 768]
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Any updates on this? I get the exact same error when running generate on EncoderDecoderModel.
`RuntimeError: output with shape [1, 1, 768] doesn't match the broadcast shape [1, 0, 768]`
When I remove padding from the input_ids the error goes away but I think this is a bug of some sort.<|||||>I also got exact same error (`output with shape...`) when i generate on **custom** BigBird model. I fixed it by reducing `model_max_length` value from 4096 to 4094 and afterwards i can use pipeline for inference without any problem.
```
>>> tokenizer.max_len_single_sentence
4094
>>> tokenizer.model_max_length
4096
>>> tokenizer.model_max_length = 4094
``` |
transformers | 11,254 | closed | Trainer iterable dataset | # What does this PR do?
This PR adds full support for `IterableDataset` training set in the main Trainer (just the training set, evaluation/prediction will require way more work). Up until now, the Trainer kind of support training dataset that are instances of `IterableDataset`, but in a distributed setting, the training will be on the same data on all processes, which is... not ideal. This PR fixes that and adds some tests. | 04-14-2021 18:09:20 | 04-14-2021 18:09:20 | |
transformers | 11,253 | closed | New TF examples | Opening a PR to get some feedback on the new TF example style before I write the rest.
Don't merge it yet, I haven't even finalized the filenames! | 04-14-2021 18:05:42 | 04-14-2021 18:05:42 | I opened a new branch and PR at #11360 to avoid dealing with rebasing after the folder structure was changed around |
transformers | 11,252 | closed | Fix for the issue of device-id getting hardcoded for token_type_ids during Tracing [WIP] | # What does this PR do?
Using the TorchScript trace API to convert HF models creates an issue where, during tracing, the device name/id gets hardcoded for some of the tensors. As a result, position_embedding or token_type_embedding fail when the model is loaded for inference on another device, because they were tied to the device they were traced on (e.g. CPU, GPU id). This issue arises when one needs to switch between devices, and especially for multi-GPU inference where the model is TorchScripted/traced.
For the BERT model, the device name gets hardcoded for token_type_ids; this has been previously addressed for position_ids in [the merged PR](https://github.com/huggingface/transformers/pull/5773). This PR fixes the issue by registering a buffer for token_type_ids. Similar changes are required for other models as well; I will submit PRs for those accordingly.
The following code snippet can be used to test the issue and suggested fix.
```
import transformers
from pathlib import Path
import os
import json
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer, AutoModelForQuestionAnswering,
AutoModelForTokenClassification, AutoConfig)
device1 = torch.device('cuda') # this can be changed to cuda:0 in multi-gpu use-case
device2 = torch.device('cpu')# this can be changed to cuda:1 in multi-gpu use-case
model_name = 'bert-base-uncased'
config = AutoConfig.from_pretrained(model_name,num_labels=2,torchscript=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, config=config)
tokenizer = AutoTokenizer.from_pretrained(model_name,do_lower_case=True)
dummy_input = "This is a dummy input for torch jit trace"
max_length = 20
inputs = tokenizer.encode_plus(dummy_input,max_length = int(max_length),pad_to_max_length = True, add_special_tokens = True, return_tensors = 'pt')
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
print('device1 {}, device2 {}'.format(device1,device2))
outputs = model(**inputs)
model.to(device1).eval()
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
traced_model = torch.jit.trace(model,(input_ids.to(device1),attention_mask.to(device1)))
torch.jit.save(traced_model, "bert.pt")
print("*************************** traced model graph on device 1 ***************************")
print(traced_model.graph)
print("\n")
loaded = torch.jit.load("bert.pt", map_location=device2)
print("\n")
print("*************************** model graph on loaded on device 2 ***************************")
print(loaded.graph)
outputs = loaded(input_ids.to(device2),attention_mask.to(device2))
print(outputs)
```
Error log :
[bert_gpu_to_cpu.logs.txt](https://github.com/huggingface/transformers/files/6312550/bert_gpu_to_cpu.logs.txt)
Fix log:
[bert_gpu_to_cpu_fixed.logs.txt](https://github.com/huggingface/transformers/files/6312560/bert_gpu_to_cpu_fixed.logs.txt)
Fixes # (issue)
Registering a buffer for token_type_ids in the constructor and then resizing it in the forward method based on input-shape.
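A condensed, self-contained sketch of that idea (a toy module for illustration, not the actual `BertEmbeddings` diff):
```python
import torch
from torch import nn

class EmbeddingsWithBufferedTokenTypes(nn.Module):
    """Toy embeddings module showing the registered-buffer trick."""

    def __init__(self, vocab_size=30522, hidden_size=768, max_position_embeddings=512, type_vocab_size=2):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
        self.token_type_embeddings = nn.Embedding(type_vocab_size, hidden_size)
        # Buffers travel with the module on .to(device), so a traced graph no longer
        # bakes in whichever device happened to be active at trace time.
        self.register_buffer("position_ids", torch.arange(max_position_embeddings).expand((1, -1)))
        self.register_buffer("token_type_ids", torch.zeros_like(self.position_ids), persistent=False)

    def forward(self, input_ids, token_type_ids=None):
        seq_length = input_ids.size(1)
        if token_type_ids is None:
            # Slice the buffer to the current sequence length instead of creating a
            # fresh tensor with a hardcoded device argument.
            token_type_ids = self.token_type_ids[:, :seq_length].expand(input_ids.size(0), seq_length)
        return self.word_embeddings(input_ids) + self.token_type_embeddings(token_type_ids)
```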
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
issues #5664 and #976
## Who can review?
@LysandreJik
| 04-14-2021 17:45:20 | 04-14-2021 17:45:20 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik as discussed off-line, I would appreciate if you could help with re-opening the PR, thanks.<|||||>> Looks very cool! We can deploy the same fix on the other models.
Thanks @sgugger, once this can be merged, I will start the other models and submit separate PRs for them.<|||||>Great, thanks a lot @HamidShojanazeri. Before we merge; could you apply the same changes to all models affected by the `fix-copies` script in your PR? If we merge this as is, we'll have the implementation be partially supported for these models, which is unwanted.
Thank you!<|||||>@LysandreJik sure, I will update the affected models as well. <|||||>> Great, thanks a lot @HamidShojanazeri. Before we merge; could you apply the same changes to all models affected by the `fix-copies` script in your PR? If we merge this as is, we'll have the implementation be partially supported for these models, which is unwanted.
>
> Thank you!
@LysandreJik Updated!<|||||>Ran the GPU tests, works like a charm. As this gets implemented in other models, let's think of a test we can add similar to your snippet @HamidShojanazeri to ensure there is no regression.
Merging!<|||||>Thanks @LysandreJik sure, will sync off-line.<|||||>This introduced an issue in our slow tests that I'm patching in https://github.com/huggingface/transformers/pull/12336<|||||>> This introduced an issue in our slow tests that I'm patching in #12336
Thanks a lot, @LysandreJik! |
transformers | 11,251 | closed | Add batching in TokenClassificationPipeline | # What does this PR do?
Currently, the NER pipeline in transformers iterates through the list of input sentences and processes them sequentially.
This PR adds batching support in the pipeline to decrease latency and use GPU more efficiently.
Relevant Issue :- #11244
## Benchmark Report
### Without Batching
Device: CPU
No. examples: 1000
Time taken: 283.27826976776123
Device: GPU
No. examples: 1000
Time taken: 17.89318561553955
Please check the benchmark gist [here](https://gist.github.com/parakalan/88b613ed4ca0001afb60448996f6b62a)
### With Batching
Device: CPU
No. examples: 1000
Batch Size: 512
Time taken: 121.81582999229431
Device: GPU
No. examples: 1000
Batch Size: 512
Time taken: 2.780881404876709
Please check the benchmark gist [here](https://gist.github.com/parakalan/f1fa25f25b8a70125145afbcbbeac85f)
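For context, a rough sketch of the kind of benchmark harness behind these numbers (the `batch_size` argument follows this PR's proposal and is an assumption, not the released API; timings will vary by hardware):
```python
import time

from transformers import pipeline

nlp = pipeline("ner", device=0)  # device=0 -> first GPU, device=-1 -> CPU
sentences = ["Hugging Face is based in New York City."] * 1000

start = time.time()
sequential = [nlp(s) for s in sentences]          # one forward pass per sentence
print("sequential:", time.time() - start)

start = time.time()
batched = nlp(sentences, batch_size=512)          # proposed batched path
print("batched:", time.time() - start)
```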
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. - https://github.com/huggingface/transformers/issues/11244
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 04-14-2021 16:53:46 | 04-14-2021 16:53:46 | FYI there is also work done on this pipeline in https://github.com/huggingface/transformers/pull/10568 if you want to give it a look! It doesn't concern batching, however.<|||||>Thanks, let me check that out. <|||||>Please review this @LysandreJik , @Narsil , @joshdevins<|||||>Closing this PR based on @Narsil's review. Thanks |
transformers | 11,250 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | 04-14-2021 16:10:20 | 04-14-2021 16:10:20 | |
transformers | 11,249 | closed | TypeError: can't pickle _thread.RLock objects hyperparameter_search raytune | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: v4.5.1
- Platform: Linux
- Python version: 3.7.8
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run hyperparameter tuning with raytune
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
2021-04-14 15:44:01,389 INFO services.py:1264 -- View the Ray dashboard at http://127.0.0.1:8265
Traceback (most recent call last):
File "pipeline_training.py", line 311, in <module>
keep_checkpoints_num=0
File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1459, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/integrations.py", line 235, in run_hp_search_ray
**kwargs,
File "/opt/conda/lib/python3.7/site-packages/ray/tune/tune.py", line 297, in run
_ray_auto_init()
File "/opt/conda/lib/python3.7/site-packages/ray/tune/tune.py", line 664, in _ray_auto_init
ray.init()
File "/opt/conda/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/ray/worker.py", line 785, in init
hook()
File "/opt/conda/lib/python3.7/site-packages/ray/tune/registry.py", line 171, in flush
self.references[k] = ray.put(v)
File "/opt/conda/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/ray/worker.py", line 1481, in put
object_ref = worker.put_object(value)
File "/opt/conda/lib/python3.7/site-packages/ray/worker.py", line 266, in put_object
serialized_value = self.get_serialization_context().serialize(value)
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 324, in serialize
return self._serialize_to_msgpack(value)
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 304, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 264, in _serialize_to_pickle5
raise e
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 261, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/opt/conda/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/opt/conda/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
TypeError: can't pickle _thread.RLock objects
```
The code chunk to start the `hyperparameter_search`:
```python
def my_hp_space(trial):
    from ray import tune

    return {
        "learning_rate": tune.uniform(1e-5, 5e-5),
        "num_train_epochs": tune.choice(range(1, 6)),
        "per_device_train_batch_size": tune.choice([2, 4]),
        "weight_decay": tune.uniform(0.0, 0.3),
        "adam_epsilon": tune.loguniform(1e-10, 1e-6),
        "per_device_eval_batch_size": 32,
    }


best_run = trainer.hyperparameter_search(
    backend="ray",
    n_trials=15,
    hp_space=my_hp_space,
    stop=None,
    checkpoint_score_attr="training_iteration",
    keep_checkpoints_num=0,
    compute_objective=lambda x: my_objective(x, metric='eval_' + used_metric),
)
```
## Expected behavior
Expect that it will not throw an error. Note that this script does work on `4.2.0`.
<!-- A clear and concise description of what you would expect to happen. -->
| 04-14-2021 15:52:08 | 04-14-2021 15:52:08 | I also have this issue (bump)<|||||>Pinging @richardliaw, @amogkam <|||||>@maxzzze looks like a serialization error with the Trainer. We will take a look at this, but in the meantime can you downgrade your transformers version to 4.4. Also see https://github.com/ray-project/ray/issues/15439.<|||||>So it looks like this seems to work as soon as we disable the memory tracker:
```
trainer._memory_tracker = None
```
Will it be possible to expose an API to temporarily disable this?
The other issue is https://github.com/huggingface/transformers/issues/11565, but we can resolve this there.
We should have tests that catch these regressions right?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am having the same problem.
Disabling the memory tracker worked for me.
BUT, then I ran into #11565 as well<|||||>Yes, if you disable the memory tracker (pass in `skip_memory_metrics=True` into your `TrainingArguments`) then you will no longer get the pickling error.
In the next transformers release, the Ray Tune integration will automatically disable memory tracking if it's currently being enabled.<|||||>Hi, with transformers 4.26.1 on Sage maker I am still having this error: TypeError: cannot pickle '_thread.lock' object.
```python
def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 10),
        "seed": trial.suggest_int("seed", 1, 40),
        "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64]),
        "weight_decay": trial.suggest_float("weight_decay", 1e-3, 1e-1, log=True),
    }

best_run = trainer.hyperparameter_search(n_trials=20, direction="minimize", hp_space=hp_space)
```
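For reference, a minimal sketch of the workaround discussed earlier in this thread, i.e. disabling the memory tracker through `TrainingArguments` (the `model_init` and dataset names are placeholders, and whether this also resolves the 4.26.1 report above is an assumption):
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="hp_search_out",
    skip_memory_metrics=True,   # avoids pickling the memory-tracker state during Ray serialization
)
trainer = Trainer(
    model_init=model_init,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
best_run = trainer.hyperparameter_search(backend="ray", n_trials=10)
```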
transformers | 11,248 | closed | Fix #10128 | # What does this PR do?
Small bug fix in numpy_pad_and_concatenate, as reported in #10128
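For illustration, a minimal sketch of what such a pad-and-concatenate helper does (this is not the exact implementation in `trainer_pt_utils.py`, just the idea: pad both arrays on their second dimension to a common length, then stack them along the first):
```python
import numpy as np


def numpy_pad_and_concatenate_sketch(array1, array2, padding_index=-100):
    max_len = max(array1.shape[1], array2.shape[1])
    result = np.full(
        (array1.shape[0] + array2.shape[0], max_len), padding_index, dtype=array1.dtype
    )
    result[: array1.shape[0], : array1.shape[1]] = array1
    result[array1.shape[0] :, : array2.shape[1]] = array2
    return result


print(numpy_pad_and_concatenate_sketch(np.ones((2, 3)), np.zeros((1, 5))).shape)  # (3, 5)
```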
Fixes #10128 | 04-14-2021 15:12:19 | 04-14-2021 15:12:19 | |
transformers | 11,247 | closed | Adding pipeline task aliases. | # What does this PR do?
Two task names were not aligned with their pipeline names:
`sentiment-analysis` -> `TextClassificationPipeline`
`ner` -> `TokenClassificationPipeline`
To keep this change backward compatible while making the code more consistent, this PR
introduces a `TASK_ALIASES` dictionary, which remaps a task name to its *canonical* task name.
Previously working code keeps working; we simply also make the `text-classification` and `token-classification` tasks
available to the `pipeline(...)` function, as sketched below.
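A short sketch of the intended behavior, assuming this PR is merged (both the alias and the canonical task name should resolve to the same pipeline class):
```python
from transformers import pipeline

classifier_alias = pipeline("sentiment-analysis")    # existing alias, keeps working
classifier_canon = pipeline("text-classification")   # new canonical task name

print(type(classifier_alias).__name__)  # TextClassificationPipeline
print(type(classifier_canon).__name__)  # TextClassificationPipeline
```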
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@philschmid
| 04-14-2021 14:34:17 | 04-14-2021 14:34:17 | |
transformers | 11,246 | closed | Enable Wav2Vec2 Pretraining | # 🚀 Feature request
This is a feature request to add Wav2Vec2 Pretraining functionality to the transformers library. This is a "Good Second Issue" feature request, which means that interested contributors should have some experience with the transformers library and ideally also with training/fine-tuning Wav2Vec2.
## Motivation
The popular [Wav2Vec2](https://huggingface.co/models?filter=wav2vec2) model cannot be pretrained using the Hugging Face library yet. During the fine-tuning week, multiple people have reported improved results by pretraining wav2vec2 directly on the target language before fine-tuning it.
## Your contribution
I am happy to give an interesting contributor guidance throughout the PR and answer all relevant questions.
## How to start
1) To begin with, one should run a pretraining forward pass using the official Wav2Vec2 repository. The forward pass can be found here: https://github.com/pytorch/fairseq/blob/436166a00c2ecd1215df258f022608947cca2aa8/fairseq/models/wav2vec/wav2vec2.py#L474.
It is important that the argument `features_only` is set to `False` in the [`forward`](https://github.com/pytorch/fairseq/blob/436166a00c2ecd1215df258f022608947cca2aa8/fairseq/models/wav2vec/wav2vec2.py#L474) function.
Successfully running a forward pass with fairseq is important to ensure the correctness of the Hugging Face implementation by comparing the two outputs.
This is probably the most difficult part of the PR.
**Note:** this also means that the loaded fairseq wav2vec2 checkpoint should include weights for the `GumbelVectorQuantizer` quantizer, see: https://github.com/pytorch/fairseq/blob/436166a00c2ecd1215df258f022608947cca2aa8/fairseq/models/wav2vec/wav2vec2.py#L277
The easiest checkpoint to try out pretraining with is probably the wav2vec2 2.0 Base - No fine-tuning [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt)
[Here](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#train-a-wav2vec-20-base-model) is the official Fairseq recipe on how to do so.
2) Having run a forward pass successfully, the methods can now be implemented into transformers [here](https://github.com/huggingface/transformers/blob/653076ca307520ee85fd5f5de6918019f8521bb5/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L966) as a new class that could roughly look as follows:
```python
class Wav2Vec2ForPretraining:
    def __init__(self, config):
        self.wav2vec2 = Wav2Vec2Model(config)
        self.quantizer = ...
        self.project_q = ...

    def forward(...):
        outputs = self.wav2vec2(
            input_values,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        # ... all the pretraining logic here
```
Having implemented the class it should be made sure that a forward pass of `Wav2Vec2ForPretraining` works.
3) Convert the pretrained checkpoints correctly
After `Wav2Vec2ForPretraining` was successfully added, a non-fine-tuned checkpoint, e.g., the wav2vec2 2.0 Base - No fine-tuning [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) should be converted to the Hugging Face format. One will probably have to slightly adapt the conversion script as well: https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py
Once converted, the checkpoint can be uploaded to the hub and checked to make sure it yields the same outputs as the official wav2vec2 pretraining functionality.
4) Add tests
Next, a couple of tests should be implemented that make sure the behavior stays correct in the future. This includes both fast and "slow" integration tests (fast tests are "normal" tests); "slow" integration tests load a "real" checkpoint and test its output against a hardcoded expected output tensor slice, as it's done, *e.g.*, [here](https://github.com/huggingface/transformers/blob/653076ca307520ee85fd5f5de6918019f8521bb5/tests/test_modeling_big_bird.py#L823).
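A rough skeleton of what such a slow integration test could look like (the class, checkpoint id, output indexing and expected values below are all placeholders following the sketch above, not an existing API):
```python
import torch


def test_pretraining_integration():
    # hypothetical class and checkpoint id, following the sketch in step 2)
    model = Wav2Vec2ForPretraining.from_pretrained("some-org/wav2vec2-base-no-finetune")
    model.eval()

    input_values = torch.full((1, 16000), 0.1)  # 1 second of dummy audio at 16 kHz
    with torch.no_grad():
        outputs = model(input_values)

    # placeholder slice; the real values would be copied from the fairseq reference run
    expected_slice = torch.tensor([0.0123, -0.0456, 0.0789])
    assert torch.allclose(outputs[0][0, 0, :3], expected_slice, atol=1e-3)
```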
## Ask for help
For questions regarding how to finish 1), they can be asked directly on this issue; I will try to answer them as best as I can. Also gently pinging @cceyda here, as I think she has already successfully pretrained a wav2vec2 model using fairseq (hope it's fine to ping you here 😅) - in case you have some good tips on how to pretrain wav2vec2 with fairseq, it would be amazing if you could share them here.
For questions when doing 2), 3) & 4), please ask them directly on the PR you have opened to implement the model.
| 04-14-2021 13:38:37 | 04-14-2021 13:38:37 | |
transformers | 11,245 | closed | RuntimeError: leaf variable has been moved into the graph interior | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: GPU
- Python version: python 3.6.9
- PyTorch version (GPU): torch==1.4.0
### Who can help
@TobiasLee @julien-c
## Information
- Model that I am using: Bert
- Code Modification:
Actually I didn't change any code, I just used the newest code in master branch.
I am trying to use run_bertology.py to carry out head pruning by using the following script:
- The script that I used:
```
export GLUE_DIR=glue_data
export TASK_NAME=RTE
CUDA_VISIBLE_DEVICES=3 python run_bertology.py \
--model_name_or_path bert-base-uncased \
--try_masking \
--task_name $TASK_NAME \
--data_dir $GLUE_DIR/$TASK_NAME/ \
--max_seq_length 128 \
--output_dir output_headpruning_bert/${TASK_NAME} \
--overwrite_output_dir
```
But I got the error:
```
Traceback (most recent call last):
File "run_bertology.py", line 449, in <module>
main()
File "run_bertology.py", line 445, in main
prune_heads(args, model, eval_dataloader, head_mask)
File "run_bertology.py", line 213, in prune_heads
args, model, eval_dataloader, compute_entropy=False, compute_importance=False, head_mask=head_mask
File "run_bertology.py", line 103, in compute_heads_importance
loss.backward() # Backpropagate to populate the gradients in the head mask
File "/home/bil19003/anaconda3/envs/pytorch_huggingface/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/bil19003/anaconda3/envs/pytorch_huggingface/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: leaf variable has been moved into the graph interior
```
I got similar results to #3895, seemingly the problem still exists. Could you please help?
| 04-14-2021 12:40:50 | 04-14-2021 12:40:50 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@bing0037 I'm running into the same problem. Did you find a fix for it? |
transformers | 11,244 | closed | Batching in NER pipeline | # 🚀 Feature request
Currently, the NER pipeline in transformers iterates through the list of input sentences and processes them sequentially.
It would be beneficial to add batching support in the pipeline to decrease latency and use the GPU more efficiently.
## Motivation
Batching will help use the GPU more efficiently and reduce latency by a lot. The NER pipeline is amazing with its post-processing; it could be a production-ready construct if batching is added.
This issue has been discussed here - https://github.com/huggingface/transformers/issues/8942 at the end, but looks like no one is working on it actively.
## Your contribution
Working on this issue in this PR - https://github.com/huggingface/transformers/pull/11251
Please let me know if this is already on the radar, and I will close the issue.
| 04-14-2021 11:24:11 | 04-14-2021 11:24:11 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,243 | closed | Cant load tokenizer locally after downloading it | Hi!
I'm following the tutorial for this pretrained model https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment. It works the first time I run it (and download the tokenizer) but after that it will complain that I don't have any tokenizer on the path specified.
The code is the following
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
    html = f.read().decode('utf-8').split("\n")
    csvreader = csv.reader(html, delimiter='\t')
    labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = labels[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
And fails on `tokenizer = AutoTokenizer.from_pretrained(MODEL)` with output:
```bash
2021-04-13 21:43:03.723523: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
File "train.py", line 27, in <module>
tokenizer = AutoTokenizer.from_pretrained(MODEL)
File "/home/jiwidi/anaconda3/envs/cuda/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 423, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/jiwidi/anaconda3/envs/cuda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1698, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for '/mnt/kingston/github/MIARFID/ALC/cardiffnlp/twitter-roberta-base-sentiment'. Make sure that:
- '/mnt/kingston/github/MIARFID/ALC/cardiffnlp/twitter-roberta-base-sentiment' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/mnt/kingston/github/MIARFID/ALC/cardiffnlp/twitter-roberta-base-sentiment' is the correct path to a directory containing relevant tokenizer files
```
After running the script `train.py`, the model is downloaded to the path the script is on. The path structure is like this:
```bash
├── cardiffnlp
│ └── twitter-roberta-base-sentiment
│ ├── config.json
│ └── pytorch_model.bin
└── train.py
```
I have transformers version 4.5.1 | 04-14-2021 11:02:04 | 04-14-2021 11:02:04 | Hi, that's because the tokenizer first looks to see if the path specified is a local path. Since you're saving your model on a path with the same identifier as the hub checkpoint, when you're re-running the script both the model and tokenizer will look into that folder.
The tokenizer doesn't find anything in there, as you've only saved the model, not the tokenizer. You should either save the tokenizer as well, or change the path so that it isn't mistaken for a local path when it should be the hub.<|||||>> Hi, that's because the tokenizer first looks to see if the path specified is a local path. Since you're saving your model on a path with the same identifier as the hub checkpoint, when you're re-running the script both the model and tokenizer will look into that folder.
>
> The tokenizer doesn't find anything in there, as you've only saved the model, not the tokenizer. You should either save the tokenizer as well, or change the path so that it isn't mistaken for a local path when it should be the hub.
How could I also save the tokenizer? I'm a newbie with the transformers library and I took that code from the webpage.<|||||>You can add `tokenizer.save_pretrained(MODEL)` right under the model's `save_pretrained`!<|||||>i love you Lysanderjik
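A minimal sketch of the suggested fix (the local directory name is illustrative; the point is to save both objects, ideally to a path that cannot be mistaken for the hub identifier):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
save_dir = "local-twitter-roberta-base-sentiment"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)   # the folder now also contains the tokenizer files

# later runs load everything locally
tokenizer = AutoTokenizer.from_pretrained(save_dir)
model = AutoModelForSequenceClassification.from_pretrained(save_dir)
```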
transformers | 11,242 | closed | position_ids generated from Roberta | Hi,
Roberta created `position_ids` from `input_ids` using [this function](https://github.com/huggingface/transformers/blob/3d339ee6595b9e42925559ae21a0f6e77f032873/src/transformers/models/roberta/modeling_roberta.py#L1494).
When the max sequence length is 512, I expect the `position_ids` to be [0, 1, ..., 512].
However, the function gives me [1, 2, ..., 513], which later results in a CUDA index error for the position embedding.
I would appreciate if someone could tell me what I am doing wrong.
```python
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx
ipdb> input_ids
tensor([[ 2, 20, 630, ..., 22, 20, 3],
[ 2, 168, 106, ..., 4, 31532, 3],
[ 2, 287, 14603, ..., 284, 1594, 3],
...,
[ 2, 4, 873, ..., 5549, 24276, 3],
[ 2, 12, 56, ..., 87, 419, 3],
[ 2, 30683, 419, ..., 761, 312, 3]], device='cuda:0')
ipdb> incremental_indices
tensor([[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512],
...,
[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512]], device='cuda:0',
dtype=torch.int32)
ipdb> padding_idx
0
``` | 04-14-2021 10:22:49 | 04-14-2021 10:22:49 | See https://github.com/huggingface/transformers/issues/10736#issuecomment-800175342
Tip: if you search on this Github repo "position ids roberta", you get a lot of answers.
|
transformers | 11,241 | closed | add new token to Bert | Hi,
I want to fine-tune BERT on tweets and I want to add some new tokens. I tried the following code, but there may be a problem: after adding the tokens, reading and tokenizing the sentences lags and never gets past that step.
any idea please?
I tried this:
tokenizer.add_tokens(["NEW_TOKEN"])
model.resize_token_embeddings(len(tokenizer)) | 04-14-2021 04:48:47 | 04-14-2021 04:48:47 | Could you provide a reproducible code example alongside the error that happened?<|||||>my code is:
................................................................
```py
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased",max_len=256)
vocab=[]
with open('vocab30k.txt', mode='r',encoding="utf8",errors='ignore') as file2:
    for line2 in file2:
        line2 = line2.split('\n')[0]
        vocab.append(line2)
tokenizer.add_tokens(vocab)
model= BertForMaskedLM.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="fa5M_shuffeled.txt",
block_size=128,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
training_args = TrainingArguments(
output_dir="fineTunedModel/",
overwrite_output_dir=True,
num_train_epochs=1,
per_gpu_train_batch_size=16,
save_steps=10_000,
save_total_limit=2,
prediction_loss_only=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
trainer.train()
```
....................................................
When I don't add tokens, everything is OK and training starts, but when I use add_tokens to add my new vocab, the code keeps running and never gets to the training stage; nothing happens.<|||||>This is probably because it takes a very long time to add all tokens. Could you install from source:
`pip install -U git+https://github.com/huggingface/transformers` and let me know if it fixes the issue? We recently merged a PR that should speed this up dramatically.<|||||>I installed via your link and try to add 5000 new vocab and it works.
thanks so much.
another question is ,
1. What is the limit on the number of tokens that we can add? I tried to add 30k new tokens and got this error:
return self._tokenizer.add_tokens(new_tokens)
pyo3_runtime.PanicException: called `Result::unwrap()` on an `Err` value: CompiledTooBig(10485760)
2. When I want to add new tokens I used `tokenizer.add_tokens(vocab)`
and not `tokenizer.add_tokens(vocab, special_tokens=True)`.
What is the difference between these two when adding tokens and during fine-tuning?
thanks<|||||>1. It depends on the size of the tokens. Adding tokens to the tokenizer this way is not scalable, and should only be used to handle a very limited number of tokens. Under the hood, it actually uses some Regex to extract these tokens, and there is a limitation in the size of the regex we can create.
2. Special tokens can be removed when decoding<|||||>Hi, in term of adding token, I tried to add 10k new token to my BERT model tokenizer and I saved the tokenizer with "add_token.json" file.
So when I want to use the tokenizer I got this error:
AssertionError: Non-consecutive added token '#سلام' found. Should have index 100005 but has index 100006 in saved vocabulary.
any help?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,240 | closed | Close open files to suppress ResourceWarning | # Close open files to suppress ResourceWarning
Across the repo, we are opening a bunch of files and not closing them. This seems to cause issues when trying to programmatically access them, and it also raises `ResourceWarning` in a few places, for instance `transformers/convert_slow_tokenizer.py:308: ResourceWarning`. This PR closes a few files which were left open after being accessed.
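A hypothetical before/after illustration of the kind of change this PR makes (the file name is just an example):
```python
import json

# before: the file handle is never closed, which leaks it and can trigger ResourceWarning
vocab = json.load(open("vocab.json", "r", encoding="utf-8"))

# after: the context manager closes the file as soon as the block exits
with open("vocab.json", "r", encoding="utf-8") as f:
    vocab = json.load(f)
```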
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-14-2021 03:20:46 | 04-14-2021 03:20:46 | |
transformers | 11,239 | closed | Getting `NameError: name 'BertOnlyMLMHead' is not defined` error when upgrading to latest transformers | # 📚 Migration
## Information
<!-- Important information -->
I am getting `NameError: name 'BertOnlyMLMHead' is not defined` error when I try to upgrade the transformers version used by [Oscar code](https://github.com/microsoft/Oscar) from pytorch-transformers to latest version of huggingface transformers.
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below) not sure
* [ ] my own modified scripts: (give details below) yes
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name): no
* [ ] my own task or dataset: (give details below): no
## Details
<!-- A clear and concise description of the migration issue.
If you have code snippets, please provide it here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
-->
I am trying to upgrade the huggingface transformers version used by [Oscar code](https://github.com/microsoft/Oscar) from pytorch-transformers to latest version of huggingface transformers. However, I am getting below error:
```
Traceback (most recent call last):
File "oscar/run_captioning.py", line 1010, in <module>
main()
File "oscar/run_captioning.py", line 966, in main
from_tf=bool('.ckpt' in args.model_name_or_path), config=config)
File "/usr/local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1058, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_13april/oscar/modeling/modeling_bert.py", line 624, in __init__
self.cls = BertOnlyMLMHead(config)
NameError: name 'BertOnlyMLMHead' is not defined
```
I have looked into the latest transformers and it seems the class is not defined. However, the class was defined in an older version of transformers: https://github.com/huggingface/transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/modeling_bert.py#L506-L513.
What could be a replacement of the class `BertOnlyMLMHead` when using latest version of transformers?
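A hedged suggestion, not from the thread: in recent transformers versions the class still exists but lives inside the BERT modeling module rather than at the top level, so the Oscar code may be able to import it explicitly instead of relying on a star import:
```python
from transformers.models.bert.modeling_bert import BertOnlyMLMHead
```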
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: https://github.com/huggingface/transformers
- Platform: x86_64 GNU/Linux
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.0+cu101 (GPU)
- Tensorflow version (GPU?): 2.3.0 (GPU)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e
## Checklist
- [ yes] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ no] I checked if a related official extension example runs on my machine.
| 04-14-2021 02:04:55 | 04-14-2021 02:04:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,238 | closed | Fix dimention misspellings. | # What does this PR do?
Replaces the misspelling "dimention" with the proper spelling "dimension".
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-13-2021 23:56:25 | 04-13-2021 23:56:25 | |
transformers | 11,237 | closed | [deepspeed] test on one node 2 gpus max | Deepspeed devs, who will be running our deepspeed tests as part of their tests, discovered that we had an unbound number of nodes and gpus in the tests so it was firing on multiple nodes and many gpus, so wasn't quite ready for it. So fixing it. | 04-13-2021 22:21:08 | 04-13-2021 22:21:08 | ok, it's all working in the Deepspeed team's test suite - yay! |
transformers | 11,236 | closed | [troubleshooting] add 2 points of reference to the offline mode | As discussed at https://github.com/huggingface/transformers/issues/11231#issuecomment-818976986 the offline mode doc can be hard to find, so this PR:
- starts a new "troubleshooting" document
- adds a note xref to `from_pretrained`
Surely we can start populating the new "troubleshooting" document - the idea here is to have common problems with explicit error messages and pointers to solutions.
@LysandreJik, @sgugger | 04-13-2021 20:23:33 | 04-13-2021 20:23:33 | |
transformers | 11,235 | closed | [Deepspeed] zero3 tests band aid | Currently the Deepspeed integration is bleeding its global state, which can impact other transformers runs in the same process without deepspeed. This only impacts tests that don't spawn a new process for deepspeed, which is not the norm.
This PR is just a temporary band-aid to restore the state at the test level - I need to rethink how to avoid using a global state, or tie the state to the deepspeed object so that it is automatically restored when that object is destructed.
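A sketch of the kind of test-level band-aid meant here (the module-level flag below is a hypothetical stand-in for the real DeepSpeed global state, not the actual attribute name):
```python
import unittest

_deepspeed_zero3_enabled = False  # hypothetical global flag mutated by the integration


class DeepSpeedStateGuard(unittest.TestCase):
    def setUp(self):
        # remember whatever global state the DeepSpeed integration may mutate
        self._saved_flag = _deepspeed_zero3_enabled

    def tearDown(self):
        # restore it so later non-DeepSpeed tests in the same process are unaffected
        global _deepspeed_zero3_enabled
        _deepspeed_zero3_enabled = self._saved_flag
```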
The main issue here is that the Trainer gets init'ed after the model, which is an important issue that I started the discussion on here:
https://github.com/huggingface/transformers/issues/10893
An example of the failing sequence is:
```
CUDA_VISIBLE_DEVICES=0 RUN_SLOW=1 pyt \
  tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_early_get_last_lr_1_zero3 \
  tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist
```
@LysandreJik | 04-13-2021 19:54:37 | 04-13-2021 19:54:37 | |
transformers | 11,234 | closed | Tokenizer fast save | # What does this PR do?
This PR changes the behavior of `save_pretrained` to have the fast tokenizer's unified json file saved as well as the files of the "slow" tokenizer. The default of `legacy_format` is changed to None with the following behavior (see the sketch after this list):
- unset -> a fast tokenizer is saved with both formats (tokenizer.json + legacy format)
- set to True -> a fast tokenizer is saved in legacy format only
- set to False -> a fast tokenizer is saved with just the tokenizer.json format
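A minimal usage sketch, assuming this PR is merged (the exact files written to disk depend on the tokenizer class):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")      # fast tokenizer

tok.save_pretrained("both_formats")                            # default: tokenizer.json + legacy files
tok.save_pretrained("legacy_only", legacy_format=True)         # legacy files only
tok.save_pretrained("fast_only", legacy_format=False)          # tokenizer.json only

# reloading from a folder that contains both tokenizer.json and added_tokens.json
# should also work after this PR
tok2 = AutoTokenizer.from_pretrained("both_formats")
```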
Along with that, a slight change in the `from_pretrained` method is needed since the added tokens for a fast tokenizer are often inside the tokenizer.json file, so already added before we get to a possible added_tokens.json. There is currently a bug where loading a tokenizer from a folder with both files (tokenizer.json and added_tokens.json) will fail, this PR fixes it. | 04-13-2021 19:08:56 | 04-13-2021 19:08:56 | |
transformers | 11,233 | closed | Indent code block in the documentation | # What does this PR do?
This is a new version of #11227 starting from a fresh master and following the remarks of Stas.
| 04-13-2021 17:52:56 | 04-13-2021 17:52:56 | |
transformers | 11,232 | closed | BigBird Causal Attention | # 🚀 Feature request
I'd like to use bigbird sparse attention in a decoder. Isn't that feasible if we apply a causal mask [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/big_bird/modeling_big_bird.py#L665)?
So long as we know which entries correspond to (i, j) entries where i < j, we could apply a mask there which would do the trick. Do you agree?
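For ordinary full attention the idea is just a lower-triangular mask over the attention scores, as in the sketch below; the open question in this issue is how to do the same inside BigBird's block-sparse layout:
```python
import torch

seq_len = 8
scores = torch.randn(1, 1, seq_len, seq_len)                       # (batch, heads, queries, keys)
causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

scores = scores.masked_fill(~causal, float("-inf"))                # hide positions j > i
probs = scores.softmax(dim=-1)                                     # each query attends only to j <= i
```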
## Motivation
This would allow use of sparse attention in decoder setting as well as encoder
## Your contribution
I would be happy to try to tackle this, so long as people agree with my logic
| 04-13-2021 17:32:56 | 04-13-2021 17:32:56 | Pinging @vasudevgupta7 and @patrickvonplaten <|||||>There are several issues, we will have to keep in mind before we decide to implement bigbird block sparse attention in decoder.
1) Initial number of tokens in decoder side must be > `5 x block_size + 2 x num_random_blocks x block_size` (for bigbird block sparse attention to work). Typically `block_size=64`, `num_random_blocks=3`, then we need at least 708 tokens initially.
2) I am assuming your task involves at least 1024 tokens in decoder side. Since, else it is recommended to use `original_full` attention by authors.
3) Also, if model is given 1024 tokens (block size = 64 let's say), then for predicting 1025th token, model will first `<pad>` to 1024 + 64 tokens, since this attention will work only when sequence length is multiple of block size. & similarly for predicting every single token, we will have to pad again & again. This way inference can become expensive, I believe.
4) Also, if you want to use Encoder-Decoder model, we will have to write block sparse attention in cross-attention completely. Else cross-attention layer will be very expensive if both encoder & decoder are given very long sequences.
Yeah training decoder in the bigbird block sparse attention fashion is possible if we just change masks in this [line](https://github.com/huggingface/transformers/blob/3d339ee6595b9e42925559ae21a0f6e77f032873/src/transformers/models/big_bird/modeling_big_bird.py#L2119). But again, there are several issues:
5) Currently, we have 1st & last block as global. So, if we mask for autoregressive, we will have only 1st block as global during training.
6) Because of masking all the right tokens, we will reduce number of random tokens which were chosen from the complete sequence earlier.
7) Sliding tokens will also reduce by half.
So, this way number of tokens which each query can attend, will be reduced by large amount. 5,6,7 can be resolved if we decide to implement modify a lot in current block sparse attention (instead of just changing masks). But 1,2,3,4 are still a problem.
There may be several other issues. But these are my initial thoughts on this. Let me know if I am unclear or wrong somewhere.<|||||>Thanks for the response! I was on vacation when you posted and am coming back to this now.
>Initial number of tokens in decoder side must be > 5 x block_size + 2 x num_random_blocks x block_size (for bigbird block sparse attention to work). Typically block_size=64, num_random_blocks=3, then we need at least 708 tokens initially.
I am assuming your task involves at least 1024 tokens in decoder side. Since, else it is recommended to use original_full attention by authors.
Also, if model is given 1024 tokens (block size = 64 let's say), then for predicting 1025th token, model will first <pad> to 1024 + 64 tokens, since this attention will work only when sequence length is multiple of block size. & similarly for predicting every single token, we will have to pad again & again. This way inference can become expensive, I believe.
Yes, we are using a large input size so the motivation to implement causal is there regardless.
I see how there is a minimum threshold input size below which BigBird is obsolete. That said, even in that case, hyper-parameters could be tweaked such as decreasing block size and number of random blocks. Not to mention, this minimum threshold applies to other efficient attention algos as well such as performer.
>Also, if you want to use Encoder-Decoder model, we will have to write block sparse attention in cross-attention completely. Else cross-attention layer will be very expensive if both encoder & decoder are given very long sequences.
Are you saying that BigBird is not possible in cross attention? Could use some more color on this.
>Currently, we have 1st & last block as global. So, if we mask for autoregressive, we will have only 1st block as global during training.
Because of masking all the right tokens, we will reduce number of random tokens which were chosen from the complete sequence earlier.
Sliding tokens will also reduce by half.
The attention mask is applied after computing the sparse attention matrix, so doesn't the full global attention piece still pull weight?
<|||||>> Thanks for the response! I was on vacation when you posted and am coming back to this now.
>
> > Initial number of tokens in decoder side must be > 5 x block_size + 2 x num_random_blocks x block_size (for bigbird block sparse attention to work). Typically block_size=64, num_random_blocks=3, then we need at least 708 tokens initially.
> > I am assuming your task involves at least 1024 tokens in decoder side. Since, else it is recommended to use original_full attention by authors.
> > Also, if model is given 1024 tokens (block size = 64 let's say), then for predicting 1025th token, model will first to 1024 + 64 tokens, since this attention will work only when sequence length is multiple of block size. & similarly for predicting every single token, we will have to pad again & again. This way inference can become expensive, I believe.
>
> Yes, we are using a large input size so the motivation to implement causal is there regardless.
>
> I see how there is a minimum threshold input size below which BigBird is obsolete. That said, even in that case, hyper-parameters could be tweaked such as decreasing block size and number of random blocks. Not to mention, this minimum threshold applies to other efficient attention algos as well such as performer.
>
> > Also, if you want to use Encoder-Decoder model, we will have to write block sparse attention in cross-attention completely. Else cross-attention layer will be very expensive if both encoder & decoder are given very long sequences.
>
> Are you saying that BigBird is not possible in cross attention? Could use some more color on this.
It might be possible to implement as cross-attention if we think something but major problem will that:
For bigbird block sparse attention to work, this must hold `query sequence length // block size == key sequence length // block size`. Now this can be managed during training but during inference, I am not sure if we can.
>
> > Currently, we have 1st & last block as global. So, if we mask for autoregressive, we will have only 1st block as global during training.
> > Because of masking all the right tokens, we will reduce number of random tokens which were chosen from the complete sequence earlier.
> > Sliding tokens will also reduce by half.
>
> The attention mask is applied after computing the sparse attention matrix, so doesn't the full global attention piece still pull weight?
During training, we will have to mask the last global block right; so they won't contribute to the context layer & effectively global tokens will reduce by half.
pinging @patrickvonplaten for putting some light on this issue.
<|||||>Thanks for the issue & the detailed answer @vasudevgupta7. To be honest, I just don't think it's worth yet to do any kind of sparse attention on neither the cross_attention layers nor the decoder attention layers because the output is usually quite small (think summarization, question_answering). For tasks like translation, it's often better to split per sentence anyways so here it also doesn't make too much sense. => Overall IMO, this is low priority<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,231 | closed | "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform:
- Python version: 3.7.6
- PyTorch version (GPU?): torch 1.5.0
- Tensorflow version (GPU?):
- Using GPU in script?: no ( I just mentionned the number of gpu in a shell script and launch (ex . #SBATCH --partition=gpu_p2; #SBATCH --qos=qos_gpu-dev ; #SBATCH --cpus-per-task=3 )
- Using distributed or parallel set-up in script?: no
### Who can help
- albert, bert, xlm: @LysandreJik
Library:
- tokenizers: @LysandreJik
- pipelines: @LysandreJik
Documentation: @sgugger
Model I am using (FlauBERT):
The problem arises when downloading the model from transformers library:
* [ ] the official example scripts: (I did not change much , pretty close to the original)
```
def get_flaubert_layer(texte):
    modelname = 'flaubert-small-cased'
    flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
    flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
    tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True)))
    max_len = 0
    for i in tokenized.values:
        if len(i) > max_len:
            max_len = len(i)
    padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values])
    token_ids = torch.tensor(padded)
    with torch.no_grad():
        last_layer = flaubert(token_ids)[0][:,0,:].numpy()
    return last_layer, modelname
### What I added to the code
def read_file(filename):
    sentences = pd.read_excel(filename, sheet_name=0)
    data_id = sentences.identifiant
    print("Total phrases: ", len(data_id))
    data = sentences.verbatim
    data_label = sentences.etiquette
    classes = sentences['etiquette'].unique()
    len_classes = len(classes)
    return data_id, data, data_label, len_classes
def cross_validation_prediction(id_, texte, ylabels, file_, len_classes):
    features, modelname = get_flaubert_layer(texte)
```
The tasks I am working on is:
* [ ] my own task or dataset: I just want to use the model of FlauBert to producve vectors for my dataset that's all
## To reproduce
Steps to reproduce the behavior:
1. get the requirements (librairies mentionned above)
2. Final part of the script to reproduce it :
```
filename = 'test'
fil = filename + ".xlsx"
os.chdir('/linkhome/rech/genlig01/umg16uw/Test_CLASS_avec_FlauBert/corpus')
print("File preprocessing: " , fil)
id_, texte_, ylabels_, len_classes_ = read_file(fil)
cross_validation_prediction(id_, texte_, ylabels_, filename, len_classes_)
```
3. stack trace error :
```
Loading pytorch-gpu/py3/1.5.0
Loading requirement: cuda/10.1.2 nccl/2.5.6-2-cuda cudnn/10.1-v7.5.1.10
gcc/8.2.0 intel-compilers/19.0.4 openmpi/4.0.1-cuda
Traceback (most recent call last):
File "test.py", line 227, in <module>
cross_validation_prediction(id_, texte_, ylabels_, filename, len_classes_)
File "test.py", line 107, in cross_validation_prediction
features, modelname = get_flaubert_layer(texte)
File "test.py", line 56, in get_flaubert_layer
flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.5.0/lib/python3.7/site-packages/transformers/modeling_utils.py", line 986, in from_pretrained
**kwargs,
File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.5.0/lib/python3.7/site-packages/transformers/configuration_utils.py", line 386, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.5.0/lib/python3.7/site-packages/transformers/configuration_utils.py", line 438, in get_config_dict
use_auth_token=use_auth_token,
File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.5.0/lib/python3.7/site-packages/transformers/file_utils.py", line 1142, in cached_path
local_files_only=local_files_only,
File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.5.0/lib/python3.7/site-packages/transformers/file_utils.py", line 1349, in get_from_cache
"Connection error, and we cannot find the requested files in the cached path."
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
srun: error: jean-zay-ia808: task 0: Exited with exit code 1
srun: Terminating job step 841126.0
```
I expected the model to load and get the vectors in the appropriate variable; instead I get the error above.
I have internet access, and when I try it locally (not on the server) with a small sample it works, but when I load a virtual env with the specific library I get this error; | 04-13-2021 17:29:38 | 04-13-2021 17:29:38 | Pinging @stas00 since I see in your stack trace you are working on jean-zay and I know Stas has some experience with it.<|||||>@keloemma, please have a look at the special offline mode https://huggingface.co/transformers/installation.html#offline-mode - if your gpu instance is firewalled (which is the case on JZ) it explains how to solve this problem.<|||||>@sgugger, I'm thinking the installation doc is not quite the right place for the offline mode feature. Somehow I'd never think to look there. Could you think of a better placement?
Also, perhaps we should start a troubleshooting doc like we had @ fastai with pointers to solutions based on symptoms? So this could be the first entry.<|||||>I wouldn't remove the doc for the offline feature from the installation page, but we can certainly duplicate it elsewhere. Why not in the `main_classes/model` page since it has the from_pretrained method?
We can also start a troubleshooting document, that's also helpful.<|||||>Thank you for this feedback, @sgugger - I will do both.
https://github.com/huggingface/transformers/pull/11236<|||||>@stas00 Hello and thank you for your response. I have just a question,
> "Setting environment variable TRANSFORMERS_OFFLINE=1 will tell 🤗 Transformers to use local files only and will not try to look things up"
=> If I understand correctly, this means that I have to save the FlauBERT model locally in a specific directory, then provide the path to that directory in my original script, and then set the variable as mentioned in the doc;
so I should have something like this:
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
srun python test.py --model_name_or_path t5-small ( path is already written in the script "test.py)
<|||||>If I understand correctly the instructions aren't too clear, right?
So you run in your login shell that is not firewalled:
```
cd /path/to/transformers
python examples/seq2seq/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
then for example let's do an interactive session with `srun`, e.g.:
```
srun --pty --ntasks=1 --cpus-per-task=10 --gres=gpu:2 --hint=nomultithread --time=60 bash
```
which opens a new interactive bash shell on the gpu instance, and then you repeat exactly the same command, but this time with 2 env vars:
```
cd /path/to/transformers
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/seq2seq/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
Of course, you will need to figure out how to load the right `modules`, etc., to get the desired environment. But I trust you have already done so.
It should work exactly the same with the non-interactive shell if you use `sbatch`.
Bottom line: do the same thing as you normally do, but the key here is to first launch the script in your login shell (no gpus!) so that it downloads the model and the dataset files. You don't need to let the script complete - it will take forever since you're running on CPU, but just the first batch is enough since by that time all the files will have been downloaded.
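If it helps, here is a rough Python-level sketch of the same idea, reusing the `flaubert-small-cased` name from your script (swap in the full hub id, e.g. `flaubert/flaubert_small_cased`, if the short name does not resolve); `local_files_only=True` is the per-call counterpart of exporting `TRANSFORMERS_OFFLINE=1`:
```python
from transformers import FlaubertModel, FlaubertTokenizer

modelname = "flaubert-small-cased"  # same identifier as in the script earlier in this issue

# 1) On the login node (has internet): these calls download and populate the local cache.
FlaubertModel.from_pretrained(modelname)
FlaubertTokenizer.from_pretrained(modelname)

# 2) Inside the firewalled srun/sbatch job: load strictly from the cache, never hit the network.
flaubert = FlaubertModel.from_pretrained(modelname, local_files_only=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, local_files_only=True)
```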
Please let me know if this helped and, if it did, how I can improve the documentation to make it easier to understand.
> HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1
> srun python test.py --model_name_or_path t5-small ( path is already written in the script "test.py)
I suppose you could try that as well; I haven't tried it. I would start a bash shell via `srun` and then start the script from there, since this is what you'd do if you were to use SLURM normally via `sbatch`.
<|||||>@stas00 Thank you for your clear explanation, I will try all that and tell you back what works<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,230 | closed | run_qa.py fails evaluating on Squad2 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0.dev0
- Platform: Linux-5.4.0-66-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Quadro RTX 8000 (Cuda 11)
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, _benchmarks_, @sgugger _maintained examples_
## Information
Model I am using [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
python run_qa.py \
--model_name_or_path bert-base-multilingual-cased \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir models/mbert-squad2
```
## Expected behavior
The code should print the metrics for the Squad2 dev set.
## Output
```
File "run_qa.py", line 609, in <module>
main()
File "run_qa.py", line 582, in main
metrics = trainer.evaluate()
File "/home/tim/repos/transformers/examples/question-answering/trainer_qa.py", line 63, in evaluate
metrics = self.compute_metrics(eval_preds)
File "run_qa.py", line 543, in compute_metrics
return metric.compute(predictions=p.predictions, references=p.label_ids)
File "/home/tim/anaconda3/envs/exp/lib/python3.7/site-packages/datasets/metric.py", line 403, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/tim/.cache/huggingface/modules/datasets_modules/metrics/squad/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/squad.py", line 109, in _compute
score = evaluate(dataset=dataset, predictions=pred_dict)
File "/home/tim/.cache/huggingface/modules/datasets_modules/metrics/squad/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/evaluate.py", line 68, in evaluate
exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths)
File "/home/tim/.cache/huggingface/modules/datasets_modules/metrics/squad/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/evaluate.py", line 53, in metric_max_over_ground_truths
return max(scores_for_ground_truths)
ValueError: max() arg is an empty sequence
```
| 04-13-2021 17:18:50 | 04-13-2021 17:18:50 | You need to pass along `--version_2_with_negative` when using this script with a dataset that has samples with no answers (like squad v2).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
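To make the fix above concrete, here is a small illustration (my own sketch, not code from `run_qa.py`): without `--version_2_with_negative` the script scores with the plain `squad` metric, which assumes every reference has at least one answer string, so the first unanswerable SQuAD v2 question makes `metric_max_over_ground_truths` call `max()` on an empty list, which is exactly the crash in the traceback. With the flag, the `squad_v2` metric is used instead and no-answer predictions are handled:
```python
from datasets import load_metric

# One unanswerable question, as found in squad_v2 but not in squad (v1.1).
references = [{"id": "q1", "answers": {"text": [], "answer_start": []}}]
predictions = [{"id": "q1", "prediction_text": "", "no_answer_probability": 1.0}]

metric = load_metric("squad_v2")  # what run_qa.py selects when --version_2_with_negative is set
print(metric.compute(predictions=predictions, references=references))
```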
transformers | 11,229 | closed | Avoid using no_sync on SageMaker DP | # What does this PR do?
As reported on the [forums](https://discuss.huggingface.co/t/distributeddataparallel-object-has-no-attribute-no-sync/5469), SageMaker DP is incompatible with gradient accumulation for now. This PR fixes that. | 04-13-2021 16:55:45 | 04-13-2021 16:55:45 | |
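For context, the pattern at stake looks roughly like this; it is an illustrative sketch of the idea, not the actual `Trainer` code. Vanilla `DistributedDataParallel` exposes a `no_sync()` context manager used to skip gradient synchronization on accumulation steps, and the SageMaker DP wrapper did not provide it at the time, so the fix amounts to not entering it there:
```python
import contextlib

def backward_context(model, is_accumulation_step: bool):
    # Skip gradient all-reduce on accumulation steps only when the wrapper supports it;
    # otherwise fall back to a no-op context (the SageMaker DP case).
    if is_accumulation_step and hasattr(model, "no_sync"):
        return model.no_sync()
    return contextlib.nullcontext()
```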
transformers | 11,228 | closed | Make "embeddings" plural in warning message within tokenization_utils_base | # What does this PR do?
Makes the word "embeddings" plural within the warning message in `tokenization_utils_base.py`.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 04-13-2021 16:26:24 | 04-13-2021 16:26:24 | |
transformers | 11,227 | closed | Make sure code blocks are indented with four spaces | # What does this PR do?
In the documentation, many code blocks are indented with two or three spaces instead of 4. This PR enforces the use of 4 by:
- replacing indents of less than 4 by 4 in `make style`/`make fixup`
- checking there is no indent less than 4 in code blocks during `make quality`
| 04-13-2021 15:01:08 | 04-13-2021 15:01:08 | Ah, I don't match the `.. code-block:: xxx`, thanks for pointing that out!<|||||>OK, the next problem is the one I was concerned about in the first place (which is why I didn't offer a perl one-liner). Your fixing code isn't shifting the whole block, but only the outside lines, so it now results in broken Python code - bad indentation. E.g. look at any code blocks in the diff that start with `class` or `{`.
So when you shift whitespace - you have to do it for the whole code block.<|||||>That's too hard to fix and will result in other problems (what about bash commands that have different indents with the \?). So I will remove the automatic fix and just put a hard error the user will have to manually fix.<|||||>> That's too hard to fix and will result in other problems (what about bash commands that have different indents with the ?).
Bash has no indentation issues. Only python does AFAIK. Perhaps you meant multiline python fed into a bash shell?
I think it's about finding the difference in the required whitespace, and applying this exact change to the whole block should make it work. Since everything will be shifted by the same number of characters, which is what the manual fix would do anyway.
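Something along these lines (a rough sketch of the idea only, not the repo's actual style-checking code):
```python
def reindent_block(block_lines, target_indent=4):
    """Shift a whole code block so its outermost line sits at `target_indent` spaces,
    preserving the relative indentation of every inner line."""
    current = min(len(l) - len(l.lstrip()) for l in block_lines if l.strip())
    delta = target_indent - current
    if delta >= 0:
        return [" " * delta + l if l.strip() else l for l in block_lines]
    return [l[-delta:] if l.strip() else l for l in block_lines]
```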
> So I will remove the automatic fix and just put a hard error the user will have to manually fix.
Sure, that would work just fine.
Thank you.<|||||>Ok, I have something that can do the whole block, but since this PR already badly treated some parts, I need to go back from a fresh master, so closing this one. |
transformers | 11,226 | closed | Add prefix to examples in model_doc rst | # What does this PR do?
In my previous PR #11219, I was advised that the example should follow the syntax
```
>>> code_line_1
>>> code_line_2
result
```
I found some `model_doc` rst files that have code blocks without the prefix.
This PR intends to add `>>>` to those examples.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
| 04-13-2021 14:49:07 | 04-13-2021 14:49:07 | Hi @forest1988, thanks a lot for your PR! In parallel I've merged another PR that enforces proper indentation for those examples so now yours is conflicting. Could you rebase on master and solve the conflicts, or open a new PR from a fresh master, whichever is easier?
Thanks<|||||>Hi @sgugger,
Thank you for telling me that there is another PR merged and this PR has conflicts with it.
I've just rebased on master and solved the conflicts.
It seems there is a code quality problem, so I'll fix it soon.<|||||>`check_code_quality` shows the error message below.
It seems something went wrong while installing the packages.
I'm sorry, but could you try to run CircleCI again?
```
#!/bin/bash -eo pipefail
pip install .[all,quality]
Defaulting to user installation because normal site-packages is not writeable
Processing /home/circleci/transformers
Installing build dependencies ... - \ | / done
Getting requirements to build wheel ... - done
Preparing wheel metadata ... - \ done
Requirement already satisfied: filelock in /usr/local/lib/python3.6/site-packages (from transformers==4.6.0.dev0) (3.0.12)
Collecting dataclasses
Using cached dataclasses-0.8-py3-none-any.whl (19 kB)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.6/site-packages (from transformers==4.6.0.dev0) (1.7.0)
Collecting tqdm>=4.27
Using cached tqdm-4.60.0-py2.py3-none-any.whl (75 kB)
Requirement already satisfied: requests in /usr/local/lib/python3.6/site-packages (from transformers==4.6.0.dev0) (2.25.1)
Collecting tokenizers<0.11,>=0.10.1
Using cached tokenizers-0.10.2-cp36-cp36m-manylinux2010_x86_64.whl (3.3 MB)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/site-packages (from transformers==4.6.0.dev0) (20.9)
Collecting regex!=2019.12.17
Using cached regex-2021.4.4-cp36-cp36m-manylinux2014_x86_64.whl (722 kB)
Collecting numpy>=1.17
Using cached numpy-1.19.5-cp36-cp36m-manylinux2010_x86_64.whl (14.8 MB)
Collecting sacremoses
Using cached sacremoses-0.0.44-py3-none-any.whl
Requirement already satisfied: isort>=5.5.4 in /home/circleci/.local/lib/python3.6/site-packages (from transformers==4.6.0.dev0) (5.8.0)
Collecting black>=20.8b1
Using cached black-20.8b1-py3-none-any.whl
Collecting flake8>=3.8.3
Using cached flake8-3.9.0-py2.py3-none-any.whl (73 kB)
Collecting jaxlib>=0.1.59
Using cached jaxlib-0.1.65-cp36-none-manylinux2010_x86_64.whl (44.7 MB)
Collecting soundfile
Using cached SoundFile-0.10.3.post1-py2.py3-none-any.whl (21 kB)
Collecting tensorflow>=2.3
Using cached tensorflow-2.4.1-cp36-cp36m-manylinux2010_x86_64.whl (394.3 MB)
Collecting torchaudio
Using cached torchaudio-0.8.1-cp36-cp36m-manylinux1_x86_64.whl (1.9 MB)
Collecting Pillow
Using cached Pillow-8.2.0-cp36-cp36m-manylinux1_x86_64.whl (3.0 MB)
Collecting keras2onnx
Using cached keras2onnx-1.7.0-py3-none-any.whl (96 kB)
Collecting jax>=0.2.8
Using cached jax-0.2.12-py3-none-any.whl
Collecting sentencepiece==0.1.91
Using cached sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1 MB)
Collecting protobuf
Using cached protobuf-3.15.8-cp36-cp36m-manylinux1_x86_64.whl (1.0 MB)
Collecting flax>=0.3.2
Using cached flax-0.3.3-py3-none-any.whl (179 kB)
Collecting torch>=1.0
Received "killed" signal
```<|||||>Thanks, I applied all suggestions!
I'm sorry, I misunderstood the meaning of the following two that appear in https://huggingface.co/transformers/_sources/quicktour.rst.txt, and assumed that double # were required for comments in the Transformers documentation.
`## PYTORCH CODE` ` ## TENSORFLOW CODE`
If I'm not mistaken in my current understanding, these are special codes to switch between PyTorch and TensorFlow versions, right?
<|||||>Yes those are special markers, which is why they have the double #, for regular comments, we use just one # |
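For illustration, here is a reconstructed example of how such a switcher block looks in the docs source (not copied verbatim from quicktour.rst; the model name is just a stand-in):
```python
## PYTORCH CODE
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
## TENSORFLOW CODE
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```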
transformers | 11,225 | closed | Refactor GPT2 | # What does this PR do?
This PR refactors the GPT2 model to make it more consistent with the rest of the models in the lib. These are mostly cosmetic changes that use better names instead of `nx`, `n_state`, etc.
This does not cause any performance regression and I've verified that all slow tests are passing.
| 04-13-2021 14:41:16 | 04-13-2021 14:41:16 | |
transformers | 11,224 | closed | Doc check: a bit of clean up | # What does this PR do?
This PR removes the data collator and BertForJapaneseTokenizer from the whitelist in the check that all public objects are documented. It also cleans up the data collator page a bit. | 04-13-2021 14:02:51 | 04-13-2021 14:02:51 | |
transformers | 11,223 | closed | Add LUKE | # What does this PR do?
It adds [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Yamada et al. [EMNLP 2020].
LUKE is similar to RoBERTa, but it adds an entity embedding matrix (500k Wikipedia entities!) and an entity-aware self-attention mechanism to improve performance on several downstream tasks that involve reasoning about entities, such as entity typing and relation classification. It was pre-trained using MLM on both tokens and entities from Wikipedia.
Credits for this PR go to the original author @ikuyamada, who implemented everything. I've just set up everything he needed (a basic modeling file, configuration, conversion script, test files, etc.), and guided him through the process.
Models are already on the hub: https://huggingface.co/models?search=studio-ousia
There are 3 head models defined:
- `LukeForEntityClassification`, for tasks such as entity typing (given an entity in a sentence, classify it), e.g. the [Open Entity dataset](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html).
- `LukeForEntityPairClassification`, for tasks such as relation classification (classifying the relationship between two entities), e.g. the [TACRED dataset](https://nlp.stanford.edu/projects/tacred/).
- `LukeForEntitySpanClassification`, for tasks such as NER (LUKE obtains SOTA on NER! It considers all possible entity spans in a sentence, and then classifies them accordingly), e.g. the CoNLL-2003 dataset.
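For instance, the entity-typing head can be called roughly like this (a usage sketch; the checkpoint name assumes the fine-tuned Open Entity model uploaded under the `studio-ousia` namespace):
```python
from transformers import LukeTokenizer, LukeForEntityClassification

checkpoint = "studio-ousia/luke-large-finetuned-open-entity"  # assumed fine-tuned checkpoint
tokenizer = LukeTokenizer.from_pretrained(checkpoint)
model = LukeForEntityClassification.from_pretrained(checkpoint)

text = "Beyoncé lives in Los Angeles."
inputs = tokenizer(text, entity_spans=[(0, 7)], return_tensors="pt")  # character span of "Beyoncé"
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```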
To do:
- [x] add model cards (@ikuyamada this means adding READMEs to the models on the hub, you can take [BERT's one](https://huggingface.co/bert-base-uncased) as inspiration)
- [x] upload fine-tuned models to the hub
## Who can review?
@LysandreJik @sgugger
Original Github conversation on the original repo: https://github.com/studio-ousia/luke/issues/38
Fixes #10700
| 04-13-2021 13:25:16 | 04-13-2021 13:25:16 | @LysandreJik @sgugger all comments are addressed, CI is green! Incredible work by the original author. <|||||>Thanks, addressed your comments.
Added LUKE to the README, and added 3 community notebooks. <|||||>Thanks again for all your work on this! |
transformers | 11,222 | closed | Weird issue with OOM on exported save_pretrained models | Having a weird issue with DialoGPT Large model deployment. With PyTorch 1.8.0 and Transformers 4.3.3, using model.save_pretrained and tokenizer.save_pretrained, the exported pytorch_model.bin is almost twice the size of the one in the model card repo and results in OOM on a reasonably equipped machine, whereas the standard transformers download process works fine (I am building a CI pipeline to containerize the model, hence the pre-populated model requirement):
```
Model card:
pytorch_model.bin 1.6GB
model.save_pretrained and tokenizer.save_pretrained:
-rw-r--r-- 1 jrandel jrandel 800 Mar 6 16:51 config.json
-rw-r--r-- 1 jrandel jrandel 446K Mar 6 16:51 merges.txt
-rw-r--r-- 1 jrandel jrandel 3.0G Mar 6 16:51 pytorch_model.bin
-rw-r--r-- 1 jrandel jrandel 357 Mar 6 16:51 special_tokens_map.json
-rw-r--r-- 1 jrandel jrandel 580 Mar 6 16:51 tokenizer_config.json
-rw-r--r-- 1 jrandel jrandel 780K Mar 6 16:51 vocab.json
```
When I download the model card files directly however, I’m getting the following errors:
```
curl -L https://huggingface.co/microsoft/DialoGPT-large/resolve/main/config.json -o ./model/config.json
curl -L https://huggingface.co/microsoft/DialoGPT-large/resolve/main/pytorch_model.bin -o ./model/pytorch_model.bin
curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/tokenizer_config.json -o ./model/tokenizer_config.json
curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/config.json -o ./model/config.json
curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/merges.txt -o ./model/merges.txt
curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/special_tokens_map.json -o ./model/special_tokens_map.json
curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/vocab.json -o ./model/vocab.json
<snip>
tokenizer = AutoTokenizer.from_pretrained("model/")
File "/var/lang/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 395, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
return cls._from_pretrained(
File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1801, in _from_pretrained
slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1876, in _from_pretrained
special_tokens_map = json.load(special_tokens_map_handle)
File "/var/lang/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/var/lang/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/var/lang/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/var/lang/lib/python3.8/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/runtime/bootstrap.py", line 481, in <module>
main()
File "/var/runtime/bootstrap.py", line 458, in main
lambda_runtime_client.post_init_error(to_json(error_result))
File "/var/runtime/lambda_runtime_client.py", line 42, in post_init_error
response = runtime_connection.getresponse()
File "/var/lang/lib/python3.8/http/client.py", line 1347, in getresponse
response.begin()
File "/var/lang/lib/python3.8/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/var/lang/lib/python3.8/http/client.py", line 276, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
time="2021-03-08T09:01:39.33" level=warning msg="First fatal error stored in appctx: Runtime.ExitError"
time="2021-03-08T09:01:39.33" level=warning msg="Process 14(bootstrap) exited: Runtime exited with error: exit status 1"
time="2021-03-08T09:01:39.33" level=error msg="Init failed" InvokeID= error="Runtime exited with error: exit status 1"
time="2021-03-08T09:01:39.33" level=warning msg="Failed to send default error response: ErrInvalidInvokeID"
time="2021-03-08T09:01:39.33" level=error msg="INIT DONE failed: Runtime.ExitError"
time="2021-03-08T09:01:39.33" level=warning msg="Reset initiated: ReserveFail"
```
So what would be causing the large file variance between save_pretrained models and the model card repo? And any ideas why the directly downloaded model card files aren’t working in this example?
Thanks in advance | 04-13-2021 10:56:04 | 04-13-2021 10:56:04 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Should be addressed.<|||||>Taking a look at the `pytorch_model.bin` saved on the `microsoft/DialoGPT-small` repository, one can see it's made up of float16 weights. When loading the model in the `GPT2Model` and saving it, the weights are saved in float32, resulting in the large increase.
If you want to keep the model in half precision, add the following line after initializing your model:
```py
model.half()
``` |
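Expanding on the comment above, the full round trip looks roughly like this (a sketch using the public DialoGPT checkpoint name):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")

model.half()  # keep the weights in float16, as they are stored on the Hub

model.save_pretrained("./model")      # pytorch_model.bin stays around 1.6GB instead of ~3GB
tokenizer.save_pretrained("./model")
```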
transformers | 11,221 | closed | fix docs for decoder_input_ids | # What does this PR do?
This PR fixes the docs for `decoder_input_ids` and `decoder_attention_mask` arguments. | 04-13-2021 09:50:03 | 04-13-2021 09:50:03 | |
transformers | 11,220 | closed | added cache_dir=model_args.cache_dir to all example with cache_dir arg | # What does this PR do?
This PR adds `cache_dir=model_args.cache_dir` to all example scripts using `load_dataset` and having `cache_dir` as args.
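Schematically, the change amounts to the following (the dataset name and cache path here are hypothetical stand-ins for the values normally supplied through `HfArgumentParser`):
```python
from datasets import load_dataset

dataset_name, dataset_config_name = "wmt16", "ro-en"   # stand-ins for data_args.*
cache_dir = "/path/to/shared/cache"                    # stand-in for model_args.cache_dir

raw_datasets = load_dataset(dataset_name, dataset_config_name, cache_dir=cache_dir)
```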
Close #11205 | 04-13-2021 09:39:07 | 04-13-2021 09:39:07 | |
transformers | 11,219 | closed | Add documentation for BertJapanese | # What does this PR do?
Add documentation for BertJapanese
Regarding #9035
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Model: bert: @LysandreJik
Documentation: @sgugger | 04-13-2021 09:08:31 | 04-13-2021 09:08:31 | Thanks a lot for your help!<|||||>@sgugger @LysandreJik
Thank you for quickly reviewing this PR!
@sgugger
Thank you for telling me how to format the code block!
> Thanks a lot for your PR! I made a couple of suggestions. Mostly, the example should follow the syntax
>
> ```
> >>> code_line_1
> >>> code_line_2
> result
> ```
I've split the big code block into two, and then make all lines prefixed with `>>>`.
Now I think I can understand the format.
However, I wonder why BertTweet and BertGeneration, which I referred to before opening this PR, have code blocks without using `>>>` in them.
Are there any specific reasons? (Could it be because the output is not specifically described?)
Or, may I correct them using `>>>`?
<|||||>We should always have the `>>>` as it allows us to use `doctest` which will test the example (it's been deactivated for a while but we will bring it back to life soon). So if you want to add those to some examples where it's missing, go ahead!
The only instance where we would not want those `>>>` is if we don't want the example to be tested.<|||||>Thanks for the detailed explanation about the prefix!
Now, I would like to add `>>>` to examples without the prefix, as far as I can find them (except for the ones you don't want to be tested).
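As a standalone illustration of what the `>>>` prefix enables (a toy snippet, not taken from the docs): `doctest` executes the prefixed lines and compares their output against the line that follows them.
```python
import doctest

snippet = """
>>> 1 + 1
2
"""

test = doctest.DocTestParser().get_doctest(snippet, {}, "example", "<example>", 0)
doctest.DocTestRunner(verbose=False).run(test)
```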
transformers | 11,218 | open | [WIP] FSMT bart-like refactor | # What does this PR do?
This PR refactors `FSMT` to align it with other (bart-like) seq-2-seq models in the lib.
This PR refactors `FSMT` similarly to `Bart` in that it moves the time dimension to always be in the 2nd place and the batch dimension to always be in the first place. Also, the cache is refactored to consist of `tuples` instead of a `dict`.
This refactor is very similar to #10501.
I have verified that all slow tests are passing and that all metrics (BLEU score) can be reproduced. I ran the evaluation of the following four models and the results are similar to those reported in the [model cards](https://huggingface.co/facebook/wmt19-en-ru).
- en-ru: 33.42
- ru-en: 39.20
- en-de: 42.83
- de-en: 41.39
### Benchmarking
This PR however introduces some speed and memory regression, which I'm currently investigating.
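For reference, numbers like the ones below can be generated with the built-in benchmark utilities; the arguments here are assumed to mirror the tables and are not necessarily the exact command used:
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["facebook/wmt19-en-ru"],
    batch_sizes=[4],
    sequence_lengths=[8, 32, 128, 512],
    speed=True,
    memory=True,
)
results = PyTorchBenchmark(args).run()
```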
On this PR:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
facebook/wmt19-en-ru 4 8 0.009
facebook/wmt19-en-ru 4 32 0.01
facebook/wmt19-en-ru 4 128 0.026
facebook/wmt19-en-ru 4 512 0.109
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
facebook/wmt19-en-ru 4 8 2172
facebook/wmt19-en-ru 4 32 2200
facebook/wmt19-en-ru 4 128 2306
facebook/wmt19-en-ru 4 512 2792
--------------------------------------------------------------------------------
```
On master:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
facebook/wmt19-en-ru 4 8 0.007
facebook/wmt19-en-ru 4 32 0.007
facebook/wmt19-en-ru 4 128 0.013
facebook/wmt19-en-ru 4 512 0.046
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
facebook/wmt19-en-ru 4 8 2170
facebook/wmt19-en-ru 4 32 2176
facebook/wmt19-en-ru 4 128 2204
facebook/wmt19-en-ru 4 512 2356
--------------------------------------------------------------------------------
```
| 04-13-2021 08:45:28 | 04-13-2021 08:45:28 | Thanks Stas! I'm not sure what exactly introduced this memory/speed regression, so I'm going to investigate it and won't merge this PR before that.<|||||>> Thank you for doing this refactoring, @patil-suraj!
>
> It's a bit hard to review since all the code is moved around, so no easy diff to follow - so while I skimmed through it - I trust your expertise and the tests on the correctness.
>
> With regards to memory/performance regression - (thank you for running this important check!) could it be that it was introduced in the initial Bart refactor? i.e. perhaps running the same check on Bart pre and post PR that did the main refactoring (when all the Barts were split up)? And if so then the issue is bigger and needs to be looked in that PR that introduced it.
I remember that I checked that the [Bart refactor](https://github.com/huggingface/transformers/pull/8900) didn't show any regression both on the forward pass and `generate()`. I might however have overlooked something. Would definitely be a good idea to verify this first with the exact same testing params (batch_size=4, ...)!<|||||>@patil-suraj, what needs to be done to complete this?
Last we talked there was a performance regression and the suggestion was to test Bart's performance pre and post its original refactoring.<|||||>@patil-suraj, FYI, recently I made a few fixes to the model to make it work with Deepspeed:
https://github.com/huggingface/transformers/pull/12477/files#diff-564f6d9b78eec17b410c924f868840770a9ad9649032bcf3754827317b9eaba3
Are we still planning to merge this PR? As we said earlier, if there is a regression it'll be on the whole Bart family, so perhaps it might be easier to just merge this? Otherwise a lot of time gets wasted getting back to it again and again and not getting anywhere.
Thank you. |