repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 2,992 | closed | Train TFXLNetForSequenceClassification model failed. | # ❓ Questions & Help
Train TFXLNetForSequenceClassification model failed.
## Details
**train_dataset,dev_dataset:**
`<RepeatDataset shapes: ({input_ids: (None, None), attention_mask: (None, None), token_type_ids: (None, None)}, (None,)), types: ({input_ids: tf.int32, attention_mask: tf.int32, token_type_ids: tf.int32}, tf.int64)>`
```
model = TFXLNetForSequenceClassification.from_pretrained(path)
model.config.num_labels=1
train_steps = 10
valid_steps = 5
model.fit(train_dataset,
          epochs=6,
          steps_per_epoch=train_steps,
          validation_data=dev_dataset,
          validation_steps=valid_steps)
ValueError Traceback (most recent call last)
<ipython-input-76-7d07613f7463> in <module>
5 steps_per_epoch=train_steps,
6 validation_data=dev_dataset,
----> 7 validation_steps=valid_steps,)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
817 max_queue_size=max_queue_size,
818 workers=workers,
--> 819 use_multiprocessing=use_multiprocessing)
820
821 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
233 max_queue_size=max_queue_size,
234 workers=workers,
--> 235 use_multiprocessing=use_multiprocessing)
236
237 total_samples = _get_total_number_of_samples(training_data_adapter)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
591 max_queue_size=max_queue_size,
592 workers=workers,
--> 593 use_multiprocessing=use_multiprocessing)
594 val_adapter = None
595 if validation_data:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
704 max_queue_size=max_queue_size,
705 workers=workers,
--> 706 use_multiprocessing=use_multiprocessing)
707
708 return adapter
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, **kwargs)
700
701 if standardize_function is not None:
--> 702 x = standardize_function(x)
703
704 # Note that the dataset instance is immutable, its fine to reusing the user
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in standardize_function(dataset)
682 return x, y
683 return x, y, sample_weights
--> 684 return dataset.map(map_fn, num_parallel_calls=dataset_ops.AUTOTUNE)
685
686 if mode == ModeKeys.PREDICT:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in map(self, map_func, num_parallel_calls)
1589 else:
1590 return ParallelMapDataset(
-> 1591 self, map_func, num_parallel_calls, preserve_cardinality=True)
1592
1593 def flat_map(self, map_func):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, input_dataset, map_func, num_parallel_calls, use_inter_op_parallelism, preserve_cardinality, use_legacy_function)
3924 self._transformation_name(),
3925 dataset=input_dataset,
-> 3926 use_legacy_function=use_legacy_function)
3927 self._num_parallel_calls = ops.convert_to_tensor(
3928 num_parallel_calls, dtype=dtypes.int32, name="num_parallel_calls")
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
3145 with tracking.resource_tracker_scope(resource_tracker):
3146 # TODO(b/141462134): Switch to using garbage collection.
-> 3147 self._function = wrapper_fn._get_concrete_function_internal()
3148
3149 if add_to_graph:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal(self, *args, **kwargs)
2393 """Bypasses error checking when getting a graph function."""
2394 graph_function = self._get_concrete_function_internal_garbage_collected(
-> 2395 *args, **kwargs)
2396 # We're returning this concrete function to someone, and they may keep a
2397 # reference to the FuncGraph without keeping a reference to the
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2387 args, kwargs = None, None
2388 with self._lock:
-> 2389 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2390 return graph_function
2391
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _maybe_define_function(self, args, kwargs)
2701
2702 self._function_cache.missed.add(call_context_key)
-> 2703 graph_function = self._create_graph_function(args, kwargs)
2704 self._function_cache.primary[cache_key] = graph_function
2705 return graph_function, args, kwargs
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2591 arg_names=arg_names,
2592 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2593 capture_by_value=self._capture_by_value),
2594 self._function_attributes,
2595 # Tell the ConcreteFunction to clean up its graph once it goes out of
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
976 converted_func)
977
--> 978 func_outputs = python_func(*func_args, **func_kwargs)
979
980 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in wrapper_fn(*args)
3138 attributes=defun_kwargs)
3139 def wrapper_fn(*args): # pylint: disable=missing-docstring
-> 3140 ret = _wrapper_helper(*args)
3141 ret = structure.to_tensor_list(self._output_structure, ret)
3142 return [ops.convert_to_tensor(t) for t in ret]
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in _wrapper_helper(*args)
3080 nested_args = (nested_args,)
3081
-> 3082 ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
3083 # If `func` returns a list of tensors, `nest.flatten()` and
3084 # `ops.convert_to_tensor()` would conspire to attempt to stack
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py in wrapper(*args, **kwargs)
235 except Exception as e: # pylint:disable=broad-except
236 if hasattr(e, 'ag_error_metadata'):
--> 237 raise e.ag_error_metadata.to_exception(e)
238 else:
239 raise
ValueError: in converted code:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py:677 map_fn
batch_size=None)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py:2410 _standardize_tensors
exception_prefix='input')
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_utils.py:510 standardize_input_data
'for each key in: ' + str(names))
ValueError: No data provided for "inputs". Need data for each key in: ['attention_mask', 'inputs', 'token_type_ids']
```
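Reading the error literally, Keras is looking for an input named `inputs` while the dataset provides `input_ids`. A minimal sketch of remapping the keys before calling `fit` (an assumption based on the message above, not a confirmed fix) could look like:

```python
import tensorflow as tf

def rename_keys(features, labels):
    # hypothetical remap: expose "input_ids" under the name Keras reports as missing ("inputs")
    features = dict(features)
    features["inputs"] = features.pop("input_ids")
    return features, labels

train_dataset = train_dataset.map(rename_keys, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dev_dataset = dev_dataset.map(rename_keys, num_parallel_calls=tf.data.experimental.AUTOTUNE)
```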
| 02-24-2020 13:28:26 | 02-24-2020 13:28:26 | |
transformers | 2,991 | closed | run_ner.py / bert-base-multilingual-cased can output empty tokens | This can happen when using bert-base-multilingual-cased with an input containing a single space.
In this case, the tokenizer will output an empty word_tokens list, leading to inconsistent behavior:
label_ids ends up with one more entry than the tokens vector. | 02-24-2020 12:48:45 | 02-24-2020 12:48:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=h1) Report
> Merging [#2991](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f5fe9e0277df67a01db80a1c640ac072a2381e?src=pr&el=desc) will **decrease** coverage by `1.03%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #2991 +/- ##
==========================================
- Coverage 77.16% 76.12% -1.04%
==========================================
Files 98 98
Lines 15997 15997
==========================================
- Hits 12344 12178 -166
- Misses 3653 3819 +166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=footer). Last update [38f5fe9...57f312d](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This looks good to me. Maybe we can make it a test? This might break some of the other examples as well; I will check. |
transformers | 2,990 | closed | run_ner.py example | # ❓ Questions & Help
## Details
It seems that examples/run_ner.py has been removed from the GitHub repo. Is there any issue with this code?
Thanks | 02-24-2020 08:47:15 | 02-24-2020 08:47:15 | Are you looking for this? https://github.com/huggingface/transformers/blob/master/examples/ner/run_ner.py<|||||>On a (somewhat) related note: the code might still be there but on huggingface's webpage the following link doesn't point to anything: https://huggingface.co/transformers/examples.html#named-entity-recognition (there is no more an example of how to call the script)<|||||>the example has been moved to https://github.com/huggingface/transformers/tree/master/examples/ner
<|||||>Thx Jonathan<|||||>I found it here: https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py
The NER has been renamed to Token Classification.
|
transformers | 2,989 | closed | Documentation | Updating documentation for tokenizers.
~Still left to do:~ will do in a future PR | 02-24-2020 02:26:36 | 02-24-2020 02:26:36 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=h1) Report
> Merging [#2989](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c913eb9c3894b4031dc059d22b42e38a5fcef989?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
```diff
@@ Coverage Diff @@
## master #2989 +/- ##
==========================================
+ Coverage 77.26% 77.27% +<.01%
==========================================
Files 98 98
Lines 16040 16047 +7
==========================================
+ Hits 12393 12400 +7
Misses 3647 3647
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbS5weQ==) | `96.22% <ø> (ø)` | :arrow_up: |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.05% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <ø> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.19% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <ø> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <ø> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZmxhdWJlcnQucHk=) | `40.42% <ø> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.5% <ø> (ø)` | :arrow_up: |
| [src/transformers/configuration\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100% <ø> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.87% <ø> (ø)` | :arrow_up: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=footer). Last update [c913eb9...b393150](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,988 | closed | Speed up GELU computation with torch.jit | Currently, the implementation of the GELU activation uses several unfused pointwise operations. In my experiments, computing this activation takes about 10% of forward time for GPT2-like networks for inputs of size similar to (32,128). This PR speeds up the execution of gelu_new during both forward (~3-5x) and backward (~2-3x) passes with the help of torch.jit, which might be helpful for both training and inference.
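For reference, the change amounts to compiling the activation with `torch.jit.script` so that the chain of pointwise ops is fused into a single kernel; a minimal sketch of the idea (not the exact diff) is:

```python
import math
import torch

@torch.jit.script
def gelu_new(x):
    # the "new" GELU used by GPT-2/BERT; scripting lets the fuser combine the pointwise ops
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
```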
Below are the benchmarking results, done on pytorch v1.4.0 and transformers v2.5.0 with RTX 2080Ti and GTX 1080Ti. The benchmarking code is available [here](https://gist.github.com/mryab/d639d7ba5e741cdd6a6c712a118e97f8).
1080Ti:
```
torch.float32 (32, 128) gelu 2.6e-04 4.1e-04 jit 1.1e-04 1.8e-04 speedup forward 2.50 backward 2.27
torch.float32 (32, 512) gelu 2.6e-04 4.1e-04 jit 6.5e-05 1.5e-04 speedup forward 4.06 backward 2.67
torch.float32 (32, 1024) gelu 2.6e-04 4.0e-04 jit 6.7e-05 1.6e-04 speedup forward 3.94 backward 2.59
torch.float32 (32, 4096) gelu 2.5e-04 3.9e-04 jit 6.6e-05 1.6e-04 speedup forward 3.75 backward 2.51
torch.float32 (256, 128) gelu 2.7e-04 4.1e-04 jit 6.7e-05 1.6e-04 speedup forward 3.96 backward 2.61
torch.float32 (256, 512) gelu 2.5e-04 4.0e-04 jit 6.5e-05 1.5e-04 speedup forward 3.88 backward 2.57
torch.float32 (256, 1024) gelu 2.5e-04 4.0e-04 jit 6.2e-05 1.5e-04 speedup forward 4.05 backward 2.62
torch.float32 (256, 4096) gelu 2.6e-04 4.2e-04 jit 1.0e-04 1.7e-04 speedup forward 2.52 backward 2.45
torch.float32 (1024, 128) gelu 2.5e-04 3.9e-04 jit 6.5e-05 1.5e-04 speedup forward 3.82 backward 2.57
torch.float32 (1024, 512) gelu 2.5e-04 3.8e-04 jit 7.2e-05 1.5e-04 speedup forward 3.43 backward 2.52
torch.float32 (1024, 1024) gelu 2.6e-04 4.2e-04 jit 1.0e-04 1.7e-04 speedup forward 2.52 backward 2.44
torch.float32 (1024, 4096) gelu 8.8e-04 1.3e-03 jit 3.2e-04 3.5e-04 speedup forward 2.71 backward 3.79
torch.float32 (8192, 128) gelu 2.6e-04 4.2e-04 jit 1.0e-04 1.7e-04 speedup forward 2.51 backward 2.43
torch.float32 (8192, 512) gelu 8.8e-04 1.3e-03 jit 3.2e-04 3.5e-04 speedup forward 2.72 backward 3.80
torch.float32 (8192, 1024) gelu 1.7e-03 2.5e-03 jit 6.4e-04 5.9e-04 speedup forward 2.69 backward 4.30
torch.float32 (8192, 4096) gelu 6.7e-03 1.0e-02 jit 2.7e-03 2.5e-03 speedup forward 2.53 backward 4.05
torch.float16 (32, 128) gelu 2.6e-04 4.0e-04 jit 9.4e-05 1.8e-04 speedup forward 2.79 backward 2.24
torch.float16 (32, 512) gelu 2.5e-04 3.9e-04 jit 6.2e-05 1.4e-04 speedup forward 4.09 backward 2.74
torch.float16 (32, 1024) gelu 2.6e-04 4.0e-04 jit 6.2e-05 1.5e-04 speedup forward 4.22 backward 2.68
torch.float16 (32, 4096) gelu 2.4e-04 3.8e-04 jit 6.3e-05 1.5e-04 speedup forward 3.84 backward 2.56
torch.float16 (256, 128) gelu 2.6e-04 4.0e-04 jit 6.1e-05 1.4e-04 speedup forward 4.34 backward 2.81
torch.float16 (256, 512) gelu 2.5e-04 3.9e-04 jit 6.4e-05 1.5e-04 speedup forward 3.98 backward 2.59
torch.float16 (256, 1024) gelu 2.4e-04 3.7e-04 jit 6.3e-05 1.4e-04 speedup forward 3.82 backward 2.65
torch.float16 (256, 4096) gelu 2.3e-04 3.2e-04 jit 7.6e-05 1.4e-04 speedup forward 3.00 backward 2.32
torch.float16 (1024, 128) gelu 2.2e-04 3.2e-04 jit 6.3e-05 1.4e-04 speedup forward 3.47 backward 2.32
torch.float16 (1024, 512) gelu 2.2e-04 3.2e-04 jit 6.3e-05 1.4e-04 speedup forward 3.47 backward 2.31
torch.float16 (1024, 1024) gelu 2.3e-04 3.2e-04 jit 7.6e-05 1.4e-04 speedup forward 3.01 backward 2.31
torch.float16 (1024, 4096) gelu 5.4e-04 8.9e-04 jit 2.2e-04 2.6e-04 speedup forward 2.44 backward 3.40
torch.float16 (8192, 128) gelu 2.5e-04 3.8e-04 jit 7.6e-05 1.5e-04 speedup forward 3.29 backward 2.61
torch.float16 (8192, 512) gelu 5.4e-04 8.9e-04 jit 2.2e-04 2.5e-04 speedup forward 2.43 backward 3.49
torch.float16 (8192, 1024) gelu 1.0e-03 1.7e-03 jit 4.8e-04 4.6e-04 speedup forward 2.18 backward 3.60
torch.float16 (8192, 4096) gelu 4.2e-03 6.5e-03 jit 2.3e-03 2.0e-03 speedup forward 1.83 backward 3.30
```
RTX 2080Ti:
```
torch.float32 (32, 128) gelu 3.0e-04 6.2e-04 jit 1.2e-04 2.2e-04 speedup forward 2.50 backward 2.80
torch.float32 (32, 512) gelu 3.2e-04 6.8e-04 jit 6.8e-05 2.1e-04 speedup forward 4.66 backward 3.20
torch.float32 (32, 1024) gelu 3.4e-04 7.2e-04 jit 6.8e-05 2.1e-04 speedup forward 4.96 backward 3.38
torch.float32 (32, 4096) gelu 3.3e-04 7.0e-04 jit 6.4e-05 1.8e-04 speedup forward 5.07 backward 3.83
torch.float32 (256, 128) gelu 3.3e-04 6.9e-04 jit 6.5e-05 1.9e-04 speedup forward 5.07 backward 3.57
torch.float32 (256, 512) gelu 3.0e-04 6.2e-04 jit 6.4e-05 1.9e-04 speedup forward 4.73 backward 3.21
torch.float32 (256, 1024) gelu 3.3e-04 6.9e-04 jit 6.6e-05 2.1e-04 speedup forward 4.95 backward 3.35
torch.float32 (256, 4096) gelu 3.3e-04 6.8e-04 jit 9.3e-05 2.2e-04 speedup forward 3.53 backward 3.09
torch.float32 (1024, 128) gelu 3.1e-04 6.2e-04 jit 6.5e-05 1.9e-04 speedup forward 4.70 backward 3.32
torch.float32 (1024, 512) gelu 3.4e-04 6.4e-04 jit 7.7e-05 1.9e-04 speedup forward 4.41 backward 3.30
torch.float32 (1024, 1024) gelu 3.1e-04 6.1e-04 jit 9.5e-05 2.2e-04 speedup forward 3.26 backward 2.73
torch.float32 (1024, 4096) gelu 6.2e-04 9.9e-04 jit 2.7e-04 3.1e-04 speedup forward 2.26 backward 3.15
torch.float32 (8192, 128) gelu 3.1e-04 4.9e-04 jit 9.7e-05 1.9e-04 speedup forward 3.13 backward 2.55
torch.float32 (8192, 512) gelu 6.1e-04 1.0e-03 jit 2.7e-04 3.4e-04 speedup forward 2.27 backward 2.99
torch.float32 (8192, 1024) gelu 1.2e-03 1.9e-03 jit 5.3e-04 5.5e-04 speedup forward 2.21 backward 3.38
torch.float32 (8192, 4096) gelu 4.5e-03 6.7e-03 jit 2.2e-03 1.6e-03 speedup forward 2.04 backward 4.24
torch.float16 (32, 128) gelu 3.2e-04 6.3e-04 jit 1.1e-04 2.2e-04 speedup forward 2.84 backward 2.92
torch.float16 (32, 512) gelu 3.3e-04 6.9e-04 jit 6.2e-05 1.6e-04 speedup forward 5.23 backward 4.29
torch.float16 (32, 1024) gelu 3.0e-04 5.9e-04 jit 6.5e-05 1.7e-04 speedup forward 4.58 backward 3.46
torch.float16 (32, 4096) gelu 3.0e-04 6.1e-04 jit 6.4e-05 1.8e-04 speedup forward 4.63 backward 3.34
torch.float16 (256, 128) gelu 3.0e-04 5.9e-04 jit 6.4e-05 1.7e-04 speedup forward 4.61 backward 3.49
torch.float16 (256, 512) gelu 3.0e-04 5.9e-04 jit 6.3e-05 1.7e-04 speedup forward 4.68 backward 3.41
torch.float16 (256, 1024) gelu 2.9e-04 5.7e-04 jit 6.5e-05 1.6e-04 speedup forward 4.40 backward 3.54
torch.float16 (256, 4096) gelu 2.9e-04 5.5e-04 jit 7.5e-05 2.0e-04 speedup forward 3.87 backward 2.74
torch.float16 (1024, 128) gelu 3.7e-04 6.3e-04 jit 8.0e-05 2.3e-04 speedup forward 4.59 backward 2.75
torch.float16 (1024, 512) gelu 3.4e-04 6.0e-04 jit 6.6e-05 1.6e-04 speedup forward 5.13 backward 3.81
torch.float16 (1024, 1024) gelu 3.0e-04 5.9e-04 jit 7.2e-05 1.9e-04 speedup forward 4.12 backward 3.08
torch.float16 (1024, 4096) gelu 4.1e-04 6.9e-04 jit 1.6e-04 2.6e-04 speedup forward 2.49 backward 2.68
torch.float16 (8192, 128) gelu 3.6e-04 6.6e-04 jit 7.0e-05 1.8e-04 speedup forward 5.08 backward 3.73
torch.float16 (8192, 512) gelu 4.1e-04 7.0e-04 jit 1.6e-04 2.5e-04 speedup forward 2.57 backward 2.76
torch.float16 (8192, 1024) gelu 7.4e-04 1.2e-03 jit 3.2e-04 4.1e-04 speedup forward 2.30 backward 2.81
torch.float16 (8192, 4096) gelu 2.8e-03 3.9e-03 jit 1.5e-03 1.2e-03 speedup forward 1.86 backward 3.34
```
| 02-23-2020 23:55:07 | 02-23-2020 23:55:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=h1) Report
> Merging [#2988](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f5fe9e0277df67a01db80a1c640ac072a2381e?src=pr&el=desc) will **decrease** coverage by `1.05%`.
> The diff coverage is `100%`.
```diff
@@ Coverage Diff @@
## master #2988 +/- ##
==========================================
- Coverage 77.16% 76.11% -1.06%
==========================================
Files 98 98
Lines 15997 15997
==========================================
- Hits 12344 12176 -168
- Misses 3653 3821 +168
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `75% <100%> (-12.5%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=footer). Last update [38f5fe9...7e91273](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Any reason why the other activation functions (swish, _gelu_python) do not need jit? (I have no experience with JIT, so this is a genuine question. When should jit.script be used, and when shouldn't it?)<|||||>Indeed, it's possible to wrap both activations you mentioned with torch.jit; in case of `_gelu_python` it's likely to yield similar reduction in execution time. I will come back with benchmarking results and, if you think it's a good idea, will add JIT compilation to this PR.
Answering your question on use of `jit.script`: it usually makes sense to optimize functions with many elementwise ops, as they tend to get fused into a single kernel, which eliminates unnecessary memory accesses. There are other advantages, e.g. removing Python overhead and lifting GIL as a result; if you're interested, [this tutorial](https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/) and [this blogpost](http://blog.christianperone.com/2018/10/pytorch-1-0-tracing-jit-and-libtorch-c-api-to-integrate-pytorch-into-nodejs/) give a good overview of underlying optimizations.
TL;DR: `jit.script` useful when you have TorchScript-friendly functions/modules with lots of custom PyTorch code; if your code uses unsupported Python features, you either leave it be or use torch.jit.trace.
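As a toy illustration of those two entry points (purely illustrative, not code from this PR):

```python
import torch

def pointwise(x):
    # several elementwise ops in a row: a good fusion candidate
    return 0.5 * x * (1.0 + torch.tanh(x + 0.044715 * x * x * x))

scripted = torch.jit.script(pointwise)                # compiled from source, keeps Python control flow
traced = torch.jit.trace(pointwise, torch.randn(4))   # records the ops executed for one example input

x = torch.randn(1024, 1024)
assert torch.allclose(scripted(x), traced(x))
```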
Talking of `swish`, there is something I'd like to mention: its current implementation can be made more memory-efficient (see [this](https://github.com/lukemelas/EfficientNet-PyTorch/pull/88) and [this](https://medium.com/the-artificial-impostor/more-memory-efficient-swish-activation-function-e07c22c12a76)) at the cost of losing `torch.jit`/`torch.onnx` support. Not sure if swish will benefit much from JIT compilation — would memory savings be useful then?<|||||>Here's the results for both activations (done on 1080Ti, I've updated the gist with two scripts):
_gelu_python
```torch.float32 (32, 128) gelu 1.8e-04 3.6e-04 jit 1.1e-04 1.7e-04 speedup forward 1.73 backward 2.12
torch.float32 (32, 512) gelu 1.9e-04 3.7e-04 jit 7.0e-05 1.6e-04 speedup forward 2.69 backward 2.36
torch.float32 (32, 1024) gelu 1.9e-04 3.7e-04 jit 6.9e-05 1.6e-04 speedup forward 2.71 backward 2.33
torch.float32 (32, 4096) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.66 backward 2.29
torch.float32 (256, 128) gelu 1.9e-04 3.6e-04 jit 7.0e-05 1.6e-04 speedup forward 2.66 backward 2.30
torch.float32 (256, 512) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.65 backward 2.31
torch.float32 (256, 1024) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.30
torch.float32 (256, 4096) gelu 1.7e-04 3.6e-04 jit 9.8e-05 1.5e-04 speedup forward 1.74 backward 2.33
torch.float32 (1024, 128) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.30
torch.float32 (1024, 512) gelu 1.9e-04 3.6e-04 jit 7.3e-05 1.6e-04 speedup forward 2.55 backward 2.34
torch.float32 (1024, 1024) gelu 1.7e-04 3.5e-04 jit 9.9e-05 1.6e-04 speedup forward 1.74 backward 2.29
torch.float32 (1024, 4096) gelu 5.1e-04 1.1e-03 jit 3.1e-04 2.9e-04 speedup forward 1.65 backward 3.78
torch.float32 (8192, 128) gelu 1.7e-04 3.6e-04 jit 1.0e-04 1.5e-04 speedup forward 1.74 backward 2.30
torch.float32 (8192, 512) gelu 5.1e-04 1.1e-03 jit 3.1e-04 2.9e-04 speedup forward 1.65 backward 3.78
torch.float32 (8192, 1024) gelu 9.8e-04 2.1e-03 jit 6.1e-04 4.6e-04 speedup forward 1.61 backward 4.43
torch.float32 (8192, 4096) gelu 3.8e-03 8.1e-03 jit 2.6e-03 1.9e-03 speedup forward 1.46 backward 4.15
torch.float16 (32, 128) gelu 1.9e-04 3.6e-04 jit 9.6e-05 1.8e-04 speedup forward 1.94 backward 1.98
torch.float16 (32, 512) gelu 1.8e-04 3.6e-04 jit 6.8e-05 1.5e-04 speedup forward 2.73 backward 2.38
torch.float16 (32, 1024) gelu 1.9e-04 3.6e-04 jit 7.0e-05 1.6e-04 speedup forward 2.66 backward 2.28
torch.float16 (32, 4096) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.68 backward 2.33
torch.float16 (256, 128) gelu 1.9e-04 3.6e-04 jit 7.0e-05 1.6e-04 speedup forward 2.66 backward 2.29
torch.float16 (256, 512) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.30
torch.float16 (256, 1024) gelu 1.9e-04 3.7e-04 jit 7.0e-05 1.6e-04 speedup forward 2.68 backward 2.31
torch.float16 (256, 4096) gelu 1.9e-04 3.7e-04 jit 7.4e-05 1.5e-04 speedup forward 2.56 backward 2.43
torch.float16 (1024, 128) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.28
torch.float16 (1024, 512) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.69 backward 2.30
torch.float16 (1024, 1024) gelu 1.9e-04 3.7e-04 jit 7.4e-05 1.5e-04 speedup forward 2.56 backward 2.40
torch.float16 (1024, 4096) gelu 3.3e-04 8.1e-04 jit 2.1e-04 2.3e-04 speedup forward 1.62 backward 3.50
torch.float16 (8192, 128) gelu 1.9e-04 3.7e-04 jit 7.4e-05 1.6e-04 speedup forward 2.56 backward 2.34
torch.float16 (8192, 512) gelu 3.4e-04 8.1e-04 jit 2.1e-04 2.3e-04 speedup forward 1.62 backward 3.51
torch.float16 (8192, 1024) gelu 6.3e-04 1.5e-03 jit 4.5e-04 3.7e-04 speedup forward 1.39 backward 4.06
torch.float16 (8192, 4096) gelu 2.5e-03 5.9e-03 jit 2.2e-03 1.5e-03 speedup forward 1.11 backward 3.93
```
swish
```
torch.float32 (32, 128) swish 5.9e-05 1.8e-04 jit 1.0e-04 1.8e-04 speedup forward 0.59 backward 1.01
torch.float32 (32, 512) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.30
torch.float32 (32, 1024) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.31
torch.float32 (32, 4096) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.33
torch.float32 (256, 128) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.33
torch.float32 (256, 512) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.3e-04 speedup forward 1.09 backward 1.36
torch.float32 (256, 1024) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.09 backward 1.32
torch.float32 (256, 4096) swish 8.6e-05 2.2e-04 jit 7.4e-05 1.4e-04 speedup forward 1.17 backward 1.57
torch.float32 (1024, 128) swish 5.8e-05 1.8e-04 jit 5.5e-05 1.4e-04 speedup forward 1.07 backward 1.28
torch.float32 (1024, 512) swish 6.7e-05 1.9e-04 jit 5.6e-05 1.4e-04 speedup forward 1.20 backward 1.31
torch.float32 (1024, 1024) swish 8.6e-05 2.2e-04 jit 7.4e-05 1.4e-04 speedup forward 1.17 backward 1.56
torch.float32 (1024, 4096) swish 2.6e-04 5.8e-04 jit 2.0e-04 2.4e-04 speedup forward 1.33 backward 2.39
torch.float32 (8192, 128) swish 8.8e-05 2.2e-04 jit 7.4e-05 1.4e-04 speedup forward 1.18 backward 1.63
torch.float32 (8192, 512) swish 2.6e-04 5.7e-04 jit 2.0e-04 2.4e-04 speedup forward 1.34 backward 2.36
torch.float32 (8192, 1024) swish 4.9e-04 1.0e-03 jit 3.7e-04 3.9e-04 speedup forward 1.32 backward 2.69
torch.float32 (8192, 4096) swish 1.9e-03 4.1e-03 jit 1.5e-03 1.6e-03 speedup forward 1.25 backward 2.56
torch.float16 (32, 128) swish 5.8e-05 1.8e-04 jit 9.5e-05 1.7e-04 speedup forward 0.62 backward 1.06
torch.float16 (32, 512) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.3e-04 speedup forward 1.09 backward 1.35
torch.float16 (32, 1024) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.3e-04 speedup forward 1.10 backward 1.32
torch.float16 (32, 4096) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.10 backward 1.30
torch.float16 (256, 128) swish 5.8e-05 1.8e-04 jit 5.3e-05 1.3e-04 speedup forward 1.09 backward 1.33
torch.float16 (256, 512) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.10 backward 1.29
torch.float16 (256, 1024) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.09 backward 1.30
torch.float16 (256, 4096) swish 7.5e-05 1.8e-04 jit 8.1e-05 1.4e-04 speedup forward 0.93 backward 1.31
torch.float16 (1024, 128) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.10 backward 1.29
torch.float16 (1024, 512) swish 5.9e-05 1.8e-04 jit 5.9e-05 1.4e-04 speedup forward 1.00 backward 1.30
torch.float16 (1024, 1024) swish 7.3e-05 1.8e-04 jit 8.1e-05 1.4e-04 speedup forward 0.91 backward 1.30
torch.float16 (1024, 4096) swish 2.1e-04 4.3e-04 jit 2.1e-04 2.1e-04 speedup forward 0.99 backward 2.08
torch.float16 (8192, 128) swish 7.4e-05 1.8e-04 jit 8.1e-05 1.3e-04 speedup forward 0.91 backward 1.37
torch.float16 (8192, 512) swish 2.1e-04 4.2e-04 jit 2.1e-04 2.1e-04 speedup forward 0.99 backward 2.03
torch.float16 (8192, 1024) swish 3.8e-04 7.5e-04 jit 3.7e-04 3.0e-04 speedup forward 1.02 backward 2.47
torch.float16 (8192, 4096) swish 1.4e-03 2.8e-03 jit 1.4e-03 1.1e-03 speedup forward 1.06 backward 2.60
```<|||||>Same benchmarks on RTX 2080Ti:
_python_gelu
```
torch.float32 (32, 128) gelu 2.1e-04 5.9e-04 jit 1.2e-04 2.2e-04 speedup forward 1.79 backward 2.63
torch.float32 (32, 512) gelu 2.3e-04 6.0e-04 jit 6.5e-05 1.6e-04 speedup forward 3.59 backward 3.76
torch.float32 (32, 1024) gelu 2.3e-04 5.8e-04 jit 6.4e-05 1.6e-04 speedup forward 3.54 backward 3.73
torch.float32 (32, 4096) gelu 1.7e-04 3.3e-04 jit 6.2e-05 1.4e-04 speedup forward 2.65 backward 2.38
torch.float32 (256, 128) gelu 1.7e-04 3.6e-04 jit 6.6e-05 1.9e-04 speedup forward 2.59 backward 1.94
torch.float32 (256, 512) gelu 2.5e-04 6.7e-04 jit 6.7e-05 2.0e-04 speedup forward 3.71 backward 3.34
torch.float32 (256, 1024) gelu 2.3e-04 6.1e-04 jit 6.6e-05 1.9e-04 speedup forward 3.41 backward 3.25
torch.float32 (256, 4096) gelu 2.1e-04 5.3e-04 jit 9.2e-05 2.0e-04 speedup forward 2.33 backward 2.64
torch.float32 (1024, 128) gelu 2.1e-04 5.0e-04 jit 6.5e-05 1.9e-04 speedup forward 3.25 backward 2.70
torch.float32 (1024, 512) gelu 2.2e-04 5.2e-04 jit 6.7e-05 1.8e-04 speedup forward 3.21 backward 2.91
torch.float32 (1024, 1024) gelu 2.4e-04 6.1e-04 jit 9.2e-05 2.0e-04 speedup forward 2.56 backward 3.06
torch.float32 (1024, 4096) gelu 4.0e-04 9.3e-04 jit 2.7e-04 3.6e-04 speedup forward 1.44 backward 2.58
torch.float32 (8192, 128) gelu 2.3e-04 5.7e-04 jit 9.4e-05 2.2e-04 speedup forward 2.44 backward 2.63
torch.float32 (8192, 512) gelu 4.0e-04 9.3e-04 jit 2.7e-04 3.4e-04 speedup forward 1.47 backward 2.76
torch.float32 (8192, 1024) gelu 7.4e-04 1.6e-03 jit 5.5e-04 4.8e-04 speedup forward 1.36 backward 3.42
torch.float32 (8192, 4096) gelu 2.8e-03 5.8e-03 jit 2.2e-03 1.3e-03 speedup forward 1.26 backward 4.55
torch.float16 (32, 128) gelu 2.4e-04 6.7e-04 jit 1.1e-04 2.0e-04 speedup forward 2.16 backward 3.29
torch.float16 (32, 512) gelu 2.4e-04 5.0e-04 jit 7.6e-05 1.8e-04 speedup forward 3.11 backward 2.80
torch.float16 (32, 1024) gelu 2.1e-04 5.4e-04 jit 6.4e-05 1.8e-04 speedup forward 3.31 backward 3.03
torch.float16 (32, 4096) gelu 2.2e-04 5.7e-04 jit 6.5e-05 1.9e-04 speedup forward 3.40 backward 3.04
torch.float16 (256, 128) gelu 2.1e-04 5.3e-04 jit 7.1e-05 2.0e-04 speedup forward 2.93 backward 2.61
torch.float16 (256, 512) gelu 2.2e-04 4.8e-04 jit 7.9e-05 2.1e-04 speedup forward 2.83 backward 2.27
torch.float16 (256, 1024) gelu 2.2e-04 5.8e-04 jit 6.4e-05 1.8e-04 speedup forward 3.35 backward 3.28
torch.float16 (256, 4096) gelu 1.9e-04 4.5e-04 jit 6.5e-05 1.6e-04 speedup forward 2.93 backward 2.85
torch.float16 (1024, 128) gelu 1.9e-04 4.5e-04 jit 6.4e-05 1.7e-04 speedup forward 2.99 backward 2.73
torch.float16 (1024, 512) gelu 1.9e-04 4.4e-04 jit 5.9e-05 1.5e-04 speedup forward 3.18 backward 2.97
torch.float16 (1024, 1024) gelu 2.1e-04 5.2e-04 jit 6.5e-05 1.6e-04 speedup forward 3.16 backward 3.23
torch.float16 (1024, 4096) gelu 2.8e-04 6.4e-04 jit 1.5e-04 2.4e-04 speedup forward 1.83 backward 2.60
torch.float16 (8192, 128) gelu 2.1e-04 5.4e-04 jit 6.4e-05 1.8e-04 speedup forward 3.27 backward 2.96
torch.float16 (8192, 512) gelu 2.8e-04 6.7e-04 jit 1.5e-04 2.4e-04 speedup forward 1.83 backward 2.79
torch.float16 (8192, 1024) gelu 4.8e-04 1.1e-03 jit 3.0e-04 3.5e-04 speedup forward 1.57 backward 3.03
torch.float16 (8192, 4096) gelu 1.8e-03 3.4e-03 jit 1.5e-03 8.8e-04 speedup forward 1.14 backward 3.91
```
swish
```
torch.float32 (32, 128) swish 7.5e-05 2.6e-04 jit 1.1e-04 2.0e-04 speedup forward 0.71 backward 1.32
torch.float32 (32, 512) swish 7.5e-05 2.6e-04 jit 5.8e-05 1.7e-04 speedup forward 1.31 backward 1.57
torch.float32 (32, 1024) swish 7.2e-05 2.5e-04 jit 5.8e-05 1.6e-04 speedup forward 1.24 backward 1.50
torch.float32 (32, 4096) swish 7.1e-05 2.6e-04 jit 6.1e-05 1.9e-04 speedup forward 1.17 backward 1.38
torch.float32 (256, 128) swish 7.2e-05 2.5e-04 jit 5.7e-05 1.7e-04 speedup forward 1.26 backward 1.50
torch.float32 (256, 512) swish 7.4e-05 2.7e-04 jit 5.9e-05 1.8e-04 speedup forward 1.25 backward 1.55
torch.float32 (256, 1024) swish 7.3e-05 2.6e-04 jit 6.2e-05 2.0e-04 speedup forward 1.18 backward 1.35
torch.float32 (256, 4096) swish 8.5e-05 2.7e-04 jit 6.5e-05 1.6e-04 speedup forward 1.31 backward 1.75
torch.float32 (1024, 128) swish 7.4e-05 2.7e-04 jit 5.8e-05 1.8e-04 speedup forward 1.27 backward 1.47
torch.float32 (1024, 512) swish 7.5e-05 2.8e-04 jit 6.4e-05 2.2e-04 speedup forward 1.16 backward 1.29
torch.float32 (1024, 1024) swish 9.2e-05 3.3e-04 jit 7.0e-05 2.1e-04 speedup forward 1.32 backward 1.59
torch.float32 (1024, 4096) swish 1.9e-04 5.7e-04 jit 1.6e-04 2.7e-04 speedup forward 1.24 backward 2.10
torch.float32 (8192, 128) swish 9.1e-05 3.2e-04 jit 7.2e-05 2.0e-04 speedup forward 1.26 backward 1.54
torch.float32 (8192, 512) swish 1.9e-04 5.5e-04 jit 1.6e-04 2.7e-04 speedup forward 1.20 backward 2.03
torch.float32 (8192, 1024) swish 3.5e-04 8.8e-04 jit 3.2e-04 3.9e-04 speedup forward 1.09 backward 2.24
torch.float32 (8192, 4096) swish 1.3e-03 2.7e-03 jit 1.3e-03 1.0e-03 speedup forward 0.99 backward 2.62
torch.float16 (32, 128) swish 7.0e-05 2.5e-04 jit 1.0e-04 2.1e-04 speedup forward 0.69 backward 1.18
torch.float16 (32, 512) swish 6.9e-05 2.4e-04 jit 6.6e-05 1.8e-04 speedup forward 1.05 backward 1.38
torch.float16 (32, 1024) swish 7.0e-05 2.4e-04 jit 6.0e-05 1.7e-04 speedup forward 1.18 backward 1.43
torch.float16 (32, 4096) swish 6.9e-05 2.5e-04 jit 6.0e-05 1.8e-04 speedup forward 1.14 backward 1.37
torch.float16 (256, 128) swish 6.5e-05 2.4e-04 jit 5.8e-05 1.6e-04 speedup forward 1.12 backward 1.48
torch.float16 (256, 512) swish 7.1e-05 2.6e-04 jit 6.0e-05 1.8e-04 speedup forward 1.20 backward 1.41
torch.float16 (256, 1024) swish 6.8e-05 2.5e-04 jit 6.0e-05 1.8e-04 speedup forward 1.14 backward 1.37
torch.float16 (256, 4096) swish 7.1e-05 2.5e-04 jit 9.7e-05 2.1e-04 speedup forward 0.73 backward 1.20
torch.float16 (1024, 128) swish 7.0e-05 2.5e-04 jit 6.0e-05 1.8e-04 speedup forward 1.17 backward 1.42
torch.float16 (1024, 512) swish 7.2e-05 2.6e-04 jit 6.8e-05 1.7e-04 speedup forward 1.06 backward 1.49
torch.float16 (1024, 1024) swish 6.7e-05 2.4e-04 jit 9.7e-05 2.1e-04 speedup forward 0.69 backward 1.14
torch.float16 (1024, 4096) swish 1.3e-04 3.6e-04 jit 1.9e-04 1.8e-04 speedup forward 0.69 backward 1.98
torch.float16 (8192, 128) swish 7.0e-05 2.4e-04 jit 9.7e-05 1.9e-04 speedup forward 0.73 backward 1.26
torch.float16 (8192, 512) swish 1.3e-04 3.5e-04 jit 1.9e-04 2.2e-04 speedup forward 0.66 backward 1.62
torch.float16 (8192, 1024) swish 2.1e-04 5.7e-04 jit 3.5e-04 3.1e-04 speedup forward 0.62 backward 1.82
torch.float16 (8192, 4096) swish 7.6e-04 1.6e-03 jit 1.3e-03 7.2e-04 speedup forward 0.60 backward 2.17
```
Seems like it makes sense to compile _python_gelu, and for swish the benefits are negligible<|||||>We only use _gelu_python for torch < 1.4.
My only concern with this PR is that it will break in early pytorch versions or on CPU or something, can you test it under those circumstances?<|||||>I've tested the current implementation with pytorch==1.0.0 on CPU, and it indeed breaks because torch.jit did not support python floats at that time. I have two possible solutions for this, @sshleifer what will be the best one?
First: slightly modify gelu_python and gelu_new to be backwards-compatible
```
@torch.jit.script
def jit_gelu_python(x):
""" Original Implementation of the gelu activation function in Google Bert repo when initially created.
For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
This is now written in C in torch.nn.functional
Also see https://arxiv.org/abs/1606.08415
"""
gelu_const = torch.sqrt(torch.full((), 2.0, dtype=x.dtype, device=x.device))
return x * 0.5 * (1.0 + torch.erf(x / gelu_const))
@torch.jit.script
def jit_gelu(x):
""" Implementation of the gelu activation function currently in Google Bert repo (identical to OpenAI GPT).
Also see https://arxiv.org/abs/1606.08415
"""
gelu_const = torch.sqrt(torch.full((), 2.0/math.pi, dtype=x.dtype, device=x.device))
return 0.5 * x * (1 + torch.tanh(gelu_const * (x + 0.044715 * torch.pow(x, 3))))
```
Second: use torch.jit.script only with pytorch>1.4.0. We won't need to wrap `gelu`, as it already has a native implementation, and for `gelu_new` we'll add a single check.
<|||||>I've changed the PR so that gelu_new gets JIT-compiled only on pytorch>=1.4. Benchmarking resuts are the same with 3-4x faster forward and 3x faster backward for this activation (although no speedup on CPU float32). @sshleifer is it ready to be merged now?<|||||>In my opinion, yes. LGTM.
@LysandreJik @julien-c this is a backwards compatible speedup.
<|||||>Hello! Unfortunately, we'll have to [revert this PR](https://github.com/huggingface/transformers/pull/4050) as jitting an activation function prevents the model from being pickled.
This has already been an issue in several cases:
- For TPU support, the models should be serializable. https://github.com/huggingface/transformers/pull/3743
- For ONNX support (@mfuntowicz, @thomwolf)
- For Pytorch-Lightning https://github.com/huggingface/transformers/issues/4038#issuecomment-620624613
Nonetheless, thank you for your contribution and for such a detailed study of what was to be gained from it. |
transformers | 2,987 | closed | Add preprocessing step for transfo-xl tokenization to avoid tokenizing words followed by punction to <unk> | The problem is well shown in Issue #2000 .
This PR adds a preprocessing step to transfo-xl tokenization to better deal with the problem.
| 02-23-2020 23:13:13 | 02-23-2020 23:13:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=h1) Report
> Merging [#2987](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129f0604acb9e8b9cebd2897437324198fa37a0a?src=pr&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `91.66%`.
```diff
@@ Coverage Diff @@
## master #2987 +/- ##
==========================================
+ Coverage 77.17% 77.18% +0.01%
==========================================
Files 98 98
Lines 15997 16009 +12
==========================================
+ Hits 12345 12356 +11
- Misses 3652 3653 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `39.8% <91.66%> (+1.57%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=footer). Last update [129f060...e0a9fd2](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Good to be merged then, I think!<|||||>Using this new preprocessing step, what is the proper way to format a paragraph of text so that it will work as a prompt for XLNet? Just a few lines of code as an example would make it clear to me, I think.
The "padding text" in the example contains sentences with punctuation not separated by spaces. At the end of the text is "\<eod\> \</s\> \<eos\>" Shouldn't I include a \<eos\> after every sentence, by hand? If not and it can be done automatically, how do you avoid things like Mr. or Dr. that end in periods but don't end the sentence?
Thanks.
<|||||>Hi @summerstay, this preprocessing step is only necessary for Transfo-XL Net. For XLNet, you don't need to separate the text. Try out this code to see what I mean:
```
from transformers import AutoTokenizer

def show_tokenization(text, model_name):
    tok = AutoTokenizer.from_pretrained(model_name)
    print(tok.decode(tok.encode(text, add_special_tokens=False)))

show_tokenization('This is an example. See what happens with, and. ?', 'transfo-xl-wt103')
show_tokenization('This is an example. See what happens with, and. ?', 'xlnet-base-cased')
# prints:
# You might want to consider setting `add_space_before_punct_symbol=True` as an argument to the `tokenizer.encode()` to avoid tokenizing words with punctuation symbols to the `<unk>` token
# This is an <unk> See what happens <unk> <unk>?
# This is an example. See what happens with, and.?
```
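Following the hint printed above, passing the flag through `encode` keeps the words next to punctuation from collapsing into `<unk>` for Transfo-XL; for example:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('transfo-xl-wt103')
ids = tok.encode('This is an example. See what happens with, and. ?',
                 add_special_tokens=False,
                 add_space_before_punct_symbol=True)
print(tok.decode(ids))  # the words before '.', ',' etc. should no longer be mapped to <unk>
```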
|
transformers | 2,986 | closed | How to generate BERT/Roberta word/sentence embedding? | I know the stanford operation.
```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # (batch_size, input_len, embedding_size), but I need a single vector for each sentence
```
I am working on improving an RNN by incorporating BERT-like pretrained model embeddings. How do I get a sentence embedding in this case (one vector for the entire sentence)? Averaging or some transformation of the last_hidden_states? Is `add_special_token` necessary? Any suggested papers to read? | 02-23-2020 22:27:10 | 02-23-2020 22:27:10 | Hi there. A few weeks or months ago, I wrote this notebook to introduce my colleagues to doing inference on LMs. In other words: how can I get a sentence representation out of them. You can have a look [here](https://github.com/BramVanroy/bert-for-inference/blob/master/introduction-to-bert.ipynb). It should be self-explanatory.<|||||>Hey @zjplab, for sentence embeddings, I'd recommend this library https://github.com/UKPLab/sentence-transformers along with their paper. They explain how they get their sentence embeddings as well as the pros and cons to several different methods of doing it. They have embeddings for bert/roberta and many more<|||||>There's also spaCy's wrapper of transformers [spacy-transformers](https://github.com/explosion/spacy-transformers). Can compare sentences to each other, and access sentence embeddings:
[examples/Spacy_Transformers_Demo.ipynb](https://github.com/explosion/spacy-transformers/blob/master/examples/Spacy_Transformers_Demo.ipynb)
```python
# $ pip install spacy-transformers
# $ python -m spacy download en_trf_bertbaseuncased_lg
import spacy
nlp = spacy.load("en_trf_bertbaseuncased_lg")
apple1 = nlp("Apple shares rose on the news.")
apple2 = nlp("Apple sold fewer iPhones this quarter.")
apple3 = nlp("Apple pie is delicious.")
# sentence similarity
print(apple1.similarity(apple2)) #0.69861203
print(apple1.similarity(apple3)) #0.5404963
# sentence embeddings
apple1.vector # or apple1.tensor.sum(axis=0)
```
I'm fairly confident `apple1.vector` is the sentence embedding, but someone will want to double-check.
[Edit] spacy-transformers currently requires transformers==2.0.0, which is pretty far behind. It also doesn't let you embed batches (one sentence at a time). I'm gonna use UKPLab/sentence-transformers, personally.<|||||>> There's also spaCy's wrapper of transformers [spacy-transformers](https://github.com/explosion/spacy-transformers). Can compare sentences to each other, and access sentence embeddings:
>
> [examples/Spacy_Transformers_Demo.ipynb](https://github.com/explosion/spacy-transformers/blob/master/examples/Spacy_Transformers_Demo.ipynb)
>
> ```python
> # $ pip install spacy-transformers
> # $ python -m spacy download en_trf_bertbaseuncased_lg
>
> import spacy
> nlp = spacy.load("en_trf_bertbaseuncased_lg")
> apple1 = nlp("Apple shares rose on the news.")
> apple2 = nlp("Apple sold fewer iPhones this quarter.")
> apple3 = nlp("Apple pie is delicious.")
>
> # sentence similarity
> print(apple1.similarity(apple2)) #0.69861203
> print(apple1.similarity(apple3)) #0.5404963
>
> # sentence embeddings
> apple1.vector # or apple1.tensor.sum(axis=0)
> ```
>
> I'm fairly confident `apple1.vector` is the sentence embedding, but someone will want to double-check.
>
> [Edit] spacy-transformers currenty requires transformers==2.0.0, which is pretty far behind. It also doesn't let you embed batches (one sentence at a time). I'm gonna use UKPLab/sentence-transformers, personally.
Is there any way to compare a contextualized word embedding with a word embedding? Let's say I have a sentence "Apples are delicious" and I want to compare the similarity of the contextualized word "apples" against words such as "fruit" or "company". Is there any way to do so with transformers like BERT that could deliver reliable numbers? Thanks in advance.
<|||||>This one seems to do the job too: [https://github.com/ashokc/Bow-to-Bert](https://github.com/ashokc/Bow-to-Bert), accompanied with this blog post [http://xplordat.com/2019/09/23/bow-to-bert/](http://xplordat.com/2019/09/23/bow-to-bert/) |
transformers | 2,985 | closed | `AutoModel.from_pretrained` sends config kwargs to model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert (may apply to more)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import AutoModel
AutoModel.from_pretrained('bert-base-uncased', output_attention=True)
```
([example from the docs](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoModel.from_pretrained))
It crashes:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "[...]/transformers/src/transformers/modeling_auto.py", line 384, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "[...]/transformers/src/transformers/modeling_utils.py", line 463, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() got an unexpected keyword argument 'output_attention'
```
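The crash happens because the extra keyword argument ends up forwarded to the model constructor, as the last traceback frame shows. A sketch of the pattern that avoids it, building the config explicitly first (note the config attribute is spelled `output_attentions`), would be:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained('bert-base-uncased', output_attentions=True)
model = AutoModel.from_pretrained('bert-base-uncased', config=config)
```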
## Expected behavior
That the code returns a correct model, without crashing
## Environment info
- `transformers` version: 2.5.0 (master)
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no | 02-23-2020 21:34:01 | 02-23-2020 21:34:01 | Seems related to #2694<|||||>Indeed! I fixed the misleading documentation with #2998. |
transformers | 2,984 | closed | add_ctags_to_git_ignore | 02-23-2020 21:32:33 | 02-23-2020 21:32:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=h1) Report
> Merging [#2984](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129f0604acb9e8b9cebd2897437324198fa37a0a?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #2984 +/- ##
=======================================
Coverage 77.17% 77.17%
=======================================
Files 98 98
Lines 15997 15997
=======================================
Hits 12345 12345
Misses 3652 3652
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=footer). Last update [129f060...5636868](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,983 | closed | NER support for Albert in run_ner.py and NerPipeline | I have added a class, AlbertForTokenClassification, based on BertForTokenClassification and added it to lists used for checking NER capabilities in run_ner.py and NerPipeline.
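For context, the shape of such a head, mirroring BertForTokenClassification, is roughly the following (a sketch, not the exact code added in this PR; import paths may differ between versions):

```python
import torch.nn as nn
from torch.nn import CrossEntropyLoss
from transformers.modeling_albert import AlbertModel, AlbertPreTrainedModel

class AlbertForTokenClassification(AlbertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.albert = AlbertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None):
        outputs = self.albert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)  # (batch_size, seq_len, num_labels)
        outputs = (logits,) + outputs[2:]
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss,) + outputs
        return outputs  # (loss), logits, (hidden_states), (attentions)
```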
I tested NER fine-tuning on albert-base-v2, albert-large-v2 and albert-xlarge-v2 on the English CoNLL-2003 dataset, and they all get an F1 of around 0.93-0.94, so it seems to be working. The fine-tuned models are published [here](https://huggingface.co/KB).
I've also added some command-line options to better control tokenization since different tokenizers have different possible arguments and defaults. I guess that in the end, when all tokenizers behave the same, these options will be unnecessary.
I changed how NerPipeline outputs tokens, from .decode(..) to .convert_ids_to_tokens(...), since .decode(...) removes the '_' at the beginning of tokens, making it impossible (for sentencepiece tokens) to know which tokens form a word. Using .decode(...) would only make sense if the pipeline were outputting whole words rather than words split into tokens. It might make sense to change this so that NerPipeline outputs whole words, but that would assume that all the tokens in a word get classified with the same label, which is not always the case.
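A quick way to see the difference (albert-base-v2 here is just for illustration):

```python
from transformers import AlbertTokenizer

tok = AlbertTokenizer.from_pretrained("albert-base-v2")
ids = tok.encode("Hello world", add_special_tokens=False)

# convert_ids_to_tokens keeps the sentencepiece '▁' marker, so word
# boundaries can still be recovered from the individual tokens
print(tok.convert_ids_to_tokens(ids))

# decoding ids one by one strips the marker, so the grouping is lost
print([tok.decode([i]) for i in ids])
```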
I had one weird thing happening: when fine-tuning albert-large-v2 specifically for 3 or 4 epochs F1 would be reported as exactly 0. When setting num_train_epochs to 2 or 5 this did not happen. I'm going to assume that this has nothing to do with the code submitted :) | 02-23-2020 20:39:26 | 02-23-2020 20:39:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=h1) Report
> Merging [#2983](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129f0604acb9e8b9cebd2897437324198fa37a0a?src=pr&el=desc) will **decrease** coverage by `0.11%`.
> The diff coverage is `19.23%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2983 +/- ##
==========================================
- Coverage 77.17% 77.05% -0.12%
==========================================
Files 98 98
Lines 15997 16023 +26
==========================================
+ Hits 12345 12347 +2
- Misses 3652 3676 +24
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <ø> (ø)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `70.88% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.25% <19.23%> (-3.9%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.05% <0%> (-0.44%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=footer). Last update [129f060...f711575](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,982 | closed | Change masking to direct labeling for TPU support. | Torch XLA conversion is very sensitive to certain core pytorch operations. These result in TPU being slower than CPU operation. The most obvious are binary masking and calls to item().
https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md
This PR replaces some of these calls that are in the example/NER pathway.
| 02-23-2020 19:35:40 | 02-23-2020 19:35:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=h1) Report
> Merging [#2982](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1ab1fab1be7199e082129dfbe46eb52bca92799?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `62.5%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2982 +/- ##
==========================================
+ Coverage 74.09% 74.09% +<.01%
==========================================
Files 93 93
Lines 15249 15253 +4
==========================================
+ Hits 11298 11301 +3
- Misses 3951 3952 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `64.95% <0%> (-0.31%)` | :arrow_down: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.87% <100%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.92% <100%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.25% <50%> (+0.04%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=footer). Last update [d1ab1fa...757e2c3](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,981 | closed | Strange bug when Finetuning own pretrained model (with an even stranger solution) | # 🐛 Bug
## Information
Roberta
Language I am using the model on (English, Chinese ...): Latin script(migh have a mix of languages)
The problem arises when using:
run_glue on model obtained from run_language_modeling
The tasks I am working on is:
Sequence Classification(single)
Steps to reproduce the behavior:
1. Train model using run_language_modeling
2. Use trained model in run_glue script
Error:
File "run_glue.py", line 148, in train
optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "optimizer.pt")))
File "/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py", line 116, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
Note: I searched what might cause this error(freezing some layers and passing an incorrect params_group). But I have not done anything like that, so this error should not occur.
## Quick Hack/Solution:
There is a strange solution simply by deleting optimizer.pt and setting number of epochs to an arbitrarily large number. Not setting epochs to a very high number causes the script to proceed directly to evaluation and not do any training.
## Environment info
Google Colab
Tokenizers 0.5
Transformers 2.5
GPU:P4 | 02-23-2020 17:05:18 | 02-23-2020 17:05:18 | Hi! This is an interesting use-case, I think the error stems from the `run_glue` script trying to re-use the different attributes the `run_language_modeling` script had saved.
That includes:
- the optimizer state
- the scheduler state
- the current global step, which is inferred from the name
Your patch works because
1) the optimizer state shouldn't be kept across different trainings. Deleting the optimizer file makes sense.
2) The script believes you're already at a very high global step, as inferred from the name of your file. Setting a very high number of epochs means a very high number of steps to complete the training, hence some remaining steps.
We should work to fix the issue, but for now I would recommend deleting the files you don't need (`optimizer.pt` and `scheduler.pt`), and rename your folder containing your model/config/tokenizer files so that it doesn't end with a number.<|||||>Maybe we could raise a warning after pretraining is over. Ideally, this should be handled by the script itself, and such deletion etc. should not be required <|||||>Yes, I was also stuck on this issue. @LysandreJik , kudos to your hack.<|||||>Stuck in the same issue too. Thanks for your suggestion @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,980 | closed | Cannot install Transformers version >2.3.0 with pip on CentOS | # 🐛 Bug
I cannot install `pip install transformers` for a release newer than `2.3.0`. The install errors out when trying to install `tokenizers`. This is similar to [another issue](https://github.com/huggingface/transformers/issues/2831), except I have a Rust Compiler in my environment so I do not see: `"error: can not find Rust Compiler"`.
## Information
Model I am using (Bert, XLNet ...):
N/A
Language I am using the model on (English, Chinese ...):
N/A
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
pip install transformers
```
which leads to the following error:
```
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /home/johnmg/t2t/bin/python /home/johnmg/t2t/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpcvv0fpj6
cwd: /tmp/pip-install-d2wcoxbe/tokenizers
Complete output (221 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/tokenizers
copying tokenizers/__init__.py -> build/lib/tokenizers
creating build/lib/tokenizers/models
copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
creating build/lib/tokenizers/decoders
copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
creating build/lib/tokenizers/normalizers
copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
creating build/lib/tokenizers/pre_tokenizers
copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
creating build/lib/tokenizers/processors
copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
creating build/lib/tokenizers/trainers
copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
creating build/lib/tokenizers/implementations
copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
copying tokenizers/__init__.pyi -> build/lib/tokenizers
copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
running build_ext
running build_rust
Updating crates.io index
warning: unused manifest key: target.x86_64-apple-darwin.rustflags
Compiling proc-macro2 v1.0.8
Compiling unicode-xid v0.2.0
Compiling syn v1.0.15
Compiling libc v0.2.67
Compiling autocfg v1.0.0
Compiling lazy_static v1.4.0
Compiling cfg-if v0.1.10
Compiling semver-parser v0.7.0
Compiling memchr v2.3.3
Compiling serde v1.0.104
Compiling maybe-uninit v2.0.0
Compiling ryu v1.0.2
Compiling regex-syntax v0.6.14
Compiling getrandom v0.1.14
Compiling scopeguard v1.1.0
Compiling unicode-width v0.1.7
Compiling itoa v0.4.5
Compiling bitflags v1.2.1
Running `rustc --crate-name unicode_xid /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-xid-0.2.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=d0c8361b0afb9c55 -C extra-filename=-d0c8361b0afb9c55 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=97f59661e87a2bff -C extra-filename=-97f59661e87a2bff --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/proc-macro2-97f59661e87a2bff -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit"' -C metadata=be4245bf41be9154 -C extra-filename=-be4245bf41be9154 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/syn-be4245bf41be9154 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=4aa17b314a9f9392 -C extra-filename=-4aa17b314a9f9392 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/libc-4aa17b314a9f9392 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name autocfg /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/autocfg-1.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0b99959d54eb5a43 -C extra-filename=-0b99959d54eb5a43 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name lazy_static /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=7fe90463f0542b89 -C extra-filename=-7fe90463f0542b89 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name semver_parser /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-parser-0.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=ce71380f50d590b6 -C extra-filename=-ce71380f50d590b6 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name cfg_if /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/cfg-if-0.1.10/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=1b04fa8f4baea64e -C extra-filename=-1b04fa8f4baea64e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="use_std"' -C metadata=682166ccfd58c578 -C extra-filename=-682166ccfd58c578 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memchr-682166ccfd58c578 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.104/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="serde_derive"' --cfg 'feature="std"' -C metadata=e3191056f1858817 -C extra-filename=-e3191056f1858817 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/serde-e3191056f1858817 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=99bcd9a60d46382c -C extra-filename=-99bcd9a60d46382c --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/maybe-uninit-99bcd9a60d46382c -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=e704a6b7a71f3d7a -C extra-filename=-e704a6b7a71f3d7a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/ryu-e704a6b7a71f3d7a -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name regex_syntax /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-syntax-0.6.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=feb44197369905d4 -C extra-filename=-feb44197369905d4 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="std"' -C metadata=c2394b8b43d330b2 -C extra-filename=-c2394b8b43d330b2 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/getrandom-c2394b8b43d330b2 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name scopeguard /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/scopeguard-1.1.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=8a63ed9d96488c18 -C extra-filename=-8a63ed9d96488c18 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_width /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-width-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=397b6227577b65ae -C extra-filename=-397b6227577b65ae --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name itoa /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/itoa-0.4.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=b9ca519a13df71bf -C extra-filename=-b9ca519a13df71bf --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Compiling ppv-lite86 v0.2.6
Compiling rayon-core v1.7.0
Compiling unindent v0.1.5
Compiling version_check v0.9.1
Compiling strsim v0.8.0
Compiling vec_map v0.8.1
Compiling either v1.5.3
Compiling number_prefix v0.3.0
Compiling smallvec v1.2.0
Compiling ansi_term v0.11.0
Compiling unicode_categories v0.1.1
Compiling spin v0.5.2
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' -C metadata=74dee2c088f4fdf7 -C extra-filename=-74dee2c088f4fdf7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/bitflags-74dee2c088f4fdf7 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name ppv_lite86 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ppv-lite86-0.2.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="simd"' --cfg 'feature="std"' -C metadata=a66cdba604de2e91 -C extra-filename=-a66cdba604de2e91 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=01d02e6ac28bd2a9 -C extra-filename=-01d02e6ac28bd2a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/rayon-core-01d02e6ac28bd2a9 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name version_check /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/version_check-0.9.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=cb4ea0451d56bc0d -C extra-filename=-cb4ea0451d56bc0d --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name unindent /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unindent-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=54ee88ab0038e61b -C extra-filename=-54ee88ab0038e61b --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name strsim /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/strsim-0.8.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=97f24fc34d9d28cd -C extra-filename=-97f24fc34d9d28cd --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name vec_map /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/vec_map-0.8.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=135a62f8a977656a -C extra-filename=-135a62f8a977656a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name either /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/either-1.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6fcca30f288e7f70 -C extra-filename=-6fcca30f288e7f70 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name number_prefix /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/number_prefix-0.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=c617a5b231fd33f2 -C extra-filename=-c617a5b231fd33f2 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name smallvec /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/smallvec-1.2.0/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5a1b93c8a07d924a -C extra-filename=-5a1b93c8a07d924a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name ansi_term /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ansi_term-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=f9c541f6d3ce7af3 -C extra-filename=-f9c541f6d3ce7af3 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_categories /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode_categories-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=3376acf28b9791fe -C extra-filename=-3376acf28b9791fe --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name spin /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.5.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6c5053f023d06140 -C extra-filename=-6c5053f023d06140 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Compiling thread_local v1.0.1
Running `rustc --crate-name thread_local /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/thread_local-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=7cdd46b26d4f9805 -C extra-filename=-7cdd46b26d4f9805 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --cap-lints allow`
Compiling textwrap v0.11.0
Running `rustc --crate-name textwrap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/textwrap-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0f312c8508ee8e9d -C extra-filename=-0f312c8508ee8e9d --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern unicode_width=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_width-397b6227577b65ae.rmeta --cap-lints allow`
Compiling semver v0.9.0
Running `rustc --crate-name semver /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-0.9.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=21f81775743ad422 -C extra-filename=-21f81775743ad422 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern semver_parser=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsemver_parser-ce71380f50d590b6.rmeta --cap-lints allow`
Compiling unicode-normalization-alignments v0.1.12
Running `rustc --crate-name unicode_normalization_alignments /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-normalization-alignments-0.1.12/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=2f95bcef5d770e35 -C extra-filename=-2f95bcef5d770e35 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern smallvec=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsmallvec-5a1b93c8a07d924a.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/rayon-core-01d02e6ac28bd2a9/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memchr-682166ccfd58c578/build-script-build`
Running `rustc --crate-name memchr /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="use_std"' -C metadata=916f9f60d041f29f -C extra-filename=-916f9f60d041f29f --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg memchr_runtime_simd --cfg memchr_runtime_sse2 --cfg memchr_runtime_sse42 --cfg memchr_runtime_avx`
Compiling rustc_version v0.2.3
Running `rustc --crate-name rustc_version /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rustc_version-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=63af08b6d5f0b1a9 -C extra-filename=-63af08b6d5f0b1a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern semver=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsemver-21f81775743ad422.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/getrandom-c2394b8b43d330b2/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/bitflags-74dee2c088f4fdf7/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/maybe-uninit-99bcd9a60d46382c/build-script-build`
Compiling crossbeam-utils v0.7.2
Compiling crossbeam-epoch v0.8.2
Compiling num-traits v0.2.11
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=5b24f08aed575110 -C extra-filename=-5b24f08aed575110 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-utils-5b24f08aed575110 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libautocfg-0b99959d54eb5a43.rlib --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=515968de6557b6b7 -C extra-filename=-515968de6557b6b7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-epoch-515968de6557b6b7 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libautocfg-0b99959d54eb5a43.rlib --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=618c6828911959ab -C extra-filename=-618c6828911959ab --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/num-traits-618c6828911959ab -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libautocfg-0b99959d54eb5a43.rlib --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/ryu-e704a6b7a71f3d7a/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/serde-e3191056f1858817/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/libc-4aa17b314a9f9392/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/syn-be4245bf41be9154/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/proc-macro2-97f59661e87a2bff/build-script-build`
Running `rustc --crate-name bitflags /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=95683541c46a6653 -C extra-filename=-95683541c46a6653 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg bitflags_const_fn`
Running `rustc --crate-name maybe_uninit /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=169701ffcef7a104 -C extra-filename=-169701ffcef7a104 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg derive_copy --cfg repr_transparent --cfg native_uninit`
Compiling c2-chacha v0.2.3
Running `rustc --edition=2018 --crate-name c2_chacha /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/c2-chacha-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="simd"' --cfg 'feature="std"' -C metadata=db6d7fc899faf453 -C extra-filename=-db6d7fc899faf453 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern ppv_lite86=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libppv_lite86-a66cdba604de2e91.rmeta --cap-lints allow`
Running `rustc --crate-name ryu /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=81c16096c65f1d25 -C extra-filename=-81c16096c65f1d25 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg integer128 --cfg must_use_return --cfg maybe_uninit`
Running `rustc --edition=2018 --crate-name proc_macro2 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=ce62abe820ec95ab -C extra-filename=-ce62abe820ec95ab --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern unicode_xid=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_xid-d0c8361b0afb9c55.rmeta --cap-lints allow --cfg use_proc_macro --cfg wrap_proc_macro`
Running `rustc --crate-name libc /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5d9b252ed56b1945 -C extra-filename=-5d9b252ed56b1945 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg freebsd11 --cfg libc_priv_mod_use --cfg libc_union --cfg libc_const_size_of --cfg libc_align --cfg libc_core_cvoid --cfg libc_packedN`
Compiling aho-corasick v0.7.8
Running `rustc --crate-name aho_corasick /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/aho-corasick-0.7.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5f8df0f6460a66c2 -C extra-filename=-5f8df0f6460a66c2 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern memchr=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmemchr-916f9f60d041f29f.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/num-traits-618c6828911959ab/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-utils-5b24f08aed575110/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-epoch-515968de6557b6b7/build-script-build`
Compiling memoffset v0.5.3
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=8ebd67a7766256e7 -C extra-filename=-8ebd67a7766256e7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memoffset-8ebd67a7766256e7 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern rustc_version=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librustc_version-63af08b6d5f0b1a9.rlib --cap-lints allow`
Compiling quote v1.0.2
Running `rustc --edition=2018 --crate-name quote /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/quote-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=5dd3b63b3c37ba50 -C extra-filename=-5dd3b63b3c37ba50 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rmeta --cap-lints allow`
Running `rustc --crate-name num_traits /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=682c148c69950086 -C extra-filename=-682c148c69950086 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg has_i128`
Compiling num_cpus v1.12.0
Compiling termios v0.3.1
Compiling clicolors-control v1.0.1
Compiling atty v0.2.14
Running `rustc --edition=2018 --crate-name getrandom /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="std"' -C metadata=3bcce62cba29d0a1 -C extra-filename=-3bcce62cba29d0a1 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name num_cpus /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num_cpus-1.12.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5375d111fec819b4 -C extra-filename=-5375d111fec819b4 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name termios /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/termios-0.3.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d64097ba20dddbc5 -C extra-filename=-d64097ba20dddbc5 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name clicolors_control /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clicolors-control-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="terminal_autoconfig"' -C metadata=f95aedfd36305d68 -C extra-filename=-f95aedfd36305d68 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name atty /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/atty-0.2.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=61c34de20facc8fb -C extra-filename=-61c34de20facc8fb --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --edition=2018 --crate-name syn /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit"' -C metadata=fb7a652ed3ecc931 -C extra-filename=-fb7a652ed3ecc931 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rmeta --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rmeta --extern unicode_xid=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_xid-d0c8361b0afb9c55.rmeta --cap-lints allow --cfg syn_disable_nightly_tests`
Compiling clap v2.33.0
Running `rustc --crate-name clap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-2.33.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="ansi_term"' --cfg 'feature="atty"' --cfg 'feature="color"' --cfg 'feature="default"' --cfg 'feature="strsim"' --cfg 'feature="suggestions"' --cfg 'feature="vec_map"' -C metadata=4d1679758f5cc3c5 -C extra-filename=-4d1679758f5cc3c5 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern ansi_term=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libansi_term-f9c541f6d3ce7af3.rmeta --extern atty=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libatty-61c34de20facc8fb.rmeta --extern bitflags=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libbitflags-95683541c46a6653.rmeta --extern strsim=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libstrsim-97f24fc34d9d28cd.rmeta --extern textwrap=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libtextwrap-0f312c8508ee8e9d.rmeta --extern unicode_width=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_width-397b6227577b65ae.rmeta --extern vec_map=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libvec_map-135a62f8a977656a.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memoffset-8ebd67a7766256e7/build-script-build`
Compiling rand_core v0.5.1
Running `rustc --edition=2018 --crate-name rand_core /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_core-0.5.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="alloc"' --cfg 'feature="getrandom"' --cfg 'feature="std"' -C metadata=4adb25904fdd70df -C extra-filename=-4adb25904fdd70df --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern getrandom=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libgetrandom-3bcce62cba29d0a1.rmeta --cap-lints allow`
Running `rustc --crate-name memoffset /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=ee652fbed0600815 -C extra-filename=-ee652fbed0600815 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg memoffset_maybe_uninit --cfg memoffset_doctests`
Running `rustc --crate-name crossbeam_utils /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=71c95db82240db48 -C extra-filename=-71c95db82240db48 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --cap-lints allow --cfg has_min_const_fn --cfg has_atomic_u8 --cfg has_atomic_u16 --cfg has_atomic_u32 --cfg has_atomic_u64`
Compiling rand_chacha v0.2.1
Running `rustc --edition=2018 --crate-name rand_chacha /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_chacha-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="std"' -C metadata=164a44df65235912 -C extra-filename=-164a44df65235912 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern c2_chacha=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libc2_chacha-db6d7fc899faf453.rmeta --extern rand_core=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand_core-4adb25904fdd70df.rmeta --cap-lints allow`
Compiling rand v0.7.3
Running `rustc --edition=2018 --crate-name rand /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="alloc"' --cfg 'feature="default"' --cfg 'feature="getrandom"' --cfg 'feature="getrandom_package"' --cfg 'feature="libc"' --cfg 'feature="std"' -C metadata=102d035e4ca6c699 -C extra-filename=-102d035e4ca6c699 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern getrandom_package=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libgetrandom-3bcce62cba29d0a1.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --extern rand_chacha=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand_chacha-164a44df65235912.rmeta --extern rand_core=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand_core-4adb25904fdd70df.rmeta --cap-lints allow`
Compiling crossbeam-queue v0.2.1
Running `rustc --crate-name crossbeam_epoch /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=4cd0c2190306aa4a -C extra-filename=-4cd0c2190306aa4a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern maybe_uninit=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmaybe_uninit-169701ffcef7a104.rmeta --extern memoffset=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmemoffset-ee652fbed0600815.rmeta --extern scopeguard=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libscopeguard-8a63ed9d96488c18.rmeta --cap-lints allow --cfg has_min_const_fn`
Running `rustc --crate-name crossbeam_queue /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-queue-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=e5afea70501509a9 -C extra-filename=-e5afea70501509a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --cap-lints allow`
Compiling regex v1.3.4
Running `rustc --crate-name regex /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-1.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="aho-corasick"' --cfg 'feature="default"' --cfg 'feature="memchr"' --cfg 'feature="perf"' --cfg 'feature="perf-cache"' --cfg 'feature="perf-dfa"' --cfg 'feature="perf-inline"' --cfg 'feature="perf-literal"' --cfg 'feature="std"' --cfg 'feature="thread_local"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=40c5630aef8afe3e -C extra-filename=-40c5630aef8afe3e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern aho_corasick=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libaho_corasick-5f8df0f6460a66c2.rmeta --extern memchr=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmemchr-916f9f60d041f29f.rmeta --extern regex_syntax=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex_syntax-feb44197369905d4.rmeta --extern thread_local=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libthread_local-7cdd46b26d4f9805.rmeta --cap-lints allow`
Compiling crossbeam-deque v0.7.3
Running `rustc --crate-name crossbeam_deque /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-deque-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6fff16ed40375025 -C extra-filename=-6fff16ed40375025 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern crossbeam_epoch=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_epoch-4cd0c2190306aa4a.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --extern maybe_uninit=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmaybe_uninit-169701ffcef7a104.rmeta --cap-lints allow`
Running `rustc --edition=2018 --crate-name rayon_core /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5e812e4a0a947026 -C extra-filename=-5e812e4a0a947026 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_deque-6fff16ed40375025.rmeta --extern crossbeam_queue=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_queue-e5afea70501509a9.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern num_cpus=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libnum_cpus-5375d111fec819b4.rmeta --cap-lints allow`
Compiling rayon v1.3.0
Running `rustc --edition=2018 --crate-name rayon /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=35b9846c719edc99 -C extra-filename=-35b9846c719edc99 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_deque-6fff16ed40375025.rmeta --extern either=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libeither-6fcca30f288e7f70.rmeta --extern rayon_core=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librayon_core-5e812e4a0a947026.rmeta --cap-lints allow`
Compiling console v0.9.2
Running `rustc --edition=2018 --crate-name console /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/console-0.9.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="unicode-width"' -C metadata=a0e410a25b05d297 -C extra-filename=-a0e410a25b05d297 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern clicolors_control=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libclicolors_control-f95aedfd36305d68.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rmeta --extern termios=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libtermios-d64097ba20dddbc5.rmeta --extern unicode_width=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_width-397b6227577b65ae.rmeta --cap-lints allow`
Compiling indicatif v0.14.0
Running `rustc --edition=2018 --crate-name indicatif /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indicatif-0.14.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=af81cb79d58cbea4 -C extra-filename=-af81cb79d58cbea4 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern console=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libconsole-a0e410a25b05d297.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern number_prefix=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libnumber_prefix-c617a5b231fd33f2.rmeta --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rmeta --cap-lints allow`
Compiling pyo3-derive-backend v0.8.5
Running `rustc --edition=2018 --crate-name pyo3_derive_backend /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-derive-backend-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=4e1d8a522a8e0abc -C extra-filename=-4e1d8a522a8e0abc --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rmeta --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rmeta --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rmeta --cap-lints allow`
Compiling serde_derive v1.0.104
Compiling proc-macro-hack v0.5.11
Compiling ctor v0.1.12
Compiling ghost v0.1.1
Compiling inventory-impl v0.1.5
Compiling pyo3cls v0.8.5
Running `rustc --crate-name serde_derive /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_derive-1.0.104/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 --cfg 'feature="default"' -C metadata=c97a5ca23329a0e7 -C extra-filename=-c97a5ca23329a0e7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name proc_macro_hack /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro-hack-0.5.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=6c301fa525410f51 -C extra-filename=-6c301fa525410f51 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name ctor /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ctor-0.1.12/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=6d15aecbd8ecf9a9 -C extra-filename=-6d15aecbd8ecf9a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name ghost /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ghost-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=a7fa8d8cb581322e -C extra-filename=-a7fa8d8cb581322e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name inventory_impl /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-impl-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=07295296bec98a10 -C extra-filename=-07295296bec98a10 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name pyo3cls /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3cls-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=4e128ea32e108d4e -C extra-filename=-4e128ea32e108d4e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern pyo3_derive_backend=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libpyo3_derive_backend-4e1d8a522a8e0abc.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Compiling paste-impl v0.1.7
Compiling indoc-impl v0.3.4
Running `rustc --edition=2018 --crate-name paste_impl /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=0100b2f59cf859eb -C extra-filename=-0100b2f59cf859eb --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name indoc_impl /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=b6ef86cd971e397d -C extra-filename=-b6ef86cd971e397d --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --extern unindent=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunindent-54ee88ab0038e61b.rlib --cap-lints allow`
Compiling inventory v0.1.5
Running `rustc --edition=2018 --crate-name inventory /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=81f198a01c0ec25f -C extra-filename=-81f198a01c0ec25f --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern ctor=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libctor-6d15aecbd8ecf9a9.so --extern ghost=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libghost-a7fa8d8cb581322e.so --extern inventory_impl=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libinventory_impl-07295296bec98a10.so --cap-lints allow`
Compiling indoc v0.3.4
Running `rustc --edition=2018 --crate-name indoc /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=3e8e6670af4a3851 -C extra-filename=-3e8e6670af4a3851 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern indoc_impl=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libindoc_impl-b6ef86cd971e397d.so --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --cap-lints allow`
Compiling paste v0.1.7
Running `rustc --edition=2018 --crate-name paste /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=331a39d39ec573ee -C extra-filename=-331a39d39ec573ee --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern paste_impl=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libpaste_impl-0100b2f59cf859eb.so --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --cap-lints allow`
Running `rustc --crate-name serde /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.104/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="serde_derive"' --cfg 'feature="std"' -C metadata=ef69b68005b70c62 -C extra-filename=-ef69b68005b70c62 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern serde_derive=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde_derive-c97a5ca23329a0e7.so --cap-lints allow --cfg ops_bound --cfg core_reverse --cfg de_boxed_c_str --cfg de_boxed_path --cfg de_rc_dst --cfg core_duration --cfg integer128 --cfg range_inclusive --cfg num_nonzero --cfg core_try_from --cfg num_nonzero_signed --cfg std_atomic64 --cfg std_atomic`
Compiling serde_json v1.0.48
Running `rustc --edition=2018 --crate-name serde_json /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.48/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5007cd62ba6989a5 -C extra-filename=-5007cd62ba6989a5 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern itoa=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libitoa-b9ca519a13df71bf.rmeta --extern ryu=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libryu-81c16096c65f1d25.rmeta --extern serde=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde-ef69b68005b70c62.rmeta --cap-lints allow`
Compiling tokenizers v0.7.0 (/tmp/pip-install-d2wcoxbe/tokenizers/tokenizers-lib)
Running `rustc --edition=2018 --crate-name tokenizers tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=02f2af0a4056c877 -C extra-filename=-02f2af0a4056c877 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern clap=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libclap-4d1679758f5cc3c5.rmeta --extern indicatif=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libindicatif-af81cb79d58cbea4.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern rand=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand-102d035e4ca6c699.rmeta --extern rayon=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librayon-35b9846c719edc99.rmeta --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rmeta --extern regex_syntax=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex_syntax-feb44197369905d4.rmeta --extern serde=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde-ef69b68005b70c62.rmeta --extern serde_json=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde_json-5007cd62ba6989a5.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_normalization_alignments-2f95bcef5d770e35.rmeta --extern unicode_categories=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_categories-3376acf28b9791fe.rmeta`
Compiling pyo3 v0.8.5
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-0.8.5/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="extension-module"' --cfg 'feature="python3"' -C metadata=7ff152acc5305eee -C extra-filename=-7ff152acc5305eee --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/pyo3-7ff152acc5305eee -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rlib --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rlib --extern serde=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde-ef69b68005b70c62.rlib --extern serde_json=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde_json-5007cd62ba6989a5.rlib --extern version_check=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libversion_check-cb4ea0451d56bc0d.rlib --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/pyo3-7ff152acc5305eee/build-script-build`
error: failed to run custom build command for `pyo3 v0.8.5`
Caused by:
process didn't exit successfully: `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/pyo3-7ff152acc5305eee/build-script-build` (exit code: 101)
--- stderr
thread 'main' panicked at 'Error: pyo3 requires a nightly or dev version of Rust.', /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-0.8.5/build.rs:542:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
warning: build failed, waiting for other jobs to finish...
error: build failed
cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module pyo3/python3 --release --verbose -- --crate-type cdylib
error: cargo failed with code: 101
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```
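From the panic message, the `tokenizers` build seems to want a nightly Rust toolchain rather than the stable compiler on my machine. Below is a minimal sketch of what I believe `rustup` needs before retrying the install (untested on my end, and the exact nightly that pyo3 0.8.5 expects may differ):

```
# install a nightly toolchain and make it the one cargo picks up by default
rustup toolchain install nightly
rustup default nightly

# sanity-check that the active compiler is now a nightly build
rustc --version   # should report something like "rustc 1.4x.0-nightly (...)"

# retry building the tokenizers wheel
pip install --upgrade transformers
```

If changing the global default is undesirable, `rustup override set nightly` can scope the nightly toolchain to a single directory instead.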
## Expected behavior
A successful install of `transformers`.
## Environment info
- `transformers` version: trying to upgrade to the latest release (`>2.3.0`)
- Platform:
```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
- Python version: `3.7.4`
- PyTorch version (GPU?): `1.4.0`
- Tensorflow version (GPU?): N/A.
- Using GPU in script?: N/A.
- Using distributed or parallel set-up in script?: N/A.
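Since the failure happens while compiling the Rust extension, the toolchain details are probably relevant here as well. These are the commands I would use to capture them (assuming `rustup` is installed; plain `rustc --version` works without it):

```
rustc --version     # version of the compiler the pip build backend will invoke
cargo --version
rustup show         # installed toolchains and which one is currently the default
```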
| 02-23-2020 16:17:03 | 02-23-2020 16:17:03 | Which version of Rust are you on? Looking at the trace [PyO3 ](https://pyo3.rs/v0.9.0-alpha.1/)requires at least 1.37.0-nightly 2019-07-19.<|||||>I am on version `1.41.0`<|||||>Interestingly, their website says 1.37.x is the requirement, but [on GitHub](https://github.com/PyO3/pyo3#usage) they say you need 1.42.0-nightly 2020-01-21. That's quite a harsh requirement, I think, but nothing you can do about it I suppose. (Except installing an older version of PyO3 from source.) Can you try either of those options and let us know?<|||||>I went with the former option (installing a nightly build of rust). Here is what I tried:
1. Install [rustup](https://rustup.rs/)
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
2. Install a [nightly build of rust](https://doc.rust-lang.org/book/appendix-07-nightly-rust.html#rustup-and-the-role-of-rust-nightly)
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
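# (hedged note: the curl line above just re-runs the installer with its default
#  stable toolchain; to actually compile against nightly I most likely also need
#  `rustup toolchain install nightly` followed by `rustup default nightly`
#  before retrying the pip install)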
```
3. Try to update `transformers`:
```
pip install --upgrade transformers
```
No beans. Got the following stacktrace:
```
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /home/johnmg/t2t/bin/python /home/johnmg/t2t/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmp6fk4hgm1
cwd: /tmp/pip-install-k2pjj650/tokenizers
Complete output (224 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/tokenizers
copying tokenizers/__init__.py -> build/lib/tokenizers
creating build/lib/tokenizers/models
copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
creating build/lib/tokenizers/decoders
copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
creating build/lib/tokenizers/normalizers
copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
creating build/lib/tokenizers/pre_tokenizers
copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
creating build/lib/tokenizers/processors
copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
creating build/lib/tokenizers/trainers
copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
creating build/lib/tokenizers/implementations
copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
copying tokenizers/__init__.pyi -> build/lib/tokenizers
copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
running build_ext
running build_rust
Updating crates.io index
warning: unused manifest key: target.x86_64-apple-darwin.rustflags
Compiling proc-macro2 v1.0.8
Compiling unicode-xid v0.2.0
Compiling syn v1.0.15
Compiling libc v0.2.67
Compiling lazy_static v1.4.0
Compiling autocfg v1.0.0
Compiling cfg-if v0.1.10
Compiling semver-parser v0.7.0
Compiling memchr v2.3.3
Compiling serde v1.0.104
Compiling regex-syntax v0.6.14
Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=7f8009cddc5e6def -C extra-filename=-7f8009cddc5e6def --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/proc-macro2-7f8009cddc5e6def -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_xid /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-xid-0.2.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=c4b64db85789a8a8 -C extra-filename=-c4b64db85789a8a8 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit"' -C metadata=df69c996af1dadc1 -C extra-filename=-df69c996af1dadc1 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/syn-df69c996af1dadc1 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=59dfff2fc32cb87b -C extra-filename=-59dfff2fc32cb87b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/libc-59dfff2fc32cb87b -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name lazy_static /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=daaa2cdb90fc8b44 -C extra-filename=-daaa2cdb90fc8b44 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name autocfg /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/autocfg-1.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=be76b16d1dfaa3e8 -C extra-filename=-be76b16d1dfaa3e8 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name cfg_if --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/cfg-if-0.1.10/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=796850ba8a8cedaa -C extra-filename=-796850ba8a8cedaa --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name semver_parser /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-parser-0.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=69550f148bc5bb95 -C extra-filename=-69550f148bc5bb95 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Compiling maybe-uninit v2.0.0
Compiling ryu v1.0.2
Compiling getrandom v0.1.14
Compiling unicode-width v0.1.7
Compiling itoa v0.4.5
Compiling scopeguard v1.1.0
Compiling ppv-lite86 v0.2.6
Compiling bitflags v1.2.1
Compiling rayon-core v1.7.0
Compiling version_check v0.9.1
Compiling unindent v0.1.5
Compiling smallvec v1.2.0
Compiling either v1.5.3
Compiling strsim v0.8.0
Compiling vec_map v0.8.1
Compiling ansi_term v0.11.0
Compiling number_prefix v0.3.0
Compiling unicode_categories v0.1.1
Compiling spin v0.5.2
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="use_std"' -C metadata=1d902e6b0fc561bf -C extra-filename=-1d902e6b0fc561bf --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/memchr-1d902e6b0fc561bf -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name regex_syntax /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-syntax-0.6.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=49413942df53b636 -C extra-filename=-49413942df53b636 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.104/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="serde_derive"' --cfg 'feature="std"' -C metadata=d826302b09b30fa3 -C extra-filename=-d826302b09b30fa3 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/serde-d826302b09b30fa3 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=733894adef0cf9fb -C extra-filename=-733894adef0cf9fb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/maybe-uninit-733894adef0cf9fb -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=5e3d074139bd55e5 -C extra-filename=-5e3d074139bd55e5 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/ryu-5e3d074139bd55e5 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="std"' -C metadata=df35e20e514661d3 -C extra-filename=-df35e20e514661d3 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/getrandom-df35e20e514661d3 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_width /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-width-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=23c2035cef4c6900 -C extra-filename=-23c2035cef4c6900 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name itoa /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/itoa-0.4.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0670cd2bba7e59c0 -C extra-filename=-0670cd2bba7e59c0 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name scopeguard /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/scopeguard-1.1.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=cc84917dee271887 -C extra-filename=-cc84917dee271887 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name ppv_lite86 --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ppv-lite86-0.2.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="simd"' --cfg 'feature="std"' -C metadata=013047a1e1834c1c -C extra-filename=-013047a1e1834c1c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' -C metadata=836b697d86bba37f -C extra-filename=-836b697d86bba37f --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/bitflags-836b697d86bba37f -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=f57886c0482abf7e -C extra-filename=-f57886c0482abf7e --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/rayon-core-f57886c0482abf7e -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name version_check /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/version_check-0.9.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=db8b107c34362735 -C extra-filename=-db8b107c34362735 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unindent --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unindent-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=dc1487787c7c90f6 -C extra-filename=-dc1487787c7c90f6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name smallvec --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/smallvec-1.2.0/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d5eba03a866e39a3 -C extra-filename=-d5eba03a866e39a3 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name either /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/either-1.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=99303bd779c82d42 -C extra-filename=-99303bd779c82d42 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name strsim /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/strsim-0.8.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=e2648e9eff68e95c -C extra-filename=-e2648e9eff68e95c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name vec_map /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/vec_map-0.8.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6138169e8c0f8f54 -C extra-filename=-6138169e8c0f8f54 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name ansi_term /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ansi_term-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=fa4822742d417eef -C extra-filename=-fa4822742d417eef --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name number_prefix /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/number_prefix-0.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=e3de789ab67e6629 -C extra-filename=-e3de789ab67e6629 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_categories /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode_categories-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=190c74b7ffce666b -C extra-filename=-190c74b7ffce666b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name spin /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.5.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=bbad36607e408080 -C extra-filename=-bbad36607e408080 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`
Compiling semver v0.9.0
Running `rustc --crate-name semver /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-0.9.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=3dc1d287cff8dfce -C extra-filename=-3dc1d287cff8dfce --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern semver_parser=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsemver_parser-69550f148bc5bb95.rmeta --cap-lints allow`
Compiling textwrap v0.11.0
Running `rustc --crate-name textwrap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/textwrap-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=50a2c2e63154ee37 -C extra-filename=-50a2c2e63154ee37 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern unicode_width=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_width-23c2035cef4c6900.rmeta --cap-lints allow`
Compiling thread_local v1.0.1
Running `rustc --crate-name thread_local /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/thread_local-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=eac48644045cdc04 -C extra-filename=-eac48644045cdc04 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --cap-lints allow`
Compiling unicode-normalization-alignments v0.1.12
Running `rustc --crate-name unicode_normalization_alignments /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-normalization-alignments-0.1.12/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=4bec82dc12738e91 -C extra-filename=-4bec82dc12738e91 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern smallvec=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsmallvec-d5eba03a866e39a3.rmeta --cap-lints allow`
Compiling rustc_version v0.2.3
Running `rustc --crate-name rustc_version /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rustc_version-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=f9f112a037565bee -C extra-filename=-f9f112a037565bee --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern semver=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsemver-3dc1d287cff8dfce.rmeta --cap-lints allow`
Compiling crossbeam-utils v0.7.2
Compiling crossbeam-epoch v0.8.2
Compiling num-traits v0.2.11
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=82d9fcbae2b0fcfe -C extra-filename=-82d9fcbae2b0fcfe --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-utils-82d9fcbae2b0fcfe -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libautocfg-be76b16d1dfaa3e8.rlib --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=d1169acc937644c9 -C extra-filename=-d1169acc937644c9 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-epoch-d1169acc937644c9 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libautocfg-be76b16d1dfaa3e8.rlib --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5b86f099584d3dae -C extra-filename=-5b86f099584d3dae --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/num-traits-5b86f099584d3dae -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libautocfg-be76b16d1dfaa3e8.rlib --cap-lints allow`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/rayon-core-f57886c0482abf7e/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/memchr-1d902e6b0fc561bf/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/bitflags-836b697d86bba37f/build-script-build`
Running `rustc --crate-name memchr /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="use_std"' -C metadata=2eb004acc56bfef6 -C extra-filename=-2eb004acc56bfef6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg memchr_runtime_simd --cfg memchr_runtime_sse2 --cfg memchr_runtime_sse42 --cfg memchr_runtime_avx`
Running `rustc --crate-name bitflags /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=561e1eaab5576c8d -C extra-filename=-561e1eaab5576c8d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg bitflags_const_fn`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/libc-59dfff2fc32cb87b/build-script-build`
Running `rustc --crate-name libc /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=1ecf199bab423512 -C extra-filename=-1ecf199bab423512 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg freebsd11 --cfg libc_priv_mod_use --cfg libc_union --cfg libc_const_size_of --cfg libc_align --cfg libc_core_cvoid --cfg libc_packedN`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/getrandom-df35e20e514661d3/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/ryu-5e3d074139bd55e5/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/syn-df69c996af1dadc1/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/serde-d826302b09b30fa3/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/maybe-uninit-733894adef0cf9fb/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/proc-macro2-7f8009cddc5e6def/build-script-build`
Running `rustc --crate-name maybe_uninit /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d37c4119dc6540c4 -C extra-filename=-d37c4119dc6540c4 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg derive_copy --cfg repr_transparent --cfg native_uninit`
Running `rustc --crate-name ryu /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=c0ae351db5af2a03 -C extra-filename=-c0ae351db5af2a03 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg integer128 --cfg must_use_return --cfg maybe_uninit`
Compiling memoffset v0.5.3
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=3cd08936cb404f8e -C extra-filename=-3cd08936cb404f8e --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/memoffset-3cd08936cb404f8e -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern rustc_version=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librustc_version-f9f112a037565bee.rlib --cap-lints allow`
Compiling c2-chacha v0.2.3
Running `rustc --crate-name c2_chacha --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/c2-chacha-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="simd"' --cfg 'feature="std"' -C metadata=6547bb04c22119fb -C extra-filename=-6547bb04c22119fb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern ppv_lite86=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libppv_lite86-013047a1e1834c1c.rmeta --cap-lints allow`
Running `rustc --crate-name proc_macro2 --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=e43ba7375fb44eb2 -C extra-filename=-e43ba7375fb44eb2 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern unicode_xid=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_xid-c4b64db85789a8a8.rmeta --cap-lints allow --cfg use_proc_macro --cfg wrap_proc_macro --cfg proc_macro_span`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-utils-82d9fcbae2b0fcfe/build-script-build`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/num-traits-5b86f099584d3dae/build-script-build`
Compiling aho-corasick v0.7.8
Running `rustc --crate-name aho_corasick /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/aho-corasick-0.7.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=1aa04c8e211e8977 -C extra-filename=-1aa04c8e211e8977 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern memchr=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmemchr-2eb004acc56bfef6.rmeta --cap-lints allow`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-epoch-d1169acc937644c9/build-script-build`
Running `rustc --crate-name num_traits /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5895a2370c90e52c -C extra-filename=-5895a2370c90e52c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg has_i128`
Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/memoffset-3cd08936cb404f8e/build-script-build`
Compiling quote v1.0.2
Running `rustc --crate-name quote --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/quote-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=2a3c58f3767a45fb -C extra-filename=-2a3c58f3767a45fb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rmeta --cap-lints allow`
Running `rustc --crate-name memoffset /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=2c7b74dca44a9da4 -C extra-filename=-2c7b74dca44a9da4 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg memoffset_maybe_uninit --cfg memoffset_doctests`
Running `rustc --crate-name crossbeam_utils /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=388986d928bc4f32 -C extra-filename=-388986d928bc4f32 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --cap-lints allow --cfg has_min_const_fn --cfg has_atomic_u8 --cfg has_atomic_u16 --cfg has_atomic_u32 --cfg has_atomic_u64`
Running `rustc --crate-name syn --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit"' -C metadata=f20a4e9749d3ee5d -C extra-filename=-f20a4e9749d3ee5d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rmeta --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rmeta --extern unicode_xid=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_xid-c4b64db85789a8a8.rmeta --cap-lints allow`
Compiling clicolors-control v1.0.1
Compiling num_cpus v1.12.0
Compiling termios v0.3.1
Compiling atty v0.2.14
Running `rustc --crate-name getrandom --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="std"' -C metadata=048e32c5a0c04df6 -C extra-filename=-048e32c5a0c04df6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`
Running `rustc --crate-name clicolors_control /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clicolors-control-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="terminal_autoconfig"' -C metadata=0b0b6007b4183ec9 -C extra-filename=-0b0b6007b4183ec9 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`
Running `rustc --crate-name num_cpus /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num_cpus-1.12.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d96ba1da53092c10 -C extra-filename=-d96ba1da53092c10 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`
Running `rustc --crate-name termios /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/termios-0.3.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=ec3a50c8d1bc7e85 -C extra-filename=-ec3a50c8d1bc7e85 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`
Running `rustc --crate-name atty /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/atty-0.2.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=a326ad27cb30e935 -C extra-filename=-a326ad27cb30e935 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`
Compiling crossbeam-queue v0.2.1
Running `rustc --crate-name crossbeam_epoch /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=91bb1210b79b03da -C extra-filename=-91bb1210b79b03da --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern maybe_uninit=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmaybe_uninit-d37c4119dc6540c4.rmeta --extern memoffset=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmemoffset-2c7b74dca44a9da4.rmeta --extern scopeguard=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libscopeguard-cc84917dee271887.rmeta --cap-lints allow --cfg has_min_const_fn`
Running `rustc --crate-name crossbeam_queue /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-queue-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=c776341a9b185621 -C extra-filename=-c776341a9b185621 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --cap-lints allow`
Compiling clap v2.33.0
Running `rustc --crate-name clap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-2.33.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="ansi_term"' --cfg 'feature="atty"' --cfg 'feature="color"' --cfg 'feature="default"' --cfg 'feature="strsim"' --cfg 'feature="suggestions"' --cfg 'feature="vec_map"' -C metadata=3071ac3e668ed07c -C extra-filename=-3071ac3e668ed07c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern ansi_term=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libansi_term-fa4822742d417eef.rmeta --extern atty=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libatty-a326ad27cb30e935.rmeta --extern bitflags=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libbitflags-561e1eaab5576c8d.rmeta --extern strsim=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libstrsim-e2648e9eff68e95c.rmeta --extern textwrap=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libtextwrap-50a2c2e63154ee37.rmeta --extern unicode_width=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_width-23c2035cef4c6900.rmeta --extern vec_map=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libvec_map-6138169e8c0f8f54.rmeta --cap-lints allow`
Compiling rand_core v0.5.1
Running `rustc --crate-name rand_core --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_core-0.5.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="alloc"' --cfg 'feature="getrandom"' --cfg 'feature="std"' -C metadata=7ce6542ec4257257 -C extra-filename=-7ce6542ec4257257 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern getrandom=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libgetrandom-048e32c5a0c04df6.rmeta --cap-lints allow`
Compiling regex v1.3.4
Running `rustc --crate-name regex /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-1.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="aho-corasick"' --cfg 'feature="default"' --cfg 'feature="memchr"' --cfg 'feature="perf"' --cfg 'feature="perf-cache"' --cfg 'feature="perf-dfa"' --cfg 'feature="perf-inline"' --cfg 'feature="perf-literal"' --cfg 'feature="std"' --cfg 'feature="thread_local"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=88e281e069c7ba33 -C extra-filename=-88e281e069c7ba33 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern aho_corasick=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libaho_corasick-1aa04c8e211e8977.rmeta --extern memchr=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmemchr-2eb004acc56bfef6.rmeta --extern regex_syntax=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libregex_syntax-49413942df53b636.rmeta --extern thread_local=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libthread_local-eac48644045cdc04.rmeta --cap-lints allow`
Compiling crossbeam-deque v0.7.3
Running `rustc --crate-name crossbeam_deque /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-deque-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0c2beaa054f6da4d -C extra-filename=-0c2beaa054f6da4d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern crossbeam_epoch=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_epoch-91bb1210b79b03da.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --extern maybe_uninit=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmaybe_uninit-d37c4119dc6540c4.rmeta --cap-lints allow`
Compiling rand_chacha v0.2.1
Running `rustc --crate-name rand_chacha --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_chacha-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="std"' -C metadata=9c3e099a39a894e1 -C extra-filename=-9c3e099a39a894e1 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern c2_chacha=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libc2_chacha-6547bb04c22119fb.rmeta --extern rand_core=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librand_core-7ce6542ec4257257.rmeta --cap-lints allow`
Running `rustc --crate-name rayon_core --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=43e482f55cc760b6 -C extra-filename=-43e482f55cc760b6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_deque-0c2beaa054f6da4d.rmeta --extern crossbeam_queue=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_queue-c776341a9b185621.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern num_cpus=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libnum_cpus-d96ba1da53092c10.rmeta --cap-lints allow`
Compiling rand v0.7.3
Running `rustc --crate-name rand --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="alloc"' --cfg 'feature="default"' --cfg 'feature="getrandom"' --cfg 'feature="getrandom_package"' --cfg 'feature="libc"' --cfg 'feature="std"' -C metadata=d905e01484b0667a -C extra-filename=-d905e01484b0667a --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern getrandom_package=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libgetrandom-048e32c5a0c04df6.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --extern rand_chacha=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librand_chacha-9c3e099a39a894e1.rmeta --extern rand_core=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librand_core-7ce6542ec4257257.rmeta --cap-lints allow`
Compiling rayon v1.3.0
Running `rustc --crate-name rayon --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5f29404b086ee91b -C extra-filename=-5f29404b086ee91b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_deque-0c2beaa054f6da4d.rmeta --extern either=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libeither-99303bd779c82d42.rmeta --extern rayon_core=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librayon_core-43e482f55cc760b6.rmeta --cap-lints allow`
Compiling console v0.9.2
Running `rustc --crate-name console --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/console-0.9.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="unicode-width"' -C metadata=2ce4edc7b4c8449e -C extra-filename=-2ce4edc7b4c8449e --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern clicolors_control=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libclicolors_control-0b0b6007b4183ec9.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --extern regex=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libregex-88e281e069c7ba33.rmeta --extern termios=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libtermios-ec3a50c8d1bc7e85.rmeta --extern unicode_width=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_width-23c2035cef4c6900.rmeta --cap-lints allow`
Compiling indicatif v0.14.0
Running `rustc --crate-name indicatif --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indicatif-0.14.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=e47d5e054f023436 -C extra-filename=-e47d5e054f023436 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern console=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libconsole-2ce4edc7b4c8449e.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern number_prefix=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libnumber_prefix-e3de789ab67e6629.rmeta --extern regex=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libregex-88e281e069c7ba33.rmeta --cap-lints allow`
Compiling pyo3-derive-backend v0.8.5
Running `rustc --crate-name pyo3_derive_backend --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-derive-backend-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=aba0e2d131928acb -C extra-filename=-aba0e2d131928acb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rmeta --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rmeta --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rmeta --cap-lints allow`
Compiling serde_derive v1.0.104
Compiling proc-macro-hack v0.5.11
Compiling ghost v0.1.1
Compiling ctor v0.1.13
Compiling inventory-impl v0.1.5
Compiling pyo3cls v0.8.5
Running `rustc --crate-name serde_derive /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_derive-1.0.104/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 --cfg 'feature="default"' -C metadata=034c70940b2eedef -C extra-filename=-034c70940b2eedef --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`
Running `rustc --crate-name ghost --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ghost-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=883c66593baf1317 -C extra-filename=-883c66593baf1317 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`
Running `rustc --crate-name proc_macro_hack --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro-hack-0.5.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3cf880fc746cfd7d -C extra-filename=-3cf880fc746cfd7d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`
Running `rustc --crate-name ctor --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ctor-0.1.13/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=2c98d245a0bd2934 -C extra-filename=-2c98d245a0bd2934 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`
Running `rustc --crate-name inventory_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-impl-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=753bfdf5c2c1b356 -C extra-filename=-753bfdf5c2c1b356 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`
Running `rustc --crate-name pyo3cls --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3cls-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3b5ed485ca9e08fe -C extra-filename=-3b5ed485ca9e08fe --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern pyo3_derive_backend=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libpyo3_derive_backend-aba0e2d131928acb.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`
Compiling paste-impl v0.1.7
Compiling indoc-impl v0.3.4
Running `rustc --crate-name paste_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=84e729460d896f1b -C extra-filename=-84e729460d896f1b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`
Running `rustc --crate-name indoc_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3a756aee825cada7 -C extra-filename=-3a756aee825cada7 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern unindent=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunindent-dc1487787c7c90f6.rlib --extern proc_macro --cap-lints allow`
error: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so)
--> /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs:12:5
|
12 | use proc_macro_hack::proc_macro_hack;
| ^^^^^^^^^^^^^^^
error: aborting due to previous error
error: could not compile `indoc-impl`.
Caused by:
process didn't exit successfully: `rustc --crate-name indoc_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3a756aee825cada7 -C extra-filename=-3a756aee825cada7 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern unindent=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunindent-dc1487787c7c90f6.rlib --extern proc_macro --cap-lints allow` (exit code: 1)
warning: build failed, waiting for other jobs to finish...
error: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so)
--> /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs:6:5
|
6 | use proc_macro_hack::proc_macro_hack;
| ^^^^^^^^^^^^^^^
error: aborting due to previous error
error: could not compile `paste-impl`.
Caused by:
process didn't exit successfully: `rustc --crate-name paste_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=84e729460d896f1b -C extra-filename=-84e729460d896f1b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow` (exit code: 1)
warning: build failed, waiting for other jobs to finish...
error: build failed
cargo rustc --lib --manifest-path Cargo.toml --features pyo3/python3 pyo3/extension-module --release --verbose -- --crate-type cdylib
error: cargo failed with code: 101
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```<|||||>I am not sure if this is a version issue or an incompatibility issue, cf. and related: https://stackoverflow.com/questions/55363823/redhat-centos-glibc-2-18-not-found
Because of cross-platform compatibility, it might be better to have `tokenizers` as an optional dependency and set `use_fast` to False by default instead of True.<|||||>Hi @JohnGiorgi,
Thanks for reporting this issue and sorry you're having trouble with transformers.
As @BramVanroy mentioned, the pyo3 package (which allows us to build the Python bindings to Rust) requires some features only available in the nightly release. However, 1.42 is only required for version `9.0-alpha` of pyo3, which we're not currently using.
I did try to reproduce the error you linked above on a Fedora machine I have at hand and wasn't able to. Can you provide some more information regarding this machine? On which platform is it running? x86_64? POWER9? Also, if you can include some `uname -a` output, that would be very helpful.
Any information you might be able to provide to us will help us track down this build issue.
Many thanks <|||||>Looking at the trace, it seems `tokenizers` uses v0.8.5 of PyO3, which (according to their docs) requires 1.37.0-nightly 2019-07-19. So it's a bit odd that installation didn't work for OP on 1.41 from the start. But perhaps it has to do with GLIBC_2.18?
@mfuntowicz From the CentOS fora, [it seems that 2.18 will never come to CentOS 7](https://forums.centos.org/viewtopic.php?t=71740), so I fear this is simply an incompatibility. That reinforces my earlier point: I would suggest making `tokenizers` an optional dependency and setting use_fast to False by default.
<|||||>@BramVanroy The nightly features can take many release cycles before landing in a stable version, so these features are probably still nightly-only, even in the stable `1.41`.
Anyway, we provide `manylinux` wheels, and these are built on `CentOS 5` so I think the real problem here is to find out why it didn't download the wheel in the first place, but tried to re-compile instead.<|||||>@n1t0 Ah, that makes sense. Thanks for the clarification.
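One quick way to check that in isolation (just a sketch using standard pip flags, nothing specific to tokenizers) is to ask pip to refuse source builds; this fails loudly if no compatible wheel can be found:
```
pip install --only-binary=:all: tokenizers
```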
@JohnGiorgi Can you try to install `tokenizers` with `pip debug install tokenizers -vv`? It'll show you all compatible tags.<|||||>@mfuntowicz
Output of `lscpu`
```
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping: 4
CPU MHz: 2057.373
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear spec_ctrl intel_stibp flush_l1d
```
Output of `uname -a`
```
Linux beluga1.int.ets1.calculquebec.ca 3.10.0-1062.9.1.el7.x86_64 #1 SMP Fri Dec 6 15:49:49 UTC 2019 x86_64 GNU/Linux
```
Output of `cat /etc/os-release`
```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
I am working on a Compute Canada cluster, so information about it can also be found [here](https://docs.computecanada.ca/wiki/B%C3%A9luga/en).
@BramVanroy
Here is the output of `pip debug install tokenizers -vv`
```
WARNING: This command is only meant for debugging. Do not use this with automation for parsing and getting these details, since the output and options of this command may change without notice.
pip version: pip 20.0.2 from /home/johnmg/t2t/lib/python3.7/site-packages/pip (python 3.7)
sys.version: 3.7.4 (default, Jul 18 2019, 19:34:02)
[GCC 5.4.0]
sys.executable: /home/johnmg/t2t/bin/python
sys.getdefaultencoding: utf-8
sys.getfilesystemencoding: utf-8
locale.getpreferredencoding: UTF-8
sys.platform: linux
sys.implementation:
name: cpython
'cert' config value: install, wheel, :env:
REQUESTS_CA_BUNDLE: None
CURL_CA_BUNDLE: /etc/pki/tls/certs/ca-bundle.crt
pip._vendor.certifi.where(): /home/johnmg/t2t/lib/python3.7/site-packages/pip/_vendor/certifi/cacert.pem
Compatible tags: 27
cp37-cp37m-linux_x86_64
cp37-abi3-linux_x86_64
cp37-none-linux_x86_64
cp36-abi3-linux_x86_64
cp35-abi3-linux_x86_64
cp34-abi3-linux_x86_64
cp33-abi3-linux_x86_64
cp32-abi3-linux_x86_64
py37-none-linux_x86_64
py3-none-linux_x86_64
py36-none-linux_x86_64
py35-none-linux_x86_64
py34-none-linux_x86_64
py33-none-linux_x86_64
py32-none-linux_x86_64
py31-none-linux_x86_64
py30-none-linux_x86_64
cp37-none-any
py37-none-any
py3-none-any
py36-none-any
py35-none-any
py34-none-any
py33-none-any
py32-none-any
py31-none-any
py30-none-any
```<|||||>I'm also having these same errors. It gets past the install, but then when running the tests `pip install -e ".[testing]"` I get:
`error: Can not find Rust compiler
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hey guys, @JohnGiorgi @Shane-Neeley , I think I figured out what was happening to you (to me also). At the time of your question, I think there was a high possibility that in the "setup.py" of transformers' source code, there was a line writing "tokenizers=0.5.2" ; but in fact this old version will bother the update of transformers.
But when I install the newest tokenizers by pip (pip install -U tokenizers), you will get tokenizers==0.7.0. That's why, at the time of your question, there were always conflicts and bugs around tokenizers (I also installed Rust and setuptools-rust, it was always the same error).
And now they just corrected this line in the setup.py. So I just suggest you to
1. uninstall your old version transformers (very important!)
2. pip install -U tokenizers (so that it becomes tokenizers==0.7.0)
3. install transformers from source !
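Putting those three steps together, a rough shell sketch (the tokenizers version is simply whatever pip resolves at the time, and the git URL is just one way to install from source):
```
pip uninstall -y transformers
pip install -U tokenizers        # resolved to tokenizers==0.7.0 for me
pip install git+https://github.com/huggingface/transformers.git
```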
Then voilà ! You'll get a brand new smoothy transformers.<|||||>> Hey guys, @JohnGiorgi @Shane-Neeley , I think I figured out what was happening to you (to me also). At the time of your question, I think there was a high possibility that in the "setup.py" of transformers' source code, there was a line writing "tokenizers=0.5.2" ; but in fact this old version will bother the update of transformers.
> But when I install the newest tokenizers by pip (pip install -U tokenizers), you will get tokenizers==0.7.0. That's why, at the time of your question, there were always conflicts and bugs around tokenizers (I also installed Rust and setuptools-rust, it was always the same error).
>
> And now they just corrected this line in the setup.py. So I just suggest you to
>
> 1. uninstall your old version transformers (very important!)
> 2. pip install -U tokenizers (so that it becomes tokenizers==0.7.0)
> 3. install transformers from source !
>
> Then voilà ! You'll get a brand new smoothy transformers.
I am installing transformers from source but I am getting the error.<|||||>`curl https://sh.rustup.rs -sSf | sh`
`source $HOME/.cargo/env`
`pip3 install --upgrade transformers`
these lines worked for me |
transformers | 2,979 | closed | Question about output pipeline(feature-extraction) | Hi,
I'm new to Python and transformers, so please bear with me. I have the following code to use the pipeline wrapper of transformers.
```
from transformers import (
pipeline
)
import h5py
nlp = pipeline('feature-extraction', model='bert-base-cased', config='bert-base-cased', tokenizer='bert-base-cased', device=-1)
test = nlp("PersonA: Hi . PersonB: How are you doing ? PersonA: I 'm doing alright thank you very much.")
h5f = h5py.File('test.h5', 'a')
h5f.create_dataset('name', data=test)
h5f.close()
```
The above proof of concept works fine for me; however, I have two questions.
1. If I look at the shape of the h5py dataset it is (1, 270, 768). For my purpose I require the shape (270, 768). How can I make sure the output gets saved to h5 in this format? (A tentative approach is sketched after this list.)
2. device=-1 means the code will be executed on CPU correct?
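For question 1, one thing I was considering (not sure if it is the intended way, and it assumes the leading 1 is just the batch dimension) is to drop that axis before writing, roughly:
```
import numpy as np
features = np.squeeze(np.array(test), axis=0)  # (270, 768) instead of (1, 270, 768)
h5f.create_dataset('name', data=features)
```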
Could someone please help me out with these two? ;) | 02-23-2020 14:38:18 | 02-23-2020 14:38:18 | |
transformers | 2,978 | closed | unreadable code in utils_glue | Hi
Previously the function "convert_examples_to_features" was implemented very nicely, but now it is implemented with so many nested function calls that it is very hard to read and, IMO, effectively unreadable. Could you revert this implementation to the previous, more readable version?
thanks | 02-23-2020 11:50:36 | 02-23-2020 11:50:36 | Hi, thank you for your comment. Github allows you to look at older versions of the [code](https://github.com/huggingface/transformers/blob/v1.0.0/examples/utils_glue.py). |
transformers | 2,977 | closed | Fix for case of multi-gpu | When loading the optimizer and the scheduler in a multi-GPU setup, the loading will put the optimizer and scheduler on cuda:0, which might not have enough memory to temporarily store them (until the `.to(device)` below). | 02-23-2020 08:46:02 | 02-23-2020 08:46:02 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,976 | closed | XLMRobertaTokenizer vocab size | I think the XLMRobertaTokenizer vocab_size is off. Currently double counts ```'<unk>' | '<s>' | '</s>'```
Maybe change it to
```
def vocab_size(self):
return len(self.sp_model) + self.fairseq_offset
```
| 02-23-2020 05:33:53 | 02-23-2020 05:33:53 | Running the following code caused error for me:
> import transformers
> tokenizer = transformers.AutoTokenizer.from_pretrained("xlm-roberta-base")
> tokenizer.convert_ids_to_tokens(range(tokenizer.vocab_size))
Actually, the `tokenizer.vocab_size` is `250005`, the last id `250004` is `<mask>`, but the ids from `250001` to `250003` do not exist.<|||||>> Actually, the `tokenizer.vocab_size` is `250005`, the last id `250004` is `<mask>`, but the ids from `250001` to `250003` do not exist.
Ya ok this is definitely the problem. Either way, it's an issue for the current implementation of get_vocab, which will crash at 250001:
```
def get_vocab(self):
vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
vocab.update(self.added_tokens_encoder)
return vocab
```
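A defensive variant is one possible stop-gap (just a sketch: it skips the ids that the sentencepiece model cannot map, and the broad exception catch is an assumption about how the failure surfaces):
```
def get_vocab(self):
    vocab = {}
    for i in range(self.vocab_size):
        try:
            vocab[self.convert_ids_to_tokens(i)] = i
        except Exception:  # ids 250001-250003 have no sentencepiece entry
            continue
    vocab.update(self.added_tokens_encoder)
    return vocab
```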
<|||||>I wonder if this issue will be fixed? Currently it is not...<|||||>This issue is known and will be fixed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This should have been fixed with https://github.com/huggingface/transformers/pull/3198 |
transformers | 2,975 | closed | GPT2 always has largest attention on first token? | I am trying to print attentions from GPT2 model, and find something strange.
```
import torch
from transformers import *
device = torch.device("cuda")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
config = GPT2Config.from_pretrained("gpt2", output_attentions=True)
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config, cache_dir="./cached").to(device)
input_text = "hello, this is Tom ."
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(device)
logits, _, attns = model(input_ids)
last_layer_attns = attns[-1].squeeze(0)
last_layer_attns_per_head = last_layer_attns.mean(dim=0) # (sequence_length, sequence_length)
print(last_layer_attns_per_head[-1])
```
Output:
> tensor([0.6496, 0.0758, 0.0439, 0.0328, 0.0853, 0.1125], device='cuda:0', grad_fn=<SelectBackward>)
I have also tried different sentences, but the attention distributions look the same -- the first token always has the largest attention. Is there anything wrong in my code? Or can somebody explain why the first token has the largest attention?
| 02-23-2020 05:29:32 | 02-23-2020 05:29:32 | Hi @sysu-zjw, this is indeed a very interesting observation that you made!
One thing to notice first is that GPT2 uses causal masking. So when looking at the attention weights (corresponding to your variable `last_layer_attns_per_head`, which I think should actually be called `last_layer_attns_avg_over_heads` ;-) ):
```
[1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.8310, 0.1690, 0.0000, 0.0000, 0.0000, 0.0000],
[0.7752, 0.1165, 0.1083, 0.0000, 0.0000, 0.0000],
[0.6962, 0.1208, 0.1039, 0.0790, 0.0000, 0.0000],
[0.8273, 0.0428, 0.0410, 0.0356, 0.0533, 0.0000],
[0.6496, 0.0758, 0.0439, 0.0328, 0.0853, 0.1125]
```
It is obvious from this that the first token, for example, can only attend to itself, the second token can only attend to the first two tokens, and so on.
But as you noticed the more interesting part is that also the last token seems to always focus most of its attention on the first token (for the last layer).
I played around with different inputs and also noticed that pretty much the latter half of the transformer layers focus by far most of "its attention" on the first token. After googling a bit, I found that your observations were already put in a paper (check it out [here](https://www.aclweb.org/anthology/W19-4808.pdf) - especially Section 4.2). Looking at Figure 2, you should recognize your observations ;-)
From my point of view, there is nothing wrong with your code. I think it's a pattern that has been observed but its reason is not well understood.
If you find a good explanation for GPT2's behavior in this case (might also be very similar for other transformer architectures), let me know!
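If you want to see this per layer, here is a quick sketch reusing the variables from your snippet above (so `model` and `input_ids` as you defined them, with `output_attentions=True` in the config):
```
import torch

with torch.no_grad():
    _, _, attns = model(input_ids)

for layer, attn in enumerate(attns):            # attn: (batch, heads, seq_len, seq_len)
    to_first = attn[0, :, -1, 0].mean().item()  # last query position -> first key position
    print(f"layer {layer}: attention on first token = {to_first:.3f}")
```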
<|||||>@patrickvonplaten Thanks much for your detailed comment :) I will post here if I find any good explanation. |
transformers | 2,974 | closed | New CLI using Typer | ## Summary
First pass at a New CLI using [Typer](https://typer.tiangolo.com/)
**745** total lines of code vs 917 for the old CLI, only adding one (optional) dependency (Typer) which is well documented/supported and has 99% test coverage.
Currently, this mostly keeps the same option names for each command but I think we should have a discussion on using CLI Arguments vs Options.
Didn't put a ton of work into actual refactors around defining what should be an Argument and what should be an Option yet, wanted to get some eyes on this before putting in too much time.
In short, my position is that everything required for a command to run should be an Argument, and it should be documented in the docstring, which is automatically included in the help info.
Good information here about using Arguments over Options https://typer.tiangolo.com/tutorial/arguments/#about-cli-arguments-help whenever there are required values.
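For illustration only (a hypothetical `convert` command, not one from this PR), a required value expressed as a Typer Argument and documented through the docstring looks roughly like this:
```
import typer

app = typer.Typer()

@app.command()
def convert(model_path: str):
    """Convert a checkpoint. MODEL_PATH is required and is shown in --help."""
    typer.echo(f"Converting {model_path}")

if __name__ == "__main__":
    app()
```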
## Usage
```
pip install transformers[new-cli]
```
and adds the console_script transformers so users can run commands with (e.g.):
```
transformers login
```
or:
```
transformers serve ner
```
| 02-22-2020 23:37:31 | 02-22-2020 23:37:31 | Closing as discussed in https://github.com/huggingface/transformers/issues/2959#issuecomment-619214142 |
transformers | 2,973 | closed | Testing that batch_encode_plus is the same as encode_plus | Spoiler alert: it wasn't.
closes #2960
closes #2658
closes #2654 | 02-22-2020 22:44:08 | 02-22-2020 22:44:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=h1) Report
> Merging [#2973](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c36416e53c29da8b6193f4a36d7b024c5f513495?src=pr&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `92.85%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2973 +/- ##
==========================================
+ Coverage 77.14% 77.17% +0.03%
==========================================
Files 98 98
Lines 16006 16020 +14
==========================================
+ Hits 12348 12364 +16
+ Misses 3658 3656 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.83% <100%> (+0.11%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.06% <92.3%> (+0.45%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=footer). Last update [c36416e...9e3275a](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,972 | closed | Getting the output from the forward function of the GPT-2 | Hello,
Is there any way that I can extract the output of the ```self.merge_heads(a)``` and the ```self.c_proj(a)``` from the forward function for the Hugging Face GPT-2, which are found [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L192) and [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L193)?
Thank you, | 02-22-2020 22:17:33 | 02-22-2020 22:17:33 | There isn't a way to retrieve that from the current API, but feel free to clone the repository and modify it to fit your needs.<|||||>Hello,
Thank you for your reply. Can I make a request to the Hugging Face so that the code can be modified for users to extract ```self.merge_heads(a)``` and ```self.c_proj(a)```? If yes, how can I make the request?
Thank you,<|||||>> Hello,
>
> Thank you for your reply. Can I make a request to the Hugging Face so that the code can be modified for users to extract `self.merge_heads(a)` and `self.c_proj(a)`? If yes, how can I make the request?
>
> Thank you,
You can suggest feature requests here (i.e. in this topic), but chances are very slim that it will be picked up; it is not high priority, I would think. @LysandreJik suggests that you clone the repository and implement it yourself. If you successfully do so, you can open a pull request so that your code becomes part of this repository!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
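For reference, a forward hook can capture those intermediate activations without modifying the library. This is only a sketch: the module path follows the GPT-2 implementation (`transformer.h[0].attn.c_proj`), and `inputs[0]` of `c_proj` is the `merge_heads` output that feeds it.
```
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer.encode("hello, this is Tom .", return_tensors="pt")

captured = {}

def save_activations(module, inputs, output):
    captured["merge_heads_out"] = inputs[0].detach()  # tensor fed into c_proj
    captured["c_proj_out"] = output.detach()

# hook the first block's attention projection; other blocks work the same way
handle = model.transformer.h[0].attn.c_proj.register_forward_hook(save_activations)
with torch.no_grad():
    model(input_ids)
handle.remove()

print(captured["merge_heads_out"].shape, captured["c_proj_out"].shape)
```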
|
transformers | 2,971 | closed | fix _update_memory fn call in transformer-xl | Fix bug related to #2970 | 02-22-2020 21:44:20 | 02-22-2020 21:44:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=h1) Report
> Merging [#2971](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92487a1dc03c919afa8a961ed7d8ba78fafa21bd?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2971 +/- ##
==========================================
+ Coverage 77.14% 77.15% +<.01%
==========================================
Files 98 98
Lines 16003 16003
==========================================
+ Hits 12346 12347 +1
+ Misses 3657 3656 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2971/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `75.63% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2971/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.53% <0%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=footer). Last update [92487a1...7209cf3](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,970 | closed | Bug in transfo_xl function call | # 🐛 Bug
Mis-ordered arguments in the function call.
In line 785 of modeling_transfo_xl.py you call the ``new_mems = self._update_mems(hids, mems, mlen, qlen)``. Yet, the function declaration is ``def _update_mems(self, hids, mems, qlen, mlen)``
Apparently you invert the order of ``qlen`` and ``mlen``.
Yet training performance is not affected if ``ext_len = 0``, which is the default setting.
## Information
Related to https://github.com/kimiyoung/transformer-xl/issues/96
Model I am using (Bert, XLNet ...): Transformer-XL (modeling_transfo_xl.py)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
There is no issue during training, but it might affect the performance in evaluation
## Environment info
- `transformers` version: 2.5.0
- Platform: MACOS
- Python version: 3.6.0
- PyTorch version (GPU?): 1.2.0
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
| 02-22-2020 21:16:06 | 02-22-2020 21:16:06 | |
transformers | 2,969 | closed | Bart: fix layerdrop and caching shapes for generation | 02-22-2020 21:09:05 | 02-22-2020 21:09:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=h1) Report
> Merging [#2969](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c36416e53c29da8b6193f4a36d7b024c5f513495?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2969 +/- ##
==========================================
- Coverage 77.14% 77.14% -0.01%
==========================================
Files 98 98
Lines 16006 16003 -3
==========================================
- Hits 12348 12345 -3
Misses 3658 3658
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `84.58% <100%> (-0.11%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=footer). Last update [c36416e...0198f22](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,968 | closed | Delete untested, broken Model2LSTM | If you need it back `git checkout c36416e5` | 02-22-2020 20:26:56 | 02-22-2020 20:26:56 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=h1) Report
> Merging [#2968](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c36416e53c29da8b6193f4a36d7b024c5f513495?src=pr&el=desc) will **decrease** coverage by `1.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2968 +/- ##
==========================================
- Coverage 77.14% 76.13% -1.02%
==========================================
Files 98 98
Lines 16006 15998 -8
==========================================
- Hits 12348 12180 -168
- Misses 3658 3818 +160
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.37% <ø> (-1.3%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=footer). Last update [c36416e...05560e2](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,967 | closed | missing name entity recognition link | 02-22-2020 20:02:50 | 02-22-2020 20:02:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=h1) Report
> Merging [#2967](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94ff2d6ee8280c5595b92c1128c0f18e44925e56?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2967 +/- ##
==========================================
+ Coverage 77.11% 77.12% +<.01%
==========================================
Files 98 98
Lines 15977 15977
==========================================
+ Hits 12321 12322 +1
+ Misses 3656 3655 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2967/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.53% <0%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=footer). Last update [94ff2d6...bede5f4](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,966 | closed | Warning on `add_special_tokens` | Warning on `add_special_tokens` when passed to `encode`, `encode_plus` and `batch_encode_plus` | 02-22-2020 15:06:07 | 02-22-2020 15:06:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=h1) Report
> Merging [#2966](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc6775cdf5b20ad382613d3bdbf0dd8364d23219?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2966 +/- ##
==========================================
- Coverage 77.12% 77.11% -0.01%
==========================================
Files 98 98
Lines 15977 15979 +2
==========================================
Hits 12322 12322
- Misses 3655 3657 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2966/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.48% <100%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2966/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.33%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=footer). Last update [cc6775c...7a09a54](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,965 | closed | Correct `special_tokens_mask` when `add_special_tokens=False` | Don't know of a use case where that would be useful, but this is more consistent | 02-22-2020 15:05:10 | 02-22-2020 15:05:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=h1) Report
> Merging [#2965](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc6775cdf5b20ad382613d3bdbf0dd8364d23219?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `66.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2965 +/- ##
==========================================
- Coverage 77.12% 77.11% -0.01%
==========================================
Files 98 98
Lines 15977 15979 +2
==========================================
+ Hits 12322 12323 +1
- Misses 3655 3656 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.33% <66.66%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.11% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <0%> (ø)` | :arrow_up: |
| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=footer). Last update [cc6775c...46b6238](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>The code wouldn't be slower if we always returned it, but one of the use-cases of `encode_plus` is that it provides the full array of necessary values for the model, and only those by default. The rest (useful values but that cannot be fed to the model) is optional. This is so we may do something like this:
```py
value = tokenizer.encode_plus("First sequence", "second sequence")
model(**value)
```
This isn't perfect yet as it returns some values which are not usable by some models (e.g. `token_type_ids` for DistilBERT which crash the model).
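As a rough illustration of the current workaround (the model and tokenizer names below are an assumption for the example, not taken from this thread), one can simply drop the unsupported key before calling the model:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer.encode_plus("First sequence", "second sequence", return_tensors="pt")
inputs.pop("token_type_ids", None)  # DistilBERT's forward() does not accept this key
outputs = model(**inputs)
```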
See #2702 and #2871 for more background. |
transformers | 2,964 | closed | [DOCS] fix hardcoded path in examples readme | I think that after merging this PR we need to rebuild the docs, right? | 02-22-2020 12:38:18 | 02-22-2020 12:38:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=h1) Report
> Merging [#2964](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94ff2d6ee8280c5595b92c1128c0f18e44925e56?src=pr&el=desc) will **decrease** coverage by `1.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2964 +/- ##
==========================================
- Coverage 77.11% 76.08% -1.04%
==========================================
Files 98 98
Lines 15977 15977
==========================================
- Hits 12321 12156 -165
- Misses 3656 3821 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=footer). Last update [94ff2d6...4e2066b](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! cc @LysandreJik |
transformers | 2,963 | closed | Length of special_tokens_mask doesn't align with the input_ids | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
data = tokenizer.encode_plus("Hello, world!", add_special_tokens=False, return_special_tokens_mask=True)
assert len(data['input_ids']) == len(data['special_tokens_mask'])
>> AssertionError
```
## Expected behavior
Expect there to be no assertion error. The mask should be of the same shape as the input_ids.
## Environment info
- `transformers` version: 2.4.1
- Platform: Linux
- Python version: 3.5.2
- PyTorch version (GPU?): 1.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-22-2020 01:20:33 | 02-22-2020 01:20:33 | |
transformers | 2,962 | closed | [WIP] Proposal for Migrating to Typer for CLI and Examples | This doesn't update dependencies on purpose. This is just meant to demonstrate migrating an example from using argparse to using Typer.
| 02-22-2020 00:25:43 | 02-22-2020 00:25:43 | Closing as discussed in https://github.com/huggingface/transformers/issues/2959#issuecomment-619214142 |
transformers | 2,961 | closed | Fix max_length not taken into account when using pad_to_max_length on fast tokenizers | On fast tokenizer calling encode/ encode_plus / batch_encode_plus was not taking into account max_length when setting the padding strategy. | 02-21-2020 23:58:50 | 02-21-2020 23:58:50 | Should fix #2950<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=h1) Report
> Merging [#2961](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94ff2d6ee8280c5595b92c1128c0f18e44925e56?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2961 +/- ##
==========================================
- Coverage 77.11% 77.11% -0.01%
==========================================
Files 98 98
Lines 15977 15977
==========================================
- Hits 12321 12320 -1
- Misses 3656 3657 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.45% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.23% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=footer). Last update [94ff2d6...bf62e7c](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,960 | closed | Python Tokenizer batch_encode_plus doesn't pad input if asked to do so. | ```python
input_p = tokenizer_p.batch_encode_plus(
["This is a simple input 1", "This is a simple input 2"],
max_length=15,
pad_to_max_length=True
)
```
Output is not padded | 02-21-2020 23:55:37 | 02-21-2020 23:55:37 | |
transformers | 2,959 | closed | Switching from argparse to Typer | # 🚀 Feature request
I'm pretty new to transformers and I'm finding the examples a bit hard to read. Mainly I think this is due to argparse being so verbose. Is there any interest in migrating to something like [Plac](https://micheles.github.io/plac/) or even better [Typer](https://typer.tiangolo.com/)?
I think it would make all of the examples a lot easier to grasp right away and friendlier to newer users.
## Motivation
It will make the examples easier to read and a lot shorter. Currently, in an example like https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py, just parsing CLI arguments takes 140 lines of the 799 lines of code (17.5 % of the total code).
Then the args object is passed around to all of the other functions and that becomes hard to deal with when first looking at the example and not having an understanding of exactly what values each function needs.
## Your contribution
I'm happy to contribute the change across the transformers-cli and all the examples over time. Want to better understand the project requirements around adding a new dependency before submitting any PR.
| 02-21-2020 22:52:28 | 02-21-2020 22:52:28 | Looks like [sacremoses](https://github.com/alvations/sacremoses) already uses [Click](https://click.palletsprojects.com/en/7.x/) as a dependency so using Typer would be ideal as it's only dependency is Click and it provides a much nicer interface than Click using Python 3.6+ Type Hints<|||||>Also, since Typer is using Python 3.6+ Type Hints this would require dropping Python 3.5. While that's not ideal it is the general trend for a lot projects these days as the benefits of using type hints and proper dict ordering are really high when it comes to testing and general usability.
More discussion here about Python 3.5 and whether it's worth supporting since only about a little over 1% of installs come from Python 3.5: https://github.com/huggingface/transformers/issues/2608<|||||>So I don't think the issue is argument parsing, that code is pretty much declarative and will look the same in all implementations.
We have been experimenting with using lightning to simplify the interface for these models. Do you find the code here to be more readable?
https://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py
<|||||>### Code Readability
I actually think we're talking about 2 different problems:
1. Readability of the examples overall is not excellent. (This is addressed nicely with Pytorch Lightning)
2. The management of command line args with argparse creates confusion as you're passing around untyped arguments to arbitrary functions. And declaring arguments is very verbose. Typer basically makes typed python functions into CLI's mostly automatically and cleans up the code a lot.
I've implemented a couple draft PRs for using Typer.
1. Example of migrating one example to Typer https://github.com/huggingface/transformers/pull/2962
2. Full rewrite of the transformers-cli using Typer: https://github.com/huggingface/transformers/pull/2974
Typer reduces the amount of code significantly in both cases while also making the functions easier to read and understand.
### CLI Usability
There's a larger discussion about using CLI Arguments vs Options that should be had as well.
Most of the examples are overly verbose to run using the existing CLI options.
For instance, in the run_generation.py example I migrated (PR 1 above) there are only 2 required options (model_type and model_name_or_path). I made these Typer Options to not break convention for now but they should both be arguments.
That way, instead of writing:
```bash
python examples/run_generation.py --model_type gpt2 --model_name_or_path distilgpt2
```
the user can write something like:
```bash
python examples/run_generation.py gpt2 distilgpt2
```
And the docstring for the function can document that these arguments are required. Typer automatically uses the docstring in the help.
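For readers unfamiliar with Typer, here is a minimal sketch of how such a typed command looks (not taken from the linked PRs; the names and defaults are purely illustrative):
```python
import typer

app = typer.Typer()

@app.command()
def generate(
    model_type: str = typer.Argument(..., help="One of: gpt2, ctrl, openai-gpt, xlnet, transfo-xl, xlm"),
    model_name_or_path: str = typer.Argument(..., help="Model name or path for that model_type"),
    length: int = typer.Option(20, help="Number of tokens to generate"),
):
    """Generate text based on a prompt."""
    typer.echo(f"Generating {length} tokens with {model_type} ({model_name_or_path})")

if __name__ == "__main__":
    app()
```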
So here's the automatic help docs for run_generation
```bash
python examples/run_generation.py --help
```
```console
Usage: run_generation.py [OPTIONS] MODEL_TYPE MODEL_NAME_OR_PATH
Generate text based on a prompt using one of [gpt2, ctrl, openai-gpt,
xlnet, transfo-xl, xlm] as the model_type and a a supported model name or
path for that model_type
e.g.
$ python examples/run_generation.py gpt2 distilgpt2
Options:
--prompt TEXT
--length INTEGER
--stop-token TEXT Token at which text generation is stopped
--temperature FLOAT temperature of 1.0 has no effect, lower tend
toward greedy sampling
--repetition-penalty FLOAT primarily useful for CTRL model; in that
case, use 1.2
--k INTEGER
--p FLOAT
--padding-text TEXT Padding text for Transfo-XL and XLNet.
--xlm-language TEXT Optional language when used with the XLM
model
--seed INTEGER random seed for initialization
--no-cuda Don't use CUDA and run on CPU.
--num-return-sequences INTEGER The number of samples to generate.
--help Show this message and exit.
```<|||||>The work on transformers-cli (2) seems interesting as there are complex types there. I am pretty unconvinced on (1). The code reduction is mostly aesthetic, I don't see any really complexity wins. Given that I'm apt to stick with argparse as it is standard. (The argument/options thing could also be done in argparse. )<|||||>Thanks for the feedback
Actually I think it's more standard to use a CLI parsing dependency over argparse these days. Not a huge deal and it's not my library but I've just heard the same feedback about argparse in the examples from a few colleagues around Microsoft which is why I decided to propose the change.
If you do have some time to give a quick review on (2) that would be awesome. I think the changes there offer a lot of clarity particularly with using the Enum types.<|||||>@julien-c any thoughts on this? I don't think we want another dependency, but @kabirkhan did put a lot of work into restructuring CLI https://github.com/huggingface/transformers/pull/2974<|||||>My two cents, or maybe just one cent: I have always been torn with this, the same with plac. It feels more verbose than argparse but also, it doesn't.
Here, in this case, we currently already have the `register_subcommand`s so I think Typer actually makes sense. Looking at the code, it does greatly reduce redundancy (except for the app.command() calls). However, it is a lot less known (I think) and I still feel it is less intuitive than good ol' argparse. So if this were a vote, I'd vote aye, keeping in mind that it might be a "learning curve". On top of that, all future CLI scripts should also use this library to streamline the experience and development. As a bonus, dropping 3.5 support is an excellent side-effect in my opinion (Typing, f-strings, ordered dicts), but one might want to check how many users are on 3.5.<|||||>@BramVanroy thanks for the input. Yeah the app.command() calls are annoying, fixed that with a for loop just now. I hear your point on the "learning curve" but Typer is truly very easy to learn and really well documented. It's built by tiangolo (same guy who made FastAPI). Also the current CLI already requires python 3.6<|||||>@BramVanroy less than 1% of pip installs are on Python 3.5 according to https://pypistats.org/packages/transformers – we will probably drop it in the next couple of weeks or months
@kabirkhan Thanks for the work you've put into those PRs! This is interesting, and analogous to ideas we've been discussing internally and here in other issues. More generally we are in the process of thinking about rebuilding our training example scripts in a more scalable/re-usable way. We'll link our first thoughts from here.<|||||>I'll close this as we now have a very simple (100 lines of code) built-in argument parser named [HfArgumentParser](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py).
Example scripts are now way cleaner after #3800.
Will also close associated PR #2974.
Would love to get your feedback @kabirkhan, thanks for prompting our reflection. |
transformers | 2,958 | closed | Remove double bias | Bias is currently applied twice in BERT, RoBERTa and ALBERT. | 02-21-2020 22:09:41 | 02-21-2020 22:09:41 | |
transformers | 2,957 | closed | Bias in `BertLMPredictionHead ` is added twice | # 🐛 Bug
According to this PR #2521, a link is created between the linear layer bias and the model attribute bias in `BertLMPredictionHead`. In the `__init__`:
```python
self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
self.bias = nn.Parameter(torch.zeros(config.vocab_size))
self.decoder.bias = self.bias # here is the problem
```
in the `forward`:
```
hidden_states = self.decoder(hidden_states) + self.bias
```
I am afraid this will cause the `self.decoder` (which is a `nn.Linear`) to have bias and as a result the bias is added twice in the `forward` function.
## To reproduce
Steps to reproduce the behavior:
For version 2.4.0, where the PR is merged and thus has this bug:
```python
from transformers import *
model = BertForMaskedLM.from_pretrained('bert-base-cased')
print(model.cls.predictions.bias)
# tensor([-0.1788, -0.1758, -0.1752, ..., -0.3448, -0.3574, -0.3483], requires_grad=True)
print(model(torch.tensor([[0]])))
# (tensor([[[-12.2630, -12.2798, -12.1221, ..., -10.2729, -10.8859, -11.1733]]], grad_fn=<AddBackward0>),)
```
For version 2.3.0, which is before the PR being merged and thus no bug:
```python
from transformers import *
model = BertForMaskedLM.from_pretrained('bert-base-cased')
print(model.cls.predictions.bias)
# tensor([-0.1788, -0.1758, -0.1752, ..., -0.3448, -0.3574, -0.3483], requires_grad=True)
print(model(torch.tensor([[0]])))
# (tensor([[[-12.0842, -12.1040, -11.9469, ..., -9.9281, -10.5285, -10.8251]]], grad_fn=<AddBackward0>),)
```
Comparing the above output, you can clearly see that for version 2.4.0, the bias is added twice.
## Environment info
- `transformers` version: 2.4.0
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.0.1 (with GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-21-2020 21:44:29 | 02-21-2020 21:44:29 | This is a known issue (cf. https://github.com/huggingface/transformers/pull/2928). It will be fixed in another (cleaner) PR, though. |
transformers | 2,956 | closed | adding support for commonsense qa for multiple choice question answering | 02-21-2020 21:16:21 | 02-21-2020 21:16:21 | Can somebody help me in fixing the failed test?<|||||>Hi, thanks for your addition, that's really cool! Did you check the [contributions guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests)?
You should make sure you have the correct versions of `black`, `isort` and `flake8` installed and then run `make style`/`make quality` to identify the code quality issues.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 2,955 | closed | Only use F.gelu for torch >=1.4.0 | 02-21-2020 21:01:01 | 02-21-2020 21:01:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=h1) Report
> Merging [#2955](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e98f27e4a9bb0ac3d0fe24b94d30da42cdae8a7?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `66.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2955 +/- ##
=========================================
- Coverage 76.1% 76.1% -0.01%
=========================================
Files 98 98
Lines 15946 15948 +2
=========================================
+ Hits 12136 12137 +1
- Misses 3810 3811 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `87.5% <66.66%> (-5.36%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=footer). Last update [3e98f27...49c66bd](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,954 | closed | Distillation throws CUDA out of memory even with available GPU memory | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
Hi! I am trying to run the distillation of XLM-RoBERTa to ALBERT (and also to a smaller XLM-RoBERTa) on 4 GPUs (RTX 2080 Ti) after slightly adapting the distillation script, so the training process goes through small chunks of the dataset because preprocessing the whole dataset at once is difficult. The problem is that training throws CUDA OOM, although GPU memory consumption is at most 70%.
I found the related (now closed) issue [#1179 ](https://github.com/huggingface/transformers/issues/1179) and tried installing torch from source to avoid some bugs, as was suggested there, but the OOM comes back just a little later.
I've also tried several things, but all were unsuccessful:
1) Reducing the batch size and max length don't help, these just prolong the training process, but at some point distillation crashes again;
2) Run distillation in such manner: train on one chunk -> make checkpoint -> rerun distillation from pretrained;
3) Run with torch/apex distributed learning;
4) Run with --fp16 / --fp32;
5) Run with/without amp optimization;
Is it possible that the problem is related to the dataset? (Running training on different chunks throws OOM in different moments. BTW some chunks are processed fully without any errors).
I appreciate any help, no more guesses on how to solve this problem. Thanks! | 02-21-2020 20:35:32 | 02-21-2020 20:35:32 | An error trace would be useful.<|||||>> An error trace would be useful.
This is an error trace:
F.softmax(t_logits_slct / self.temperature, dim=-1),
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 366, in forward
return F.kl_div(input, target, reduction=self.reduction)
File "/opt/conda/lib/python3.7/site-packages/apex/amp/wrap.py", line 28, in wrapper
return orig_fn(*new_args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 1987, in kl_div
reduced = torch.kl_div(input, target, reduction_enum)
RuntimeError: CUDA out of memory. Tried to allocate 818.00 MiB (GPU 0; 10.76 GiB total capacity; 8.61 GiB already allocated; 787.44 MiB free; 9.19 GiB reserved in total by PyTorch)<|||||>> 787.44 MiB free
So your GPU doesn't have enough memory available at that point. (Even if nvidia-smi says it is only using 70%.)
There are known issues with apex that it doesn't work well when you reload checkpoints and continue training in the same Python session. Does the same issue occur when you use torch DDP (not apex), no FP16, no amp?<|||||>> So your GPU doesn't have enough memory available at that point. (Even if nvidia-smi says it is only using 70%.)
Your point is right, but the strange thing is that this error can occur accidentally even if 99% of training time GPU consumption is less than 70%. (This happens even with tiny batch size).
The same error occurs with DDP, no FP16, no amp, moreover, I've also tried to run the distillation on a single GPU without distribution and the result is the same. <|||||>> There are known issues with apex that it doesn't work well when you reload checkpoints and continue training in the same Python session.
BTW, I didn't reload checkpoint in the same python session. The distillation script was relaunched with loading the last checkpoint as soon as a new checkpoint was made, so the session is new.<|||||>Hello @AzamatSultonov,
As far as I know, the memory leak mentioned in #1179 was fixed and was released a couple of updates ago in PyTorch. I didn't encounter similar problems recently.
Can I ask what is your batch size? Have you tried a batch size of 1 (and slowly increase it)? 11GB is not a lot to fit two models (and train one of them).<|||||>Hello @VictorSanh, the minimum batch size that I've tried was 3 (1 takes too much time), but OOM threw again (with available GPU memory).
The #1179 fix helped to prolong the training time for bigger batch size, but didn't solve the problem completely.
BTW, I turned off the distributed way of training and launched the distillation on a single GPU with batch size 5 (periodically emptying CUDA cache) and the training goes for almost 48 hours without crashes.
This is still slow, but at least without OOM and losses are going down.
I'll let you know as soon as the training will finish.<|||||>Are your batches of constant total size? i.e. do you need always need the exact same amount of gpu memory to do your intermediate computations?
The reason why I suggested to start with a batch size of 1 is to detect this. You can always use gradient accumulation to simulate a bigger batch size.
Something that can help is also tracking the memory in a tensorboard.<|||||>> Are your batches of constant total size? i.e. do you need always need the exact same amount of gpu memory to do your intermediate computations?
Yes, they are.
Also, in the last version of the script, I've changed the padding to the max length within the whole dataset instead of max length withing the current batch, to avoid tensor's memory reallocation by torch and reusing already allocated one.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I am facing this issue again, gpu usage is around 60% checked using `nvidia-smi`. In this tuning `batch_size` dosen't make sense, but still I had changed it but problem didn't solved.
Trying to fine tune XLM roberta for urdu classification
transformers: 4.9.1
torch: 1.9.0+cu102 |
transformers | 2,953 | closed | Migrating from `pytorch-pretrained-bert` to `pytorch-transformers` issue regarding model() output | I'm having trouble migrating my code from `pytorch_pretrained_bert` to `pytorch_transformers`. I'm attempting to run a cosine similarity exercise. I want to extract text embeddings values of the second-to-last of the 12 hidden embedding layer.
```
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel
#from pytorch_transformers import BertTokenizer, BertModel
import pandas as pd
import numpy as np
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# This is done by default in the pytorch_transformers
model.eval()
input_query = "This is my test input query text"
marked_text = "[CLS] " + input_query + " [SEP]"
tokenized_text = tokenizer.tokenize(marked_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
with torch.no_grad():
encoded_layers, _ = model(tokens_tensor, segments_tensors)
sentence_embedding = torch.mean(encoded_layers[10], 1)
```
Using the pytorch_pretrained_bert works perfectly fine with the above code. My `encoded_layers` object is a list of 12 hidden layer tensors, allowing me to pick and reduce the 11th layer by taking an average, resulting in `sentence_embedding` object I can run cosine similarities against.
However, when I migrate my code to the `pytorch_transformers` library, the resulting `encoded_layers` object is no longer the full list of 12 hidden layers, but a single torch tensor object of shape `torch.Size([1, 7, 768])`, which results in the following error when I attempt to create the `sentence_embedding` object:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-23-7f877a7d2f9c> in <module>
9 encoded_layers, _ = model(tokens_tensor, segments_tensors)
---> 10 sentence_embedding = torch.mean(test[10], 1)
11
IndexError: index 10 is out of bounds for dimension 0 with size 7
```
The migration documentation (https://huggingface.co/transformers/migration.html) states that I should take the first element of the `encoded_layers` object as a replacement but that does not provide me with access to the second to last hidden layer of embeddings.
How can I access it?
Thank you! | 02-21-2020 20:33:08 | 02-21-2020 20:33:08 | You need to tell the model that you wish to get all the hidden states
```python
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
```
Then, you'll find your expected output as the third item in the output tuple:
```python
encoded_layers = model(tokens_tensor, segments_tensors)[2]
```
IIRC those layers now also include the embeddings (so 13 items in total), so you might need to update the index to get the second last layer. Might be better to use a negative index to be sure (-2). |
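For completeness, a small sketch of the full flow under the same assumptions as above, averaging the second-to-last hidden layer:
```python
import torch
from pytorch_transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()

text = "[CLS] This is my test input query text [SEP]"
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
tokens_tensor = torch.tensor([indexed_tokens])

with torch.no_grad():
    hidden_states = model(tokens_tensor)[2]                  # embeddings + 12 layers = 13 tensors
sentence_embedding = torch.mean(hidden_states[-2], dim=1)    # second-to-last layer
```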
transformers | 2,952 | closed | RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding) | # 🐛 Bug
```
File "C:\Users\temp\Aida\aida\agents\bertbot\Bert\bert_intent_classifier_pytorch.py", line 298, in process
logits = self.model(prediction_inputs, token_type_ids=None, attention_mask=prediction_masks)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\transformers\modeling_bert.py", line 897, in forward
head_mask=head_mask)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\transformers\modeling_bert.py", line 624, in forward
embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\transformers\modeling_bert.py", line 167, in forward
words_embeddings = self.word_embeddings(input_ids)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding)
```
## Issue
Hi everyone when I run the line:
```py
outputs = model(input_ids = b_input_ids, attention_mask=b_input_mask, labels=b_labels)
```
with model defined as,
```py
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=numlabels)
```
It returns the stated error. However, this only happens when I am on my Windows computer.
When I run the exact same code with the same Python version and libraries on my Linux machine, it works perfectly fine.
I have the most up to date version of pytorch (1.4) and transformers installed.
Any help would be greatly appreciated
## Information
Using the latest version of pytorch and transformers
Model I am using (Bert, XLNet ...): BertForSequenceClassification
Language I am using the model on (English, Chinese ...): English
| 02-21-2020 16:46:51 | 02-21-2020 16:46:51 | It is weird that there is a discrepancy between Windows and Linux.
Could you try casting your variables `b_input_ids`, `b_input_mask` and `b_labels` to `torch.long`?
Are you defining some of your variables on GPU? Does it fail if everything stays on CPU?<|||||>I often prototype on Windows and push to Linux for final processing and I've never had this issue. Can you post a minimal working example that I can copy-paste to test? <|||||>Ok update I got the error to go away but to do it I had to do some janky fixes that I don't think should be necessary
- So if I cast all my variables as ex: b_labels = b_labels.type(torch.LongTensor) and I train on CPU it works (but its super slow)
- If I want to train on GPU I again cast the tensors to long but then have to cast all of my tensors to GPU (.to(device)) even though I already did it
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=numlabels)
model.cuda()
#model = nn.DataParallel(model)
# This variable contains all of the hyperparemeter information our training loop needs
# Parameters:
lr = 2e-5
max_grad_norm = 1.0
num_training_steps = 1000
num_warmup_steps = 100
warmup_proportion = float(num_warmup_steps) / float(num_training_steps) # 0.1
### In Transformers, optimizer and schedules are splitted and instantiated like this:
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps) # PyTorch scheduler
t = []
# Store our loss and accuracy for plotting
train_loss_set = []
# Number of training epochs (authors recommend between 2 and 4)
epochs = 5 #5:0.96
# trange is a tqdm wrapper around the normal python range
for _ in trange(epochs, desc="Epoch"):
# Training
# Set our model to training mode (as opposed to evaluation mode)
model.train()
# Tracking variables
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
# Train the data for one epoch
for step, batch in enumerate(train_dataloader):
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
###############Bug fix code####################
b_input_ids = b_input_ids.type(torch.LongTensor)
b_input_mask = b_input_mask.type(torch.LongTensor)
b_labels = b_labels.type(torch.LongTensor)
b_input_ids = b_input_ids.to(device)
b_input_mask = b_input_mask.to(device)
b_labels = b_labels.to(device)
############################################
# Clear out the gradients (by default they accumulate)
optimizer.zero_grad()
# Forward pass
outputs = model(input_ids = b_input_ids, attention_mask=b_input_mask, labels=b_labels)
loss, logits = outputs[:2]
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
optimizer.step()
scheduler.step()
```
Very strange
(posted the code I thought would be useful to see let me know if you need to see more)<|||||>You're doing `.to(device)` twice for your data (once in the tuple, once separately). It is hard to reproduce this because we don't have your data, so we don't know how you encode your data. What is example contents of `batch` to reproduce your issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Had similar issue:
Young Sheldon's solution on below stackoverflow thread worked well.
https://stackoverflow.com/questions/56360644/pytorch-runtimeerror-expected-tensor-for-argument-1-indices-to-have-scalar-t<|||||>Having the same issue, funny thing is the whole model worked for training, but while running inference on test data the error automatically showed up<|||||>
> Having the same issue, funny thing is the whole model worked for training, but while running inference on test data the error automatically showed up
Exactly the same issue I am facing. I am using Amazon SageMaker notebook instance
<|||||>Hi,
I'm working with transformers version 4.4.2 and getting this error when not passing in the `position_ids` kwarg to the model. Adding the following line in `transformers/models/bert/modeling_bert.py` on line 207 fixes the issue for me:
```python
position_ids = position_ids.to(torch.long)
```
Of course, you can do this work by passing in your own `position_ids`, but that's no fun.<|||||>hi,I have met the same problem, just because use torch.Tensor( ),.when I check,I change it into torch.tensor,and it's OK.<|||||>@doris-art
Here's my work around. Assuming `params` is a dict that is passed to the `__call__` method of the model as `**kwargs`:
```python
# a bug in transformers 4.4.2 requires this
# https://github.com/huggingface/transformers/issues/2952
input_ids = params['input_ids']
seq_length = input_ids.size()[1]
position_ids = model.embeddings.position_ids
position_ids = position_ids[:, 0: seq_length].to(torch.long)
params['position_ids'] = position_ids
```<|||||>I am getting the same error. I am unable to resolve it.


### I am using:
Python implementation: CPython
Python version : 3.7.12
IPython version : 7.29.0
numpy : 1.19.5
pandas : 1.3.4
torch : 1.9.1
transformers: 4.12.5
Any help would be greatly appreciated.<|||||>I had the same issue in the past. after checking for the many issue for this error. i did some reverse engineering and found that my input been going as empty in the modal train.
If you pass the input sentence as empty then also faced the same error. I have resolved by filtering my dataset with null/empty sentence data point. |
transformers | 2,951 | closed | Update modelcard of bert-base-german-cased | Add image | 02-21-2020 15:46:41 | 02-21-2020 15:46:41 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=h1) Report
> Merging [#2951](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e98f27e4a9bb0ac3d0fe24b94d30da42cdae8a7?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2951 +/- ##
======================================
Coverage 76.1% 76.1%
======================================
Files 98 98
Lines 15946 15946
======================================
Hits 12136 12136
Misses 3810 3810
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=footer). Last update [3e98f27...d3f1583](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>👍 |
transformers | 2,950 | closed | BertTokenizerFast ignores `pad_to_max_length` | # 🐛 Bug
Hi,
I noticed some strange behavior with the fast tokenizers in `v2.5.0`, which I think is a bug:
It seems `BertTokenizerFast` is ignoring the `pad_to_max_length` argument, as shown below:
```python
>>> from transformers import AutoTokenizer, BertTokenizer
>>> tok_auto = AutoTokenizer.from_pretrained('bert-base-uncased')
>>> tok_bert = BertTokenizer.from_pretrained('bert-base-uncased')
>>> a, b = 'Sentence 1', 'Sentence 2'
>>>
>>> tok_bert.encode(a, b, max_length=10, pad_to_max_length=True)
[101, 6251, 1015, 102, 6251, 1016, 102, 0, 0, 0] # <-- Expected behavior
>>> tok_auto.encode(a, b, max_length=10, pad_to_max_length=True)
[101, 6251, 1015, 102, 6251, 1016, 102] # <-- Actual behavior
```
Also, can someone please explain the reason for the warning below that's raised when I set `pad_to_max_length=False` (which is only there in the fast tokenizer)?
```python
>>> tok_auto.encode(a, b, max_length=10, pad_to_max_length=False)
Disabled padding because no padding token set (pad_token: [PAD], pad_token_id: 0).
To remove this error, you can add a new pad token and then resize model embedding:
tokenizer.pad_token = '<PAD>'
model.resize_token_embeddings(len(tokenizer))
[101, 6251, 1015, 102, 6251, 1016, 102]
>>> tok_bert.encode(a, b, max_length=10, pad_to_max_length=False)
[101, 6251, 1015, 102, 6251, 1016, 102] # <-- No warning
```
Thanks! | 02-21-2020 15:17:09 | 02-21-2020 15:17:09 | Duplicate of #2947<|||||>Thanks @ranamihir @fte10kso,
I'll have a look today. <|||||>Thanks @mfuntowicz! |
transformers | 2,949 | closed | Change the model type after fine-tuning? | # ❓ Questions & Help
Is there a way in the transformer library to fine-tune a transformer on one downstream task and change to another model type (i.e. another downstream task architecture)?
An example:
We train a BertForQuestionAnswering model (with a linear layer for span prediction in addition to the regular BERT). Once we have finished training, I want to discard the linear layer on top and use the adapted weights of the 12 BERT layers for sentiment analysis (BertForSequenceClassification).
I am aware that this would result in a not initialized linear layer on top the BertForSentenceClassification, but that would be not problem in my case. | 02-21-2020 14:35:02 | 02-21-2020 14:35:02 | Yes, you should be able to just load the finetuned model into another architecture. The weights of overlapping parameter names will be loaded, and the others will be ignored.<|||||>Awesome, thanks for the fast response <|||||>Okay, I came across a followup question:
So from what I can tell, the pre-trained models come with the weights for the BertForMaskedLM (as this has been used to do the pre-training).
Is there a way to first fine-tune a model on a downstream task (like sentence classification) and than reload it as BertForMaskedLM, reusing the trained LM head? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@BramVanroy could you provide a code example of how to this? I want to do sequential training on different tasks without having to save and load models from disk, or change layers directly myself. Ideally I would use some head switcher function or simply have `.from_pretrained()` accept model instances.<|||||>@timellemeet Probably best to post a new issue or [a forum post](https://discuss.huggingface.co/). |
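A rough sketch of the switch described above (the save path and label count are made up for illustration); overlapping BERT weights are reused while the new classification head starts uninitialized:
```python
from transformers import BertForQuestionAnswering, BertForSequenceClassification

qa_model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
# ... fine-tune qa_model on the QA task ...
qa_model.save_pretrained("./qa-finetuned")  # hypothetical output directory

# reload the fine-tuned encoder weights into a classification architecture
clf_model = BertForSequenceClassification.from_pretrained("./qa-finetuned", num_labels=2)
```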
transformers | 2,948 | closed | Some questions about change the BertForSequenceClassification | # 🐛 Bug
## Information
Model I am using Bert:
Language I am using the model on (Chinese ):
The problem arises when using BertForSequenceClassification:
I want to concatenate pooled_output with word2vec features. I changed the code of BertForSequenceClassification like this and successfully trained the model.
I use **merge=torch.cat((pooled_output,Word2Vec),1)**
```
def __init__(self, config):
super(BertForSequenceClassificationEZAI, self).__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)  # load the pretrained BERT model
self.dropout = nn.Dropout(config.hidden_dropout_prob)
# a simple linear layer
self.classifier = nn.Linear(99072, self.config.num_labels)#config.hidden_size
self.init_weights()
def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
position_ids=None, head_mask=None, inputs_embeds=None, labels=None):
# the BERT inputs are just tokens, segments, masks
outputs = self.bert(input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds)
# a linear classifier converts the dropout-ed BERT repr. into class logits
pooled_output = outputs[1]
#### add the word2vec embedding information here ######
model = word2vec.load('/share/nas165/Wendy/WordbreakCKIP/corpusWord2Vec.bin')
Word2Vec=torch.from_numpy(model.vectors.flatten()).cuda().float().expand(len(pooled_output),98304)
print(Word2Vec.size())
print(pooled_output.size())
merge=torch.cat((pooled_output,Word2Vec),1)
pooled_output = self.dropout(merge)
# pooled_output = self.dropout(pooled_output)
# print(pooled_output.size())
logits = self.classifier(pooled_output)
outputs = (logits,) + outputs[1:] # add hidden states and attention if they are here
# if labels are provided, compute the cross-entropy loss and return it directly - convenient!
if labels is not None:
if self.num_labels == 1:
# We are doing regression
loss_fct = MSELoss()
loss = loss_fct(logits.view(-1), labels.view(-1))
else:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
```
But when I try to predict and import the model like this
```py
bert_config, bert_class, bert_tokenizer = (BertConfig, BertForSequenceClassification, BertTokenizer)
config = bert_config.from_pretrained('/share/nas165/Wendy/BERTYiChen2.0/trained_model/3371筆_word2Vec/config.json')
model = bert_class.from_pretrained('/share/nas165/Wendy/BERTYiChen2.0/trained_model/3371筆_word2Vec/pytorch_model.bin', from_tf=bool('.ckpt' in 'bert-base-chinese'),config=config)
```
It always raises this error: RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([267, 99072]) from checkpoint, the shape in current model is torch.Size([267, 768]).
I do not know how to fix it.
Thanks a lot for your help.
| 02-21-2020 13:56:07 | 02-21-2020 13:56:07 | Please don't post screenshots. Post your code instead. Use code blocks to do that: https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks<|||||>sorry, I changed
<|||||>Could you give us the content of the `/share/nas165/Wendy/BERTYiChen2.0/trained_model/3371筆_word2Vec/config.json` file? I suspect it might set `vocab_size` to 768.<|||||>ok, the config file like this , but vocab_size is 21128
```
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"directionality": "bidi",
"do_sample": false,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 267,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 21128
}
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,947 | closed | Fast tokenizers padding and prefix space | # 🐛 Bug
`pad_to_max_length=True` Doesn't seem to do anything when using fast tokenizers.
```python
tokenizer_roberta = RobertaTokenizer.from_pretrained('roberta-base')
tokenizer_roberta_fast = RobertaTokenizerFast.from_pretrained('roberta-base')
tokenizer_bert = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer_bert_fast = BertTokenizerFast.from_pretrained('bert-base-uncased')
def test_encode_decode(name, text, tokenizer):
encoded = tokenizer.encode(text, max_length=10, pad_to_max_length=True)
decoded = tokenizer.decode(encoded)
print(name)
print(encoded)
print(decoded)
print()
text = 'hello huggingface'
test_encode_decode('bert', text, tokenizer_bert)
test_encode_decode('bert_fast', text, tokenizer_bert_fast)
test_encode_decode('roberta', text, tokenizer_roberta)
test_encode_decode('roberta_fast', text, tokenizer_roberta_fast)
```
**Output:**
```
bert
[101, 7592, 17662, 12172, 102, 0, 0, 0, 0, 0]
[CLS] hello huggingface [SEP] [PAD] [PAD] [PAD] [PAD] [PAD]
bert_fast
[101, 7592, 17662, 12172, 102]
[CLS] hello huggingface [SEP]
roberta
[0, 20760, 31164, 9021, 2, 1, 1, 1, 1, 1]
<s> hello huggingface</s><pad><pad><pad><pad><pad>
roberta_fast
[0, 42891, 31164, 9021, 2]
<s>hello huggingface</s>
```
Additionally I can't seem to make `add_prefix_space=True` to work with `RobertaTokenizerFast`. I can only get the same output as `RobertaTokenizer` if i manually prepend a space. I saw the warning and link to #2778 but I'm not sure if I follow completely.
Thanks for taking the time!
Edit: I'm up to date with master, latest commit 53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11 | 02-21-2020 12:57:00 | 02-21-2020 12:57:00 | This should be fixed in master on commit cc6775cdf5b20ad382613d3bdbf0dd8364d23219.
If you want to give it a try, otherwise the first maintenance release will soon be live.<|||||>That fixed it for me, thank you!
Also figured out what i was doing wrong with `add_prefix_space`. Instead of passing it to encode as for the regular tokenizers it should be provided at init e.g.
```python
from transformers import RobertaTokenizerFast
t = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True)
t.decode(t.encode('hello huggingface'))
# '<s> hello huggingface</s>'
``` |
transformers | 2,946 | closed | On masked-lm labels and computing the loss | Recently I was using bert for my own project, and going through the function mask_tokens I found this line of code
`labels[~masked_indices] = -100 # We only compute loss on masked tokens`
I wonder why we do this?
I get the part where we do
```
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
```
to mask the input tokens, but is it necessary for the labels?
If I had a constant -100 as the ground truth while the actual id is, say, 1000, the loss may never converge.
I've also found two seemingly contradictory comments, i.e.
`labels[~masked_indices] = -100  # We only compute loss on masked tokens` (from `run_language_modeling`)
and
```
masked_lm_labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
    Labels for computing the masked language modeling loss.
    Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
    Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels
    in ``[0, ..., config.vocab_size]``
```
(from `modeling_bert`)
One says the loss is only computed on the masked tokens, while the other says that tokens set to -100 are ignored (masked)...
Could anyone please let me know about it... Thanks. | 02-21-2020 11:55:26 | 02-21-2020 11:55:26 | > `labels[~masked_indices] = -100 # We only compute loss on masked tokens`
-100 is the default value that gets ignored by the PyTorch `CrossEntropyLoss` method. When doing masked language modeling, we only compute the loss on the masked tokens, as is said in the comment.
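To see this in isolation, here is a minimal PyTorch sketch (the `ignore_index` of `CrossEntropyLoss` defaults to -100; the shapes and label values are made up for illustration):

```python
import torch

loss_fct = torch.nn.CrossEntropyLoss()          # ignore_index defaults to -100
logits = torch.randn(4, 30522)                  # (num_tokens, vocab_size)
labels = torch.tensor([2023, -100, 7592, -100])

# only positions 0 and 2 contribute to the loss; the -100 positions are skipped
print(loss_fct(logits, labels))
```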
> `Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`
The wording could be improved here, but it means the same thing. The tokens with indices set to `-100` are seen as masked from the loss point of view, which means they will not be computed. These are not masked indices in the sense of masked language modeling.<|||||><img width="748" alt="Screenshot 2020-02-22 at 3 22 40 PM" src="https://user-images.githubusercontent.com/45225143/75090283-441e3e00-5587-11ea-92da-0411b0752fa5.png">
I opened up a Colab and tried to simulate what is going on; thanks to your comment I followed through.
What I understand from this is that we set the label to -100 for the positions that `torch.bernoulli` marked as `False`, so they are ignored,
and the ones which are left are then fed into `torch.bernoulli` again with an 80% chance of being masked, i.e. we replace their ids with `tokenizer.mask_token`.
As seen in the screenshot, 80% was the chance to be masked, hence both of the ids were masked. We have built our labels tensor in such a way that the loss is computed on the masked tokens while the -100 entries are left alone, since cross-entropy simply ignores those values. In a way we assert that these (-100) positions are already right; they are still used in self-attention, but we don't compute their loss, which would also simply be expensive.
Pls review on this<|||||>@LysandreJik I agree that the wording could be improved. I'm still not totally sure about the utility of the `masked_lm_labels` argument. I presume it is mainly for ignoring common words and special tokens during the MLM pretraining?<|||||>The `masked_lm_labels` are the labels used for computing the masked language modeling loss. There are examples in the documentation showing how to use these, for example [at the end of the `BertForMaskedLM` documentation.](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm)<|||||>@LysandreJik Isn't the example mentioned in the official documentation missing the following line of code before feeding _labels_ into model?
`labels[inputs.input_ids != tokenizer.mask_token_id] = -100 `
I believe, with this we calculate the negative log likelihood, just for the masked token which is `Paris' in the given example.
<|||||>> @LysandreJik Isn't the example mentioned in the official documentation missing the following line of code before feeding _labels_ into model?
>
> `labels[inputs.input_ids != tokenizer.mask_token_id] = -100 `
>
> I believe, with this we calculate the negative log likelihood, just for the masked token which is `Paris' in the given example.
Yes, I was wondering why this is missing as well. There doesn't seem to be any documentation indicating that this is happening automatically before the loss is computed. And, based on some limited testing on my end I get different values for the loss when I do this. |
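For reference, here is a minimal sketch of the full pattern being discussed (treat it as illustrative: the label argument has been called `masked_lm_labels` in some versions and `labels` in others, so check your installed version):

```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer.encode("The capital of France is Paris.", return_tensors="pt")
labels[input_ids != tokenizer.mask_token_id] = -100   # score only the masked position

loss = model(input_ids, masked_lm_labels=labels)[0]
```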
transformers | 2,945 | closed | Labels are now added to model config under id2label and label2id | Fixes huggingface/transformers#2487 | 02-21-2020 11:32:57 | 02-21-2020 11:32:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=h1) Report
> Merging [#2945](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2945 +/- ##
======================================
Coverage 76.1% 76.1%
======================================
Files 98 98
Lines 15946 15946
======================================
Hits 12136 12136
Misses 3810 3810
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=footer). Last update [53ce385...0a5ab3f](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I had noticed this indeed, thanks for fixing!<|||||>@marma So the labels are supposed to be the actual labels instead of placeholders like "Label_0", "Label_1" ... right? I tried it out in 3.3.1 but it still generates the placeholder labels in config. I guess I am missing something? I would be grateful if you could help! :)
<|||||>I would also be interested in learning how I can store the actual label2id and id2label dictionaries along with a pre-trained model. Is this possible?<|||||>@konstantinmiller & @naveenjafer You can pass label2id and id2label to config then pass that config to model like in below snippet:
```py
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
num_labels=num_labels,
id2label={i: label for i, label in enumerate(label_list)},
label2id={label: i for i, label in enumerate(label_list)},
finetuning_task=data_args.task_name)
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config )
``` |
transformers | 2,944 | closed | output padding different to zero in embedding layer | # 🐛 Bug
In the embedding layer, the token corresponding to the padding does not return 0.
## Information
Model I am using (Bert):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import torch
from transformers.tokenization_bert import BertTokenizer
from transformers.modeling_bert import BertEmbeddings
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
class Config:
vocab_size = tokenizer.vocab_size
hidden_size = 768
max_position_embeddings = 512
type_vocab_size = 2
hidden_dropout_prob = 0.1
layer_norm_eps = 1e-12
max_length = 10
sentence = "I eat a green apple"
tokens = tokenizer.encode(sentence)
tokens += [tokenizer.pad_token_id] * (max_length - len(tokens))
print(tokens)
embedding = BertEmbeddings(Config)
input_ids = torch.tensor([tokens])
emb = embedding(input_ids)
print(emb[0][-1]) # the last step should return 0 tensor
```
## Expected behavior
I expect to get a zero output tensor
## Environment info
- `transformers` version: 2.3.0
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1 (CPU)
- Tensorflow version (GPU?): no
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 02-21-2020 10:53:06 | 02-21-2020 10:53:06 | Why? Typically, padding tokens are ignored in the model by the use of an attention mask. I don't understand why you want to only get the output of the embedding layer. (Perhaps you are confused with the good ol' word2vec embedding models, but you cannot/should not extract features from the embedding layer in a LM.) |
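To make that concrete, here is a minimal sketch of how padded positions are normally neutralised at the model level rather than at the embedding level (model/tokenizer names as in the snippet above; argument names reflect this era of the library, so treat it as illustrative):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertModel.from_pretrained('bert-base-cased')

enc = tokenizer.encode_plus("I eat a green apple", max_length=10,
                            pad_to_max_length=True, return_tensors="pt")

# attention_mask is 0 on the [PAD] positions, so self-attention ignores them
with torch.no_grad():
    outputs = model(enc["input_ids"], attention_mask=enc["attention_mask"])
```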
transformers | 2,943 | closed | fp16 is not compatible with the current activation code when pytorch is less than 1.4.0 | [gelu = getattr(F, "gelu", _gelu_python)](https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/src/transformers/activations.py#L21)
should be changed to:
```python
if torch.__version__ < '1.4.0':
    gelu = _gelu_python
else:
    gelu = getattr(F, "gelu", _gelu_python)
```
| 02-21-2020 03:58:21 | 02-21-2020 03:58:21 | This is a duplicate of https://github.com/huggingface/transformers/issues/2940 |
transformers | 2,942 | closed | Create README.md for xlnet_large_squad | 02-21-2020 02:51:32 | 02-21-2020 02:51:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=h1) Report
> Merging [#2942](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2942 +/- ##
======================================
Coverage 76.1% 76.1%
======================================
Files 98 98
Lines 15946 15946
======================================
Hits 12136 12136
Misses 3810 3810
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.15% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=footer). Last update [53ce385...cfa068f](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,941 | closed | pipeline("sentiment-analysis")() can't handle more than 2 sentences | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): pipeline("sentiment-analysis")
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
>>> from transformers import pipeline
>>> analyzer = pipeline('sentiment-analysis')
Downloading: 100%|██████████████████████████████| 230/230 [00:00<00:00, 146kB/s]
>>> analyzer(["OK"]*10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python3.6/site-packages/transformers/pipelines.py", line 490, in __call__
scores = np.exp(outputs) / np.exp(outputs).sum(-1)
ValueError: operands could not be broadcast together with shapes (10,2) (10,)
>>>
```
## Expected behavior
Getting 10 results
## Environment info
- `transformers` version: 2.5.0
- Platform: ubuntu 19.04
- Python version: 3.6
- PyTorch version (GPU?): 1.4.0 GPU
- Tensorflow version (GPU?): 1.14.0 GPU
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 02-20-2020 23:57:24 | 02-20-2020 23:57:24 | `scores = np.exp(outputs) / np.exp(outputs).sum(-1).reshape(-1,1)` works for me, but I'm not sure whether it breaks other things.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This still happens on the latest version. I still have to apply
`scores = np.exp(outputs) / np.exp(outputs).sum(-1).reshape(-1,1)`
For the code to work.<|||||>Yes, this is in the process of being solved by @mfuntowicz |
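For anyone curious, the shape problem and the fix are easy to reproduce in plain NumPy (`keepdims=True` is equivalent to the `reshape(-1, 1)` above):

```python
import numpy as np

outputs = np.random.randn(10, 2)   # logits for 10 sentences, 2 classes

# np.exp(outputs).sum(-1) has shape (10,), which cannot broadcast against (10, 2);
# keeping the reduced axis makes it (10, 1) and the division works row-wise
scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
print(scores.shape)                # (10, 2), each row sums to 1
```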
transformers | 2,940 | closed | BERT model breaks during FP16 Apex training on the latest update (2.5.0) - due to gelu function | # 🐛 Bug
BERT breaks during FP16 training due to the gelu function.
```
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 407, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 379, in forward
intermediate_output = self.intermediate(attention_output)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 332, in forward
hidden_states = self.intermediate_act_fn(hidden_states)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 1125, in gelu
return torch._C._nn.gelu(input)
RuntimeError: "GeluCUDAKernelImpl" not implemented for 'Half'
```
## Information
The reason this is happening is because in `modeling_bert.py`, in 2.4.1 we had:
```
def gelu(x):
""" Original Implementation of the gelu activation function in Google Bert repo when initially created.
For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
Also see https://arxiv.org/abs/1606.08415
"""
return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
...
ACT2FN = {"gelu": gelu, "relu": torch.nn.functional.relu, "swish": swish, "gelu_new": gelu_new, "mish": mish}
```
whereas in 2.5.0 we have:
```
ACT2FN = {"gelu": gelu, "relu": torch.nn.functional.relu, "swish": swish, "gelu_new": gelu_new, "mish": mish}
```
where `gelu` now comes from `activations.py` as:
`gelu = getattr(F, "gelu", _gelu_python)` on line 21.
## Environment info
- `transformers` version:
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.2.0 CUDA 10.0
- Tensorflow version (GPU?): 2.5.0
- Using GPU in script?: V100
- Using distributed or parallel set-up in script?: No, but using Apex FP16 training
| 02-20-2020 22:53:34 | 02-20-2020 22:53:34 | |
transformers | 2,939 | closed | Add standardized get_vocab method to tokenizers | This PR adds a `get_vocab` method to the `PretrainedTokenizers` to standardize extracting vocabularies from tokenizers.
Comments:
- I didn't do anything with fast tokenizers. cc'ing @mfuntowicz for his thoughts there.
- I opted to keep it a method rather than a `@property` in order to encourage users to primarily use existing methods like `convert_tokens_to_ids` for general encoding/decoding purposes and use `get_vocab` only when they need the entire vocabulary.
- For tokenizers which rely on `sentencepiece`, I was unable to figure out a better way to get the vocabs than to loop through it. If someone knows a better way, please let me know. | 02-20-2020 20:44:37 | 02-20-2020 20:44:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=h1) Report
> Merging [#2939](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ea8eba35e2984882c3cd522ff669eb8060941a94?src=pr&el=desc) will **decrease** coverage by `1.02%`.
> The diff coverage is `89.65%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2939 +/- ##
==========================================
- Coverage 75.35% 74.32% -1.03%
==========================================
Files 94 94
Lines 15445 15474 +29
==========================================
- Hits 11638 11501 -137
- Misses 3807 3973 +166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.71% <100%> (+1.77%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.39% <100%> (+0.13%)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.19% <100%> (+0.07%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `97.02% <100%> (+0.02%)` | :arrow_up: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `89.52% <100%> (+0.41%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.5% <100%> (+0.22%)` | :arrow_up: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.87% <100%> (+0.04%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.61% <100%> (+0.16%)` | :arrow_up: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.17% <100%> (+0.36%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.22% <100%> (+0.31%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=footer). Last update [ea8eba3...197d74f](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sshleifer thanks for the review! The trouble is that different tokenizers store their vocabs pretty differently (thus this PR) – only BERT-inhereted tokenizers currently have a `self.vocab`, for example. I'd argue it's better to make it explicit that subclasses need to implement it rather than risk a silent error (i.e. if a subclass defines a `self.vocab` property differently than BERT's tokenizer does). |
transformers | 2,938 | closed | OpenAIGPTDoubleHeadsModel throws CUDA OOM with large number of candidates | # 🐛 Bug
I am trying to train the `OpenAIGPTDoubleHeadsModel`. I find that a large number of candidates can cause CUDA OOM errors.
Case 1 (single training example with 67 candidates): CUDA OOM
```
input_ids.shape: torch.Size([1, 67, 275])
mc_token_ids.shape: torch.Size([1, 67])
lm_labels.shape: torch.Size([1, 67, 275])
mc_labels.shape: torch.Size([1])
token_type_ids.shape: torch.Size([1, 67, 275])
```
Case 2 (single training example with 3 candidates): works fine!
```
input_ids.shape: torch.Size([1, 3, 275])
mc_token_ids.shape: torch.Size([1, 3])
lm_labels.shape: torch.Size([1, 3, 275])
mc_labels.shape: torch.Size([1])
token_type_ids.shape: torch.Size([1, 3, 275])
```
## Information
Model I am using: `OpenAIGPTDoubleHeadsModel`
Language I am using the model on: English
The problem arises when using my own modified scripts based on the `transfer-learning-conv-ai` repo by Hugging Face.
## To reproduce
Simply try training `OpenAIGPTDoubleHeadsModel` with larger number of candidates (such as 67).
## Expected behavior
3 or 67 candidates shouldn't matter, both cases 1 and 2 should work fine without CUDA OOM.
## Environment info
- `transformers` version: 2.3.0
- Platform: Amazon Linux (Deep Learning AMI)
- Python version: 3.6
- PyTorch version (GPU?): the one shipped with the pytorch_p36 conda env in Amazon DL AMI
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes and No
| 02-20-2020 20:42:21 | 02-20-2020 20:42:21 | Isn't that to be expected? The large model just doesn't fit on your GPU's memory.<|||||>@BramVanroy is that expected behavior with just 1 training example though?
I initially suspected that this is a training-specific behavior (due to the need to store gradients, etc.), so I decided to fix the number of candidates to something small during training.
I then attempted to do inference with this trained model, but I used all candidates during inference. I took 1 example to do inference on, and I observed memory issues with that too.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
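In case it helps anyone hitting the same wall, here is a rough sketch of scoring the candidates in chunks at inference time to bound peak memory. The function is made up for illustration and the output indexing (`out[1]` for the multiple-choice logits when no labels are passed) may differ between versions, so double-check it:

```python
import torch

def score_candidates_in_chunks(model, input_ids, mc_token_ids, token_type_ids, chunk_size=8):
    mc_logits = []
    with torch.no_grad():                                  # no gradients needed at inference
        for i in range(0, input_ids.size(1), chunk_size):  # iterate over the candidate axis
            out = model(
                input_ids[:, i : i + chunk_size],
                mc_token_ids=mc_token_ids[:, i : i + chunk_size],
                token_type_ids=token_type_ids[:, i : i + chunk_size],
            )
            mc_logits.append(out[1])
    return torch.cat(mc_logits, dim=1)
```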
|
transformers | 2,937 | closed | Small fix: default args for torch-lightning | Fix to the default argument passing to torch-lightning. | 02-20-2020 20:19:47 | 02-20-2020 20:19:47 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=h1) Report
> Merging [#2937](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e2a6445ebbc36121817c1f605d9a09a335f5fba5?src=pr&el=desc) will **decrease** coverage by `1.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2937 +/- ##
==========================================
- Coverage 75.35% 74.28% -1.07%
==========================================
Files 94 94
Lines 15445 15445
==========================================
- Hits 11638 11473 -165
- Misses 3807 3972 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=footer). Last update [e2a6445...34e9098](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,936 | closed | New tokenizers issue in NER demo | # 🐛 Bug
When I use transformers 2.5 I get the following error when running the run_ner demo. It seems to work when I use 2.4. I am guessing this is because of a slight difference in the tokenization script? It seems to fail even when running with the base German run.sh in the ner/ directory.
```Traceback (most recent call last):
File "run_pl_ner.py", line 233, in <module>
trainer = generic_train(model, args)
File "/content/transformers/examples/ner/transformer_base.py", line 268, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 911, in fit
self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 464, in single_gpu_train
self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
File "/content/transformers/examples/ner/transformer_base.py", line 92, in configure_optimizers
* float(self.hparams.num_train_epochs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/decorators.py", line 19, in _get_data_loader
value = fn(self) # Lazy evaluation, done only once.
File "/content/transformers/examples/ner/transformer_base.py", line 132, in train_dataloader
return self.load_dataset("train", self.hparams.train_batch_size)
File "run_pl_ner.py", line 50, in load_dataset
dataset = self.load_and_cache_examples(labels, self.pad_token_label_id, mode)
File "run_pl_ner.py", line 175, in load_and_cache_examples
pad_token_label_id=pad_token_label_id,
File "/content/transformers/examples/ner/utils_ner.py", line 182, in convert_examples_to_features
assert len(label_ids) == max_seq_length
``` | 02-20-2020 19:49:46 | 02-20-2020 19:49:46 | This might be https://github.com/huggingface/transformers/issues/2917<|||||>@srush I'm looking at it right now <|||||>I have the same issue while running run_tf_ner.py on the German NER dataset. I got the same AssertionError as below:
> File "run_tf_ner.py", line 655, in <module>
> app.run(main)
> File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
> _run_main(main, args)
> File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
> sys.exit(main(argv))
> File "run_tf_ner.py", line 540, in main
> args, tokenizer, labels, pad_token_label_id, train_batch_size, mode="train"
> File "run_tf_ner.py", line 451, in load_and_cache_examples
> pad_token_label_id=pad_token_label_id,
> File "/content/transformers/examples/ner/utils_ner.py", line 182, in convert_examples_to_features
> assert len(label_ids) == max_seq_length
> AssertionError
My idea is that [pad_token_label_id = 0](https://github.com/huggingface/transformers/blob/94ff2d6ee8280c5595b92c1128c0f18e44925e56/examples/ner/run_tf_ner.py#L511) may conflict with the original label_list ids, because in utils_ner.py (line 104), `label_map = {label: i for i, label in enumerate(label_list)}`.
By the way, I ran the same code on the CoNLL-2003 dataset with the default labels:
["O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
and got no such error. It may be because 'O' is the first label there, whereas in the German dataset 'O' is the last label in labels.txt.
I don't know if this is the real problem, but I hope this bug can be fixed soon. Thx.
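For context, this is roughly what the label handling in `utils_ner.py` does (a simplified sketch, not the exact code); the length mismatch shows up afterwards, when special tokens and padding are appended under assumptions that may not match what the tokenizer actually adds:

```python
def convert_example(words, labels, tokenizer, label_map, pad_token_label_id):
    tokens, label_ids = [], []
    for word, label in zip(words, labels):
        word_tokens = tokenizer.tokenize(word)
        tokens.extend(word_tokens)
        # only the first sub-token keeps the real label; the rest get pad_token_label_id
        label_ids.extend([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1))
    # truncation, special tokens and padding are added after this point; if the assumed
    # number of special tokens is wrong, `assert len(label_ids) == max_seq_length` fails
    return tokens, label_ids
```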
<|||||>> @srush I'm looking at it right now
I have the same issue as @srush. Any idea yet what the issue could be?<|||||>It works fine with 2.4 so it is likely an issue with the new tokenizer.<|||||>Hi @srush ,
I am using transformers==2.4.1, but still facing the problem with a custom data set (with extra labels). It is working with the german dataset though. Could you be more specific about the version of transformers that you use.
Thanks
<|||||>Hi @cibinjohn ,
Can you tell us which tokenizers / model you're using ? We fixed something for `bert-base-multilingual` should be merge in master quite soon.
<|||||>Any one got solution for this yet? I have the same issue with this. I used transformer 2.5.1 and the latest tokenizer come with the transformers installation by default.
Looks like the assert len(label_ids) == max_seq_length because len(label_ids) is one more than max_seq_length while 3 other asserts before it pass the tests. <|||||>@yuyongze Have you made any progress on this?
I think `pad_token_label_id = 0` is actually another issue: #3332
And related: https://stackoverflow.com/questions/60732509/label-handling-confusion-in-run-tf-ner-example<|||||>Experiencing the same problem launching run_ner.py on the WNUT17 dataset, although on German and CONLL-2003 everything works fine.<|||||>I just want to run the ner demo (TF-version) but the same issue/ error raises.. Tried with transformers version 2.4/5/6 still the same error raises. Has anyone a solution?
Edit: PyTorch script seems to work<|||||>@mfuntowicz I am using `bert-base-multilingual` model & transformers==2.4.1.
Could you let me know when the solution will be merged with master. Looking forward to hear from you.
Thanks in advance
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,935 | closed | Optimized squad.py multi-threading | If there are few examples, don't bother multi-threading. The base cost of multi-threading in Python is expensive. I can't upload a picture of the call-stack visual profile because of my stupid company firewall, but multi-threading in Python depends on serializing each object before sending it off. That process has a 5 second overhead coming from <method 'dump' of '_pickle.Pickler' objects>. Don't take my word for it - test it yourself. This small optimization reduces invocation run-time from 6s down to 1.1s for a single-example inference where len(examples) == 1. This optimization is a single change, but it should be echoed across all pipelines in the Transformers repo.
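To illustrate the idea, here is a sketch of the pattern (not the actual `squad.py` diff; the helper name is made up):

```python
from multiprocessing import Pool

def convert_all(examples, convert_one, threads=4):
    if threads <= 1 or len(examples) < threads:
        # tiny workloads: pool start-up and pickling overhead dominate, so stay single-threaded
        return [convert_one(ex) for ex in examples]
    with Pool(threads) as pool:
        return pool.map(convert_one, examples)
```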
ps: I also see a lot of lists with appends across the Transformers repository as a whole. This is a big speed suck. Look into using collections.deques more - deques are like lists but are highly optimized for appends.
Good Luck HuggingFace - you guys rock! | 02-20-2020 18:16:27 | 02-20-2020 18:16:27 | This was a big performance hiccup for us. We managed to take a `QuestionAnswerAnsweringPipeline` down from 12 seconds to 6 seconds (with CUDA) and then to ~1 second (with the serialization removal optimization).<|||||>The code quality check is literally just that one line is too long.<|||||>> The code quality check is literally just that one line is too long.
Have a look at the contributing guidelines, in particular step 5. https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests
Doing style and quality checks locally makes sure that your pull request doesn't get that annoying '1 failing' note.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Thank you, this is huge. On a g4dn.xlarge, with distilbert-base-cased-distilled-squad, I get about 60% speedup. |
transformers | 2,934 | closed | Update README.md | - I added an example using the model with pipelines to show that we have set```{"use_fast": False}``` in the tokenizer for Q&A as noticed in [issue](https://github.com/huggingface/transformers/issues/2920)
- I added a Colab to play with the model and pipelines
- I added a Colab to discover Huggingface pipelines at the end of the document | 02-20-2020 17:32:38 | 02-20-2020 17:32:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=h1) Report
> Merging [#2934](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e2a6445ebbc36121817c1f605d9a09a335f5fba5?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2934 +/- ##
=======================================
Coverage 75.35% 75.35%
=======================================
Files 94 94
Lines 15445 15445
=======================================
Hits 11638 11638
Misses 3807 3807
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.62% <0%> (ø)` | :arrow_up: |
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ø)` | :arrow_up: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `92.85% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.37% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `83.82% <0%> (ø)` | :arrow_up: |
| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=footer). Last update [e2a6445...fe93e6a](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>(cc @mfuntowicz for the temporary workaround) |
transformers | 2,933 | closed | Fix for fast tokenizers save_pretrained compatibility with Python. | The name of generated file doesn't match between tokenizers and transformers tokenizers, so transformers is not able to load model saved with tokenizers models. | 02-20-2020 17:22:31 | 02-20-2020 17:22:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=h1) Report
> Merging [#2933](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6a50d501ec54fd28eed57031ddbba6480768f9bc?src=pr&el=desc) will **decrease** coverage by `1%`.
> The diff coverage is `95%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2933 +/- ##
=========================================
- Coverage 77.21% 76.2% -1.01%
=========================================
Files 98 98
Lines 16030 16040 +10
=========================================
- Hits 12377 12224 -153
- Misses 3653 3816 +163
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `41.1% <100%> (+1.3%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91% <66.66%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.54% <0%> (+0.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=footer). Last update [6a50d50...f22083c](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,932 | closed | [WIP] Add a trainer tool class to make the TF2 model training easier | Hello,
I decided to open a cleaner PR because the other was a bit too messy in my opinion. Here the features that this Trainer class will be able to handle:
- [x] Single / Multiple GPU training. Distributed is across GPUs in the same host. Distribution across multiple machines will be for a future version.
- [ ] The training can be configured with a JSON file.
- [x] Handle multiple data processor to be able to train a model over different datasets.
- [x] Select and configure a specific loss/optimizer for a training
- [x] Create multiple checkpoints during the training in order to make it fault-tolerant.
- [x] Create the logs to be able to visualize the training in Tensorboard
- [x] The final model is saved in Hugging face transformer format and in TF saved model
- [x] Able to give a data directory where to find the datasets
- [x] Automatically handle dataset/model caching
- [ ] Run an evaluation over a test dataset with proper printed results such as the one proposed by the `seqeval` package
Currently the trainer class can be used over glue and xnli datasets with the available examples `examples/run_tf_xnli_with_trainer.py` and `examples/run_tf_glue_with_trainer.py`. I will add new examples for differents tasks and datasets.
The list of features above will be checked as things progress.
Ping @julien-c @LysandreJik @thomwolf : Do not hesitate to make proposals if you have new ideas of features or advices on a better implementation of this trainer. | 02-20-2020 15:40:21 | 02-20-2020 15:40:21 | |
transformers | 2,931 | closed | Fix spell: EsperBERTo, not EspertBERTo | This misspelling almost drove me crazy :) | 02-20-2020 13:54:53 | 02-20-2020 13:54:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=h1) Report
> Merging [#2931](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2931 +/- ##
=======================================
Coverage 75.35% 75.35%
=======================================
Files 94 94
Lines 15444 15444
=======================================
Hits 11638 11638
Misses 3806 3806
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=footer). Last update [d490b5d...b5607ba](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>You're right, thanks for fixing! |
transformers | 2,930 | closed | Add local_files_only parameter to pretrained items | closes https://github.com/huggingface/transformers/issues/2867
Setting local_files_only=True disables outgoing traffic:
- etags are not looked up
- files are not downloaded (config, tokenizer, model)
An appropriate error is thrown when this argument may be the cause why a model cannot be loaded.
```python
import pyinstrument
from transformers import DistilBertConfig, DistilBertModel, DistilBertTokenizer
class TreeProfiler():
def __init__(self, show_all=False):
self.profiler = pyinstrument.Profiler()
self.show_all = show_all # verbose output of pyinstrument profiler
def __enter__(self):
print("WITH TREE_PROFILER:")
self.profiler.start()
def __exit__(self, *args):
self.profiler.stop()
print(self.profiler.output_text(unicode=True, color=True, show_all=self.show_all))
def main():
with TreeProfiler(show_all=True):
config = DistilBertConfig.from_pretrained('distilbert-base-uncased', local_files_only=True)
model = DistilBertModel.from_pretrained('distilbert-base-uncased', local_files_only=True)
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', local_files_only=True)
if __name__ == '__main__':
main()
```
The above snippet will throw an error message when the expected files are not present in the cache. When they are, though, everything is loaded fine without the need of any additional lookups. | 02-20-2020 13:02:18 | 02-20-2020 13:02:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=h1) Report
> Merging [#2930](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **decrease** coverage by `1.05%`.
> The diff coverage is `92.3%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2930 +/- ##
==========================================
- Coverage 75.3% 74.24% -1.06%
==========================================
Files 94 94
Lines 15424 15430 +6
==========================================
- Hits 11615 11456 -159
- Misses 3809 3974 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.49% <100%> (+0.03%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.1% <100%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.46% <100%> (+0.13%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68% <88.88%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=footer). Last update [59c23ad...826eced](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think this feature is reasonable but I'm not sure about the param name. Maybe something like `disable_networking` or `local_files_only`?<|||||>Yeah, wasn't sure about a parameter name. I like `local_files_only`, even though it's quite long.<|||||>Would it make sense to utilise this in the examples? I am thinking about multi-GPU set-ups where the online lookup only has to be done by the main process (local_rank == 0). For all other processes, local_files_only can be True. Might avoid some redundant look-ups - even though in practice it won't matter much in terms of speed (couple of seconds at most). |
transformers | 2,929 | closed | Getting the same results when evaluating Model2Model with different encoder inputs. | After fine-tuning the Model2Model with 'bert-base-uncased', I am getting the same losses values, no matter what is the encoder input. On the PreTrainedEncoderDecoder documentation, it's said that "During prediction, we perform one forward pass through the encoder,
and then perform several forward passes with the encoder's hidden
state through the decoder to decode a full sequence."
I couldn't find the place on the source code, that does the connection between the encoder and the decoder. If someone could help me to show me where this is happening, it will be a great help,
thanks! | 02-20-2020 12:57:06 | 02-20-2020 12:57:06 | It's in the forward pass of the EncoderDecoder:
https://github.com/huggingface/transformers/blob/d490b5d5003654f104af3abd0556e598335b5650/src/transformers/modeling_encoder_decoder.py#L205-L237<|||||>> It's in the forward pass of the EncoderDecoder:
>
> https://github.com/huggingface/transformers/blob/d490b5d5003654f104af3abd0556e598335b5650/src/transformers/modeling_encoder_decoder.py#L205-L237
Yes but if I do not pass an "encoder_ hidden_states" then the encoder input does not affect the decoder output?<|||||>If you pass encoder_hidden_states, the encoder is skipped (not called at all). If you do not explicitly pass encoder_hidden_states, the inputs will go through the encoder and the hidden states will be used as encoder_hidden_states.
https://github.com/huggingface/transformers/blob/d490b5d5003654f104af3abd0556e598335b5650/src/transformers/modeling_encoder_decoder.py#L228-L232<|||||>Oh I guess I have an old version, for me the line
`encoder_hidden_states = encoder_outputs[0]`
does not exists, I'll update and try again.
Thanks<|||||>Let me know if you run into other issues.<|||||>> Let me know if you run into other issues.
Still, I don't see any difference when I am changing the encoder input, for example:
`>>>model(torch.tensor([[10,20,300,4,500,600]]), torch.tensor([[400,500]]), decoder_lm_labels=torch.tensor([[400,500]]))[0]`
`tensor(17.1395, grad_fn=<NllLossBackward>)`
`>>>model(torch.tensor([[100,200,300,400]]), torch.tensor([[400,500]]), decoder_lm_labels=torch.tensor([[400,500]]))[0]`
`tensor(17.1395, grad_fn=<NllLossBackward>)`<|||||>I'm seeing the exact same issue.
you can reproduce it here: https://colab.research.google.com/drive/1DH07pETO_F0eoxE7HaErooEWwptWf_SQ
What's interesting is that if I train the model, it will output something different, but it will output that same thing regardless of what the input to the trained model is.
Also the same thing happens if I use `PreTrainedEncoderDecoder`<|||||>This is what I have discovered (I think):
when this calculation is evaluated in Model2Model forward:
`decoder_outputs = self.decoder(decoder_input_ids, **kwargs_decoder)`
we are getting to:
`outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
)`
and then:
`encoder_outputs = self.encoder(
embedding_output,
attention_mask=extended_attention_mask,
head_mask=head_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_extended_attention_mask,
)`
to:
`layer_outputs = layer_module(
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
)`
to:
` if self.is_decoder and encoder_hidden_states is not None:
cross_attention_outputs = self.crossattention(
attention_output, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
)
attention_output = cross_attention_outputs[0]
outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights
`
and in this condition `if self.is_decoder and encoder_hidden_states is not None`, is decoder is always
`False` so we never go into this clause, and never using the `encoder_hidden_states`, so we get always the same results, that does not depends on the encoder input or output.<|||||>@dimi1357 It looks like the issue is that the model is getting `is_decoder` set to True **after** it has been initialized to False, but at that point `BertLayer` has `is_decoder` set to False and so it stays like that.
This seems to be a workaround:
```
decoder_config = config = AutoConfig.from_pretrained('bert-base-uncased', is_decoder=True)
model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased', decoder_config=decoder_config)
```<|||||>> @dimi1357 It looks like the issue is that the model is getting `is_decoder` set to True **after** it has been initialized to False, but at that point `BertLayer` has `is_decoder` set to False and so it stays like that.
> This seems to be a workaround:
>
> ```
> decoder_config = config = AutoConfig.from_pretrained('bert-base-uncased', is_decoder=True)
> model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased', decoder_config=decoder_config)
> ```
Yes, that seems to fix the issue,
Thanks a lot!! |
transformers | 2,928 | closed | Make RobertaForMaskedLM implementation identical to fairseq | closes https://github.com/huggingface/transformers/issues/1874
The implementation of RoBERTa in `transformers` differs from the original implementation in [fairseq](https://github.com/pytorch/fairseq/tree/master/fairseq/models/roberta), as results showed (cf. https://github.com/huggingface/transformers/issues/1874). I have documented my findings here https://github.com/huggingface/transformers/issues/1874#issuecomment-588359143 and made the corresponding changes accordingly in this PR.
Someone should check, however, that removing `get_output_embeddings()` does not have any adverse side-effects.
In addition, someone who is knowledgeable about Tensorflow should check the TF implementation of RoBERTa, too. | 02-20-2020 12:55:31 | 02-20-2020 12:55:31 | TODO: https://github.com/huggingface/transformers/pull/2913#issuecomment-588508153<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=h1) Report
> Merging [#2928](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2928 +/- ##
=========================================
- Coverage 75.3% 75.3% -0.01%
=========================================
Files 94 94
Lines 15424 15423 -1
=========================================
- Hits 11615 11614 -1
Misses 3809 3809
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2928/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.75% <100%> (-0.02%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=footer). Last update [59c23ad...1f290e5](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks good. I tested it out and the outputs match exactly everywhere I can see. Requested review from @LysandreJik as well.
Regarding the test mentioned by @sshleifer, you can just test that a slice of the outputs match rather than the entire tensor. See [here](https://github.com/huggingface/transformers/blob/2184f87003c18ad8a172ecab9a821626522cf8e7/tests/test_modeling_roberta.py#L323) for an example.<|||||>> Looks good. I tested it out and the outputs match exactly everywhere I can see. Requested review from @LysandreJik as well.
>
> Regarding the test mentioned by @sshleifer, you can just test that a slice of the outputs match rather than the entire tensor. See [here](https://github.com/huggingface/transformers/blob/2184f87003c18ad8a172ecab9a821626522cf8e7/tests/test_modeling_roberta.py#L323) for an example.
Thanks, will add tests later. I am still a bit confused why the weights of the embeddings are tied to the LMHead in the original implementation, though. I don't quite get the intention there.<|||||>Hm, perhaps this warning message should not be there.
> Weights of RobertaForMaskedLM not initialized from pretrained model: ['lm_head.weight']
> Weights from pretrained model not used in RobertaForMaskedLM: ['lm_head.decoder.weight']
- lm_head.weight is initialised because it takes the embedding weights
- the weights from the pretrained model are not used because they are not required
<|||||>@BramVanroy Where are you getting that warning? I don't see it when I call `RobertaForMaskedLM.from_pretrained`<|||||>You can only see it if your logging level is set to INFO or lower. So you can put the following before loading the model.
```python
import logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO)
```` <|||||>Oh I see. Looks like the problem is just that the weight param introduced has a different name format than before. Rather than using the functional API as you did here, I would just manually override `decoder.weight` when `weight` is passed. I.e.,
```python
self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
if weight is not None:
self.decoder.weight = weight
```
As you mentioned, it's not a huge issue since the weights are correctly loaded from the embeddings anyway, but probably a bit cleaner if the names align.<|||||>For those interested, I found the answer to the why on Twitter because of a helpful comment. Apparently this is common practice and has been introduced a while back in [Using the output of embeddings to improve language models](https://arxiv.org/abs/1608.05859).<|||||>> Hi @BramVanroy! I can see there's an issue here but I don't think this is the way to solve it.
>
> We actually _do_ tie the weights together, so there's no need to do any additional tying. We actually tie the weights for every model that has an LM head (Masked or causal).
>
> The issue here is because of the `bias` I introduced a few weeks ago with #2521. The way I did it means that the bias was actually applied twice.
>
> The correct way to fix it would be to change
>
> ```python
> x = self.decoder(x) + self.bias
> ```
>
> to
>
> ```python
> x = self.decoder(x)
> ```
>
> in the forward method. The bias is already part of the decoder, so no need to apply it once more.
>
> Do you want to update your PR, or should I do one to fix it?
Aha, my bad. I thought I finally contributed something useful! :flushed: You can add a PR, I'll close this one. (Perhaps the updated test is still useful so that something like this doesn't happen in the future.)
Can you link to the lines where the weight tying is happening, though? I must have completely missed it.<|||||>Your contributions are more than useful, @BramVanroy, and I'm glad you tried to fix an issue when you discovered one, thank you.
To answer your question, the `PreTrainedModel` abstract class has a [`tie_weights` method](https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/src/transformers/modeling_utils.py#L156) which ties the input embeddings to the output embeddings.
This method is not directly called by any model class, but it is called by the [`init_weights` method](https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/src/transformers/modeling_utils.py#L251) of that same abstract class.
It is this last method that is called by every model during their instantiation, for example with [`RobertaModel`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py#L152).
This is only the PyTorch way though, the TensorFlow way is different. In TensorFlow, we use a single layer that can be called as an `embedding` or a `linear` layer, as you may see in the [`BertEmbeddings` class](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py#L132-L152). Please note the `mode` flag which makes possible the choice between the layers. |
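In plain PyTorch terms, the tying amounts to pointing the output projection at the same `Parameter` as the input embedding; a simplified sketch (not the exact library code, sizes are just examples):

```python
import torch.nn as nn

vocab_size, hidden_size = 30522, 768
word_embeddings = nn.Embedding(vocab_size, hidden_size)      # input embedding
lm_decoder = nn.Linear(hidden_size, vocab_size, bias=False)  # output projection

# Tie the two: they now share a single weight tensor, so any update to one
# (during training or loading) is immediately visible in the other.
lm_decoder.weight = word_embeddings.weight
assert lm_decoder.weight.data_ptr() == word_embeddings.weight.data_ptr()
```

That is also why the `lm_head` weight does not need to be stored separately in the checkpoint: once the weights are tied, it is recovered from the embedding matrix.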
transformers | 2,927 | closed | What does ## mean in the bert vocab? | What does ## mean in the bert vocab?
Some words start with ##, such as ##a, ##m, ##er, ##h; I don't quite understand. | 02-20-2020 11:13:07 | 02-20-2020 11:13:07 | This question is better suited for [Stack Overflow](https://stackoverflow.com/). Please ask similar questions there in the future.
It indicates that the token is a subword unit, i.e. part of a larger word. For instance, the word "potatoes" might be tokenised as "po, ##ta, ##toes". If you want to learn more about this kind of tokenisation, I suggest you read up on byte-pair encoding and the like. |
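A quick way to see this in practice (the exact split depends on the vocabulary, so the pieces shown in the comments are only indicative):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("unaffable"))  # e.g. ['un', '##aff', '##able']
print(tokenizer.tokenize("potatoes"))   # either a single piece or several '##'-prefixed pieces
```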
transformers | 2,926 | closed | Masked LM implementation details | I read the source code of `TFBertMLMHead`, and it seems that this layer just predicts over the whole sequence, rather than only the `MASKED` tokens.
`TFBertMLMHead` just does these things:
* transforms the `hidden state` from the last encoder layer
* computes `predictions = tf.matmul(hidden_state, input_embedding_matrix)`, with shape (batch_size, sequence_length, vocab_size)
* returns the `predictions` to calculate the loss
The input is simply the `hidden state` of the last encoder layer.
But the implementation from [google-research/bert](https://github.com/google-research/bert/blob/cc7051dc592802f501e8a6f71f8fb3cf9de95dc9/run_pretraining.py#L240) needs extra inputs, `masked_lm_positions` and `masked_lm_weights`, and then uses these inputs to calculate the masked LM loss.
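For reference, a rough sketch (variable names are hypothetical, not taken from the library) of how the per-token logits returned by the head could be combined with `masked_lm_positions` / `masked_lm_weights` outside the model:

```python
import tensorflow as tf

def masked_lm_loss(logits, masked_lm_positions, masked_lm_ids, masked_lm_weights):
    # logits: (batch, seq_len, vocab); gather only the masked positions -> (batch, n_masked, vocab)
    masked_logits = tf.gather(logits, masked_lm_positions, batch_dims=1)
    per_example = tf.keras.losses.sparse_categorical_crossentropy(
        masked_lm_ids, masked_logits, from_logits=True
    )
    # masked_lm_weights zeroes out padded prediction slots
    return tf.reduce_sum(per_example * masked_lm_weights) / (
        tf.reduce_sum(masked_lm_weights) + 1e-5
    )
```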
So, does the `TFBertMLMHead` miss something?
| 02-20-2020 09:38:10 | 02-20-2020 09:38:10 | The method from google-research/bert you're showing returns the loss. We don't return the loss with `TFBertForMaskedLM`, we return the output distribution over the vocabulary. You can then use this output distribution to compute the loss, with a cross entropy loss for example.<|||||>I see, thanks. |
transformers | 2,925 | closed | DistilRoberta Model fine tuning on Squad dataset | # 🐛 Bug
I am trying to train the **distilroberta-base** model on the **SQuAD** dataset using the distillation code. I have trained the RoBERTa model on the SQuAD 2.0 dataset so that I can use it as the teacher model. I am using the **distilroberta-base** model as the student.
## Information
Model I am using: RoBERTa
Language I am using the model on: English
The problem arises when using:
* [x] my own modified scripts: (give details below)
I am using the run_squad_w_distillation.py code, with the following modification
- Added the imports relevant to roberta
- Removed token_type_ids from inputs while sending to student model
```python
MODEL_CLASSES = {
"bert": (BertConfig, BertForQuestionAnswering, BertTokenizer),
"xlnet": (XLNetConfig, XLNetForQuestionAnswering, XLNetTokenizer),
"xlm": (XLMConfig, XLMForQuestionAnswering, XLMTokenizer),
"distilbert": (DistilBertConfig, DistilBertForQuestionAnswering, DistilBertTokenizer),
"roberta": (RobertaConfig, RobertaForQuestionAnswering, RobertaTokenizer)
}
```
```python
if args.model_type in ["roberta"]:
del inputs["token_type_ids"]
```
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQuAD 2.0
## To reproduce
Steps to reproduce the behavior:
1. python run_squad_w_distillation.py --model_type roberta --model_name_or_path distilroberta-base --output_dir ./distil_roberta --teacher_type roberta --teacher_name_or_path $ROBERTA_MODEL --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --version_2_with_negative --do_train --do_eval --do_lower_case --save_steps 5000 --logging_steps 5000
```python
/opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [206,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "run_squad_w_distillation.py", line 871, in <module>
main()
File "run_squad_w_distillation.py", line 813, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)
File "run_squad_w_distillation.py", line 207, in train
"input_ids": batch[0],
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: transform: failed to synchronize: cudaErrorAssert: device-side assert triggered
```
## Expected behavior
The model should train properly and evaluate on the predict file with an F1 score close to the RoBERTa teacher's F1 score (81.5).
## Environment info
- `transformers` version: 2.3.0
- Platform: "CentOS Linux 7"
- Python version: 3.6.9
- PyTorch version (GPU?): Yes
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes, CUDA Version: 10.2, Driver Version: 440.33.01
- Using distributed or parallel set-up in script?: No
| 02-20-2020 09:15:41 | 02-20-2020 09:15:41 | Hello @graviraja
You don't need to remove the `token_type_ids` for RoBERTa models. RoBERTa only has a single token type, so the token type matrix is simply all zeros.
You can remove the `do_lower_case` flag for RoBERTa models. The vocabulary is case sensitive.
Have you tried WITHOUT GPU (`CUDA_VISIBLE_DEVICES=""`)?<|||||>Hi @VictorSanh
Running the code on cpu throws the following error
```python
Traceback (most recent call last):
File "run_squad_w_distillation.py", line 871, in <module>
main()
File "run_squad_w_distillation.py", line 813, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)
File "run_squad_w_distillation.py", line 207, in train
"input_ids": batch[0],
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 146, in forward
"them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
```
I have mentioned the `--no_cuda` flag in the command and removed `--do_lower_case`.
Thank you for your help!<|||||>I was not able to reproduce your bug @graviraja.
I pushed an update to `run_squad_w_distillation.py` on master to include RoBERTa, let me know if that works.
As I suggested, to test without GPU, you should use the `CUDA_VISIBLE_DEVICES=""` as I believe there is an inconsistency between the `--no_cuda` flag and the `args.n_gpu = torch.cuda.device_count()`. I'll correct it.<|||||>Hi @VictorSanh still it is not working for me after pulling the latest code. I have tried with `CUDA_VISIBLE_DEVICES=""` without `--no_cuda` flag and with `--no_cuda` flag also. I am getting the below error.
```python
03/03/2020 13:26:59 - INFO - __main__ - Num examples = 135302
03/03/2020 13:26:59 - INFO - __main__ - Num Epochs = 3
03/03/2020 13:26:59 - INFO - __main__ - Instantaneous batch size per GPU = 8
03/03/2020 13:26:59 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8
03/03/2020 13:26:59 - INFO - __main__ - Gradient Accumulation steps = 1
03/03/2020 13:26:59 - INFO - __main__ - Total optimization steps = 50739
Epoch: 0%| | 0/3 [00:00<?, ?it/s] Traceback (most recent call last):
File "run_squad_w_distillation.py", line 868, in <module>
main()
File "run_squad_w_distillation.py", line 810, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)
File "run_squad_w_distillation.py", line 217, in train
outputs = model(**inputs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 708, in forward
head_mask=head_mask,
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py", line 801, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 64, in forward
input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py", line 190, in forward
token_type_embeddings = self.token_type_embeddings(token_type_ids)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range: Tried to access index 1 out of table with 0 rows. at /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
With gpu, by setting `CUDA_VISIBLE_DEVICES=1`, I am getting the following error:
```python
File "run_squad_w_distillation.py", line 868, in <module>
main()
File "run_squad_w_distillation.py", line 810, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)
File "run_squad_w_distillation.py", line 217, in train
outputs = model(**inputs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 708, in forward
head_mask=head_mask,
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py", line 808, in forward
encoder_attention_mask=encoder_extended_attention_mask,
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py", line 422, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py", line 383, in forward
self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py", line 329, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py", line 231, in forward
mixed_query_layer = self.query(hidden_states)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/functional.py", line 1372, in linear
output = input.matmul(weight.t())
RuntimeError: cublas runtime error : library not initialized at /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCGeneral.cpp:216
```
I have trained the `roberta` model on the `squad 2.0` dataset using a GPU. Does this cause an issue?
Reading the error when it's running on CPU, it looks like you have a tokenization problem (it tries to access an index that is out of range). Are you on master? Could you make sure you tokenize the dataset and run the inference with the same version? --> Add a `--overwrite_cache` flag to retokenize.<|||||>@VictorSanh the same issue is happening with `--overwrite_cache`. May I know the command you are using for training the distilroberta model?<|||||>@VictorSanh any update on this?<|||||>Updating the transformers version fixes it. |
transformers | 2,924 | closed | Update modeling_tf_utils.py | Tensorflow does not use .eval() vs .train().
closes https://github.com/huggingface/transformers/issues/2906 | 02-20-2020 09:06:38 | 02-20-2020 09:06:38 | This is great, thanks @BramVanroy !! |
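For context, a minimal illustration of the two conventions (PyTorch toggles a module-wide mode, while Keras models take a per-call `training` flag); this is just a sketch:

```python
import torch
import tensorflow as tf
from transformers import BertModel, TFBertModel

# PyTorch: dropout/layer behaviour is switched on the module itself
pt_model = BertModel.from_pretrained("bert-base-uncased")
pt_model.eval()
with torch.no_grad():
    pt_out = pt_model(torch.tensor([[101, 102]]))

# TensorFlow/Keras: the same information is passed at call time instead
tf_model = TFBertModel.from_pretrained("bert-base-uncased")
tf_out = tf_model(tf.constant([[101, 102]]), training=False)
```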
transformers | 2,923 | closed | Loading tensorflow first and then loading transformers errors | # 🐛 Bug
## Information
Model I am using: Bert
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run:
```
import tensorflow as tf
from transformers import TFBertForSequenceClassification
model = TFBertForSequenceClassification.from_pretrained('/path/to/my/tf/model/', from_pt = True)
```
Will produce the following output (with error):
```
>>> import tensorflow as tf
2020-02-20 09:36:51.035083: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-02-20 09:36:51.036337: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
>>> from transformers import TFBertForSequenceClassification
>>> model = TFBertForSequenceClassification.from_pretrained('/path/to/my/tf/model/', from_pt = True)
2020-02-20 09:36:52.226797: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-02-20 09:36:52.230595: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.231392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:36:52.231447: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:36:52.231475: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:36:52.233199: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:36:52.233465: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:36:52.234866: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:36:52.235660: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:36:52.235707: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:36:52.235845: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.236261: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.236765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:36:52.237022: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-20 09:36:52.241987: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3192500000 Hz
2020-02-20 09:36:52.242277: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xeb8bae0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:36:52.242294: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-02-20 09:36:52.435669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.436129: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xec01900 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:36:52.436153: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GRID RTX6000-24Q, Compute Capability 7.5
2020-02-20 09:36:52.436350: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.436672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:36:52.436706: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:36:52.436716: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:36:52.436744: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:36:52.436755: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:36:52.436765: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:36:52.436774: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:36:52.436781: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:36:52.436861: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.437204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.437493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:36:52.437528: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:36:52.936429: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-20 09:36:52.936466: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-02-20 09:36:52.936474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-02-20 09:36:52.936737: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.937283: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.937654: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21423 MB memory) -> physical GPU (device: 0, name: GRID RTX6000-24Q, pci bus id: 0000:02:02.0, compute capability: 7.5)
2020-02-20 09:36:54.066446: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:36:54.066688: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-02-20 09:36:54.066725: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-02-20 09:36:54.066732: W tensorflow/stream_executor/stream.cc:2041] attempting to perform BLAS operation using StreamExecutor without BLAS support
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 345, in from_pretrained
return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 93, in load_pytorch_checkpoint_in_tf2_model
tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 125, in load_pytorch_weights_in_tf2_model
tf_model(tf_inputs, training=False) # Make sure model is built
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 916, in call
outputs = self.bert(inputs, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 567, in call
encoder_outputs = self.encoder([embedding_output, extended_attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 376, in call
layer_outputs = layer_module([hidden_states, attention_mask, head_mask[i]], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 352, in call
attention_outputs = self.attention([hidden_states, attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 301, in call
self_outputs = self.self_attention([input_tensor, attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 230, in call
mixed_query_layer = self.query(hidden_states)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/layers/core.py", line 1131, in call
outputs = standard_ops.tensordot(inputs, self.kernel, [[rank - 1], [0]])
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 4106, in tensordot
ab_matmul = matmul(a_reshape, b_reshape)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 2798, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 5616, in mat_mul
_ops.raise_from_not_ok_status(e, name)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 6606, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(15, 768), b.shape=(768, 768), m=15, n=768, k=768 [Op:MatMul] name: tf_bert_for_sequence_classification/bert/encoder/layer_._0/attention/self/query/Tensordot/MatMul/
>>>
```
However, if I load transformers first and then load tensorflow, there is no problem...
(Output from console):
```
>>> from transformers import TFBertForSequenceClassification
2020-02-20 09:40:54.413603: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-02-20 09:40:54.414946: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
>>> import tensorflow as tf
>>> model = TFBertForSequenceClassification.from_pretrained('/path/to/my/tf/model/', from_pt = True)
2020-02-20 09:40:55.402943: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-02-20 09:40:55.407404: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.407771: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:40:55.407828: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:40:55.407858: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:40:55.409288: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:40:55.409560: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:40:55.410954: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:40:55.411852: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:40:55.411906: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:40:55.412020: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.412437: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.412712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:40:55.412957: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-20 09:40:55.417720: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3192500000 Hz
2020-02-20 09:40:55.417908: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5be91f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:40:55.417927: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-02-20 09:40:55.604909: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.605396: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5cc07b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:40:55.605419: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GRID RTX6000-24Q, Compute Capability 7.5
2020-02-20 09:40:55.605632: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.605947: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:40:55.605984: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:40:55.606000: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:40:55.606032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:40:55.606045: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:40:55.606058: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:40:55.606070: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:40:55.606080: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:40:55.606159: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.606493: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.606763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:41:00.803464: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-20 09:41:00.803503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-02-20 09:41:00.803509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-02-20 09:41:00.803804: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:41:00.804291: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:41:00.804643: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 20754 MB memory) -> physical GPU (device: 0, name: GRID RTX6000-24Q, pci bus id: 0000:02:02.0, compute capability: 7.5)
>>>
```
## Expected behavior
## Environment info
- `transformers` version: 2.5.0
- Platform: Linux
- Python version: 3.7.5
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?: No
| 02-20-2020 08:44:42 | 02-20-2020 08:44:42 | The import error seems to originate here:
> tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
Can you try the answers provided here:
https://stackoverflow.com/questions/38303974/tensorflow-running-error-with-cublas
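In TF2 the `allow_growth` suggestion from that thread would translate to something like this (a sketch only; it has to run before the GPU is first used):

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    # Equivalent of the old ConfigProto allow_growth option in TF 2.x
    tf.config.experimental.set_memory_growth(gpu, True)
```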
By the way, I would assume that when you actually try to run a model, the changed order would also trigger errors.<|||||>Have just tried it (had to modify the code to TF2) - ran this:
```
import tensorflow as tf
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
from transformers import TFBertForSequenceClassification
model = TFBertForSequenceClassification.from_pretrained(path_to_my_model, from_pt = True)
```
But I still got the same error....
```
>>> import tensorflow as tf
2020-02-20 10:36:32.394175: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-02-20 10:36:32.395376: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
config = tf.compat.v1.ConfigProto()
>>> config = tf.compat.v1.ConfigProto()
>>> config.gpu_options.allow_growth = True
>>> from transformers import TFBertForSequenceClassification
>>> model = TFBertForSequenceClassification.from_pretrained(path_to_my_model, from_pt = True)
2020-02-20 10:36:35.742499: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-02-20 10:36:35.746188: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:35.746521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 10:36:35.746558: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 10:36:35.746583: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 10:36:35.747935: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 10:36:35.748182: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 10:36:35.749540: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 10:36:35.750324: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 10:36:35.750367: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 10:36:35.750480: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:35.750878: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:35.751142: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 10:36:35.751382: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-20 10:36:35.759634: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3192500000 Hz
2020-02-20 10:36:35.759911: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xf1b9b70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-20 10:36:35.759927: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-02-20 10:36:35.947642: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:35.948100: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xf22f990 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-02-20 10:36:35.948123: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GRID RTX6000-24Q, Compute Capability 7.5
2020-02-20 10:36:35.948331: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:35.948676: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 10:36:35.948717: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 10:36:35.948727: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 10:36:35.948765: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 10:36:35.948779: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 10:36:35.948792: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 10:36:35.948805: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 10:36:35.948814: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 10:36:35.948896: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:35.949244: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:35.949538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 10:36:35.949581: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 10:36:36.435874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-20 10:36:36.435915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-02-20 10:36:36.435924: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-02-20 10:36:36.436177: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:36.436848: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 10:36:36.437179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21423 MB memory) -> physical GPU (device: 0, name: GRID RTX6000-24Q, pci bus id: 0000:02:02.0, compute capability: 7.5)
2020-02-20 10:36:37.545950: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 10:36:37.546193: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-02-20 10:36:37.546226: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-02-20 10:36:37.546232: W tensorflow/stream_executor/stream.cc:2041] attempting to perform BLAS operation using StreamExecutor without BLAS support
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 345, in from_pretrained
return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 93, in load_pytorch_checkpoint_in_tf2_model
tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 125, in load_pytorch_weights_in_tf2_model
tf_model(tf_inputs, training=False) # Make sure model is built
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 916, in call
outputs = self.bert(inputs, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 567, in call
encoder_outputs = self.encoder([embedding_output, extended_attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 376, in call
layer_outputs = layer_module([hidden_states, attention_mask, head_mask[i]], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 352, in call
attention_outputs = self.attention([hidden_states, attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 301, in call
self_outputs = self.self_attention([input_tensor, attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 230, in call
mixed_query_layer = self.query(hidden_states)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/layers/core.py", line 1131, in call
outputs = standard_ops.tensordot(inputs, self.kernel, [[rank - 1], [0]])
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 4106, in tensordot
ab_matmul = matmul(a_reshape, b_reshape)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 2798, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 5616, in mat_mul
_ops.raise_from_not_ok_status(e, name)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 6606, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
```
And yes - as the model fails to load, I can't run it (the object simply dosn't excist...)<|||||>I don't use Tensorflow daily (I use PyTorch), but my far-fetched guess would be that because of the loading order, in one case two TF sessions are created which both do `Created TensorFlow device` (you can see that in the trace). That might, then, cause that device to not be able to distinguish the sessions or run out of memory to allocate or something like this.
Someone else might chip in here.<|||||>Seems like a valid guess :) And thanks for giving it a try - at least it works as long as I load transformers and then tf...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,922 | closed | Tokenizer fast warnings | - Warning about padding should not trigger so often now, especially when no padding strategy is provided by the user.
- RoBERTa warning is now in RobertaTokenizer, not the GPT-2 base class. | 02-20-2020 08:42:04 | 02-20-2020 08:42:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=h1) Report
> Merging [#2922](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2922 +/- ##
==========================================
- Coverage 75.35% 75.35% -0.01%
==========================================
Files 94 94
Lines 15444 15445 +1
==========================================
Hits 11638 11638
- Misses 3806 3807 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.82% <ø> (-0.03%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.45% <100%> (-0.14%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=footer). Last update [d490b5d...6a55286](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,921 | closed | Expose all constructor parameters for BertTokenizerFast | Signed-off-by: Morgan Funtowicz <[email protected]> | 02-20-2020 08:10:05 | 02-20-2020 08:10:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=h1) Report
> Merging [#2921](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2921 +/- ##
=======================================
Coverage 75.35% 75.35%
=======================================
Files 94 94
Lines 15444 15444
=======================================
Hits 11638 11638
Misses 3806 3806
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.99% <ø> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=footer). Last update [d490b5d...21ac4a0](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,920 | closed | Error arises when using pipeline with community model | # 🐛 Bug
## Information
Model I am using is: `mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es`
Language I am using the model on: Spanish
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset:
Steps to reproduce the behavior:
```python
from transformers import *
# Build a pipeline for QA
nlp = pipeline('question-answering', model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
tokenizer='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es')
nlp(
{
'question': 'que queso es?',
'context': 'Se utilizo en el dia de hoy un queso Emmental'
}
)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
This was working two days ago.
<details>
<summary>Error log</summary>
```html
convert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]WARNING:transformers.tokenization_utils:Disabled padding because no padding token set (pad_token: [PAD], pad_token_id: 1).
To remove this error, you can add a new pad token and then resize model embedding:
tokenizer.pad_token = '<PAD>'
model.resize_token_embeddings(len(tokenizer))
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py", line 141, in squad_convert_example_to_features
truncation_strategy="only_second" if tokenizer.padding_side == "right" else "only_first",
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 1796, in encode_plus
**kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 1722, in batch_encode_plus
tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])
File "/usr/local/lib/python3.6/dist-packages/tokenizers/implementations/base_tokenizer.py", line 141, in encode
return self._tokenizer.encode(sequence, pair)
TypeError
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-21-df466dea770c> in <module>()
8 nlp({
9 'question': question,
---> 10 'context': context
11 })
12 )
11 frames
/usr/local/lib/python3.6/dist-packages/tokenizers/implementations/base_tokenizer.py in encode()
139 An Encoding
140 """
--> 141 return self._tokenizer.encode(sequence, pair)
142
143 def encode_batch(self, sequences: List[Union[str, Tuple[str, str]]]) -> List[Encoding]:
TypeError:
```
</details>
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.0
- Python version:3.6.9
- Torch version (GPU?): 1.4.0, running on CPU
| 02-20-2020 04:22:31 | 02-20-2020 04:22:31 | Hi @ankandrew,
Thanks for reporting the issue. Indeed, the QA pipeline is not compatible with fast tokenizers for technical reasons (and I'm currently working on a fix for this).
As a workaround for now, you can disable fast tokenizers when allocating the pipeline:
```python
nlp = pipeline(
'question-answering',
model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
tokenizer=(
'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
{"use_fast": False}
)
)
nlp(
{
'question': 'que queso es?',
'context': 'Se utilizo en el dia de hoy un queso Emmental'
}
)
> {'score': 0.36319364208159755, 'start': 31, 'end': 44, 'answer': 'queso Emmental'}
```<|||||>Also cc'ing @mrm8488 for information while it's in the process of being fixed<|||||>Thanks for the information!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,919 | closed | Fast tokenizers ignore `add_special_tokens=False` | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer
pretrained_model_name = "bert-base-cased"
fast_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
slow_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, use_fast=False)
text = "hello"
assert fast_tokenizer.encode(text, add_special_tokens=False) == slow_tokenizer.encode(text, add_special_tokens=False)
```
## Expected behavior
The fast tokenizers shouldn't add the special tokens if `add_special_tokens` is equal to `False`.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.0
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | 02-20-2020 00:10:06 | 02-20-2020 00:10:06 | This works now (but I can't close the issue).<|||||>I see now that it works with some changes:
```python
from transformers import AutoTokenizer
pretrained_model_name = "bert-base-cased"
fast_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name,
add_special_tokens=False, use_fast=True)
slow_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
text = "hello"
assert fast_tokenizer.encode(text) == slow_tokenizer.encode(text, add_special_tokens=False)
```
However, I see `add_special_tokens` needs to be specified differently in the fast version (init) and in the slow version (encode). Can it be made more homogeneous? I'll leave this issue open for this because the fast version still ignores it in `encode` and there's this discrepancy (maybe the slow version can be changed then).<|||||>It also doesn't work for `roberta-base`.<|||||>In the last version, available on `master` for now, we actually changed this to match the slow version. So in all cases, `add_special_tokens` should be specified with `tokenize`, `encode` etc, and not during initialization. |
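For reference, a minimal sketch of what the unified behaviour should look like (assuming a build that already includes this change, i.e. the flag is passed at call time for both tokenizers):

```python
from transformers import AutoTokenizer

# Both tokenizers now take the flag at call time, not at init time.
fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)

text = "hello"
# Both calls should omit [CLS]/[SEP] and return identical ids.
assert fast_tokenizer.encode(text, add_special_tokens=False) == \
       slow_tokenizer.encode(text, add_special_tokens=False)
```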
transformers | 2,918 | closed | Fast Tokenizers save pretrained should return the list of generated file paths. | 02-19-2020 23:47:36 | 02-19-2020 23:47:36 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=h1) Report
> Merging [#2918](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2708b44ee9c151a2cdb84620d295c997af6fa7f0?src=pr&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2918 +/- ##
==========================================
+ Coverage 75.33% 75.35% +0.01%
==========================================
Files 94 94
Lines 15444 15444
==========================================
+ Hits 11635 11638 +3
+ Misses 3809 3806 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2918/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.58% <100%> (+0.44%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=footer). Last update [2708b44...c10fcae](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,917 | closed | Breaking-change behavior in BERT tokenizer when stripping accents | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert (could happen with other ones, don't know)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import AutoTokenizer
pretrained_model_name = "bert-base-cased"
fast_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
slow_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, use_fast=False)
text = "naïve"
assert fast_tokenizer.encode(text) == slow_tokenizer.encode(text)
```
With the slow tokenizer, it only strips accents if lowercasing is enabled (maybe a bug?):
https://github.com/huggingface/transformers/blob/e67676424191e5935362e5fe7e04b5c317d706a9/src/transformers/tokenization_bert.py#L346
With the fast one, it never strips accents:
https://github.com/huggingface/tokenizers/blob/python-v0.5.0/bindings/python/tokenizers/implementations/bert_wordpiece.py#L23
https://github.com/huggingface/transformers/blob/e67676424191e5935362e5fe7e04b5c317d706a9/src/transformers/tokenization_bert.py#L557-L565
It'd be cool to have that flag in both tokenizers as well.
Finally, this warning seems odd for the simple code from above:
```pycon
>>> assert fast_tokenizer.encode(text) == slow_tokenizer.encode(text)
Disabled padding because no padding token set (pad_token: [PAD], pad_token_id: 0).
To remove this error, you can add a new pad token and then resize model embedding:
tokenizer.pad_token = '<PAD>'
model.resize_token_embeddings(len(tokenizer))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
```
Maybe the `if pad_to_max_length` check here should wrap (nest) the rest of the condition?
https://github.com/huggingface/transformers/blob/e67676424191e5935362e5fe7e04b5c317d706a9/src/transformers/tokenization_utils.py#L80-L95
Didn't check in the other transformer models.
## Expected behavior
1. The 2 tokenizer outputs (slow and fast) should be the same.
2. The tokenizers should allow you to choose whether to strip accents or not.
3. That warning shouldn't appear, IMHO.
## Environment info
- `transformers` version: 2.5.0
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | 02-19-2020 23:45:40 | 02-19-2020 23:45:40 | Yeah, I found the same problem in my code. The "encode" won't add padding even when `pad_to_max_length=True`.<|||||>Hi @bryant1410,
Thanks for reporting the issue. The parameter `strip_accents` was indeed enabled on `BertTokenizerFast`.
I've a PR exposing the missing parameters https://github.com/huggingface/transformers/pull/2921, it will land soon on master and will be included in the first maintenance release of 2.5 <|||||>I see, thanks! There's an incompatibility still though, which is that you can choose whether to strip accents in the fast tokenizers but you can't control that in the previous tokenizers. I believe this should be fixed as well.
And be aware that, IIRC, this is still a breaking change, because in the previous tokenizers you would get stripped accents by default in one way but now it seems to behave in a different way by default.
I don't know if this is also the case for the other params added in #2921, and for other models apart from BERT.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Please don't close it as this is an important issue.<|||||>Same one reported by @stefan-it, @n1t0 ?<|||||>Yes same one. Stripping accents is happening only when `do_lower_case=True` for slow tokenizers, and there is no way at the moment to change this behavior.
We can probably add an explicit option for this on slow tokenizers, and specify the default values in the configs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Don't close it!! I want to have control of stripping accents when tokenizing
transformers | 2,916 | closed | How to train a LM with a custom Dataset? | # ❓ Questions & Help
I'm attempting to build a LM following the tutorial here (https://huggingface.co/blog/how-to-train).
Unfortunately, it is incomplete. It shows how to create a custom `Dataset` but not how to execute `run_language_modeling.py` so that it is used.
**Any chance we can get the full script for training the LM, including how to specify our custom dataset?**
| 02-19-2020 23:07:28 | 02-19-2020 23:07:28 | This is in process of being addressed at huggingface/blog#3
(You'll need to tweak the code of `run_language_modeling.py`, this is not – yet – a code-free tutorial) |
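For illustration only, a minimal sketch of the kind of `Dataset` you would wire in (the class name, file path and block size here are made up; the hook point is roughly where the script builds its own `TextDataset`/`LineByLineTextDataset`):

```python
import torch
from torch.utils.data import Dataset

class MyLineByLineDataset(Dataset):
    """Hypothetical stand-in for the dataset built inside run_language_modeling.py."""

    def __init__(self, tokenizer, file_path, block_size=128):
        with open(file_path, encoding="utf-8") as f:
            lines = [line for line in f.read().splitlines() if line.strip()]
        # Tokenize each line and truncate to block_size token ids.
        self.examples = [
            tokenizer.encode(line, max_length=block_size) for line in lines
        ]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        return torch.tensor(self.examples[i], dtype=torch.long)
```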
transformers | 2,915 | closed | How to train with variable number of candidates for multiple choice selection? | # ❓ Questions
## Details
I am trying to train `GPT2DoubleHeadsModel` for the tasks of generation and multiple-choice selection. My dataset has examples with a variable number of candidates - some examples have 10 candidates, some have 15, etc.
I wanted to be able to create a single `TensorDataset` object for my dataset and train the model using a `DataLoader` wrapped around this dataset. But clearly, since the number of candidates varies across examples, I am unable to do so.
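The only workaround I can think of is to pad every example to a common number of candidates before batching, roughly like this (all names here are hypothetical, just to illustrate the shape the model expects):

```python
import torch

# Hypothetical sketch: `examples` is a list where each element is itself a list
# of tokenized candidates (lists of token ids) whose count varies per example.
def pad_candidates(examples, pad_token_id, max_candidates, max_len):
    batch = torch.full(
        (len(examples), max_candidates, max_len), pad_token_id, dtype=torch.long
    )
    for i, candidates in enumerate(examples):
        for j, ids in enumerate(candidates[:max_candidates]):
            ids = ids[:max_len]
            batch[i, j, : len(ids)] = torch.tensor(ids, dtype=torch.long)
    return batch  # shape (batch, num_candidates, seq_len)
```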
What is an appropriate way (or best practice) to train `GPT2DoubleHeadsModel` with such a dataset? | 02-19-2020 22:27:39 | 02-19-2020 22:27:39 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,914 | closed | Add syntax highlighting to the BibTeX in README | 02-19-2020 21:49:41 | 02-19-2020 21:49:41 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=h1) Report
> Merging [#2914](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e67676424191e5935362e5fe7e04b5c317d706a9?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2914 +/- ##
=======================================
Coverage 75.32% 75.32%
=======================================
Files 94 94
Lines 15438 15438
=======================================
Hits 11629 11629
Misses 3809 3809
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=footer). Last update [e676764...5574050](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,913 | closed | make RobertaForMaskedLM implementation identical to fairseq | closes https://github.com/huggingface/transformers/issues/1874
The implementation of RoBERTa in `transformers` differs from the original implementation in [fairseq](https://github.com/pytorch/fairseq/tree/master/fairseq/models/roberta), as results showed (cf. https://github.com/huggingface/transformers/issues/1874). I have documented my findings here https://github.com/huggingface/transformers/issues/1874#issuecomment-588359143 and made the corresponding changes accordingly in this PR.
Someone should check, however, that removing `get_output_embeddings()` does not have any adverse side-effects. | 02-19-2020 20:30:08 | 02-19-2020 20:30:08 | Awesome. Could you check if the existing @slow tests break for Roberta, and add a new one that hardcodes the fairseq logits from your example and makes sure we also return them. Trying to avoid accidental breakage. Thanks again! <|||||>@sshleifer Not sure how I would hardcode a tensor of size 1, 12, 50265. Can I just add a small pickled file to the test instead?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=h1) Report
> Merging [#2913](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `85.71%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2913 +/- ##
==========================================
- Coverage 75.3% 75.29% -0.01%
==========================================
Files 94 94
Lines 15424 15424
==========================================
- Hits 11615 11614 -1
- Misses 3809 3810 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.3% <85.71%> (-0.47%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=footer). Last update [59c23ad...51514a4](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Sorry for the close, had to do some rebasing. |
transformers | 2,912 | closed | Override build_inputs_with_special_tokens for fast tokenizers | Signed-off-by: Morgan Funtowicz <[email protected]> | 02-19-2020 20:25:44 | 02-19-2020 20:25:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=h1) Report
> Merging [#2912](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2912 +/- ##
==========================================
+ Coverage 75.3% 75.32% +0.01%
==========================================
Files 94 94
Lines 15424 15438 +14
==========================================
+ Hits 11615 11628 +13
- Misses 3809 3810 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.14% <100%> (+0.05%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.99% <100%> (+0.06%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.15% <0%> (-0.18%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=footer). Last update [59c23ad...3b7752b](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,911 | closed | missing "para" attribute in ARC dataset for multiple choice question answering model | # 🐛 Bug
## Information
Model I am using Roberta.
Language I am using the model on (English)
The problem arises when using:
* [ ] the official example scripts: (give details below)
**https://github.com/huggingface/transformers/blob/master/examples/utils_multiple_choice.py**
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
Multiple choice question answering
## To reproduce
Steps to reproduce the behavior:
1. run_multiple_choice.py
with parameters as specified in the documentation, replacing the task name with arc
In the data, there is no such parameter called "para"
contexts=[
options[0]["para"].replace("_", ""),
options[1]["para"].replace("_", ""),
options[2]["para"].replace("_", ""),
options[3]["para"].replace("_", ""),
],
| 02-19-2020 18:50:31 | 02-19-2020 18:50:31 | Got it.
This parameter is for the context.<|||||>May I know how you solved this problem? I just ran into it.<|||||>You will have to add a "para" field for every choice - this is for adding knowledge. To get a baseline you can simply use a dummy text in that field
`{
"id": "MCAS_2000_4_6",
"question": {
"stem": "Which technology was developed most recently?",
"choices": [
{
"text": "cellular telephone",
"label": "A",
"para": "fetched knowledge"
},
{
"text": "television",
"label": "B",
"para": "fetched knowledge"
},
{
"text": "refrigerator",
"label": "C",
"para": "fetched knowledge"
},
{
"text": "airplane",
"label": "D",
"para": "fetched knowledge"
}
]
},
"answerKey": "A"
}
` |
transformers | 2,910 | closed | `PreTrainedTokenizerFast.build_inputs_with_special_tokens` doesn't add the special tokens | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BertTokenizer (but seems to apply to most)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
tokenizer.build_inputs_with_special_tokens(["abc", "def"])
```
The output is `['abc', 'def']`.
## Expected behavior
The output should include the `[CLS]` and `[SEP]` tokens.
The problem is neither `PreTrainedTokenizerFast` nor its subclasses override `build_inputs_with_special_tokens`.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.0
- Platform: Linux
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0, w/o GPU
- Tensorflow version (GPU?): -
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No. | 02-19-2020 18:22:00 | 02-19-2020 18:22:00 | Hi @bryant1410,
Thanks for reporting the issue, as a workaround for now, can you try the following:
```python
tokenizer.tokenize("abc")
tokenizer.tokenize("def")
```
It should do the same, let me know.
In the meantime I'll have a closer look at the function `tokenizer.build_inputs_with_special_tokens`<|||||>Should be fixed in https://github.com/huggingface/transformers/pull/2912 |
transformers | 2,909 | closed | Add slow generate tests for pretrained lm models | Move implementation of slow hardcoded generate models to this PR from. Checkout previous discussion in PR #2885 | 02-19-2020 17:23:23 | 02-19-2020 17:23:23 | > Please incorporate my comments on this file in the other PR :)
is on my radar :-) <|||||>UPDATE: Changed slow tests for language generation design according to discussion in PR #2885 .
If this looks alright, I'll add test cases for the other LMModels @LysandreJik & @sshleifer <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=h1) Report
> Merging [#2909](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f5fe9e0277df67a01db80a1c640ac072a2381e?src=pr&el=desc) will **decrease** coverage by `1.03%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2909 +/- ##
==========================================
- Coverage 77.16% 76.12% -1.04%
==========================================
Files 98 98
Lines 15997 15997
==========================================
- Hits 12344 12178 -166
- Misses 3653 3819 +166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.43% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=footer). Last update [38f5fe9...0c5bdef](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Updated the language model generation slow tests following Roberta's and Bart's Integration Test style. What do you think? @LysandreJik @sshleifer <|||||>Finished to add hard-coded tests for all models with LMHead: `GPT2, OpenAI, XLNet, TransfoXL, CTRL and XLM.`
All pretrained models generate reasonable results **except** `XLM`. Might need to take a closer look in a future PR.
Also future PRs TODO:
- [ ] Add hardcoded tests for seq-to-seq language generation
- [ ] Add hardcoded tests for DoubleHead language generation<|||||>Don't feel strongly, but I would consider deleting the XLM example so that the tests don't enforce bad generations.
Also, if it is possible to make shorter examples (or more than one token per line), it would make the code more readable.
Overall, I love this. Makes me feel a lot safer editing generation code!<|||||>We could also add a sentence to `Contributing.md` telling people who change the generation code which command to run to make sure they didn't break stuff
|
transformers | 2,908 | closed | Model I am using Roberta | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 02-19-2020 15:06:47 | 02-19-2020 15:06:47 | |
transformers | 2,907 | closed | Help needed with interpretation of the MLP class | Hello,
I am having some trouble understanding the MLP function used in the Hugging Face GPT-2, which is found [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L204).
Q1. For MLP, why are we setting the n_state to be equal to 3072, which is 4 * n_embd?
Q2. Below is the definition for the MLP class:
```python
class MLP(nn.Module):
def __init__(self, n_state, config): # in MLP: n_state=3072 (4 * n_embd)
super().__init__()
nx = config.n_embd
self.c_fc = Conv1D(n_state, nx)
self.c_proj = Conv1D(nx, n_state)
self.act = gelu_new
self.dropout = nn.Dropout(config.resid_pdrop)
```
in the MLP definition above, what exactly do the lines ``` Conv1D(n_state, nx)``` (the object ```self.c_fc```), and ``` Conv1D(nx, n_state)``` (the object ```self.c_proj```) do?
Thank you, | 02-19-2020 13:44:07 | 02-19-2020 13:44:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>h56cho: I'm not sure if you ask about the code or the algorithm.
As far as I understand from the code, the class MLP is a basic 2-layer neural network (2 consecutive conv1d + gelu activation). This will be used to construct a bigger [network](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L215). I hope this (partially) answers your Q2 question.
Q1. For the code, I think the comment is related to the number 3072 mentioned in this [link](https://jalammar.github.io/illustrated-gpt2/)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,906 | closed | documentation for TF models mentions non-existent methods | Documentation of the `TFPreTrainedModel.from_pretrained` method mentions the `.train()` and `.eval()` methods, which are not defined for TensorFlow models:
> The model is set in evaluation mode by default using ``model.eval()`` (Dropout modules are deactivated)
> To train the model, you should first set it back in training mode with ``model.train()``
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L195
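For what it's worth, the Keras idiom would presumably be to pass the `training` flag at call time rather than toggling a mode on the model, e.g. (a sketch, not the documented API):

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tf.constant([tokenizer.encode("Hello world")])
outputs = model(inputs, training=False)  # dropout disabled, like .eval() in PyTorch
```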
| 02-19-2020 13:44:01 | 02-19-2020 13:44:01 | |
transformers | 2,904 | closed | squad_convert_example_to_features does not work with CamembertTokenizer | # 🐛 Bug
## Information
Model I am using : CamemBERT
Language I am using the model on : French
The problem arises when using:
* [*] my own modified scripts: (give details below)
The tasks I am working on is:
* [*] an official GLUE/SQUaD task: SQUaD
## To reproduce
Steps to reproduce the behavior:
1 - Copy paste this and run it
```python
from transformers import CamembertTokenizer, SquadExample, squad_convert_examples_to_features
tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
example = SquadExample(
'example_id',
"Q",
"C D E F G H",
"D",
2,
"title"
)
features, _ = squad_convert_examples_to_features(
examples=[
example
],
tokenizer=tokenizer,
max_seq_length=30,
doc_stride=128,
max_query_length=128,
is_training=True,
return_dataset="pt",
threads=1,
)
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(example.question_text, example.context_text))
doc_token = example.doc_tokens
print({tokens[k]: doc_token[v] for k, v in features[0].token_to_orig_map.items()})
# Outputs
# {'</s>': 'C', '▁C': 'D', '▁D': 'E', '▁E': 'F', '▁F': 'G', '▁G': 'H'}
# Should be
# {'▁C': 'C', '▁D': 'D', '▁E': 'E', '▁F': 'F', '▁G': 'G', '▁H': 'H'}
```
## Expected behavior
The resulting features mapping is shifted by one when using the CamembertTokenizer.
This seems to be caused by a weird check in the method `squad_convert_example_to_features`: the condition `if "roberta" in str(type(tokenizer))` evaluates to False when using the CamembertTokenizer (which is adapted from RobertaTokenizer).
When I patch the line `if "roberta" in str(type(tokenizer))` by `if "roberta" in str(type(tokenizer)) or "camembert" in str(type(tokenizer))`, I get the expected behavior.
I do not really know what would be the best way to handle this problem.
## Environment info
- `transformers` version: 2.4.1
- Platform: MacOS
- Python version: 3.7.6
- PyTorch version : 1.4
- Tensorflow version : None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-19-2020 13:03:12 | 02-19-2020 13:03:12 | Solved by #2746 |
transformers | 2,903 | closed | Update to include example of LM | The model files have been updated in order to include the classification layers, based on https://github.com/huggingface/transformers/issues/2901, and now can be also used as a LM. | 02-19-2020 11:26:28 | 02-19-2020 11:26:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=h1) Report
> Merging [#2903](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20fc18fbda3669c2f4a3510e0705b2acd54bff07?src=pr&el=desc) will **decrease** coverage by `1.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2903 +/- ##
==========================================
- Coverage 75% 73.92% -1.08%
==========================================
Files 94 94
Lines 15288 15288
==========================================
- Hits 11467 11302 -165
- Misses 3821 3986 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=footer). Last update [20fc18f...afa57d9](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>[Looks good!](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1)<|||||>@julien-c Could you perhaps shed some light on how the TF checkpoints should be uploaded? @iliaschalkidis asked about it here https://github.com/huggingface/transformers/issues/2901#issuecomment-588163505<|||||>In the past for another project, this (`from_pt=True`) did the dirty trick for me:
```python
bert = BERT.from_pretrained(model_path+'pytorch_model.bin',
from_pt=True,
config=BertConfig().from_pretrained(model_path+'config.json'))
```
but I definitely do not recommend this...
I have already uploaded the TF checkpoint files (`model_ckpt.data-00000-of-00001`, `model_ckpt.index`, `model_ckpt.meta`) in the model's folder, so please feel free to troubleshoot.<|||||>see https://github.com/huggingface/transformers/issues/2901#issuecomment-591710959
transformers | 2,902 | closed | Convert BERT to RoBERTa | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
Given that RoBERTa outperformed BERT on several tasks while having only a slight architecture modification, I want to know whether it is possible to convert a pretrained BERT model to RoBERTa.
I am working with a BERT model pretrained on a domain-specific corpus. Given that I don't have the resources to train RoBERTa from scratch, I want to know if I can convert the model to RoBERTa. If yes, how do I go about it? | 02-19-2020 10:27:18 | 02-19-2020 10:27:18 | The differences between the BERT and RoBERTa models are the following:
- Different pre-training (larger batch size for RoBERTa, no NSP, no token type ids ...)
- Different tokenizer
The model architecture is exactly the same. The only real difference after the pre-training is the difference in tokenization. Since you're working with a BERT model that was pre-trained, you unfortunately won't be able to change the tokenizer now from a WordPiece (BERT) to a Byte-level BPE (RoBERTa). |
transformers | 2,901 | closed | Pre-trained BERT-LM missing LM Head - returns random token predictions | # 🐛 Bug
## Information
I released Greek BERT almost a week ago, and so far I'm exploring its use by running some benchmarks on Greek datasets. Although Greek BERT works just fine for sequence tagging (`AutoModelForTokenClassification`) and text classification (`AutoModelForSequenceClassification`), there are issues when we try to use it as a Language Model (`AutoModelWithLMHead`) in order to predict masked tokens. The bug was originally reported in (https://github.com/nlpaueb/greek-bert/issues/1) by @jzbjyb.
## To reproduce
- `transformers` version:
- Platform: Linux / Mac OS
- Python version: 3.7
- PyTorch version (GPU?): 1.0.1
- Tensorflow version (GPU?): 2.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
The model has been trained using the official BERT release (https://github.com/google-research/bert) originally converted from Tensorflow checkpoints using the library script:
```
python transformers/convert_bert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path /home/ichalkidis/greek_bert/variables --bert_config_file=/home/ichalkidis/greek_bert/config.json --pytorch_dump_path=/home/ichalkidis/greek_bert/pytorch_model.bin
```
and then exported accompanied by the tokenizer files using:
```python
from transformers import BertModel
from transformers import BertConfig, BertTokenizer
model_path = '/home/ichalkidis/greek_bert/'
bert = BertModel.from_pretrained(model_path + 'pytorch_model.bin',
config=BertConfig().from_pretrained(model_path + 'config.json'))
bert.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/')
tokenizer = BertTokenizer.from_pretrained(model_path+'vocab.txt')
tokenizer.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/')
```
You can replicate the inconsistent behaviour of the LM with the following script:
```python
import torch
from transformers import *
text = 'Είναι ένας [MASK] άνθρωπος.'
tokenizer_greek = AutoTokenizer.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')
lm_model_greek = AutoModelWithLMHead.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')
input_ids = tokenizer_greek.encode(text)
print(tokenizer_greek.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', '.', '[SEP]']
outputs = lm_model_greek(torch.tensor([input_ids]))[0]
print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 3].max(0)[1].item()))
# the most plausible prediction for [MASK] is changing in every single run
```
It is obvious that the LM head (layer) is missing from `pytorch_model.bin`. My main questions are:
* How can we preserve this layer when moving from the TF checkpoint to the final PyTorch model?
* Is it possible to serve the model both in Pytorch and TF binaries? | 02-19-2020 09:54:26 | 02-19-2020 09:54:26 | My guess would be that you'd need to load (and then save) with `BertForPreTraining` rather than `BertModel`.
https://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/modeling_bert.py#L806<|||||>@BramVanroy you're my hero for today! Appreciated man!
Test Cases:
```python
import torch
from transformers import *
model_path = '/Users/kiddothe2b/Downloads/bert-base-greek-uncased-v2'
tokenizer_greek = AutoTokenizer.from_pretrained(model_path)
lm_model_greek = AutoModelWithLMHead.from_pretrained(model_path)
# ================ EXAMPLE 1 ================
text_1 = 'O ποιητής έγραψε ένα [MASK] .'
# EN: The poet wrote a [MASK] . '
input_ids = tokenizer_greek.encode(text_1)
print(tokenizer_greek.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'o', 'ποιητης', 'εγραψε', 'ενα', '[MASK]', '.', '[SEP]']
outputs = lm_model_greek(torch.tensor([input_ids]))[0]
print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 5].max(0)[1].item()))
# the most plausible prediction for [MASK] is "song"
# ================ EXAMPLE 2 ================
text_2 = 'Είναι ένας [MASK] άνθρωπος.'
# EN: He is a [MASK] person. '
input_ids = tokenizer_greek.encode(text_1)
print(tokenizer_greek.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', '.', '[SEP]']
outputs = lm_model_greek(torch.tensor([input_ids]))[0]
print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 3].max(0)[1].item()))
# the most plausible prediction for [MASK] is "good"
# ================ EXAMPLE 3 ================
text_3 = 'Είναι ένας [MASK] άνθρωπος και κάνει συχνά [MASK].'
# EN: He is a [MASK] person he does frequently [MASK]. '
input_ids = tokenizer_greek.encode(text_3)
print(tokenizer_greek.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', 'και', 'κανει', 'συχνα', '[MASK]', '.', '[SEP]']
outputs = lm_model_greek(torch.tensor([input_ids]))[0]
print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 8].max(0)[1].item()))
# the most plausible prediction for the second [MASK] is "trips"
```
I will release the updated version later today. Although I think this is a very important technical detail and it needs to be mentioned in the examples.
Is it possible to load the same saved model with the TF2 bindings?
<|||||>Tensorflow checkpoints can be loaded when using `from_pretrained`. Have a look at the documentation, particularly this line:
https://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/modeling_utils.py#L317<|||||>Ok, so it is also suggested to upload the initial TF checkpoint files [ `variables.data-00000-of-00001`, `variables.index`, `variables.meta`] through the CLI, otherwise people cannot use the model in TF2 with `AutoModelWith.from_pretrained()`?
<|||||>I am not sure to be honest. Perhaps someone else can help you further along.<|||||>In the past for another project, this (`from_pt=True`) did the dirty trick for me:
```python
bert = BERT.from_pretrained(model_path+'pytorch_model.bin',
from_pt=True,
config=BertConfig().from_pretrained(model_path+'config.json'))
```
but I definitely do not recommend this...
I have already uploaded the TF checkpoint files (`model_ckpt.data-00000-of-00001`, `model_ckpt.index`, `model_ckpt.meta`) in the model's folder, so please feel free to troubleshoot.<|||||>Looks like you managed to do it in the meantime?
**For reference here's how to do it:** starting with version 2.5.0 thanks to this commit https://github.com/huggingface/transformers/pull/2765/commits/961c69776f8a2c95b92407a086848ebca037de5d
You can now just do
```python
tf_model = TFAutoModelForPreTraining.from_pretrained(
"nlpaueb/bert-base-greek-uncased-v1",
from_pt=True
)
tf_model.save_pretrained(dirname)
```
**And then you can upload the TF weights using the CLI.**
Of course, if you have the model in a local folder, you don't need to use the remote id.
cc @LysandreJik <|||||>> Looks like you managed to do it in the meantime?
>
> **For reference here's how to do it:** starting with version 2.5.0 thanks to this commit [961c697](https://github.com/huggingface/transformers/commit/961c69776f8a2c95b92407a086848ebca037de5d)
>
> You can now just do
>
> ```python
> tf_model = TFAutoModelForPreTraining.from_pretrained(
> "nlpaueb/bert-base-greek-uncased-v1",
> from_pt=True
> )
> tf_model.save_pretrained(dirname)
> ```
>
> Of course, if you have the model in a local folder, you don't need to use the remote id.
>
> cc @LysandreJik
Does that imply that the CLI upload should only upload PyTorch checkpoints? (I suppose for saving space.) I am asking because the documentation emphasises that loading PT checkpoints to TF and the other way around is quite slow. Also, if all checkpoints are indeed PyTorch only, it might be useful to set from_pt=True automatically when a model is fetched from the HF bucket (since those would then all contain PT checkpoints anyway).<|||||>Thanx @julien-c and @BramVanroy
Indeed I followed a very similar process:
```python
from transformers import TFBertForPreTraining
from transformers import BertConfig
model_path = '/home/ichalkidis/greek_bert/'
bert = TFBertForPreTraining.from_pretrained(model_path + 'pytorch_model.bin',
config=BertConfig().from_pretrained(model_path + 'config.json'), from_pt=True)
bert.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/')
```
From now on, we also serve the `tf_model.h5` and everyone will be able to load the model in TF2 without any further issue.<|||||>Ah, it seems that I misunderstood, then. In terms of the models themselves, you can upload the tf_model.h5 and the pytorch_model.bin, and when someone requests `/home/ichalkidis/bert-base-greek-uncased-v1/` based on the framework (TF or PT), the appropriate model (.h5 or .bin) is downloaded?<|||||>@BramVanroy, yes that's how it works! You can also explicitely specify `from_pt`/`from_tf` to `from_pretrained` for the model to fetch the other framework's checkpoint and convert it.<|||||>Yep @BramVanroy , I added the line "And then you can upload the TF weights using the CLI." to my comment above to try and make that clearer<|||||>I guess we can close this issue now @iliaschalkidis?<|||||>Of course @julien-c . In the end, this was just a misunderstanding, not a bug at all. Thank all of you for your help! |
transformers | 2,900 | closed | pull from original | 02-19-2020 08:39:22 | 02-19-2020 08:39:22 | ||
transformers | 2,899 | closed | RobertaTokenizer different than fairseq for 'world' | `pip install fairseq`
```
roberta = torch.hub.load('pytorch/fairseq', 'roberta.base')
rt = RobertaTokenizer.from_pretrained('roberta-base')
for ex in ['Hello world', ' Hello world', ' world', 'world', 'Hello', ' Hello']:
print(f'{ex} fairseq: {roberta.encode(ex).tolist()}, Transformers: {rt.encode(ex, add_prefix_space=True)}')
>>>
Hello world fairseq: [0, 31414, 232, 2], Transformers: [0, 20920, 232, 2]
Hello world fairseq: [0, 20920, 232, 2], Transformers: [0, 20920, 232, 2]
world fairseq: [0, 232, 2], Transformers: [0, 232, 2]
world fairseq: [0, 8331, 2], Transformers: [0, 232, 2]
Hello fairseq: [0, 31414, 2], Transformers: [0, 20920, 2]
Hello fairseq: [0, 20920, 2], Transformers: [0, 20920, 2]
```
Notice that even the token "world" is different, but the results are always the same with leading spaces.
Is this related to @joeddav's recent work?
h/t @pnpnpn for uncovering:) | 02-19-2020 03:41:03 | 02-19-2020 03:41:03 | Reading more, pretty sure we only expect to have the same results as fairseq when the argument to fairseq starts with a space. Closing but would love verification/knowledge !<|||||>Yes, this comes from #2778, which changes the default behavior to automatically prepending a space when `add_special_tokens=True` for Roberta, since you want a space after `<s>`. Can be overriden with `add_prefix_space=False`. This does deviate from fairseq's encode fn, ~~but reflects the behavior of their `fill_mask` [which also prepends a space](https://github.com/pytorch/fairseq/blob/master/fairseq/models/roberta/hub_interface.py#L149).~~ Nvm, fairseq's `fill_mask` function doesn't prepend a space after all. They expect the user to know that they have to prepend a space to get correctly encoded sequences. |
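A small sketch of that override (the ids in the comments are the fairseq values quoted above, which this should reproduce; with `add_prefix_space=False` you are responsible for the leading space yourself, exactly as with fairseq's `encode()`):

```python
from transformers import RobertaTokenizer

rt = RobertaTokenizer.from_pretrained("roberta-base")

print(rt.encode("world", add_prefix_space=False))   # expected [0, 8331, 2], like fairseq "world"
print(rt.encode(" world", add_prefix_space=False))  # expected [0, 232, 2], like fairseq " world"
```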
transformers | 2,898 | closed | Language modeling example script missing the next sentence prediction | The example script in `run_language_modeling.py` does not include the next sentence prediction for pre-training BERT. I was wondering if that is a) an oversight, b) for simplicity, or c) because you have found its impact to be non-significant? | 02-19-2020 00:31:52 | 02-19-2020 00:31:52 | It is both b) and c) :).<|||||>Hello @LysandreJik and @BramVanroy
Did you get any results training run_language_modeling.py for some language from scratch (I mean with and without NSP (next sentence prediction), as in that script)?
Did you get better or relatively the same losses (and perplexity, accuracy) by first doing MLM then
NSP ?
what were your warmup steps and block size (maximum sequence length) for each (MLM, NSP) task, if you have done them separately? <|||||>as @LysandreJik mentioned here
https://github.com/huggingface/transformers/issues/2693#issuecomment-589819382
"the RoBERTa paper has proven that the NSP objective was not particularly helpful"
Is that also right for BERT ? (training for some language from scratch)
<|||||>If it is right , I think we can set block size (maximum sequence length) equal to 64 for MLM
Because the original paper used 128 (except for 10% of the last steps) , and it had two sentences (A and B for the sake of NSP and model learns that it shouldn't see B for filling masked words in A, because A and B aren't relevant to each other half of the times) which means average block size of 64 for each one
And that means a great speed-up in training BERT, if I'm right,
because, I haven't got a TPU or even enough GPU to do that
As mentioned in original paper of BERT "Longer sequences are disproportionately expensive because attention is quadratic to the sequence length"<|||||>Perhaps, another idea is that
words in sentence A do attention on sentence B, anyway
and that attention is very important to get great results in MLM task (here I mean block size of 128 which means 64 for A and another 64 for B)
even regarding the fact that A and B are related sentences, just half of the times
(and that attentions are the main inputs for task NSP)
And by using block size of 64 (and not doing task NSP during the training) , I will get very bad results |
transformers | 2,897 | closed | save_pretrained doesn't work with GPT2FastTokenizer | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2TokenizerFast
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
> from transformers import *
> tok = GPT2TokenizerFast.from_pretrained('distilgpt2')
> tok.save_pretrained('./')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bilal/Documents/transformers/src/transformers/tokenization_utils.py", line 519, in save_pretrained
vocab_files = self.save_vocabulary(save_directory)
File "/Users/bilal/Documents/transformers/src/transformers/tokenization_utils.py", line 529, in save_vocabulary
raise NotImplementedError
NotImplementedError
```
## Expected behavior
The tokenizer should be able to be saved
## Environment info
- `transformers` version: 2.4.1
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | 02-18-2020 23:30:53 | 02-18-2020 23:30:53 | After upgrading to 2.5.0, the code now throws
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bilal/Documents/transformers/src/transformers/tokenization_utils.py", line 587, in save_pretrained
return vocab_files + (special_tokens_map_file, added_tokens_file)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'tuple'
```<|||||>Hi @bkkaggle ,
Thanks for reporting the issue, it should be fixed through https://github.com/huggingface/transformers/pull/2918 and land very soon on master.
It'll be included in the first maintenance release following 2.5.
Morgan<|||||>Saving tokenizers works now, but restoring them doesn't
```
> from transformers import *
> tok = GPT2TokenizerFast.from_pretrained('distilgpt2')
> tok.save_pretrained('./')
('./vocab.json-vocab.json', './vocab.json-merges.txt', './special_tokens_map.json', './added_tokens.json')
> tok = GPT2TokenizerFast.from_pretrained('./')
RobertaTokenizerFast has an issue when working on mask language modeling where it introduces an extra encoded space before the mask token.See https://github.com/huggingface/transformers/pull/2778 for more information.
> tok.tokenize('test')
[]
```<|||||>I can't reproduce on my side, using your code:
```python
tok = GPT2TokenizerFast.from_pretrained('distilgpt2')
tok.save_pretrained('./')
> ('./vocab.json-vocab.json', './vocab.json-merges.txt', './special_tokens_map.json', './added_tokens.json')
tok = GPT2TokenizerFast.from_pretrained('./')
tok.tokenize('test')
> ['test']
```<|||||>I made a colab notebook to reproduce the error
The error appears when installing from source on the master branch
colab: https://colab.research.google.com/drive/1OJdm6LzVtyb-biVR1ky6joSX7gBgl6St<|||||>Thanks, I'm able to reproduce now, I'll have a look hopefully tomorrow morning.
I'll keep you posted here 👀 <|||||>Fixed |
transformers | 2,896 | closed | 'BertModel' object missing 'save_pretrained' attribute | I was attempting to download a pre-trained BERT model & save it to my cloud directory using Google Colab.
model.save_pretrained() seems to be missing completely for some reason.
Link to Colab notebook: https://colab.research.google.com/drive/1ix_nNhsd89nLfTy6Nyh-Ak8PHn1SYm-0
Here's my code:
```
!pip install pytorch_pretrained_bert
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel
import pandas as pd
import numpy as np
### Let's download a model and tokenizer
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
### Now let's save our model and tokenizer to a directory
model.save_pretrained('./models/')
tokenizer.save_pretrained('./models/')
```
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-1a3b2c8b8e82> in <module>()
11
12 ### Now let's save our model and tokenizer to a directory
---> 13 model.save_pretrained('./models/')
14 tokenizer.save_pretrained('./models/')
15
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
574 return modules[name]
575 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 576 type(self).__name__, name))
577
578 def __setattr__(self, name, value):
```
AttributeError: 'BertModel' object has no attribute 'save_pretrained'
 | 02-18-2020 21:35:52 | 02-18-2020 21:35:52 | Hi, `save_pretrained` only appeared in a more recent version of the library. `pytorch-pretrained-BERT` is a year old, is less robust and lacks certain functionalities (such as the one you mentioned) which are present in `transformers`.
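For reference, the same workflow with the `transformers` package, where `save_pretrained` is available, looks roughly as follows (a sketch; the save directory is just an example):
```python
from transformers import BertModel, BertTokenizer

# Download a pretrained model and tokenizer, then save both to a local directory.
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

model.save_pretrained('./models/')
tokenizer.save_pretrained('./models/')
```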
transformers | 2,895 | closed | Enable 'from transformers import AlbertMLMHead' | Discussed at https://github.com/huggingface/transformers/issues/2894
I'm writing a custom pretraining script that incorporates both the masked language modeling (MLM) and sentence order prediction (SOP) objectives. I'm able to use the TFAlbertForMaskedLM model for the MLM objective, but need access to the last_hidden_state to write my SOP objective. I can do this if I have a raw TFAlbertModel and write my own MLM objective, but would prefer to just create my own model from pre-existing modularized components.
I know that there's a lot of care taken in API design, and the team may have explicitly decided against this. But if it is an option, it would make the transformers repo much more extensible for research. | 02-18-2020 18:54:02 | 02-18-2020 18:54:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=h1) Report
> Merging [#2895](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ae98336d17fceea7506af9880b862b6252a38f6?src=pr&el=desc) will **decrease** coverage by `1.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2895 +/- ##
==========================================
- Coverage 75.06% 73.98% -1.08%
==========================================
Files 94 94
Lines 15288 15288
==========================================
- Hits 11476 11311 -165
- Misses 3812 3977 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=footer). Last update [2ae9833...014ad24](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>(I removed myself because GitHub suggests me as a reviewer on every PR because of a refactoring, and I can't review every PR, not because this is a bad idea. The PR is most likely good.) |
transformers | 2,894 | closed | Allow import of model components, e.g. `from transformers import TFAlbertMLMHead` | # 🚀 Feature request
Expose model components such as custom layers like `TFAlbertMLMHead`, that are created in `modeling_tf_albert.py` and others.
## Motivation
I'm writing a custom pretraining script that incorporates both the masked language modeling (MLM) and sentence order prediction (SOP) objectives. I'm able to use the `TFAlbertForMaskedLM` model for the MLM objective, but need access to the `last_hidden_state` to write my SOP objective. I can do this if I have a raw `TFAlbertModel` and write my own MLM objective, but would prefer to just create my own model from pre-existing modularized components.
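For context, the kind of custom model this would enable looks roughly like the sketch below (the SOP head here is an illustrative Dense layer, not an existing library component):
```python
# A rough sketch of a pretrained ALBERT encoder with a custom sentence-order-prediction head.
import tensorflow as tf
from transformers import TFAlbertModel

class AlbertWithSOPHead(tf.keras.Model):
    def __init__(self, pretrained_name="albert-base-v2"):
        super().__init__()
        self.albert = TFAlbertModel.from_pretrained(pretrained_name)
        self.sop_classifier = tf.keras.layers.Dense(2)  # in-order vs. swapped

    def call(self, inputs):
        outputs = self.albert(inputs)
        pooled_output = outputs[1]  # pooled [CLS]-style representation
        return self.sop_classifier(pooled_output)
```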
## Your contribution
I can contribute this, it would just be modifying `__init__.py`. I know that there's a lot of care taken in API design, and the team may have explicitly decided against this. But if it is an option, it would make the transformers repo much more extensible for research.
| 02-18-2020 18:41:14 | 02-18-2020 18:41:14 | Why don't you import it directly via
`from transformers.modeling_tf_albert import TFAlbertMLMHead`
?
In my opinion it is not good practice to expose everything via `__init__.py` because the autocomplete feature of an IDE will become messy.<|||||>That's true, I don't know why that slipped my mind. Thanks for the suggestion!
transformers | 2,893 | closed | Pipeline Loading Models and Tokenizers | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi, I'm trying to use 'fmikaelian/flaubert-base-uncased-squad' for question answering. I understand that I should load the model and the tokenizer, but I'm not sure how I should do this.
My code so far is basically:
```python
from transformers import pipeline, BertTokenizer

nlp = pipeline('question-answering',
               model='fmikaelian/flaubert-base-uncased-squad',
               tokenizer='fmikaelian/flaubert-base-uncased-squad')
```
Most probably this can be solved with a two-liner.
Many thanks
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/60287465/pipeline-loading-models-and-tokenizers | 02-18-2020 18:40:40 | 02-18-2020 18:40:40 | Also cc'ing @fmikaelian on this for information :)<|||||>Apologies for the careless mistake @fmikaelian <|||||>Hi, other than the careless mistake, I'm trying to understand why I cannot load any model from the transformers S3 repo. I have tried:
1) from transformers import FlaubertModel, FlaubertTokenizer
2) from transformers import CamembertTokenizer
3) from transformers import CamembertModel
4) from transformers import BertModel
model = BertModel.from_pretrained('bert-base-uncased')
Only the fourth option has triggered the download process. All other options return:
`"ImportError: cannot import name 'CamembertModel'"`
I was wondering if there is an issue since I'm using conda on a Windows PC.
Many thanks for your help.
<|||||>I tried to update transformers with conda but that did not work and I also tried to do some pip install but also getting some errors:
```
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\...\lib\site-packages\transformers\configuration_utils.py", line 145, in from_pretrained
raise EnvironmentError(msg)
OSError: Model name 'flaubert-base-uncased-squad' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'flaubert-base-uncased-squad' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
```<|||||>As pointed out in my Stackoverflow answer, I suspect a versioning conflict. I successfully managed to load the pipeline in `2.5.0`, but had errors in `2.4.1` (not quite the same as @rcontesti , but similar enough for me to assume problems with an older version).<|||||>Do you have torch installed in your environment? That might explain why you can't import `CamembertModel`.
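A quick way to check both points from the same environment (just a sketch):
```python
# If this import fails, the PyTorch model classes such as CamembertModel
# will not be exposed by transformers.
import torch
import transformers

print(torch.__version__)
print(transformers.__version__)
```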
The error
```
OSError: Model name 'flaubert-base-uncased-squad' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'flaubert-base-uncased-squad' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
```
means you're trying to load a flaubert checkpoint in BERT. Could you share the code that raised the last error so that we may try to reproduce the error?<|||||>Thanks so much for your answers, guys. I was able to solve the version problem, but now I'm running into a different problem (should I open a new thread?):
I'm currently using:
```py
model_=transformers.FlaubertForQuestionAnswering
tokenizer_ = transformers.FlaubertTokenizer
```
But when I place them into pipeline:
```py
nlp = pipeline('question-answering', \
model=model, \
tokenizer=tokenizer)
```
I'm getting the following error:
```
Traceback (most recent call last):
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\site-packages\transformers\data\processors\squad.py", line 105, in squad_convert_example_to_features
sub_tokens = tokenizer.tokenize(token)
TypeError: tokenize() missing 1 required positional argument: 'text'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "question_extraction.py", line 61, in <module>
answer, score=question_extraction(text, question_, model_, tokenizer_, language_, verbose= True)
File "question_extraction.py", line 44, in question_extraction
output=nlp({'question':question, 'context': text})
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\site-packages\transformers\pipelines.py", line 802, in __call__
for example in examples
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\socgen_nlp\lib\site-packages\transformers\pipelines.py", line 802, in <listcomp>
for example in examples
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\site-packages\transformers\data\processors\squad.py", line 316, in squad_convert_examples_to_features
desc="convert squad examples to features",
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\site-packages\tqdm\std.py", line 1097, in __iter__
for obj in iterable:
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\multiprocessing\pool.py", line 320, in <genexpr>
return (item for chunk in result for item in chunk)
File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\multiprocessing\pool.py", line 735, in next
raise value
TypeError: tokenize() missing 1 required positional argument: 'text'
convert squad examples to features: 0%|
```<|||||>You need to initialize your model and tokenizer with a checkpoint, for example instead of
```py
model_=transformers.FlaubertForQuestionAnswering
tokenizer_ = transformers.FlaubertTokenizer
```
You would specify a flaubert checkpoint:
```py
model_ = transformers.FlaubertModel.from_pretrained("fmikaelian/flaubert-base-uncased-squad")
tokenizer_ = transformers.FlaubertTokenizer.from_pretrained("fmikaelian/flaubert-base-uncased-squad")
```
I chose a community checkpoint that was trained using question answering. You can check all available FlauBERT models [here](https://huggingface.co/models?search=flaubert).<|||||>Once again many thanks @LysandreJik for the help. I proceeded as suggested, and now when I'm trying to put both the tokenizer and the model into the pipeline I'm running into the following error:
```
Traceback (most recent call last):
  File "question_extraction.py", line 72, in <module>
    answer, score=question_extraction(text, question_, model_, tokenizer_, language_, verbose= True)
  File "question_extraction.py", line 55, in question_extraction
    output=nlp({'question':question, 'context': text})
  File "C:\Users\Ruben Contesti\AppData\Local\Continuum\Anaconda3\envs\..\lib\site-packages\transformers\pipelines.py", line 818, in __call__
    start, end = self.model(**fw_args)
ValueError: not enough values to unpack (expected 2, got 1)
```
It seems like the start and end values I'm getting back are not a tuple, or something like that.<|||||>I updated the code so that it loads a previously saved model:
```python
tokenizer_ = FlaubertTokenizer.from_pretrained(MODELS)
model_ = FlaubertModel.from_pretrained(MODELS)


def question_extraction(text, question, model, tokenizer, language="French", verbose=False):
    if language == "French":
        nlp = pipeline('question-answering',
                       model=model,
                       tokenizer=tokenizer)
    else:
        nlp = pipeline('question-answering')

    output = nlp({'question': question, 'context': text})
    answer, score = output.answer, output.score

    if verbose:
        print("Q: ", question, "\n",
              "A:", answer, "\n",
              "Confidence (%):", "{0:.2f}".format(str(score*100)))

    return answer, score


if __name__ == "__main__":
    question_ = "Quel est le montant de la garantie?"
    language_ = "French"
    text = "le montant de la garantie est € 1000"
    answer, score = question_extraction(text, question_, model_, tokenizer_, language_, verbose=True)
```
But now I'm getting an unpacking error:
```
C:\...\NLP\src>python question_extraction.py
OK
OK
convert squad examples to features: 100%|████████████████████████████████████████████████| 1/1 [00:00<00:00, 4.66it/s]
add example index and unique id: 100%|███████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "question_extraction.py", line 77, in <module>
answer, score=question_extraction(text, question_, model_, tokenizer_, language_, verbose= True)
File "question_extraction.py", line 60, in question_extraction
output=nlp({'question':question, 'context': text})
File "C:\...\transformers\pipelines.py", line 818, in __call__
start, end = self.model(**fw_args)
ValueError: not enough values to unpack (expected 2, got 1)
```<|||||>Hi @rcontesti, I've investigated further and found a few issues. First of all, the checkpoint you're trying to load is `fmikaelian/flaubert-base-uncased-squad`, which unfortunately cannot be used by pipelines.
This is because this model was fine-tuned with `FlaubertForQuestionAnswering` instead of `FlaubertForQuestionAnsweringSimple`, and only the latter can be used by pipelines. Since it was fine-tuned leveraging a different architecture for the QA head, it, unfortunately, won't be usable by pipelines. The usage example on the [models page](https://huggingface.co/fmikaelian/flaubert-base-uncased-squad) is misleading because of that (cc @fmikaelian).
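For illustration, custom inference with the simple QA head would look roughly like the sketch below (the checkpoint name is hypothetical and this is not an official example):
```python
# A rough sketch of custom inference with the "simple" QA head,
# which returns plain start/end logits.
import torch
from transformers import FlaubertTokenizer, FlaubertForQuestionAnsweringSimple

checkpoint = "some-flaubert-qa-checkpoint"  # placeholder, no such published model yet
tokenizer = FlaubertTokenizer.from_pretrained(checkpoint)
model = FlaubertForQuestionAnsweringSimple.from_pretrained(checkpoint)

question = "Quel est le montant de la garantie?"
context = "Le montant de la garantie est de 1000 euros."
inputs = tokenizer.encode_plus(question, context, return_tensors="pt")

start_logits, end_logits = model(**inputs)[:2]
start = torch.argmax(start_logits)
end = torch.argmax(end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```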
Unfortunately, there is no French model that can be used with the pipelines, so you would need to do a custom inference leveraging the model. We don't have any examples showcasing how to leverage `XLNet/XLM/FlaubertForQuestionAnswering`, but it is on our roadmap.<|||||>@LysandreJik many thanks for your answer. It was very clarifying.
Some follow-up questions on my side:
1. If I use FlaubertForQuestionAnsweringSimple, can I then use pipelines? If that is the case, would you show me how?
2. Is it also the case that I cannot use CamemBERT for QA?
3. I guess that because we have different architectures there is no quick hack to adapt it to pipelines, am I getting that right?
4. If I were to do custom inferencing, without pipelines and only using PyTorch, would you mind pointing me to the resources to do so?
Many thanks!!!
<|||||>1. You can indeed use `FlaubertForQuestionAnsweringSimple` with pipelines, the issue is that there is currently no model fine-tuned on QA for this model.
2. You could also use the `CamembertForQuestionAnswering` model with pipelines I believe, but unfortunately there is no model fine-tuned on QA for this model either.
3. Indeed, we should add these down the line, but it is not very high on our priority list right now cc @mfuntowicz
4. Yes, I'm currently working on some [examples](https://github.com/huggingface/transformers/pull/2850) that should be merged sometimes today. I'll look into using a `XLNet/XLM/FlaubertForQuestionAnswering` and their differing architecture as well.<|||||>@rcontesti @LysandreJik
I will fine-tune `FlaubertForQuestionAnsweringSimple` and `CamembertForQuestionAnswering` on French QA in the next days and let you know if we can use the pipeline with those<|||||>@fmikaelian, @LysandreJik
Many thanks for the help. Eventually I could train it myself; I haven't used PyTorch in a year, but if you could point me to a good dataset I could do the training. Many thanks!<|||||>@rcontesti @LysandreJik
I fine-tuned `FlaubertForQuestionAnsweringSimple` on [FQuAD](https://fquad.illuin.tech/), by editing `run_squad.py` using the same approach as #2746, but still got `ValueError: not enough values to unpack (expected 2, got 1)` when using the model with a pipeline.
I also fine-tuned `CamembertForQuestionAnswering` on [FQuAD](https://fquad.illuin.tech/) and [French-SQuAD](https://github.com/Alikabbadj/French-SQuAD), and pipelines are working :-]
```python3
from transformers import pipeline
nlp = pipeline('question-answering', model='fmikaelian/camembert-base-squad', tokenizer='fmikaelian/camembert-base-squad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
```
{'answer': 'un peintre français',
'end': 106,
'score': 0.498404793881182,
'start': 87}
```
Model links:
- [`fmikaelian/camembert-base-fquad`](https://huggingface.co/fmikaelian/camembert-base-fquad)
- [`fmikaelian/camembert-base-squad`](https://huggingface.co/fmikaelian/camembert-base-squad)
Will open a PR for models cards (#3089)<|||||>@fmikaelian That's really cool, thanks for taking the time to fine-tune those models! I'll look into the error with the pipeline ASAP, I'm pretty sure I know where it comes from.
Really cool to have the first community model for question answering in French!<|||||>Hi @fmikaelian
Just installed transformers from source and it seems the model is still not there
`Model name 'fmikaelian/camembert-base-squad' was not found in model name list `
Also tried to download from S3 but it also does not seem to be there:
`OSError: Model name '../models/fmikaelian/camembert-base-squad' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/../models/fmikaelian/camembert-base-squad/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.`
Would you mind sharing the S3 paths? I couldn't get them.<|||||>The models are on the S3. What command did you use? Why is there "../" in your model name?
The following works:
```py
from transformers import CamembertModel
model = CamembertModel.from_pretrained("fmikaelian/camembert-base-squad")
```
The following also works:
```py
from transformers import pipeline
nlp = pipeline("question-answering", model="fmikaelian/camembert-base-squad", tokenizer="fmikaelian/camembert-base-squad")
```<|||||>@LysandreJik, is working now. Many thanks! |
transformers | 2,892 | closed | Create README.md | 02-18-2020 18:30:49 | 02-18-2020 18:30:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=h1) Report
> Merging [#2892](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ae98336d17fceea7506af9880b862b6252a38f6?src=pr&el=desc) will **decrease** coverage by `1.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2892 +/- ##
==========================================
- Coverage 75.06% 73.98% -1.08%
==========================================
Files 94 94
Lines 15288 15288
==========================================
- Hits 11476 11311 -165
- Misses 3812 3977 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=footer). Last update [2ae9833...25c0467](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for sharing @BinWang28!<|||||>[model page](https://huggingface.co/binwang/xlnet-base-cased) |