| repo | number | state | title | body | created_at | closed_at | comments |
|---|---|---|---|---|---|---|---|
transformers | 2,891 | closed | Fix InputExample docstring | 
| 02-18-2020 17:24:42 | 02-18-2020 17:24:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=h1) Report
> Merging [#2891](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ae98336d17fceea7506af9880b862b6252a38f6?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2891 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15288 15288
=======================================
Hits 11476 11476
Misses 3812 3812
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2891/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `21.73% <ø> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=footer). Last update [2ae9833...e0b3974](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks!<|||||>Thank you! Easiest review ever! Keep em coming :) |
transformers | 2,890 | closed | Support for torch-lightning in NER examples | Update of https://github.com/huggingface/transformers/pull/2816
This PR creates a new example coding style for the pytorch code.
* Uses pytorch-lightning for the underlying training.
* Separates out the base transformer loading from the individual training.
* Moves each individual example to its own directory.
* Moves the code in the readme to bash scripts.
The only two new files are run_pl_ner.py and transformers_base.py.
The goal is to keep the same format as the original command-line. Most of the argument names are preserved. I have verified that for NER the results are the same on GPU.
There are several nice benefits of lightning: somewhat nicer logging and library integration (e.g. wandb), and auto-checkpointing. Mostly, though, the goal is code readability with identical functionality.
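As a rough, hypothetical sketch of what such a lightning wrapper looks like (this is not the PR's `transformers_base.py`; the model name, label count, and batch layout are assumptions):
```python
# Hypothetical sketch of a pytorch-lightning wrapper around a transformers
# model, NOT the PR's transformers_base.py. Assumes batches are dicts with
# input_ids, attention_mask, and labels keys.
import pytorch_lightning as pl
import torch
from transformers import AutoModelForTokenClassification

class NerLightningModule(pl.LightningModule):
    def __init__(self, model_name="bert-base-cased", num_labels=9, lr=5e-5):
        super().__init__()
        self.model = AutoModelForTokenClassification.from_pretrained(
            model_name, num_labels=num_labels
        )
        self.lr = lr

    def forward(self, input_ids, attention_mask=None, labels=None):
        return self.model(input_ids, attention_mask=attention_mask, labels=labels)

    def training_step(self, batch, batch_idx):
        loss = self(**batch)[0]  # loss comes first when labels are provided
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.model.parameters(), lr=self.lr)
```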
Tests I ran:
* make sure that the test results are identical.
* print test results after training.
* test multi-gpu and apex (multigpu gives a nice speedup) | 02-18-2020 17:22:01 | 02-18-2020 17:22:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=h1) Report
> Merging [#2890](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2890 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15288 15288
=======================================
Hits 11476 11476
Misses 3812 3812
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=footer). Last update [0dbddba...8f8137f](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@LysandreJik
This implementation does not work with `pytorch_lightning` > `0.7.1`.
It throws the exception `'Trainer' object has no attribute 'avg_loss'`, because version `0.7.2` removed the `avg_loss` field from the `Trainer` class.
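A minimal defensive sketch of what guarding that access could look like (hypothetical, not the PR's code; `trainer` stands for the pytorch-lightning `Trainer` instance):
```python
# Hypothetical guard: Trainer.avg_loss was removed in pytorch-lightning 0.7.2,
# so read it with getattr instead of accessing the attribute directly.
def progress_metrics(trainer, loss_value):
    metrics = {"loss": loss_value}
    avg_loss = getattr(trainer, "avg_loss", None)
    if avg_loss is not None:  # only present on pytorch-lightning <= 0.7.1
        metrics["avg_loss"] = avg_loss
    return metrics
```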
See https://github.com/huggingface/transformers/pull/2890/files#diff-d68a6ecfacd8231c59af0ea67d77bb9cR120<|||||>@simonepri can you file an issue? I guess we should just remove that key/value pair. <|||||>@simonepri did you try 0.7.5? |
transformers | 2,889 | closed | Getting: AttributeError: 'BertTokenizer' object has no attribute 'encode' | # 🐛 Bug
## AttributeError: 'BertTokenizer' object has no attribute 'encode'
Model I am using: Bert
Language I am using the model on: English
The problem arises when using:
```
input_ids = torch.tensor([tokenizer.encode("raw_text", add_special_tokens=True)])
```
The task I am working on is:
```
##Text Summary for the following paragraph of text
"['26The Indian organic market\nhave begun to disrupt the market with their one-of-a-kind \nofferings.', 'In an effort to promote a healthier lifestyle, these \n\nplayers are playing a pivotal role by providing consumers with \n\nwholesome organic produce.', 'Since the organic food segment is still at a nascent stage \nin India, both the Government and private players need \n\n\n\ninvolved.', 'The organic farming industry in India holds immense \n\npotential to grow, provided it receives steady investment \n\n\n\nlike incentivizing organic cultivation, food processing, \n\n\n\nof the challenges faced by the organic sector today can be \n\ngrouped into three heads:\n\nŁ \n\nlengthy procedures, international validity, inadequate \ncertifying agencies and inadequate supporting infrastructure \n\n\n\n\ncost of internal audits and documentation is approximately \n\n\n\nreduced, it is expensive for many small groups of farmers or \nindividual farmers.', 'Ł \nThere is also a gap in the \n\nrequirements.', 'Additionally, key trading partners have \ntraditionally demonstrated a lack of willingness to sign \n\nequivalence arrangements.', 'Ł \nThe \n\n\nprocess of the farm or crop cannot be placed in the organic \n\n\nharvest is sold as conventional crops, thereby causing the \nfarmer to incur a loss.', 'Ł \ncommodities: \nDairy products have a different standard while \nmeat has a different standard .', 'The process of standardization \n\nof organic coconut will be different from that of the value-\n\nadded products of coconut.', 'Therefore, a company having \n\nand maintain multiple records as per the applicable standards.', 'Ł \n\nnumber of producers in the world yet they cultivate less than \n1% of the organic area.', 'The conventional production system is \nmore lucrative given the land fragmentation.', 'Ł Lack of incentives for farmers: \nThe transition from \n\nconventional to organic farming is accompanied by high \ninput costs and low yields in the initial years.', 'The cost of \ngoing completely organic is quite high, due to the high cost \n\nof organic manure.', 'The commercially available bio-manure \nproducts may not be completely organic, and therefore the \n\n\nThis is one of the many reasons why farmers are skeptical \nwhen it comes to shifting from conventional to organic \nfarming.', 'In such cases, the farmers choose to play it safe by \n\npracticing conventional methods of farming.', 'Ł Lack of standardized organic agriculture inputs and subsidy \non organic inputs:\n Farmers also face an acute shortage of \nquality standardized organic agriculture inputs, which are \noften much more expensive than conventional agricultural \n\ninputs.', 'There are no subsidies from the Government on \nagriculture inputs, especially biofertilizers and biopesticides, \nmaking the cost of cultivation for organic farming quite high.', 'Unless the farmers use their own farm grown manure in \nlarge quantities, they are unable to meet the expenses.', 'Lack \nof proper organic inputs often results in low yield making \n\norganic farming unsustainable for the farmers.', 'Ł Lack of organic cultivation research and extension: \nThe \n\ncurrent research and extension on organic farming are much \nlesser than that on conventional farming.', 'There is a lack of \n\n\nStrong government support for producing non-GMO high \nyielding varieties and niche crops for organic farming \nunder different agro-ecological zones across India require \n\ninvestment in organic research and extension.', 'The extension 
\nservices are very limited for organic, for example, the ATMA \nscheme focuses more on conventional farming.', 'There is no \n\ntimely advisory available for organic pest and disease control \n\nmeasures.', 'Processor-level challenges\nŁ Supply chain issues: \nMany farmers are apprehensive of \n\norganic farming since it involves high production costs.', 'The emphasis on collection, transportation and storage of \nfresh organic produce is very high.', 'Due to relatively low \n\nvolumes, the marketing and distribution chain of organic food \n\nvery high.', 'For example, organic produce cannot be stored in \n\ngovernment warehouses that practice chemical treatment of \nstorage areas.', 'High demand and low supply further create \n\n\nthese products have higher price markups than conventional \nproducts.', 'Additionally, many sellers mix the produce from \ndifferent geographical regions to help attain a competitive \n\nprice, thus compromising the geographical origin norm.', 'Ł Lack of a proper organic supply chain is felt more acutely in \n\nhilly, tribal and remote areas that have a high potential for \n\ninfrastructure.', 'Ł Global competitiveness:\n A major challenge India faces is \n\nthat of increasing its share in the global organic food export \nmarket, in lieu of global competitiveness.', 'There often exists a \ndichotomy between international quality and safety standards \n\nand Indian organic stands, which puts Indian produce at a \ndisadvantage.', 'Ł Lack of proper branding and packaging: \n\nof organic products require separate packing material that is \nnatural and requires distinctive branding that distinguishes \norganic from conventional products.', 'At present, there is \n\nan absence of regulations on labeling standards.', 'There is \n34\n\n10, 201835']"
```
## To reproduce
Steps to reproduce the behavior:
1. First, imported torch:
```Python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
import logging
```
2. Defined the models:
```
MODELS = [(BertModel, BertTokenizer, 'bert-base-uncased') ]
```
3.
```
# Let's encode some text in a sequence of hidden-states using each model:
for model_class, tokenizer_class, pretrained_weights in MODELS:
    # Load pretrained model/tokenizer
    tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
    model = model_class.from_pretrained(pretrained_weights)
```
4. When I try to encode with the following code
```
# Encode text
input_ids = torch.tensor([tokenizer.encode("raw_text", add_special_tokens=True)])  # add_special_tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]
```
I am getting the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-10-190085fa3098> in <module>
1 # Encode text
----> 2 input_ids = torch.tensor([tokenizer.encode("raw_text", add_special_tokens=True)]) # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model.
3 with torch.no_grad():
4 last_hidden_states = model(input_ids)[0] # Models outputs are now tuples
AttributeError: 'BertTokenizer' object has no attribute 'encode'
```
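For context, the legacy `pytorch_pretrained_bert` tokenizer exposes `tokenize` and `convert_tokens_to_ids` rather than `.encode`; a minimal sketch of the equivalent encoding in that API:
```python
# Sketch of the equivalent encoding with pytorch_pretrained_bert, whose
# BertTokenizer has tokenize/convert_tokens_to_ids but no .encode method;
# special tokens must be added by hand.
import torch
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokens = ["[CLS]"] + tokenizer.tokenize("raw_text") + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
```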
## Expected behavior
Tokenization should complete successfully.
## Environment info
- `transformers` version: '0.6.2'
- Platform: Windows 10
- Python version: 3.5
- PyTorch version (GPU?): 1.1.0 no gpu
- Tensorflow version (GPU?): Tensorflow 2.0
- Using GPU in script?:No
- Using distributed or parallel set-up in script?:No
| 02-18-2020 16:33:43 | 02-18-2020 16:33:43 | Please fix the formatting of your post and use code tags.<|||||>I made the changes, but all the text is still shown struck through.
I am new to this bug tracker and not sure how to change it to a code tag.
<|||||>I have used the `<code>` tag <|||||>Read how to use tags here: https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks<|||||>I did the tags as suggested by BramVanroy, using the guidelines: https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks<|||||>You clearly did something wrong because, as you can see yourself, all text is struck through. Likely caused by having tildes (~) around your post.<|||||>Thanks, I cleared it; there was one hiding beside a comment.<|||||>You are using an old version of the library (pytorch_pretrained_bert). You should move to `transformers` instead.<|||||>I upgraded to the latest `transformers`, but I am still getting the following error message:
```
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
Traceback (most recent call last):
File "C:\Users\Veeresh\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3319, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-645c7873d473>", line 1, in <module>
encoding = tokenizer.encode(raw_text)
AttributeError: 'BertTokenizer' object has no attribute 'encode'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Veeresh\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2034, in showtraceback
stb = value._render_traceback_()
AttributeError: 'AttributeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Veeresh\Anaconda3\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Veeresh\Anaconda3\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Veeresh\Anaconda3\lib\site-packages\IPython\core\ultratb.py", line 1151, in get_records
return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
File "C:\Users\Veeresh\Anaconda3\lib\site-packages\IPython\core\ultratb.py", line 319, in wrapped
return f(*args, **kwargs)
File "C:\Users\Veeresh\Anaconda3\lib\site-packages\IPython\core\ultratb.py", line 353, in _fixed_getinnerframes
records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
File "C:\Users\Veeresh\Anaconda3\lib\inspect.py", line 1502, in getinnerframes
frameinfo = (tb.tb_frame,) + getframeinfo(tb, context)
File "C:\Users\Veeresh\Anaconda3\lib\inspect.py", line 1460, in getframeinfo
filename = getsourcefile(frame) or getfile(frame)
File "C:\Users\Veeresh\Anaconda3\lib\inspect.py", line 696, in getsourcefile
if getattr(getmodule(object, filename), '__loader__', None) is not None:
File "C:\Users\Veeresh\Anaconda3\lib\inspect.py", line 733, in getmodule
if ismodule(module) and hasattr(module, '__file__'):
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\Veeresh\Anaconda3\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\__init__.py", line 42, in <module>
from . _api.v2 import audio
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\_api\v2\audio\__init__.py", line 10, in <module>
from tensorflow.python.ops.gen_audio_ops import decode_wav
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\gen_audio_ops.py", line 9, in <module>
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\Veeresh\Anaconda3\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Veeresh\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3319, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-645c7873d473>", line 1, in <module>
encoding = tokenizer.encode(raw_text)
AttributeError: 'BertTokenizer' object has no attribute 'encode'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Veeresh\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2034, in showtraceback
stb = value._render_traceback_()
AttributeError: 'AttributeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Veeresh\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Veeresh\Anaconda3\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Veeresh\Anaconda3\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
---------------------------------------------------------------------------
```<|||||>There's a lot going wrong in that trace. Please recreate your environment from scratch to ensure that all correct dependencies are installed. Particularly, in your first post you were using torch, but your new trace throws Tensorflow errors.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
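For reference, a minimal sketch of the same snippet against a cleanly installed `transformers` package, whose `BertTokenizer` does provide `.encode` (the same `bert-base-uncased` checkpoint is assumed):
```python
# Sketch of the snippet with the current `transformers` package, whose
# BertTokenizer provides .encode, unlike the legacy pytorch_pretrained_bert.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

input_ids = torch.tensor([tokenizer.encode("raw_text", add_special_tokens=True)])
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]
```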
|
transformers | 2,888 | closed | [WIP] Adapt lm generate fn for seq 2 seq models | From looking at the soon-to-be-added Bart model, I though the language generation could be conceptually adapted as shown below to be able to produce language from seq-to-seq models (Bart & T5).
So far this is not tested at all and only adapted for the `_generate_no_beam_search()` function. Also it still has to be checked whether this is compatible with T5.
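As a self-contained toy of that conceptual split (an illustration only, not the PR's code):
```python
# Toy sketch: the encoder output is computed once, then only the stand-in
# decoder is stepped token by token, as in a no-beam-search generation loop.
import torch

batch_size, vocab_size, hidden, max_length, bos = 2, 11, 16, 5, 0
encoder_outputs = torch.randn(batch_size, 7, hidden)  # encoder runs once
embed = torch.nn.Embedding(vocab_size, hidden)        # toy decoder pieces
to_vocab = torch.nn.Linear(2 * hidden, vocab_size)

decoder_input_ids = torch.full((batch_size, 1), bos, dtype=torch.long)
for _ in range(max_length):
    # a real decoder would attend over encoder_outputs; the toy just pools them
    state = torch.cat(
        [encoder_outputs.mean(dim=1), embed(decoder_input_ids[:, -1])], dim=-1
    )
    next_token = to_vocab(state).argmax(dim=-1, keepdim=True)
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
```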
Would be happy about feedback @sshleifer, @thomwolf | 02-18-2020 14:55:53 | 02-18-2020 14:55:53 | Excited for this!
A little early for me to have an opinion, but I'd start by adding a bunch of failing tests (e.g. for t5.generate), and some slow tests that verify that T5.generate/another non seq2seq model generate reasonable results. (You have to run those locally).
Stylistically, I'd say `is_seq_to_seq` should probably not be a function, just an attribute. But I think style here is much less important than test coverage :)
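A tiny hypothetical illustration of that suggestion:
```python
# Hypothetical illustration: expose the seq2seq flag as a class attribute
# rather than a method, so subclasses just override the value.
class PretrainedModelSketch:
    is_seq_to_seq = False  # default for decoder-only models

class Seq2SeqModelSketch(PretrainedModelSketch):
    is_seq_to_seq = True  # encoder-decoder models flip the flag
```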
Good luck!<|||||>Meant to just comment, sorry!<|||||>I adapted the language generation according to the newly added Bart file. This is still very much a work in progress, which is why I left a lot of comments in all the files. Would be very happy about some feedback! @sshleifer <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=h1) Report
> Merging [#2888](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fc38d4c86fe4bbde91b194880fe38b821a346123?src=pr&el=desc) will **decrease** coverage by `33.74%`.
> The diff coverage is `2.38%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2888 +/- ##
===========================================
- Coverage 77.12% 43.37% -33.75%
===========================================
Files 98 98
Lines 15975 15995 +20
===========================================
- Hits 12320 6938 -5382
- Misses 3655 9057 +5402
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `0% <0%> (-86.12%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `0% <0%> (-92.38%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `0% <0%> (-75.78%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `0% <0%> (-84.69%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `0% <0%> (-75.64%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `0% <0%> (-98.24%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `36.36% <0%> (-63.64%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.49% <100%> (+0.03%)` | :arrow_up: |
| [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |
| ... and [29 more](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=footer). Last update [fc38d4c...ab13956](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,887 | closed | Regarding attention size returned by the model | Hi, when setting `output_attentions=True` on a huggingface model, it returns attentions of size (num_heads, seq_length, seq_length).
Can we configure it to return only the attention of the last 2 heads of the model?
Please let me know.
| 02-18-2020 13:30:43 | 02-18-2020 13:30:43 | Hi,
which model do you use? Can't you simply remove the output of the other heads?<|||||>I am using TFDistilBertForSequenceClassification.
You are right, I can remove the other heads' attentions, but when using TF Serving it takes a lot of time, since the output attentions have a huge dimension.
So that's why I was asking if I could get just the last 2 heads' attentions instead of all head attentions when hitting TF Serving.<|||||>Unfortunately, we don't have a way to do that right now. |
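For what it's worth, a minimal sketch of the manual slicing suggested above (model and checkpoint names assumed from the thread):
```python
# Minimal sketch of the workaround suggested above: slice the returned
# attentions down to the last two heads of each layer before serving.
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", output_attentions=True
)

input_ids = tokenizer.encode("some input text", return_tensors="tf")
outputs = model(input_ids)
logits, attentions = outputs[0], outputs[-1]

# each layer's attentions have shape (batch, num_heads, seq_len, seq_len)
last_two_heads = [layer[:, -2:, :, :] for layer in attentions]
```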
transformers | 2,886 | closed | Load Pretrained Model Error in Inherited Class | # ❓ Questions & Help
I wrote a new class based on the RoBERTa model (PyTorch).
## Details
The code is shown below:
```
import logging

import numpy as np
import torch
from torch import nn
from torch.autograd import Variable
from torch.nn import CrossEntropyLoss
import torch.nn.functional as F

from transformers import BertPreTrainedModel
from transformers import RobertaConfig, RobertaTokenizer, RobertaModel
# from transformers.modeling_bert import BertEmbeddings

logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
                    datefmt='%m/%d/%Y %H:%M:%S',
                    level=logging.INFO)
logger = logging.getLogger(__name__)
class RoBertaMultiwayMatch(BertPreTrainedModel):
    def __init__(self, config, num_choices=4):
        super(RoBertaMultiwayMatch, self).__init__(config)
        self.num_choices = num_choices
        self.RoBerta = RobertaModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.linear_trans = nn.Linear(config.hidden_size, config.hidden_size)
        self.linear_fuse_p = nn.Linear(config.hidden_size * 2, config.hidden_size)
        self.linear_fuse_q = nn.Linear(config.hidden_size * 2, config.hidden_size)
        self.linear_fuse_a = nn.Linear(config.hidden_size * 2, config.hidden_size)
        self.classifier = nn.Linear(config.hidden_size * 3, 1)
        self.init_weights()

    def matching(self, passage_encoded, question_encoded, passage_attention_mask, question_attention_mask): ...

    def fusing_mlp(self, passage_encoded, mp_q, mp_a, mp_qa, question_encoded, ...

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, doc_len=None, ...


if __name__ == "__main__":
    # tokenizer = RobertaTokenizer.from_pretrained('roberta-large', do_lower_case=True)
    model = RoBertaMultiwayMatch.from_pretrained('/data3/yangzhicheng/Data/RoBerta/roberta-large/', num_choices=4)
```
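As an aside, `from_pretrained` matches checkpoint keys against the model class's `base_model_prefix`; below is a hedged sketch (an assumption about the naming, reusing the imports above, not from the original report) of an attribute layout that would let the keys line up:
```python
# Hypothetical sketch: naming the encoder attribute after base_model_prefix
# ("roberta", lowercase) lets checkpoint keys such as roberta.embeddings.*
# match, instead of producing unmatched roberta.RoBerta.* parameters.
class RoBertaMultiwayMatchFixed(BertPreTrainedModel):
    config_class = RobertaConfig
    base_model_prefix = "roberta"

    def __init__(self, config, num_choices=4):
        super().__init__(config)
        self.num_choices = num_choices
        self.roberta = RobertaModel(config)  # lowercase matches the prefix
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.init_weights()
```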
With the original class, however, the logger indicates that some weights were not initialized:
```
02/18/2020 15:24:00 - INFO - transformers.modeling_utils - loading weights file /data3/yangzhicheng/Data/RoBerta/roberta-large/pytorch_model.bin
02/18/2020 15:24:29 - INFO - transformers.modeling_utils - Weights of RoBertaMultiwayMatch not initialized from pretrained model: ['roberta.RoBerta.embeddings.word_embeddings.weight', 'roberta.RoBerta.embeddings.position_embeddings.weight', 'roberta.RoBerta.embeddings.token_type_embeddings.weight', 'roberta.RoBerta.embeddings.LayerNorm.weight', 'roberta.RoBerta.embeddings.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.0.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.0.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.0.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.0.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.0.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.0.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.0.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.0.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.0.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.0.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.0.output.dense.weight', 'roberta.RoBerta.encoder.layer.0.output.dense.bias', 'roberta.RoBerta.encoder.layer.0.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.0.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.1.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.1.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.1.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.1.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.1.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.1.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.1.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.1.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.1.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.1.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.1.output.dense.weight', 'roberta.RoBerta.encoder.layer.1.output.dense.bias', 'roberta.RoBerta.encoder.layer.1.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.1.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.2.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.2.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.2.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.2.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.2.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.2.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.2.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.2.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.2.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.2.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.2.output.dense.weight', 'roberta.RoBerta.encoder.layer.2.output.dense.bias', 'roberta.RoBerta.encoder.layer.2.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.2.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.3.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.3.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.3.attention.self.key.weight', 
'roberta.RoBerta.encoder.layer.3.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.3.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.3.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.3.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.3.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.3.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.3.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.3.output.dense.weight', 'roberta.RoBerta.encoder.layer.3.output.dense.bias', 'roberta.RoBerta.encoder.layer.3.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.3.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.4.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.4.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.4.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.4.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.4.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.4.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.4.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.4.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.4.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.4.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.4.output.dense.weight', 'roberta.RoBerta.encoder.layer.4.output.dense.bias', 'roberta.RoBerta.encoder.layer.4.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.4.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.5.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.5.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.5.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.5.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.5.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.5.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.5.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.5.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.5.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.5.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.5.output.dense.weight', 'roberta.RoBerta.encoder.layer.5.output.dense.bias', 'roberta.RoBerta.encoder.layer.5.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.5.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.6.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.6.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.6.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.6.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.6.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.6.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.6.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.6.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.6.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.6.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.6.output.dense.weight', 
'roberta.RoBerta.encoder.layer.6.output.dense.bias', 'roberta.RoBerta.encoder.layer.6.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.6.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.7.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.7.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.7.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.7.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.7.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.7.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.7.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.7.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.7.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.7.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.7.output.dense.weight', 'roberta.RoBerta.encoder.layer.7.output.dense.bias', 'roberta.RoBerta.encoder.layer.7.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.7.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.8.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.8.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.8.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.8.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.8.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.8.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.8.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.8.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.8.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.8.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.8.output.dense.weight', 'roberta.RoBerta.encoder.layer.8.output.dense.bias', 'roberta.RoBerta.encoder.layer.8.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.8.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.9.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.9.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.9.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.9.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.9.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.9.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.9.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.9.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.9.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.9.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.9.output.dense.weight', 'roberta.RoBerta.encoder.layer.9.output.dense.bias', 'roberta.RoBerta.encoder.layer.9.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.9.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.10.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.10.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.10.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.10.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.10.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.10.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.10.attention.output.dense.weight', 
'roberta.RoBerta.encoder.layer.10.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.10.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.10.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.10.output.dense.weight', 'roberta.RoBerta.encoder.layer.10.output.dense.bias', 'roberta.RoBerta.encoder.layer.10.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.10.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.11.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.11.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.11.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.11.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.11.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.11.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.11.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.11.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.11.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.11.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.11.output.dense.weight', 'roberta.RoBerta.encoder.layer.11.output.dense.bias', 'roberta.RoBerta.encoder.layer.11.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.11.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.12.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.12.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.12.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.12.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.12.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.12.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.12.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.12.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.12.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.12.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.12.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.12.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.12.output.dense.weight', 'roberta.RoBerta.encoder.layer.12.output.dense.bias', 'roberta.RoBerta.encoder.layer.12.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.12.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.13.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.13.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.13.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.13.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.13.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.13.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.13.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.13.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.13.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.13.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.13.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.13.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.13.output.dense.weight', 'roberta.RoBerta.encoder.layer.13.output.dense.bias', 'roberta.RoBerta.encoder.layer.13.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.13.output.LayerNorm.bias', 
'roberta.RoBerta.encoder.layer.14.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.14.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.14.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.14.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.14.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.14.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.14.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.14.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.14.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.14.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.14.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.14.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.14.output.dense.weight', 'roberta.RoBerta.encoder.layer.14.output.dense.bias', 'roberta.RoBerta.encoder.layer.14.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.14.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.15.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.15.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.15.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.15.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.15.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.15.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.15.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.15.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.15.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.15.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.15.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.15.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.15.output.dense.weight', 'roberta.RoBerta.encoder.layer.15.output.dense.bias', 'roberta.RoBerta.encoder.layer.15.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.15.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.16.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.16.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.16.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.16.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.16.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.16.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.16.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.16.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.16.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.16.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.16.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.16.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.16.output.dense.weight', 'roberta.RoBerta.encoder.layer.16.output.dense.bias', 'roberta.RoBerta.encoder.layer.16.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.16.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.17.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.17.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.17.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.17.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.17.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.17.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.17.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.17.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.17.attention.output.LayerNorm.weight', 
'roberta.RoBerta.encoder.layer.17.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.17.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.17.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.17.output.dense.weight', 'roberta.RoBerta.encoder.layer.17.output.dense.bias', 'roberta.RoBerta.encoder.layer.17.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.17.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.18.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.18.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.18.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.18.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.18.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.18.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.18.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.18.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.18.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.18.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.18.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.18.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.18.output.dense.weight', 'roberta.RoBerta.encoder.layer.18.output.dense.bias', 'roberta.RoBerta.encoder.layer.18.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.18.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.19.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.19.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.19.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.19.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.19.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.19.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.19.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.19.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.19.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.19.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.19.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.19.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.19.output.dense.weight', 'roberta.RoBerta.encoder.layer.19.output.dense.bias', 'roberta.RoBerta.encoder.layer.19.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.19.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.20.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.20.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.20.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.20.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.20.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.20.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.20.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.20.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.20.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.20.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.20.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.20.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.20.output.dense.weight', 'roberta.RoBerta.encoder.layer.20.output.dense.bias', 'roberta.RoBerta.encoder.layer.20.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.20.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.21.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.21.attention.self.query.bias', 
... (analogous query/key/value, attention-output, intermediate and output weight/bias entries for 'roberta.RoBerta.encoder.layer.21' through 'roberta.RoBerta.encoder.layer.23') ..., 'roberta.RoBerta.pooler.dense.weight', 'roberta.RoBerta.pooler.dense.bias', 'roberta.linear_trans.weight', 'roberta.linear_trans.bias', 'roberta.linear_fuse_p.weight', 'roberta.linear_fuse_p.bias', 'roberta.linear_fuse_q.weight', 'roberta.linear_fuse_q.bias', 'roberta.linear_fuse_a.weight', 'roberta.linear_fuse_a.bias', 'roberta.classifier.weight', 'roberta.classifier.bias']
02/18/2020 15:24:29 - INFO - transformers.modeling_utils - Weights from pretrained model not used in RoBertaMultiwayMatch: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', ... (the same per-layer attention, intermediate and output entries for 'roberta.encoder.layer.0' through 'roberta.encoder.layer.23') ..., 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
```
How can I solve this problem?
Thanks in advance! | 02-18-2020 08:18:39 | 02-18-2020 08:18:39 | ```
import torch.nn as nn
from transformers import RobertaModel


class RoBertaMultiwayMatch(nn.Module):
    def __init__(self, pretrainedConfigName, num_choices=4):
        super(RoBertaMultiwayMatch, self).__init__()
        self.num_choices = num_choices
        # The pretrained encoder lives in an attribute, so its weights carry
        # the "RoBerta." prefix in this module's state_dict.
        self.RoBerta = RobertaModel.from_pretrained(pretrainedConfigName)
        config = self.RoBerta.config
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.linear_trans = nn.Linear(config.hidden_size, config.hidden_size)
        self.linear_fuse_p = nn.Linear(config.hidden_size * 2, config.hidden_size)
        self.linear_fuse_q = nn.Linear(config.hidden_size * 2, config.hidden_size)
        self.linear_fuse_a = nn.Linear(config.hidden_size * 2, config.hidden_size)
        self.classifier = nn.Linear(config.hidden_size * 3, 1)

    # def matching(self, passage_encoded, question_encoded, passage_attention_mask, question_attention_mask): ...
    # def fusing_mlp(self, passage_encoded, mp_q, mp_a, mp_qa, question_encoded, ...
    # def forward(self, input_ids, token_type_ids=None, attention_mask=None, doc_len=None, ...
```
<|||||>You're inheriting from `BertPreTrainedModel` in your `RoBertaMultiwayMatch`, with an attribute `RoBerta` which contains a `RobertaModel`.
As I see it, you want to load your roberta model from a given set of weights, but by calling `from_pretrained` on your class, it tries to load those weights directly onto your model.
I believe you could override the `from_pretrained` method as such:
```py
def from_pretrained(self, *args, **kwargs):
    # RobertaModel.from_pretrained returns a new instance,
    # so re-assign it rather than calling it for side effects
    self.RoBerta = RobertaModel.from_pretrained(*args, **kwargs)
```
That is, by specifying the correct arguments for your method and for `RoBerta`'s method. This way, when you call `from_pretrained`, it only loads the data for `RoBerta`. You'd have to find a way to save/load the data for your own layers, though.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
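A sketch rounding out that suggestion — the class and attribute names simply mirror the snippet earlier in this thread, while the `save_custom`/`load_custom` helpers and the prefix-based filtering are assumptions for illustration, not a verified fix:

```python
import torch
import torch.nn as nn
from transformers import RobertaModel


class RoBertaMultiwayMatch(nn.Module):
    def __init__(self, pretrainedConfigName, num_choices=4):
        super().__init__()
        self.num_choices = num_choices
        self.RoBerta = RobertaModel.from_pretrained(pretrainedConfigName)
        # custom head (stands in for the linear_* layers above)
        self.classifier = nn.Linear(self.RoBerta.config.hidden_size * 3, 1)

    def save_custom(self, path):
        # persist only the layers that `from_pretrained` cannot restore;
        # the encoder's keys all start with "RoBerta."
        custom = {k: v for k, v in self.state_dict().items() if not k.startswith("RoBerta.")}
        torch.save(custom, path)

    def load_custom(self, path):
        # strict=False because the file deliberately omits the encoder weights
        self.load_state_dict(torch.load(path), strict=False)
```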
|
transformers | 2,885 | closed | Improve special_token_id logic in run_generation.py and add tests | This PR finally implements the following `bos_token_id, pad_token_id, eos_token_ids` logic for lm model generation.
1. If `bos_token_id` is None, then `input_ids` must be defined; otherwise the model cannot generate text, which is checked by the asserts in the beginning. The `bos_token_id` is only relevant for starting a new sentence.
2. If `eos_token_id` is None, then the length of the generated text will always equal `max_length`, no matter how the `pad_token_id` is defined. Since there is no `eos_token_id`, the text will also never "end".
3. If `pad_token_id` is None and `eos_token_ids` is defined (as is the case for GPT-2), then the `pad_token_id` will be set to `eos_token_ids[0]`. A tensor `batches_len` is used to keep track of the first time a sequence generates an eos_token, and all tokens following this token are later set to the `pad_token_id`, which is `eos_token_ids[0]` and can thus be handled by the tokenizer (whereas a -1 cannot be handled by the tokenizer); a small sketch of this masking follows the table below.
4. **No** eos_token_id is appended to sentences that finish because of `max_length`. Instead, those sentences are returned ending with the last token the model produced before `max_length` was hit.
As an overview, here is a table showing which LM model tokenizers have which of the tokens `bos_token_id`, `pad_token_id` and `eos_token_ids` defined (x = defined, o = not defined):
| LM Model | bos_token_id | pad_token_id | eos_token_ids |
| ------------- | ------------- | ------------- | ------------- |
| XLNet | x | x | x |
| OpenAIGPT | o | o | o |
| CTRL | o | o | o |
| GPT2 | x | o | x |
| Transfo-XL | o | o | x |
| XLM | x | x | o |
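To make point 3 concrete, here is a small self-contained sketch; the token values and intermediate names (`first_eos`, `positions`) are invented for illustration, and this is not the PR's literal code:

```python
import torch

eos_token_ids = [50256]          # GPT-2's eos id, for illustration
pad_token_id = eos_token_ids[0]  # fallback when the tokenizer defines no pad token

# two "generated" sequences of length 5 (values are made up)
generated = torch.tensor([[15496, 11, 50256, 42, 99],
                          [31373, 50256, 7, 8, 50256]])

is_eos = generated == eos_token_ids[0]
seq_len = generated.shape[1]
# position of the first eos per sequence, or seq_len if none occurred
first_eos = torch.where(is_eos.any(dim=1),
                        is_eos.int().argmax(dim=1),
                        torch.tensor(seq_len))

# everything after the first eos becomes the pad token
positions = torch.arange(seq_len).unsqueeze(0)
generated = generated.masked_fill(positions > first_eos.unsqueeze(1), pad_token_id)
# -> tensor([[15496,    11, 50256, 50256, 50256],
#            [31373, 50256, 50256, 50256, 50256]])
```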
Testing times increase as follows (measured on a local machine):
| LM Model | Increase in test time |
| ------------- | ------------- |
| XLNet | 8.0s -> 9.7s |
| OpenAIGPT | 7.1s -> 8.3s |
| CTRL | 2.5s -> 4.3s |
| GPT2 | 7.3s -> 8.0s |
| Transfo-XL | 7.5s -> 8.0s |
| XLM | 7.4s -> 7.7s |
-> So overall, roughly a 10% increase in testing time.
## Future PRs:
- [x] [WIP] adding hard-coded slow tests for pretrained lms in PR #2909
- [x] [WIP] adapting the `generate` function for Seq-2-Seq and DoubleHeads or other special LM models in PR #2888
- [x] checking and possibly adapting behavior of `generate_beam_search`
- [x] treat Issues: #2482 and #2415 | 02-17-2020 20:37:46 | 02-17-2020 20:37:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=h1) Report
> Merging [#2885](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **increase** coverage by `1.75%`.
> The diff coverage is `86.22%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2885 +/- ##
=========================================
+ Coverage 75.35% 77.1% +1.75%
=========================================
Files 94 98 +4
Lines 15444 15971 +527
=========================================
+ Hits 11638 12315 +677
+ Misses 3806 3656 -150
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.59% <ø> (-0.04%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.99% <ø> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.82% <ø> (-0.03%)` | :arrow_down: |
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `26.66% <ø> (+1.36%)` | :arrow_up: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `21.73% <ø> (ø)` | :arrow_up: |
| [src/transformers/utils\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlsc19lbmNvZGVyX2RlY29kZXIucHk=) | `0% <0%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <100%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <100%> (+30.87%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.45% <100%> (-0.14%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <100%> (-0.07%)` | :arrow_down: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=footer). Last update [d490b5d...80ca73d](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> Love the tests!
>
> Do we ever test cases:
>
> * eos_token_ids are None, pad_token_id present
> * pad_token_id=None, eos_ids present
> * pad_token_id present, eos_ids None
>
> We should also figure out if those are realistic scenarios. Because if they are not we can delete a lot of code!
eos_token_ids present and pad_token_id = None -> GPT2, so this scenario is tested. This case is the hardest to handle for batch_size > 1, therefore quite a lot of assert statements and a warning in modeling_utils.py
eos_token_ids = None and pad_token_id present -> If eos_token_ids = None, the pad_token_id is somewhat irrelevant for generation because all batches will always be of max. length and therefore never will have to be padded (they can't finish because no eos_token can be generated)
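A hedged reconstruction of the GPT-2 fallback described in point 3 of the PR description, together with the warning mentioned above (paraphrased, not copied from the merged diff):

```python
import logging

logger = logging.getLogger(__name__)

# inside generate(), before decoding starts (sketch)
if pad_token_id is None and eos_token_ids is not None:
    logger.warning(
        "Setting `pad_token_id` to %s (first `eos_token_id`) to generate sequences",
        eos_token_ids[0],
    )
    pad_token_id = eos_token_ids[0]
```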
<|||||>This PR finally implements the following `bos_token_id, pad_token_id, eos_token_ids` logic for lm model generation.
1. If `bos_token_id` is None, then `input_ids` must be defined; otherwise the model cannot generate text, which is checked by the asserts in the beginning. The `bos_token_id` is only relevant for starting a new sentence.
2. If `eos_token_id` is None, then the length of the generated text will always equal `max_length`, no matter how the `pad_token_id` is defined. Since there is no `eos_token_id`, the text will also never "end".
3. If `pad_token_id` is None and `eos_token_ids` is defined (as is the case for GPT-2), then the `pad_token_id` will be set to `eos_token_ids[0]`. A tensor `batches_len` is used to keep track of the first time a sequence generates an eos_token, and all tokens following this token are later set to the `pad_token_id`, which is `eos_token_ids[0]` and can thus be handled by the tokenizer (whereas a -1 cannot be handled by the tokenizer).
4. **No** eos_token_id is appended to sentences that finish because of `max_length`. Instead, those sentences are returned ending with the last token the model produced before `max_length` was hit.
As an overview, here is a table showing which LM model tokenizers have which of the tokens `bos_token_id`, `pad_token_id` and `eos_token_ids` defined (x = defined, o = not defined):
| LM Model | bos_token_id | pad_token_id | eos_token_ids |
| ------------- | ------------- | ------------- | ------------- |
| XLNet | x | x | x |
| OpenAIGPT | o | o | o |
| CTRL | o | o | o |
| GPT2 | x | o | x |
| Transfo-XL | o | o | x |
| XLM | x | x | o |
## Future PRs:
- [x] [WIP] adding hard-coded slow tests for pretrained lms in PR #2909
- [ ] [WIP] adapting the `generate` function for Seq-2-Seq and DoubleHeads or other special LM models in PR #2888
- [x] checking and possibly adapting behavior of `generate_beam_search`
- [x] treat Issues: #2482 and #2415<|||||>> # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=h1) Report
> > Merging [#2885](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **increase** coverage by `1.49%`.
> > The diff coverage is `100%`.
>
> [](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree)
>
> ```diff
> @@ Coverage Diff @@
> ## master #2885 +/- ##
> ==========================================
> + Coverage 75.3% 76.79% +1.49%
> ==========================================
> Files 94 94
> Lines 15424 15448 +24
> ==========================================
> + Hits 11615 11864 +249
> + Misses 3809 3584 -225
> ```
>
> | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree) | Coverage Δ |
> |---|---|
> | [src/transformers/modeling_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.14% <100%> (+30.81%)` |
> | [src/transformers/configuration_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.46% <100%> (ø)` |
> | [src/transformers/modeling_transfo_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `75.63% <0%> (+0.84%)` |
> | [src/transformers/modeling_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.43% <0%> (+2.05%)` |
> | [src/transformers/modeling_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0%> (+2.61%)` |
> | [src/transformers/modeling_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.11% <0%> (+2.83%)` |
> | [src/transformers/modeling_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.23% <0%> (+3.96%)` |
> [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=continue).
>
> > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=footer). Last update [59c23ad...ac2e172](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
I'm a bit confused that the coverage for the file modeling_openai.py did not change. Tests for the openai lm were added but they seemingly had no effect - do you know why? @LysandreJik <|||||>No need to do all my `torch.Tensor` vs `.new` comments, TIL that `.new` lets you copy the device and dtype of the first tensor. |
transformers | 2,884 | closed | Evaluation and Inference added to run_glue.py | # 🚀 Feature request
It would be useful to have the following arguments added to `run_glue.py`, as well as probably the other task example scripts: `--eval_only` and `--inference_only`.
## Motivation
This will allow users to provide a `.tsv` or `.csv` with either sentences and labels, or just sentences, and then perform evaluation or inference with the model without having to train. This may already exist, but I was unable to find it. Consequently, I'm using https://github.com/kaushaltrivedi/fast-bert for these tasks. It would be helpful not to have to stray from the main huggingface transformers library.
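A minimal sketch of what such switches could look like in `run_glue.py`'s argument parser — the flag names are the ones proposed above, everything else is hypothetical:

```python
import argparse

parser = argparse.ArgumentParser()
# hypothetical flags matching the request above
parser.add_argument("--eval_only", action="store_true",
                    help="Skip training and evaluate the model on a labeled .tsv/.csv")
parser.add_argument("--inference_only", action="store_true",
                    help="Skip training and metrics; just predict labels for unlabeled sentences")
args = parser.parse_args()
```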
| 02-17-2020 18:20:23 | 02-17-2020 18:20:23 | Hi! You can already provide `do_eval` to `run_glue` to do the evaluation. If you don't specify `do_train`, it will only do the evaluation, and no training.
The inference would be a nice addition.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
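For reference, an eval-only invocation along the lines of the maintainer's comment above would look roughly like this (flag names as in the GLUE example script of that era; the model path and task are placeholders):

```bash
python run_glue.py \
  --model_type bert \
  --model_name_or_path ./my-finetuned-model \
  --task_name MRPC \
  --do_eval \
  --data_dir ./glue_data/MRPC \
  --max_seq_length 128 \
  --output_dir ./my-finetuned-model
```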
|
transformers | 2,883 | closed | Create README.md in the right path for bert-spanish-cased-finetuned-ner | 02-17-2020 14:23:42 | 02-17-2020 14:23:42 | 👍 |
|
transformers | 2,882 | closed | No prediction for some words (BERT NER) when run on GPU | I have tried to make predictions over the test data using GPU, but ended up having no predictions for some words. Any help would be appreciated.
Following is the shell script used for prediction.
```bash
export MAX_LENGTH=128
export BERT_MODEL=bert-base-multilingual-cased
DATA_DIR=../DATA/after_preprocess
OUTPUT_DIR=../CHECKPOINTS/GPU/CONLL_25L
LABELS_FILE_25=../DATA/after_preprocess/labels.txt
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SAVE_STEPS=750
export SEED=1

cd ../examples

python3 -m torch.distributed.launch run_ner.py --data_dir $DATA_DIR/ \
  --model_type bert \
  --labels $LABELS_FILE_25 \
  --model_name_or_path $BERT_MODEL \
  --output_dir $OUTPUT_DIR \
  --max_seq_length $MAX_LENGTH \
  --num_train_epochs $NUM_EPOCHS \
  --per_gpu_train_batch_size $BATCH_SIZE \
  --save_steps $SAVE_STEPS \
  --seed $SEED \
  --do_predict
```
Following is the error message:
```
02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - Model name '../CHECKPOINTS/GPU/CONLL_25L' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). Assuming '../CHECKPOINTS/GPU/CONLL_25L' is a path, a model identifier, or url to a directory containing tokenizer files.
02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - Didn't find file ../CHECKPOINTS/GPU/CONLL_25L/added_tokens.json. We won't load it.
02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file ../CHECKPOINTS/GPU/CONLL_25L/vocab.txt
02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file None
02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file ../CHECKPOINTS/GPU/CONLL_25L/special_tokens_map.json
02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file ../CHECKPOINTS/GPU/CONLL_25L/tokenizer_config.json
02/17/2020 12:07:51 - INFO - transformers.configuration_utils - loading configuration file ../CHECKPOINTS/GPU/CONLL_25L/config.json
02/17/2020 12:07:51 - INFO - transformers.configuration_utils - Model config BertConfig {
"architectures": [
"BertForTokenClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"directionality": "bidi",
"do_sample": false,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 25,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 119547
}
02/17/2020 12:07:51 - INFO - transformers.modeling_utils - loading weights file ../CHECKPOINTS/GPU/CONLL_25L/pytorch_model.bin
02/17/2020 12:07:54 - INFO - __main__ - Creating features from dataset file at ../DATA/after_preprocess/
02/17/2020 12:07:54 - INFO - utils_ner - Writing example 0 of 5100
02/17/2020 12:07:54 - INFO - utils_ner - *** Example ***
02/17/2020 12:07:54 - INFO - utils_ner - guid: test-1
02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] 1951 bis 1953 wurde der nördlich ##e Teil als Jugend ##burg des Ko ##lp ##ing ##werke ##s gebaut . [SEP]
02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 11200 10467 11087 10283 10118 28253 10112 13043 10223 32790 12248 10139 30186 35451 10230 32827 10107 25760 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 24 24 24 24 24 -100 24 24 24 -100 24 6 -100 -100 -100 -100 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100
02/17/2020 12:07:54 - INFO - utils_ner - *** Example ***
02/17/2020 12:07:54 - INFO - utils_ner - guid: test-2
02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] Da Mu ##ck das Krieg ##ss ##chreiben nicht über ##bra ##cht hat , wird er als Re ##tter des Landes ausgezeichnet und soll zum Sc ##hat ##zm ##eister ernannt werden . [SEP]
02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 11818 49056 11263 10242 20587 13420 82089 10726 10848 13581 11640 11250 117 10790 10163 10223 20304 18413 10139 23244 32149 10130 17375 10580 55260 19180 37661 45940 27093 10615 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 9 -100 24 24 -100 -100 24 24 -100 -100 24 24 24 24 24 24 -100 24 24 24 24 24 24 24 -100 -100 -100 24 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100
02/17/2020 12:07:54 - INFO - utils_ner - *** Example ***
02/17/2020 12:07:54 - INFO - utils_ner - guid: test-3
02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] Mit 1 . Jänner 2007 wurde Robert Sc ##h ##ör ##gen ##hof ##er , als Nachfolger des aus ##ges ##chie ##dene ##n Dietmar Dr ##abe ##k , in die Kader ##liste der FIFA - Sc ##hie ##ds ##richter aufgenommen . [SEP]
02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 12699 122 119 105531 10202 10283 10820 55260 10237 15020 11280 20202 10165 117 10223 27968 10139 10441 13156 50784 49906 10115 102411 11612 40929 10174 117 10106 10128 53361 26719 10118 13707 118 55260 72287 13268 59410 25919 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 24 -100 24 24 24 9 21 -100 -100 -100 -100 -100 24 24 24 24 24 -100 -100 -100 -100 9 21 -100 -100 24 24 24 24 -100 24 5 -100 -100 -100 -100 -100 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100
02/17/2020 12:07:54 - INFO - utils_ner - *** Example ***
02/17/2020 12:07:54 - INFO - utils_ner - guid: test-4
02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] Die These , Sc ##hla ##tter sei Anti ##sem ##it gewesen , wurde seither in der theo ##logischen Fach ##lite ##ratu ##r nicht mehr vertreten . [SEP]
02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 10236 13252 117 55260 74935 18413 13868 26267 38443 10486 27044 117 10283 85983 10106 10118 13951 57325 100705 66289 50088 10129 10726 12471 41852 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 24 24 9 -100 -100 24 24 -100 -100 24 24 24 24 24 24 24 -100 24 -100 -100 -100 24 24 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100
02/17/2020 12:07:54 - INFO - utils_ner - *** Example ***
02/17/2020 12:07:54 - INFO - utils_ner - guid: test-5
02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] " Le ##hm ##bru ##ck - Be ##uy ##s . Zeichnungen " lautet der Titel der gerade eröffnete ##n Ausstellung , die Kur ##atori ##n Dr . Marion Born ##sche ##uer bis zum 11 . Januar im Le ##hm ##bru ##ck - Museum präsentiert . [SEP]
02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 107 10281 29389 40309 11263 118 14321 53452 10107 119 96784 107 77566 10118 16076 10118 43234 61469 10115 41972 117 10128 61912 45804 10115 11612 119 27276 18021 12279 19047 10467 10580 10193 119 12468 10211 10281 29389 40309 11263 118 11325 91619 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 6 -100 -100 -100 18 18 -100 -100 18 -100 24 24 24 24 24 24 24 -100 24 24 24 24 -100 -100 24 24 9 21 -100 -100 24 24 24 -100 24 24 3 -100 -100 -100 -100 -100 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100
02/17/2020 12:07:57 - INFO - __main__ - Saving features into cached file ../DATA/after_preprocess/cached_test_bert-base-multilingual-cased_128
02/17/2020 12:07:58 - INFO - __main__ - ***** Running evaluation *****
02/17/2020 12:07:58 - INFO - __main__ - Num examples = 5100
02/17/2020 12:07:58 - INFO - __main__ - Batch size = 8
Evaluating: 0%| | 0/638 [00:00<?, ?it/s]
Evaluating: 100%|██████████| 638/638 [00:27<00:00, 23.15it/s]
02/17/2020 12:08:27 - INFO - __main__ - ***** Eval results *****
02/17/2020 12:08:27 - INFO - __main__ - f1 = 0.8600886024969794
02/17/2020 12:08:27 - INFO - __main__ - loss = 0.070130527494103
02/17/2020 12:08:27 - INFO - __main__ - precision = 0.8560205226871893
02/17/2020 12:08:27 - INFO - __main__ - recall = 0.8641955325348009
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'wird'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'er'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'als'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Retter'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'des'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Landes'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'ausgezeichnet'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'und'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'soll'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'zum'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Schatzmeister'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'ernannt'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'werden'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Nachfolger'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'des'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'ausgeschiedenen'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Dietmar'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Drabek'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for ','.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'in'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'die'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Kaderliste'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'FIFA-Schiedsrichter'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'aufgenommen'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '"'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'lautet'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Titel'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'gerade'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'eröffneten'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Ausstellung'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for ','.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'die'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Kuratorin'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Dr'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Marion'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Bornscheuer'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'bis'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'zum'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '11.'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Januar'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'im'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Lehmbruck-Museum'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'präsentiert'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'an'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Südseite'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'des'.
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Saals'.
| 02-17-2020 13:35:38 | 02-17-2020 13:35:38 | I tried to make the predictions using CPU, and it worked just fine. But the predictions made by CPU is totally different from the predictions made by GPU. Isn't a model supposed to give the same predictions irrespective of whether it is loaded in GPU or CPU?
Any help would be appreciated.
Thanks in advance<|||||>Can you post a short reproducibility case?<|||||>@cibinjohn
The error msg "Maximum sequence length exceeded" indicates that the input length (say 257) was longer than the max_seq_len parameter (say 256). In that case the last token will not be predicted (it is actually trimmed away during preprocessing). You have to reduce the length of your input or increase max_seq_len, whichever works for you.
As for the different results between CPU and GPU, I cannot reproduce a case similar to yours.<|||||>@cibinjohn
I was facing the same issue yesterday while testing on my own dataset. I tried doubling the max_seq_length value, which was previously equal to the MAX_LENGTH env variable. It worked for me.
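To make the truncation explanation above concrete, here is a minimal sketch (the token counts, names and label ids are made up) of how the feature conversion drops sub-tokens beyond max_seq_length, which is what produces the "No prediction" warnings:

```python
# illustrative only: mimics the truncation done during feature conversion
max_seq_length = 6        # includes room for [CLS] and [SEP]
special_tokens_count = 2

tokens = ["Da", "Mu", "##ck", "das", "Krieg", "##ss", "##chreiben"]  # 7 sub-tokens
label_ids = [24, 9, -100, 24, 24, -100, -100]

if len(tokens) > max_seq_length - special_tokens_count:
    tokens = tokens[: max_seq_length - special_tokens_count]
    label_ids = label_ids[: max_seq_length - special_tokens_count]

# the sub-tokens cut off here can never receive a prediction,
# hence "Maximum sequence length exceeded: No prediction for '...'"
print(tokens)      # ['Da', 'Mu', '##ck', 'das']
print(label_ids)   # [24, 9, -100, 24]
```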
<|||||>If any of you can post a short reproducible example, we can look into this.<|||||>@BramVanroy what I observed is that the prediction list contains lists of different sizes, which depend on the length of the row data before every empty line in test.txt, the file created after pre-processing with preprocess.py.
So I kept MAX_LENGTH at 128 and max_seq_length in the range of 190-256, and it worked.
In my view, it might be because of the length of the word_tokens list, or maybe because of the length of the row data before every empty line of the test.txt file.
[test (5).txt](https://github.com/huggingface/transformers/files/4255447/test.5.txt)
If you keep max_seq_length at 128 for this text file, it will show "Maximum sequence length exceeded" for around 400 tokens.<|||||> @BramVanroy
```
%%capture
!pip install -qU transformers==2.4
!pip install -qU pytorch-lightning
!git clone --branch fixlight https://github.com/srush/transformers
!pip install -r transformers/examples/requirements.txt
%%capture
%%bash
cd transformers/examples/ner/
wget "https://raw.githubusercontent.com/stefan-it/fine-tuned-berts-seq/master/scripts/preprocess.py"
export MAX_LENGTH=128
export BERT_MODEL=bert-base-multilingual-cased
python3 preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt
python3 preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt
python3 preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt
cat train.txt dev.txt test.txt | cut -d " " -f 2 | grep -v "^$"| sort | uniq > labels.txt
!cd transformers/examples/ner/; \
export MAX_LENGTH=190; \
export BERT_MODEL=bert-base-multilingual-cased; \
export OUTPUT_DIR=germeval-model; \
export BATCH_SIZE=32; \
export NUM_EPOCHS=3; \
export SAVE_STEPS=750; \
export SEED=42; \
python3 run_ner.py --data_dir ./ \
--model_type bert \
--labels ./labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_gpu_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict
```
<|||||>I ran into this issue the other day, and I checked all the cases but nothing had gone wrong.
So I made a new directory, put just 'train.txt', 'test.txt', 'dev.txt' and 'labels.txt' there, and started again;
then everything was OK.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,881 | closed | update .gitignore to ignore .swp files created when using vim | adds one line to .gitignore | 02-17-2020 13:27:28 | 02-17-2020 13:27:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=h1) Report
> Merging [#2881](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6083c1566e261668a5de73cfe484c171ce232812?src=pr&el=desc) will **decrease** coverage by `1.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2881 +/- ##
==========================================
- Coverage 75.06% 73.98% -1.08%
==========================================
Files 94 94
Lines 15288 15288
==========================================
- Hits 11476 11311 -165
- Misses 3812 3977 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=footer). Last update [6083c15...fb4d8d0](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Welcome @patrickvonplaten :) |
transformers | 2,880 | closed | Support for NER with ALBERT using Transformers and Simpletransformers | NER was not yet supported for ALBERT, so I added support for it.
Mainly, I implemented AlbertForTokenClassification in modeling_albert.py and changed other code as needed to keep everything consistent. | 02-17-2020 06:51:56 | 02-17-2020 06:51:56 |
transformers | 2,879 | closed | [model_cards] 🇹🇷 Add new (cased) BERTurk model | Hi,
this PR adds the model card for the (cased) community-driven 🇹🇷 BERTurk model.
Uncased model is coming soon! | 02-17-2020 00:19:09 | 02-17-2020 00:19:09 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=h1) Report
> Merging [#2879](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6083c1566e261668a5de73cfe484c171ce232812?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2879 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15288 15288
=======================================
Hits 11476 11476
Misses 3812 3812
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=footer). Last update [6083c15...d18f775](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks good: https://huggingface.co/dbmdz/bert-base-turkish-cased |
transformers | 2,878 | closed | FileNotFoundError when python runs setup.py for sentencepiece | # 🐛 Bug
FileNotFoundError when python runs setup.py for sentencepiece
I am running Python 3.7, TensorFlow 2.1, on Buster
Model I am using (Bert, XLNet ...): Would be using gpt-2 if I can install it...
Language I am using the model on is English
The problem arises when installing using
pip install transformers
## To reproduce
Steps to reproduce the behavior:
Run on a Raspberry Pi 4B (4Gig) running Python 3.7, Tensorflow 2.1 and Buster
1. pip install transformers
Wait for other downloads and installing to complete and this will eventually arise:
```
Collecting sentencepiece (from transformers)
Downloading https://files.pythonhosted.org/packages/1b/87/c3c2fa8cbec61fffe031ca9f0da512747520bec9be7f886f748457daac31/sentencepiece-0.1.83.tar.gz (497kB)
100% |████████████████████████████████| 501kB 225kB/s
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-xohx1aio/sentencepiece/setup.py", line 29, in <module>
with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f:
File "/usr/lib/python3.7/codecs.py", line 898, in open
file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '../VERSION'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-xohx1aio/sentencepiece/
```
So I tried downloading a wheel file from https://github.com/google/sentencepiece/releases for my Python version and installing it with pip install sentencepiece-xxx-cpxx-xx.whl.
However, I see only Mac, x86, and "manylinux" wheels, and the manylinux wheels specifically reference i686 or x86; nothing I can see for Arm Core 71 (linux_armv7l).
Also tried a straight install of sentencepiece, with identical failure:
```
$ pip install sentencepiece
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting sentencepiece
Using cached https://files.pythonhosted.org/packages/1b/87/c3c2fa8cbec61fffe031ca9f0da512747520bec9be7f886f748457daac31/sentencepiece-0.1.83.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-6tdniw95/sentencepiece/setup.py", line 29, in <module>
with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f:
File "/usr/lib/python3.7/codecs.py", line 898, in open
file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '../VERSION'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-6tdniw95/sentencepiece/
```
## Expected behavior
I was expecting installation to complete successfully.
## Environment info
$ python transformers-cli env
python: can't open file 'transformers-cli': [Errno 2] No such file or directory
This makes sense since transformers never finishes installing
- `transformers` version: Newest available with pip install
- Platform: Raspbian Buster
- Python version: 3.7.3
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1 cpu
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Thanks for your assistance.
| 02-16-2020 19:58:41 | 02-16-2020 19:58:41 | Sounds like a [sentencepiece](https://github.com/google/sentencepiece) issue?<|||||>I have an inquiry there as well since just a straight install of sentencepiece gives same results - I was just hoping there might be a way of using the HuggingFace Transformers without sentencepiece (although the name sentencepiece suggests to me that it may perform some critical functions...) something like "pip install transformers --skip sentencepiece" ? :)
<|||||>@DaveXanatos I faced the same issue. It seems to be a problem with the pip package of sentencepiece. I have opened an issue with them.
As a workaround, I installed sentencepiece from conda and it worked. After installing this you can install transformers.
`conda install -c powerai sentencepiece`
<|||||>Be wary of using conda and pip at the same time, if you don't know _exactly_ what you are doing, this will lead to unexpected complications.
SentencePiece is required for most recent models, so it is a hard dependency. I advise you to just wait until this is solved in the sentencepiece library, or download and install an earlier version of their library, e.g. https://github.com/google/sentencepiece/releases/tag/v0.1.84<|||||>@tkhan3 Thanks for the conda possibility, I will look into that in the interim.
@BramVanroy I have heard this before and a couple of years ago I completely hosed my build doing just this :) Where would you suggest, as the most direct route to understanding exactly the differences between pip installs and conda installs in terms of paths, dependencies, etc., such that I could conda install with confidence a package in an otherwise pip installed environment?<|||||>I'm afraid I cannot help with that. I stay away from conda as much as I can. Pipenv is my main driver, falling back to an environment's pip where necessary.<|||||>@BramVanroy I'm with you on that from my experience, although I know some folks swear by it... thanks for the warning. I'll probably flash a backup image and then go and play with the possibilities and see if I can get it to work... I can always reflash back to the backup if I break everything again. If I have success I'll let you know what I found.<|||||>Closing this. I propose that the discussion is moved to the sentencepiece library. https://github.com/google/sentencepiece/issues/452<|||||>I got a similar issue.
When I install sentence-transformers on Linux by python, I got an error message:
ERROR: Could not find a version that satisfies the requirement transformers>=3.0.2 (from sentence-transformers) (from versions: 0.1, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.4.0, 2.4.1, 2.5.0, 2.5.1)
ERROR: No matching distribution found for transformers>=3.0.2 (from sentence-transformers)
system: Linux 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1+deb9u1 (2020-06-07) x86_64 on GCP VM instance.
Is there any suggestion?
<|||||>I had the same issue but on Ubuntu 20.04.1.
My problem was that I used a pip version too old to install sentencepiece, as it requires pip>=19.3. (https://github.com/google/sentencepiece/issues/572#issuecomment-716890916)
So my solution was to upgrade my pip installation to 20.2.4
$ pip install --upgrade pip
Similar issues have been discussed here: https://github.com/google/sentencepiece/issues/572
Hope it helps |
transformers | 2,877 | closed | Error with run_language_modeling.py training from scratch | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Training from scratch
Language I am using the model on (English, Chinese ...): Training from scratch with Esperanto (per tutorial)
The problem arises when using:
* [ ] the official example scripts: (give details below) run_language_modeling.py
* [x] my own modified scripts: (give details below)
Added the class below per the tutorial (https://huggingface.co/blog/how-to-train) and called it instead of TextDataset:
```
class EsperantoDataset(Dataset):
    def __init__(self, evaluate: bool = False):
        tokenizer = ByteLevelBPETokenizer(
            "./models/EsperBERTo-small/vocab.json",
            "./models/EsperBERTo-small/merges.txt",
        )
        tokenizer._tokenizer.post_processor = BertProcessing(
            ("</s>", tokenizer.token_to_id("</s>")),
            ("<s>", tokenizer.token_to_id("<s>")),
        )
        tokenizer.enable_truncation(max_length=512)
        # or use the RobertaTokenizer from `transformers` directly.
        self.examples = []
        src_files = Path("./data/").glob("*-eval.txt") if evaluate else Path("./data/").glob("*-train.txt")
        for src_file in src_files:
            print("🔥", src_file)
            lines = src_file.read_text(encoding="utf-8").splitlines()
            self.examples += [x.ids for x in tokenizer.encode_batch(lines)]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        # We'll pad at the batch level.
        return torch.tensor(self.examples[i])
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) Training from scratch
* [x] my own task or dataset: (give details below)
eo_dedup.txt.gz (https://traces1.inria.fr/oscar/)
## To reproduce
Steps to reproduce the behavior:
1. file structure:
project
│ run_language_modeling.py
│
└───models
│ │
│ └───EsperBERTo-small
│ │ merges.txt
│ │ vocab.json
|
└───datasets
│ eo-dedup-train.txt
2.
```
python run_language_modeling.py \
--output_dir ./models/EsperBERTo-small-v1 \
--model_type roberta \
--mlm \
--tokenizer_name ./models/EsperBERTo-small \
--do_train \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--save_total_limit 2 \
--save_steps 2000 \
--per_gpu_train_batch_size 4 \
--evaluate_during_training \
--seed 42 \
--train_data_file eo-dedup-train.txt
```
Results in this stack trace with CUDA_LAUNCH_BLOCKING=1:
```
/opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [53,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [53,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "test.py", line 832, in <module>
main()
File "test.py", line 782, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "test.py", line 386, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 243, in forward
inputs_embeds=inputs_embeds,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 799, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 64, in forward
input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 193, in forward
embeddings = inputs_embeds + position_embeddings + token_type_embeddings
RuntimeError: CUDA error: device-side assert triggered
```
## Expected behavior
## Environment info
- `transformers` version: 2.4.1
- Platform: AWS p2.xlarge ubuntu
- Python version: 3.6.5
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-16-2020 18:18:32 | 02-16-2020 18:18:32 | I ran into this with my own dataset. Following some discussion in #1538 I changed truncation to 256.
> tokenizer.enable_truncation(max_length=256)
I also had to make sure that the pad token had index 1, as that seems to be hardcoded in roberta.
This appears to work, though the previous error had only appeared fairly deep into training, and reducing context isn't great. So I'm not satisfied by this hack. <|||||>If you want to see the error more clearly, switch to CPU; then it will print out the real error. I ran into this error in another project, and finally I found out it is basically an `index out of range` error. Fixed it by adding some missing words to the vocabulary.txt and resizing the model itself.<|||||>> I ran into this with my own dataset. Following some discussion in #1538 I changed truncation to 256.
>
> > tokenizer.enable_truncation(max_length=256)
>
> I also had to make sure that the pad token had index 1, as that seems to be hardcoded in roberta.
>
> This appears to work, though the previous error had only appeared fairly deep into training, and reducing context isn't great. So I'm not satisfied by this hack.
@reidsanders were you able to train the language model with your own dataset eventually?
@binhna have you added the missing words in place of the unknown token, or randomly added them to the vocab?
<|||||>Hello @reidsanders, @samreenkazi
I encountered the same error and tried `tokenizer.enable_truncation(max_length=256)` on some BERT models. But it seems that there is no such method:
`'PreTrainedTokenizer' object has no attribute 'enable_truncation'`
Could you give more details about how you solved the problem?<|||||>I am unable to solve this problem as yet
<|||||>> Hello @reidsanders, @samreenkazi
>
> I encountered the same error and tried `tokenizer.enable_truncation(max_length=256)` on some BERT models. But it seems that there is no such method:
> `'PreTrainedTokenizer' object has no attribute 'enable_truncation'`
>
> Could you give more details about how you solved the problem?
enable_truncation is not a method in PreTrainedTokenizers (we are training from scratch, not pretrained). I'm using ByteLevelBPETokenizer imported from tokenizers module as in the example blog post (and op).<|||||>Thanks for your comment @reidsanders @samreenkazi
According to my observation, the error is indeed caused by data samples longer than 512 tokens. A conservative solution is fixing `block_size=512` in the `TextDataset` and `LineByLineTextDataset` classes. Or, if you worry about truncating data, you can go through `self.examples` in these two classes and check whether each sample is shorter than 512; if not, split it into multiple chunks of length 512.
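A minimal sketch of that chunking idea (a hypothetical helper, not part of the script; `examples` is the list of tokenized id lists these classes build):
```python
def split_into_chunks(examples, max_len=512):
    """Split each over-long tokenized sample into pieces of at most max_len ids."""
    chunked = []
    for ids in examples:
        for start in range(0, len(ids), max_len):
            chunked.append(ids[start:start + max_len])
    return chunked

# inside the dataset class one could then do: self.examples = split_into_chunks(self.examples)
```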
<|||||>For me it helped to just specify the argument `--block_size=512` (it's -1 by default)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,876 | closed | Create bert-spanish-cased-finedtuned-ner.md | 02-16-2020 14:20:10 | 02-16-2020 14:20:10 | The file path should be `model_cards/mrm8488/bert-spanish-cased-finedtuned-ner/README.md` @mrm8488 |
|
transformers | 2,875 | closed | Update README.md | I trained the model for more epochs so I improved the results. This commit will update the results of the model and add a gif using it with **transformers/pipelines** | 02-16-2020 12:11:02 | 02-16-2020 12:11:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=h1) Report
> Merging [#2875](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73028c5df0c28ca179fbe565482a9c2143787f61?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2875 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15288 15288
=======================================
Hits 11476 11476
Misses 3812 3812
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=footer). Last update [73028c5...cb1cba9](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for updating! GIFs on Giphy are a bit over-compressed so you can also host them
- on one of your GitHub repos
- or even directly in the model card's folder (see severinsimmler/literary-german-bert/README.md as an example)<|||||>Also, great results :)<|||||>Thank you!! |
transformers | 2,874 | closed | How to run TFBERT model in disable_eager_execution() mode | How to run a `TFBERT` model in `disable_eager_execution()` mode.
If it is possible, please let me know, thanks! | 02-16-2020 08:49:50 | 02-16-2020 08:49:50 | The TFBERT model can only be loaded with TF >= 2.0. And from TF 2.0 onwards, eager execution is on by default. |
transformers | 2,873 | closed | how to get "xlnet-base-cased-pytorch_model.bin" original 'last modified' date? | I'd like to test my finetuning on some wikipedia articles that have not been seen by the model.
For that, I can find the date of creation on the wikipedia article, but I'd also like to verify that this is after the 'last modified' date of the model.
How can I get the answer to when was the model parameters' last modified? | 02-16-2020 06:34:16 | 02-16-2020 06:34:16 | Just click on 'List all files in model' and you will see the upload date [1].
[1] https://huggingface.co/xlnet-base-cased |
transformers | 2,872 | closed | Explanation of the results derived from fine tuning | # 🚀 Feature request
Hi,
It would be super nice if you could add a visualization utility that shows why the model inferred a particular result, for example which words of the sentence made it labelled as positive.
Thank you. | 02-16-2020 06:19:11 | 02-16-2020 06:19:11 | @gofimofi You might want to look at https://github.com/jessevig/bertviz which is compatible with transformers<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,871 | closed | RoBERTa has a token_type layer (just a cosmetic issue) | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): Language-independent
The cosmetic problem:
The fairseq RoBERTa doesn't have a token_type layer:
```
TransformerSentenceEncoder(
(embed_tokens): Embedding(50265, 768, padding_idx=1)
(embed_positions): LearnedPositionalEmbedding(514, 768, padding_idx=1)
```
The huggingface implementation of RoBERTa accepts token type ids because RobertaModel inherits from BertModel and the layer is inherited by RobertaEmbeddings from BertEmbeddings:
```
RobertaEmbeddings(
(word_embeddings): Embedding(50265, 768, padding_idx=1)
(position_embeddings): Embedding(514, 768, padding_idx=1)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
```
@julien-c wrote in #2702:
> On the fact that our RoBERTa implem(tation) takes (inoperant by default) `token_type_ids`, maybe we should actually remove them from the implem. If you want to train some, you can always subclass RoBERTa and add them back (but I'm not 100% sure a lot of people use them). Thoughts?
## Expected behavior
I think the huggingface models should be as close to original as possible and therefore RoBERTA should not have a token_type_embeddings layer and not accept token_type_ids. I know this is just a cosmetic issue, but I think it causes some confusion. I would like to use this issue to collect some opinions.
If there are no other opinions, I would like to work on this. This also affects #2727
| 02-16-2020 06:01:47 | 02-16-2020 06:01:47 | This is something that we're looking at with @LysandreJik and @thomwolf – In the meantime, feel free to open a draft PR.<|||||>As discussed in the other issues, it would be great if a lot of care is taken in maximising the compatibility between a tokenizer and its corresponding model, as I discussed in https://github.com/huggingface/transformers/issues/2702#issuecomment-581480669. In other words, the tokenizer encode methods should only return those values that are accepted by its model's forward method.<|||||>RoBERTa has only one possible token type id (`0`), but the embedding for that still needs to be there. That embedding is added to all word piece embeddings, and that's how it was trained. If you suddenly stop doing that, the model will stop working.
You could just add that embedding to all word piece embeddings and store the model under a new name to achieve the same effect. But you can't just take it away.
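A minimal sketch of that folding trick (my wording, not the commenter's code; it relies on RoBERTa's token type id always being 0, and the output path is a placeholder):
```python
import torch
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")
with torch.no_grad():
    tt = model.embeddings.token_type_embeddings.weight[0]  # the only token type
    model.embeddings.word_embeddings.weight += tt           # fold it into every word embedding
model.save_pretrained("./roberta-base-no-token-types")      # hypothetical output path
```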
If you take it away, you have to retrain the whole thing.<|||||>In my opinion you don't have to retrain it, as the weights of this layer are zero. Have a look at the example below:
```python
from transformers.modeling_roberta import RobertaForSequenceClassification
model = RobertaForSequenceClassification.from_pretrained('roberta-base')
print(model.state_dict()['roberta.embeddings.token_type_embeddings.weight'])
##Output truncated by me:
##tensor([[0., 0., 0., 0., 0., .....0., 0., 0., 0.]])
```
So what happens when we remove this layer:
```
##Defining our own roberta class without the token_type_embeddings layer
import torch
from torch import nn
from torch.nn import MSELoss
from transformers.modeling_bert import BertLayerNorm, BertModel, BertPreTrainedModel
from transformers.modeling_roberta import RobertaClassificationHead, RobertaConfig, ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP, create_position_ids_from_input_ids, CrossEntropyLoss

class MyBertEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0)
        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
        #self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
        # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
        # any TensorFlow checkpoint file
        self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):
        if input_ids is not None:
            input_shape = input_ids.size()
        else:
            input_shape = inputs_embeds.size()[:-1]
        seq_length = input_shape[1]
        device = input_ids.device if input_ids is not None else inputs_embeds.device
        if position_ids is None:
            position_ids = torch.arange(seq_length, dtype=torch.long, device=device)
            position_ids = position_ids.unsqueeze(0).expand(input_shape)
        if token_type_ids is None:
            token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
        if inputs_embeds is None:
            inputs_embeds = self.word_embeddings(input_ids)
        position_embeddings = self.position_embeddings(position_ids)
        #token_type_embeddings = self.token_type_embeddings(token_type_ids)
        embeddings = inputs_embeds + position_embeddings #+ token_type_embeddings
        embeddings = self.LayerNorm(embeddings)
        embeddings = self.dropout(embeddings)
        return embeddings

class MyRobertaEmbeddings(MyBertEmbeddings):
    def __init__(self, config):
        super().__init__(config)
        self.padding_idx = 1
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=self.padding_idx)
        self.position_embeddings = nn.Embedding(
            config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
        )

    def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):
        if position_ids is None:
            if input_ids is not None:
                # Create the position ids from the input token ids. Any padded tokens remain padded.
                position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device)
            else:
                position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)
        return super().forward(
            input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
        )

    def create_position_ids_from_inputs_embeds(self, inputs_embeds):
        input_shape = inputs_embeds.size()[:-1]
        sequence_length = input_shape[1]
        position_ids = torch.arange(
            self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device
        )
        return position_ids.unsqueeze(0).expand(input_shape)

class MyRobertaModel(BertModel):
    config_class = RobertaConfig
    pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP
    base_model_prefix = "roberta"

    def __init__(self, config):
        super().__init__(config)
        self.embeddings = MyRobertaEmbeddings(config)
        self.init_weights()

    def get_input_embeddings(self):
        return self.embeddings.word_embeddings

    def set_input_embeddings(self, value):
        self.embeddings.word_embeddings = value

class MyRobertaForSequenceClassification(BertPreTrainedModel):
    config_class = RobertaConfig
    pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP
    base_model_prefix = "roberta"

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = MyRobertaModel(config)
        self.classifier = RobertaClassificationHead(config)

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
    ):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
        )
        sequence_output = outputs[0]
        logits = self.classifier(sequence_output)
        outputs = (logits,) + outputs[2:]
        if labels is not None:
            if self.num_labels == 1:
                # We are doing regression
                loss_fct = MSELoss()
                loss = loss_fct(logits.view(-1), labels.view(-1))
            else:
                loss_fct = CrossEntropyLoss()
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss,) + outputs
        return outputs  # (loss), logits, (hidden_states), (attentions)
```
mymodel = MyRobertaForSequenceClassification.from_pretrained('roberta-base')
##We need to set the weights of the randomly initialized layers to the same values
import torch
for name, param in mymodel.named_parameters():
if not(torch.all(param.data.eq(model.state_dict()[name]))):
print('{} is not identical'.format(name))
param.data = model.state_dict()[name]
##Output:
##classifier.dense.weight is not identical
##classifier.dense.bias is not identical
##classifier.out_proj.weight is not identical
##classifier.out_proj.bias is not identical
```
Now we can compare mymodel with model:
```
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
for m in [mymodel, model]:
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = m(input_ids, labels=labels)
print(outputs)
##(tensor(0.8658, grad_fn=<NllLossBackward>), tensor([[ 0.0927, -0.2271]], grad_fn=<AddmmBackward>))
##(tensor(0.8658, grad_fn=<NllLossBackward>), tensor([[ 0.0927, -0.2271]], grad_fn=<AddmmBackward>))
```
and see that the output is the same.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,870 | closed | distilberttokenizer.encode_plus() token_type_ids are non-default | DistilBert doesn't use token_type_ids. Therefore the encode_plus() method of the DistilBertTokenizer should generate them per default. This fix sets the default value of return_token_type_ids to False.
Closes #2702 | 02-16-2020 04:09:15 | 02-16-2020 04:09:15 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=h1) Report
> Merging [#2870](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73028c5df0c28ca179fbe565482a9c2143787f61?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2870 +/- ##
==========================================
+ Coverage 75.06% 75.06% +<.01%
==========================================
Files 94 94
Lines 15288 15290 +2
==========================================
+ Hits 11476 11478 +2
Misses 3812 3812
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.1% <ø> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=footer). Last update [73028c5...db509eb](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>obsolet since 2.6 release |
transformers | 2,869 | closed | ValueError: too many dimensions 'str' | # 🐛 Bug
**To Reproduce**
Steps to reproduce the behavior:
Here is my Colab Notebook you can run to to see the error
https://colab.research.google.com/drive/1ESyf46RNBvrg-7DDQ5l8zhlKZjWGdqUv#scrollTo=MqlsdjFVMmMZ
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-0b9dcdf94c77> in <module>()
71
72 # Train the model
---> 73 model.train_model(train_df)
74
75 # Evaluate the model
1 frames
/usr/local/lib/python3.6/dist-packages/simpletransformers/classification/classification_model.py in train_model(self, train_df, multi_label, output_dir, show_running_loss, args, eval_df, verbose, **kwargs)
261 ]
262
--> 263 train_dataset = self.load_and_cache_examples(train_examples, verbose=verbose)
264
265 os.makedirs(output_dir, exist_ok=True)
/usr/local/lib/python3.6/dist-packages/simpletransformers/classification/classification_model.py in load_and_cache_examples(self, examples, evaluate, no_cache, multi_label, verbose, silent)
757
758 if output_mode == "classification":
--> 759 all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
760 elif output_mode == "regression":
761 all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.float)
ValueError: too many dimensions 'str'
```
The problem arises when using
```
from simpletransformers.classification import ClassificationModel
import pandas as pd
prefix = '/content/'
train_df = pd.read_csv(prefix + 'train.csv', header=None)
train_df=train_df.drop(index=0)
model = ClassificationModel('roberta', 'roberta-base')
model.train_model(train_df)
```
| 02-16-2020 04:07:45 | 02-16-2020 04:07:45 | I suggest to close this topic and keep the discussion over at https://github.com/ThilinaRajapakse/simpletransformers/issues/229. |
transformers | 2,868 | closed | How can I run NER on ALBERT? | I what to run NER on ALBERT, so I checked the [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py), but it seems like no ALBERT support.
So can I simply import `AlbertTokenizer`, `AlbertForTokenClassification` and
`AlbertConfig` in the script and add them to `MODEL_CLASSES` and `ALL_MODELS`(Or need other config)? | 02-15-2020 23:32:10 | 02-15-2020 23:32:10 | +1
Same for [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)?<|||||>What do you think about moving the examples to `AutoModels` (in this case `AutoModelForTokenClassification`) @srush @LysandreJik @julien-c ?<|||||>@thomwolf Indeed, that would be nice.<|||||>Yup, sounds good to me (it will make things much simpler). <|||||>Yes, makes sense<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,867 | closed | from_pretrained making internet connection if internet turned on | I'd like to ask why model.from_pretrained makes ssl connection event though I provide cache_dir? If I turn off the internet everything works just fine.
```
│ └─ 0.726 from_pretrained transformers/tokenization_utils.py:256
│ └─ 0.726 _from_pretrained transformers/tokenization_utils.py:311
│ ├─ 0.570 cached_path transformers/file_utils.py:205
│ │ └─ 0.570 get_from_cache transformers/file_utils.py:333
│ │ └─ 0.570 head requests/api.py:91
│ │ └─ 0.570 request requests/api.py:16
│ │ └─ 0.565 request requests/sessions.py:466
│ │ └─ 0.558 send requests/sessions.py:617
│ │ └─ 0.558 send requests/adapters.py:394
│ │ └─ 0.557 urlopen urllib3/connectionpool.py:494
│ │ └─ 0.557 _make_request urllib3/connectionpool.py:351
│ │ ├─ 0.413 _validate_conn urllib3/connectionpool.py:986
│ │ │ └─ 0.413 connect urllib3/connection.py:298
│ │ │ ├─ 0.281 ssl_wrap_socket urllib3/util/ssl_.py:296
│ │ │ │ ├─ 0.263 wrap_socket ssl.py:410
│ │ │ │ │ └─ 0.263 _create ssl.py:813
│ │ │ │ │ └─ 0.263 do_handshake ssl.py:1132
│ │ │ │ └─ 0.018 [self]
│ │ │ └─ 0.132 _new_conn urllib3/connection.py:143
│ │ │ └─ 0.132 create_connection urllib3/util/connection.py:33
│ │ │ └─ 0.130 [self]
│ │ └─ 0.144 getresponse http/client.py:1300
│ │ └─ 0.144 begin http/client.py:299
│ │ └─ 0.144 _read_status http/client.py:266
│ │ └─ 0.144 readinto socket.py:575
│ │ └─ 0.144 recv_into ssl.py:1060
│ │ └─ 0.144 read ssl.py:920
```
and here's the output with internet turned off
```
└─ 0.358 from_pretrained transformers/tokenization_utils.py:256
│ └─ 0.358 _from_pretrained transformers/tokenization_utils.py:311
│ ├─ 0.255 __init__ transformers/tokenization_bert.py:138
│ │ ├─ 0.163 load_vocab transformers/tokenization_bert.py:98
│ │ │ └─ 0.160 [self]
│ │ ├─ 0.056 <listcomp> transformers/tokenization_bert.py:186
│ │ └─ 0.036 [self]
│ └─ 0.102 cached_path transformers/file_utils.py:205
│ └─ 0.101 get_from_cache transformers/file_utils.py:333
│ ├─ 0.083 head requests/api.py:91
│ │ └─ 0.083 request requests/api.py:16
│ │ └─ 0.080 request requests/sessions.py:466
│ │ ├─ 0.066 send requests/sessions.py:617
│ │ │ └─ 0.066 send requests/adapters.py:394
│ │ │ ├─ 0.046 urlopen urllib3/connectionpool.py:494
│ │ │ │ ├─ 0.035 _make_request urllib3/connectionpool.py:351
│ │ │ │ │ └─ 0.035 _validate_conn urllib3/connectionpool.py:986
│ │ │ │ │ └─ 0.035 connect urllib3/connection.py:298
│ │ │ │ │ └─ 0.035 _new_conn urllib3/connection.py:143
│ │ │ │ │ ├─ 0.015 create_connection urllib3/util/connection.py:33
│ │ │ │ │ │ └─ 0.014 getaddrinfo socket.py:735
│ │ │ │ │ ├─ 0.012 [self]
│ │ │ │ │ └─ 0.008 __init__ urllib3/exceptions.py:20
│ │ │ │ └─ 0.006 increment urllib3/util/retry.py:355
│ │ │ ├─ 0.008 [self]
│ │ │ └─ 0.008 __init__ requests/exceptions.py:17
│ │ ├─ 0.006 merge_environment_settings requests/sessions.py:690
│ │ │ └─ 0.005 get_environ_proxies requests/utils.py:755
│ │ └─ 0.006 [self]
│ └─ 0.014 filter fnmatch.py:48
│ └─ 0.009 _compile_pattern fnmatch.py:38
│ └─ 0.005 compile re.py:232
│ └─ 0.005 _compile re.py:271
│ └─ 0.005 compile sre_compile.py:759
``` | 02-15-2020 17:24:24 | 02-15-2020 17:24:24 | I might be mistaken, but it seems that `s3_etag` verifies that the etag of a cached (downloaded) file is the same as the one that is in the S3 bucket, to ensure that you have the right files (in terms of versions, or corruption). If those files are not in the cached folder, they are downloaded.
See
https://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/file_utils.py#L330-L336<|||||>Is there any way to turn that off? <|||||>Not as far as I can see. What is your use-case? Why do you need this?<|||||>A use case where validating against external servers is not ideal is if the network is behind a firewall and/or is a containerized microservice, and you want to avoid pinging outside the firewall as much as possible.
I would appreciate a config flag that disables all external pinging.<|||||>It's not comfortable for development - I'm doing many tests with the pretrained model and it's pretty annoying as it slows down my experiments considerably. I guess I could just save and load the model myself, but I was curious why `from_pretrained` takes so long.<|||||>I think it should be possible by skipping this block (and setting `etag=None`)
https://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/file_utils.py#L399-L409
which will then fallback to
https://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/file_utils.py#L418-L430
A flag should be added to the signature, something like: `disable_outgoing=False`. When `True`, it will skip the lookup and possible download.
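Roughly, the proposed change inside `get_from_cache` could look like this (a simplified, hypothetical sketch; the real function has more branches and parameter names may differ):
```python
etag = None
if not disable_outgoing:
    response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
    if response.status_code == 200:
        etag = response.headers.get("ETag")
# with etag None, fall back to whatever matching file already sits in cache_dir
```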
I might be able to work on this in the future, but it's not high on my priority list.
Opinions? @minimaxir @Swarzkopf314 <|||||>Yeah that would be great :)<|||||>@Swarzkopf314 Can you tell me how you made the graphs in OP? (some library, I presume) So I can use them for testing.<|||||>I made a wrapper for `pyinstrument`, feel free to use it:
```python
import pyinstrument

# with TreeProfiler(show_all=True):
#     # code to profile...

class TreeProfiler(object):
    def __init__(self, show_all=False):
        self.profiler = pyinstrument.Profiler()
        self.show_all = show_all  # verbose output of pyinstrument profiler

    def __enter__(self):
        print("WITH TREE_PROFILER:")
        self.profiler.start()

    def __exit__(self, *args):
        self.profiler.stop()
        print(self.profiler.output_text(unicode=True, color=True, show_all=self.show_all))
```<|||||>You can try out my PR https://github.com/huggingface/transformers/pull/2930 if you want.
```python
import pyinstrument
from transformers import DistilBertConfig, DistilBertModel, DistilBertTokenizer

class TreeProfiler():
    def __init__(self, show_all=False):
        self.profiler = pyinstrument.Profiler()
        self.show_all = show_all  # verbose output of pyinstrument profiler

    def __enter__(self):
        print("WITH TREE_PROFILER:")
        self.profiler.start()

    def __exit__(self, *args):
        self.profiler.stop()
        print(self.profiler.output_text(unicode=True, color=True, show_all=self.show_all))

def main():
    with TreeProfiler(show_all=True):
        config = DistilBertConfig.from_pretrained('distilbert-base-uncased', disable_outgoing=True)
        model = DistilBertModel.from_pretrained('distilbert-base-uncased', disable_outgoing=True)
        tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', disable_outgoing=True)

if __name__ == '__main__':
    main()
```
The above snippet will throw an error message when the expected files are not present in the cache. When they are, though, everything is loaded fine without the need of any additional lookups.<|||||>Amazing, thanks a lot! <3<|||||>No problem. Note that I have not written tests for this functionality yet. I don't think it should break the library, but if you do find some inconsistencies, please let me know.<|||||>Excellent! :D<|||||>Note that the parameter name has been changed to `local_files_only`.<|||||>Note that in practice, I find some parameter "local_files_first" which will resolve this issue even further. As named, it will first check if the model is cached. If not, it will make internet connection and download that model. I find this useful for production and testing, thus might write some pull requests for this new feature. |
transformers | 2,866 | closed | How to get the matrix that is used to combine output from multiple number of attention heads? | Hello,
if I am understanding transformers correctly, right before the feedforward layer the outputs of the individual attention heads are concatenated and multiplied by a matrix **H**, so that the outputs from the multiple heads are combined into one output, which then becomes the input to the subsequent feedforward block within the same layer.
Is there any way that I can retrieve the matrix **H** from the Hugging Face GPT2 model?
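In the Hugging Face implementation that projection lives in each block's attention module as `c_proj`; a minimal sketch of reading it out:
```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
proj = model.h[0].attn.c_proj  # output projection of layer 0's multi-head attention
print(proj.weight.shape)       # (n_embd, n_embd), i.e. (768, 768) for gpt2
```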
Thank you, | 02-14-2020 23:29:29 | 02-14-2020 23:29:29 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,865 | closed | UserWarning: The number of elements in the out tensor of shape [1] is 1 | # ❓ Questions & Help
## Details
I am using HuggingFace pytorch-transformers and one of my pre-trained models refuses to fine-tune, giving me those UserWarnings for every torch.utils.data.DataLoader call.
I have described the details in https://stackoverflow.com/questions/60218634/userwarning-the-number-of-elements-in-the-out-tensor-of-shape-1-is-1
Here is my Notebook so you can run and see the results:
https://colab.research.google.com/drive/1mq9RZ_BX1O5vgxCM0CvPzAm9YVKnq4DQ
But someone downgraded my question for some reason. What am I missing?
Thanks for your help!
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60218634/userwarning-the-number-of-elements-in-the-out-tensor-of-shape-1-is-1 | 02-14-2020 20:14:22 | 02-14-2020 20:14:22 | It is likely that your SO question was downvoted because it is a lot of unreproducible code, and not a lot of explanation. In other words: when someone reads your question, it is almost impossible to answer because we cannot try your code ourselves. Try reducing it to a minimal, verifiable, executable example.
That being said: you are mixing conda and pip installations, which is a drag. Also you don't need to install pytorch-transformers AND transformers. The latter is the successor to the former, so you should only install one or the other (preferably only transformers), and fix your imports accordingly. Just install everything with pip, is my advice.<|||||>Thanks for your answer. I am following your suggestions. However when I replace
```
from pytorch_transformers import AdamW, WarmupLinearSchedule
```
with
```
from transformers import AdamW, WarmupLinearSchedule
```
I get this error
```
ImportError Traceback (most recent call last)
<ipython-input-7-fc8519a4dbdc> in <module>()
19 RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer)
20
---> 21 from transformers import AdamW, WarmupLinearSchedule
22
23 from utils import (convert_examples_to_features,output_modes, processors)
ImportError: cannot import name 'WarmupLinearSchedule'
```
Can you help me out?
Thanks
<|||||>You are probably looking for
https://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/optimization.py#L47-L59<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
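For reference, a minimal sketch of the renamed scheduler API that the permalink above points to (the warmup/total step counts below are placeholders):
```python
from transformers import AdamW, get_linear_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000)
```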
|
transformers | 2,864 | closed | Update model card: new performance chart | We found a bug in our German conll03 data and fixed it. See deepset-ai/FARM#235
We reran the eval scripts on the new data and updated our charts accordingly. | 02-14-2020 18:38:02 | 02-14-2020 18:38:02 | looks good!<|||||>On fire! :D<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=h1) Report
> Merging [#2864](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92e974196fc35eb826f64808ae82d20c4380e3eb?src=pr&el=desc) will **increase** coverage by `1.1%`.
> The diff coverage is `90.9%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2864 +/- ##
=========================================
+ Coverage 73.95% 75.06% +1.1%
=========================================
Files 93 94 +1
Lines 15272 15288 +16
=========================================
+ Hits 11295 11476 +181
+ Misses 3977 3812 -165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <ø> (ø)` | :arrow_up: |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ø> (ø)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-0.29%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.15% <100%> (+2.21%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.62% <100%> (-0.01%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.37% <100%> (-0.04%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <100%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.26% <100%> (+0.05%)` | :arrow_up: |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `83.82% <100%> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.1% <100%> (+0.41%)` | :arrow_up: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=footer). Last update [92e9741...a2925e9](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,863 | closed | What does the variable 'present' represent? | Hello,
does the variable 'present' shown in [this](https://github.com/huggingface/transformers/blob/4e69104a1fba717026d6909d06288788e684c749/src/transformers/modeling_gpt2.py#L187) line of Hugging Face GPT-2 code represent final output of a single attention-head? (i.e. **not** the final output of the _output head_ , but the final output of the individual _attention head_, which is placed right before the feedforward block of the same layer).
If yes, is there any way that I can retrieve the value of the variable 'present'?
Would it be possible that Hugging Face will make the value available for everyone?
Thank you, | 02-14-2020 16:37:24 | 02-14-2020 16:37:24 | |
transformers | 2,862 | closed | PreTrainedTokenizer returns potentially incorrect attention mask | # 🐛 Bug
## Information
When deriving the `attention_mask` `PreTrainedTokenizer` makes an assumption in `prepare_for_model` that the input hasn't been padded prior, this assumption can be false. For example, in the case where one precomputes padded token ids for sentences separately and then uses `BertTokenizer.encode_plus` to join them.
I'm submitting this issue, in order to find out whether this assumption has been made on purpose and if it hasn't I can easily submit a PR fixing it.
In the `PreTrainedTokenizer`, the `attention_mask` is obtained in two places:
- line `1175`: `encoded_inputs["attention_mask"] = [0] * difference + [1] * len(encoded_inputs["input_ids"])`
- line `1188`: `encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"])`.
I suggest that instead of making the assumption attention mask is derived as:
`encoded_inputs["attention_mask"] = encoded_inputs["input_ids"] != 0` | 02-14-2020 16:29:08 | 02-14-2020 16:29:08 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
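For reference, a sketch of a padding-robust variant that keys off the tokenizer's actual pad id rather than assuming 0 (a hypothetical patch, not the merged fix):
```python
pad_id = tokenizer.pad_token_id
encoded_inputs["attention_mask"] = [int(tok != pad_id) for tok in encoded_inputs["input_ids"]]
```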
|
transformers | 2,861 | closed | DistilBERT distilbert-base-cased failed to load | **Issue**
DistilBERT **distilbert-base-cased** failed to load. _Please note, 'distilbert-base-uncased' works perfectly fine._
**Error Message**
OSError: Model name 'distilbert-base-cased' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'distilbert-base-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
**Model I am using** : distilbert-base-cased
**Language** : English
**The problem arises when using below code**
```
from transformers import DistilBertModel, DistilBertTokenizer

MODELS = [(DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased')]
for model_class, tokenizer_class, pretrained_weights in MODELS:
    # Load pretrained model/tokenizer
    tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
    model = model_class.from_pretrained(pretrained_weights)
```
**Environment info**
Python 3.6.9
ipykernel==5.1.3
ipython==7.11.1
ipython-genutils==0.2.0
ipywidgets==7.5.1
jupyter==1.0.0
jupyter-client==5.3.4
jupyter-console==6.0.0
jupyter-core==4.6.1
jupyter-http-over-ws==0.0.7
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
matplotlib==3.1.2
numpy==1.18.1
scipy==1.4.1
tensorboard==2.1.0
tensorflow-estimator==2.1.0
tensorflow-gpu==2.1.0
tokenizers==0.0.11
torch==1.4.0
tornado==6.0.3
tqdm==4.42.1
traitlets==4.3.3
transformers==2.4.1 | 02-14-2020 15:35:13 | 02-14-2020 15:35:13 | Should be fixed with ee5a6856caec83e7f2f305418f3199b87ea6cc2d. I can execute your code without an error with the latest version from github.<|||||>> Should be fixed with [ee5a685](https://github.com/huggingface/transformers/commit/ee5a6856caec83e7f2f305418f3199b87ea6cc2d). I can execute your code without an error with the latest version from github.
@cronoik I appreciate the prompt response. **I didn't compile from git, rather installed via**
`pip install transformers --upgrade`
It upgraded to transformers==2.4.1. Post-upgrade, though, the error changed to the one below:
`ValueError: Can't find a vocabulary file at path /root/.cache/torch/transformers/37cc1eaaea18a456726fc28ecb438852f0ca1d9e7d259e6e3747ee33065936f6'. To load the vocabulary from a Google pretrained model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`
<|||||>The mentioned commit is not part of 2.4.1. You have to wait for the next release or pull transformers from git.<|||||>OK, I will pull from git for the time being. Thank you!<|||||>Please close the issue when your problem is solved.<|||||>We can close this since we have a workaround, and the team is aware of the issue, with a fix to be rolled out in the next release. Thanks!!<|||||>v2.5.0 was released a few days ago, `distilbert-base-cased` is now accessible via the pip release! :)<|||||>The vocab file is missing here:
https://huggingface.co/distilbert-base-cased#list-files
While the auto-downloaded model has one.<|||||>I'm still having the same problem. Using transformers version 2.8.0, neither `distilbert-base-cased` nor `distilbert-base-uncased` is available. I also ran the following command:
```
import pytorch_pretrained_bert as ppb
assert 'distilbert-base-uncased' in ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP
```
Which results in `AssertionError`. Any thoughts on what might be going on here?<|||||>Are you really using the `transformers` [1] package? The code you showed uses only the `pytorch_pretrained_bert` [2] package, which doesn't contain DistilBERT. While `pytorch_pretrained_bert` [2] and `transformers` [1] are both packages from huggingface, they are not the same; `pytorch_pretrained_bert`'s last release is from April 2019. Please use the `transformers` package [1].
[1] https://pypi.org/project/transformers/
[2] https://pypi.org/project/pytorch-pretrained-bert/#description<|||||>Thanks for the quick reply: I am using transformers, I picked up that code snippet from another issue, must have been for that package.
I realize what I did wrong: I was using BertTokenizer/BertModel to load, and I should have been using DistilBertTokenizer/DistilBertModel. It's working now, thanks! |
transformers | 2,860 | closed | Post-padding affects the Bert embedding output | # 🐛 Bug
## Information
Model: BertModel
Language: English
The problem arises when using:
```
# Load model
from transformers import BertModel, BertTokenizer
import torch
model_class = BertModel
tokenizer_class = BertTokenizer
pretrained_weights = 'bert-base-uncased'
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights).to('cuda')
# First example
batch_id = [[101, 1996, 3035, 2038, 2741, 1037, 1056, 28394, 2102, 2000, 1996, 3035, 2012, 17836, 4186, 2000, 8439, 2014, 3938, 2705, 5798, 102]]
batch_id = torch.tensor(batch_id).to('cuda')
with torch.no_grad():
    last_hidden_states = model(batch_id)[0].cpu().numpy()
print(last_hidden_states[0][:10])
# Second example
batch_id = [[101, 1996, 3035, 2038, 2741, 1037, 1056, 28394, 2102, 2000, 1996, 3035, 2012, 17836, 4186, 2000, 8439, 2014, 3938, 2705, 5798, 102, 0, 0]]
batch_id = torch.tensor(batch_id).to('cuda')
with torch.no_grad():
    last_hidden_states = model(batch_id)[0].cpu().numpy()
print(last_hidden_states[0][:10])
```
Output for the first example
```
array([[ 0.00197573, -0.06912418, 0.24121636, ..., -0.13239928,
0.13210389, 0.3860737 ],
[ 0.18745837, -0.15252575, 0.16234997, ..., -0.34497464,
1.0031146 , 0.20545363],
[ 0.40690556, -0.7345518 , 1.1162403 , ..., -1.148023 ,
-0.38943186, -0.6397534 ],
...,
[ 1.3574413 , -0.87637144, 1.007168 , ..., -0.7466023 ,
-0.5337318 , -0.02415964],
[ 0.0907229 , -1.0051603 , 0.7100666 , ..., -0.00599465,
-0.37829682, 0.4773703 ],
[-0.00619348, -0.34730428, 0.9920887 , ..., 0.28678447,
0.2980772 , 0.8005251 ]], dtype=float32)
```
Output for the second example
```
array([[-0.10877508, 0.0271297 , 0.17947783, ..., -0.2650592 ,
0.15821457, 0.35017303],
[-0.1396759 , -0.25098413, 0.3990493 , ..., -0.52468735,
0.8060062 , 0.42330667],
[ 0.18865047, -1.0035415 , 1.3446846 , ..., -1.1652598 ,
-0.60856164, -0.419513 ],
...,
[ 1.3687737 , -0.9032434 , 1.0184443 , ..., -0.7951573 ,
-0.56618035, -0.00522863],
[ 0.02363256, -0.962884 , 0.68822455, ..., -0.03798304,
-0.34567115, 0.5442954 ],
[-0.00341167, -0.33559048, 1.0627198 , ..., 0.31898227,
0.2941662 , 0.7981017 ]], dtype=float32)
```
I also checked that the output for `tokenizer.convert_tokens_to_ids(tokenizer.pad_token)` is `0`
## Expected behavior
The embeddings for the padded sequence should be the same as the ones without padding.
## Environment info
- `transformers` version: transformers 2.4.1
- Platform: Linux
- Python version: Python 3.7.5
- PyTorch version (GPU?): PyTorch 1.4.0 (CUDA Version 10.1.243)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
 | 02-14-2020 11:49:47 | 02-14-2020 11:49:47 | Hi, please look into the documentation of the [attention mask](https://huggingface.co/transformers/glossary.html#attention-mask).<|||||>Actually, it was a valid question. The output will certainly be numerically different, since there are extra positions to attend to, and even if those are paddings, there is a difference among the float values. However, the real question is whether the (cosine/dot) similarity among the resulting vectors has changed at all.
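A quick way to check that is sketched below (the sentence is illustrative; with the attention mask passed, the shared positions should line up closely):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

ids = tokenizer.encode("I love you")         # unpadded
padded = ids + [tokenizer.pad_token_id] * 2  # post-padded
mask = [1] * len(ids) + [0] * 2              # mark the pads

with torch.no_grad():
    out_a = model(torch.tensor([ids]))[0]
    out_b = model(torch.tensor([padded]), attention_mask=torch.tensor([mask]))[0]

# per-position cosine similarity over the shared, non-pad positions
print(torch.cosine_similarity(out_a[0], out_b[0, : len(ids)], dim=-1))
```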
transformers | 2,859 | closed | Added model card for bert-base-multilingual-uncased-sentiment | Added the model card for nlptown/bert-base-multilingual-uncased-sentiment | 02-14-2020 10:58:17 | 02-14-2020 10:58:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=h1) Report
> Merging [#2859](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/925a13ced1e155ea7e55e14e177a7b5ae7ad174c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2859 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15287 15287
=======================================
Hits 11475 11475
Misses 3812 3812
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=footer). Last update [925a13c...917aa8d](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@yvespeirsman Thanks for sharing! I can't push to your fork so I'll merge this and tweak it (languages have to be in a list) |
transformers | 2,858 | closed | is right? | https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py
At line 225, the comment says:
# 10% of the time, we replace masked input tokens with random word
but the code writes 0.5. Is that OK? | 02-14-2020 10:52:44 | 02-14-2020 10:52:44 | Hi @ARDUJS can you update your issue title to something more descriptive? Thanks!<|||||>Should be correct -> 80% are masked, which leaves 20%. Of this 20%, in 50% of cases a random word is used and in 50% the original token is kept. So both the random word and the original token end up with an overall probability of 10% each.
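A quick numerical sanity check of that 0.5, mirroring the script's two bernoulli draws (the sample count is illustrative):
```python
import torch

n = 100_000
masked = torch.bernoulli(torch.full((n,), 0.8)).bool()             # 80% -> [MASK]
random_ = torch.bernoulli(torch.full((n,), 0.5)).bool() & ~masked  # half of the rest -> random word
kept = ~masked & ~random_                                          # the remaining ~10% stay unchanged
print(masked.float().mean(), random_.float().mean(), kept.float().mean())
# ~0.80, ~0.10, ~0.10
```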
The original BERT uses the same logic, see [here](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L391). |
transformers | 2,857 | closed | Fix typos | 02-14-2020 09:41:09 | 02-14-2020 09:41:09 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=h1) Report
> Merging [#2857](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/925a13ced1e155ea7e55e14e177a7b5ae7ad174c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2857 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15287 15287
=======================================
Hits 11475 11475
Misses 3812 3812
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=footer). Last update [925a13c...acca7c4](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,856 | closed | Fix typo | 02-14-2020 08:46:21 | 02-14-2020 08:46:21 | ||
transformers | 2,855 | closed | Fix typo | 02-14-2020 04:53:55 | 02-14-2020 04:53:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=h1) Report
> Merging [#2855](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/925a13ced1e155ea7e55e14e177a7b5ae7ad174c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2855 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15287 15287
=======================================
Hits 11475 11475
Misses 3812 3812
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=footer). Last update [925a13c...c86fc74](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,854 | closed | Create model card for 'distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' | 02-14-2020 03:59:28 | 02-14-2020 03:59:28 | Thanks!<|||||>Welcome, Julien!
This one won't be my last contribution! :)
Not so easy :P
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=h1) Report
> Merging [#2854](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4d36472b96d144887cbe95b083f0d2091fd5ff03?src=pr&el=desc) will **decrease** coverage by `25.28%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2854 +/- ##
===========================================
- Coverage 75.06% 49.77% -25.29%
===========================================
Files 94 94
Lines 15287 15287
===========================================
- Hits 11475 7609 -3866
- Misses 3812 7678 +3866
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `0% <0%> (-100%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `0% <0%> (-97.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `0% <0%> (-96.55%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `0% <0%> (-96.06%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `0% <0%> (-95.85%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `0% <0%> (-95.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `0% <0%> (-94.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `0% <0%> (-92.79%)` | :arrow_down: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=footer). Last update [4d36472...3643bb8](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,853 | closed | [pipeline] Alias NerPipeline as TokenClassificationPipeline | 02-14-2020 01:15:04 | 02-14-2020 01:15:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=h1) Report
> Merging [#2853](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1eec69a90007b8f4a7af10805dab4904ea5dea77?src=pr&el=desc) will **decrease** coverage by `1.07%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2853 +/- ##
==========================================
- Coverage 75.06% 73.98% -1.08%
==========================================
Files 94 94
Lines 15287 15288 +1
==========================================
- Hits 11475 11311 -164
- Misses 3812 3977 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ø> (ø)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.5% <100%> (+0.07%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=footer). Last update [1eec69a...549ce87](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,852 | closed | Update with additional information | Added a "Pre-training details" section | 02-14-2020 00:50:37 | 02-14-2020 00:50:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=h1) Report
> Merging [#2852](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1eec69a90007b8f4a7af10805dab4904ea5dea77?src=pr&el=desc) will **decrease** coverage by `1.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2852 +/- ##
==========================================
- Coverage 75.06% 73.98% -1.08%
==========================================
Files 94 94
Lines 15287 15287
==========================================
- Hits 11475 11310 -165
- Misses 3812 3977 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=footer). Last update [1eec69a...59baea0](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>See #2851.
Thanks! |
transformers | 2,851 | closed | Create model card for the newly released 'nlpaueb/bert-base-greek-uncased-v1' | 02-14-2020 00:18:07 | 02-14-2020 00:18:07 | Thanks for sharing!
How did you pre-train this model (infrastructure, number of epochs, etc.)?
Do you have eval results on downstream tasks?
Also you can add a
```
---
language: greek
---
```
tag to the top of the file
I'll merge this in the meantime, thanks for sharing!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=h1) Report
> Merging [#2851](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8744402f1eb51c7ae6b86cae1015983096beb655?src=pr&el=desc) will **decrease** coverage by `1.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2851 +/- ##
==========================================
- Coverage 75.06% 73.98% -1.08%
==========================================
Files 94 94
Lines 15287 15287
==========================================
- Hits 11475 11310 -165
- Misses 3812 3977 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=footer). Last update [8744402...6aa9688](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I just amended the initial model card with extra information on the pre-training process. No evaluation yet; I hope to have some experiments pretty soon. Thanks!
|
transformers | 2,850 | closed | Adding usage examples for common tasks | Adding a documentation page detailing usage for common tasks (inference, not training) | 02-13-2020 21:41:00 | 02-13-2020 21:41:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=h1) Report
> Merging [#2850](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7f98edd7e362a64c947b083cfc0c401c4d0ffe91?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2850 +/- ##
=======================================
Coverage 75.06% 75.06%
=======================================
Files 94 94
Lines 15287 15287
=======================================
Hits 11475 11475
Misses 3812 3812
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=footer). Last update [7f98edd...51830ef](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I've added a way to switch between `PyTorch` and `TensorFlow` implementations. I didn't want to have a wall of code that had the two frameworks, so now there's a toggle to show which framework you would like to see.
It works as follows: a JavaScript method parses the documentation page shown and looks for `.highlight` classes, which are the code blocks. In there, it looks for `## PYTORCH CODE`, which represents the beginning of a `PyTorch` snippet, and `## TENSORFLOW CODE`, which represents the beginning of a `TensorFlow` snippet.
Would love an opinion on the Javascript code as well. Would love to convert this to TS down the road.
Here's a gif of the result

<|||||>@LysandreJik reviewing the JS now and had a question. Depending on your thoughts, would it make sense from a UX standpoint for all of the buttons to toggle together? So, if a user selects "Tensorflow" all of the code blocks would switch to "Tensorflow". <|||||>I guess this would be cool and makes sense from a UX standpoint. Do you think it's necessary or can it wait for the second version?<|||||>It can 100% wait for a second version. |
transformers | 2,849 | closed | PreTrainedEncoderDecoder does not work for LSTM | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
If we want to have a BERT-based encoder and an LSTM decoder, that is not currently possible with the current huggingface implementation, mostly because torch.nn.LSTM does not contain a config class variable.
## Stack Trace
File "/beegfs/yp913/anaconda3/envs/jiant_new/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py", line 349, in from_pretrained
model = super().from_pretrained(*args, **kwargs)
File "/beegfs/yp913/anaconda3/envs/jiant_new/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py", line 153, in from_pretrained
decoder.config.is_decoder = True
File "/beegfs/yp913/anaconda3/envs/jiant_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 539, in __getattr__
type(self).__name__, name))
AttributeError: 'LSTM' object has no attribute 'config'
## To reproduce
You can reproduce this by:
```python
import transformers
from transformers.modeling_encoder_decoder import Model2LSTM

model = Model2LSTM.from_pretrained("roberta-large", decoder_config={"hidden_size": 512, "input_size": 1024, "num_layers": 2})
```
(When you initialize Model2LSTM like the above it runs into a separate error. I believe a ** is missing from the Model2LSTM decoder LSTM initialization).
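Presumably the fix for that second error is just unpacking the config dict when the decoder LSTM is constructed, along these lines (a sketch, not the library's actual code):
```python
import torch

decoder_config = {"hidden_size": 512, "input_size": 1024, "num_layers": 2}
decoder = torch.nn.LSTM(**decoder_config)  # rather than torch.nn.LSTM(decoder_config)
```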
| 02-13-2020 20:27:40 | 02-13-2020 20:27:40 | I put this as a bug because the code as-is does not hint that Model2LSTM does not work.
https://github.com/huggingface/transformers/blob/90ab15cb7a8fcf8bf58c05453ddf1aa6a4fa00c1/src/transformers/modeling_encoder_decoder.py
It would be great to say that LSTM is not currently supported there. <|||||>Indeed, my initial comment was a mistake. I'm looking into it now. |
transformers | 2,848 | closed | Add `masked_lm_labels` argument to `TFAlbertForMaskedLM` | # 🚀 Feature request
The PyTorch `AlbertForMaskedLM` model has support for the `masked_lm_labels` parameter, while `TFAlbertForMaskedLM` does not. I'd like to bring feature parity.
It looks like a similar feature is also missing for `TFBertForMaskedLM`, `TFRobertaForMaskedLM`, `TFDistilBertForMaskedLM`. I'd be happy to add support for those models as well.
## Motivation
I'm pretraining TF NLP models, and this would simplify the training script by encapsulating the loss function.
## Your contribution
I'm happy to contribute the code. I'll follow CONTRIBUTING.md, any gotchas I should be aware of?
| 02-13-2020 18:55:18 | 02-13-2020 18:55:18 | Hi! This feature would be great to have.
I'm curious how `TFBertForMaskedLM` (and the like) are supposed to be used with the Keras `fit()` functionality?
It seems like one is supposed to loop through the training data and calculate the cross-entropy loss for each batch (#2926). I see there was related discussion also here #1999.
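Something like the following manual loop is what I have in mind (a rough sketch that mimics PyTorch's `-100` ignore-index convention by hand; the example sentence is illustrative):
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")

labels = tf.constant([tokenizer.encode("I love you")])
input_ids = tf.constant([tokenizer.encode(f"I {tokenizer.mask_token} you")])
masked_labels = tf.where(tf.equal(input_ids, tokenizer.mask_token_id), labels, -100)

logits = model(input_ids)[0]                # (batch, seq_len, vocab_size)
active = tf.not_equal(masked_labels, -100)  # score only the masked positions
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = loss_fn(tf.boolean_mask(masked_labels, active), tf.boolean_mask(logits, active))
```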
Happy for any input!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Closing because this feature was added in #4530 |
transformers | 2,847 | closed | BART/T5 seq2seq example | # 🚀 Feature request
Can we have a seq2seq example with training/fine-tuning and generation for BART/T5 models? | 02-13-2020 15:52:57 | 02-13-2020 15:52:57 | We are hard at work on this! I'd estimate 6 weeks out.<|||||>Looking forward to this for the T5 model :)<|||||>@sshleifer any updates? <|||||>The example doesn't seem to show training/fine-tuning, only evaluation of already fine-tuned models.<|||||>@sshleifer Hello, any updates for training/fine-tuning on text generation for T5 model ?<|||||>`summarization/bart/finetune.py` supports T5. |
transformers | 2,846 | closed | Error reported when running ''run_language_modeling.py" file | # 🐛 Bug
## Information
Model I am using (Bert and RoBerta):
Language I am using the model on (English).
The problem arises when using:
* [ ] the official example scripts: (give details below)
I followed the tutorial on how to fine-tune the BERT model on one's own corpus data, and used the recommended 'wikitext-2' corpus to fine-tune the BERT model.
However, an error always appears: "RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at ../aten/src/THNN/generic/ClassNLLCriterion.c:97".
Thus I am not sure whether something is wrong with the "run_language_modeling.py" file, because I did not change the original code and used the recommended wiki corpus. Could you help me check this error?
* [ ] my own modified scripts: (give details below)
None
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Fine-tuning the BERT language model on our own custom corpus data.
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: | 02-13-2020 15:39:53 | 02-13-2020 15:39:53 | Hi, this is probably due to a version mismatch. Can you update your repository to be on the same version as the script's?
If it's `run_language_modeling` (was `run_lm_finetuning` up until very recently), that would be version 2.4.1 (safe, but the script may have evolved a bit since the release 13 days ago) or `master` (safer, should work 100%).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,845 | closed | Skip flaky test_tf_question_answering | Reasoning: While we diagnose the problem, better to keep circleci from randomly failing. | 02-13-2020 13:53:22 | 02-13-2020 13:53:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=h1) Report
> Merging [#2845](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef74b0f07a190f19c69abc0732ea955e8dd7330f?src=pr&el=desc) will **decrease** coverage by `0.05%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2845 +/- ##
==========================================
- Coverage 75.04% 74.98% -0.06%
==========================================
Files 94 94
Lines 15274 15274
==========================================
- Hits 11462 11453 -9
- Misses 3812 3821 +9
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <0%> (-5.89%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.15% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.17% <0%> (-0.77%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=footer). Last update [ef74b0f...4c62bdc](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,844 | closed | Attempt to increase timeout for circleci slow tests | @LysandreJik can you help me test this? | 02-13-2020 13:26:59 | 02-13-2020 13:26:59 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=h1) Report
> Merging [#2844](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f54a5bd37f99e3933a396836cb0be0b5a497c077?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2844 +/- ##
=======================================
Coverage 75.02% 75.02%
=======================================
Files 93 93
Lines 15275 15275
=======================================
Hits 11460 11460
Misses 3815 3815
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=footer). Last update [f54a5bd...68880a1](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Julien verbal approval :)<|||||>@sshleifer:
```
Configuration errors: 1 error occurred:
* In step 4 definition: step type "no_output_timeout" is not a valid type
```
in https://app.circleci.com/jobs/github/huggingface/transformers/18406 |
transformers | 2,843 | closed | Model card: Literary German BERT | This PR adds a model card for [severinsimmler/literary-german-bert](https://huggingface.co/severinsimmler/literary-german-bert), a domain-adapted and fine-tuned BERT for named entity recognition in German literary texts. | 02-13-2020 13:12:54 | 02-13-2020 13:12:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=h1) Report
> Merging [#2843](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21da895013a95e60df645b7d6b95f4a38f604759?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2843 +/- ##
=======================================
Coverage 75.02% 75.02%
=======================================
Files 93 93
Lines 15275 15275
=======================================
Hits 11460 11460
Misses 3815 3815
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=footer). Last update [21da895...6f2b608](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for using our BERT model! Great to see that your fine-tuned model beats the CRF baseline :)<|||||>Thanks for sharing your German BERT -- outperformed the multilingual one by the way.<|||||>@severinsimmler Thank you! I tweaked the references to images (this should be documented at some point, but not sure where we can put it for now) + added tags
Also thank you @stefan-it <|||||>Hi @julien-c, why is this page offline? https://huggingface.co/severinsimmler/literary-german-bert
The model is neither listed here anymore: https://huggingface.co/models
nor says my user page that there are any published models: https://huggingface.co/severinsimmler
But the CLI says it's still there:
```
$ transformers-cli s3 ls
Filename LastModified ETag Size
-------------------------------------------- ------------------------ ---------------------------------- ---------
literary-german-bert/config.json 2020-02-13T13:37:48.000Z "7e68409fc147acec10dadb06b33d0ba6" 1043
literary-german-bert/eval_results.txt 2020-02-13T12:24:48.000Z "cda28cf0e39c7783bf8c8995ef940492" 147
literary-german-bert/pytorch_model.bin 2020-02-13T12:25:18.000Z "27c22d3d221287715ca781d3939f9bb2" 439770223
literary-german-bert/special_tokens_map.json 2020-02-13T12:24:50.000Z "8b3fb1023167bb4ab9d70708eb05f6ec" 112
literary-german-bert/test_results.txt 2020-02-13T12:24:45.000Z "c5276b24e5788305862f5b7bc847fa95" 147
literary-german-bert/tokenizer_config.json 2020-02-13T12:24:49.000Z "b2db3b45d8945539dab67f41f04101d7" 152
literary-german-bert/training_args.bin 2020-02-13T12:24:44.000Z "ce5c09e8214e66daa6a97005f20e7300" 1309
literary-german-bert/vocab.txt 2020-02-13T12:24:46.000Z "5787056a1ea58629b0c71cfc37728ce4" 239836
```
And I am also able to download and use it. 🤔 <|||||>We had a small hiccup on the website (due to improperly sanitized user-input – a.k.a. developer error :)
Your model should be back up.<|||||>Thanks for the quick response and fix! Keep up the great work :) |
transformers | 2,842 | closed | when will add XLMRobertaForQuestionAnswering package | I am studying multilingual SQuAD.
I found that a question-answering class for XLM-RoBERTa is not included in run_squad.py.
I would like that class to be released.
Are you planning to release XLMRobertaForQuestionAnswering?
Please let me know. | 02-13-2020 12:39:42 | 02-13-2020 12:39:42 | It is pretty easy to add the code yourself since RobertaForQuestionAnswering is already implemented and XLMRobertaForQuestionAnswering is just a wrapper around it. A minimal sketch of such a wrapper, mirroring how the other XLM-RoBERTa classes wrap their RoBERTa counterparts (exact names assumed from those):
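```python
from transformers import XLMRobertaConfig
from transformers.modeling_roberta import RobertaForQuestionAnswering

class XLMRobertaForQuestionAnswering(RobertaForQuestionAnswering):
    # the library's other XLM-RoBERTa heads also set a pretrained archive map
    config_class = XLMRobertaConfig
```
<|||||>Thank you for your answer.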
I understand.
I have one more question:
Is it possible to load the XLM-RoBERTa weights into RoBERTa?
(The XLM-RoBERTa weights are at https://github.com/pytorch/fairseq/tree/master/examples/xlmr)
If it is possible, can you show me how to set it up?
Please let me know.<|||||>I resolved the problem.
Thank you for your answer.
transformers | 2,841 | closed | cannot find model in model name list | Hi, thank you for developing this well-made PyTorch version of BERT!
I am new to the NLP area and ran into a problem with the following code:
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
```
The error description is below:
```
INFO:pytorch_pretrained_bert.file_utils:https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache, downloading to C:\Users\zxr\AppData\Local\Temp\tmpb3lgzjlo
ERROR:pytorch_pretrained_bert.tokenization:Model name 'bert-base-uncased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt' was a path or url but couldn't find any file associated to this path or url.
```
I searched for it and thought it might be a poor internet connection.
I downloaded the model I want but do not know how to load it with code.
Thank you very much for your attention! | 02-13-2020 12:32:49 | 02-13-2020 12:32:49 | Hi, could you please provide all the information required in the template so that we may help you? Namely which version of `transformers`, python and PyTorch are you using?
You seem to be using `pytorch-pretrained-BERT`, which is a very old version of this repository. Have you tried using the newer `transformers`, which has much more functionalities and is more robust than `pytorch-pretrained-BERT`?<|||||>I am using `Python 3.6.9` , `torch 1.3.1` and `pytorch-pretrained-bert 0.6.2` .
I am following the tutorial on https://pypi.org/project/pytorch-pretrained-bert/ and ran into this problem.<|||||>I changed to a better internet connection and the problem was solved.
Sorry for your time and thank you for your attention!<|||||>Can you tell me how to change to a better internet connection? I ran into this question too.<|||||>@zxr19980213
transformers | 2,840 | closed | [WIP] Add patience argument to run_language_modeling script | # Summary
Often, we want to stop training if loss does not improve for a number of epochs. This PR adds a "patience" argument, which is a limit on the number of times we can get a non-improving eval loss before stopping training early.
It is implemented by other NLP frameworks, such as AllenNLP (see [trainer.py](https://github.com/allenai/allennlp/blob/master/allennlp/training/trainer.py#L95) and [metric_tracker.py](https://github.com/allenai/allennlp/blob/1a8a12cd1b065d74fec3d2e80105a684736ff709/allennlp/training/metric_tracker.py#L6)).
# Motivation
This feature allows faster fine-tuning by breaking out of the training loop early, and it spares users the toil of checking metrics on TensorBoard.
# Caveats
Often, models are evaluated once per epoch, but run_lm_finetuning.py has an option to evaluate after a set number of model update steps (dictated by `--logging_steps` if `--evaluate_during_training` is true). Because of this, I've elected to tie patience to the number of evaluations without improvement in loss.
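Concretely, the counting logic amounts to something like this (a sketch of the idea, not the literal diff; names are illustrative):
```python
def should_stop_early(eval_losses, patience):
    """Return True once `patience` consecutive evals fail to improve on the best loss."""
    best, bad_evals = float("inf"), 0
    for loss in eval_losses:
        if loss < best:
            best, bad_evals = loss, 0
        else:
            bad_evals += 1
        if bad_evals >= patience:
            return True
    return False

print(should_stop_early([2.0, 1.5, 1.6, 1.7, 1.8], patience=3))  # True
```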
# To-do
- Add tests
- Fix long lines | 02-13-2020 10:40:47 | 02-13-2020 10:40:47 | Sounds great! I'll go ahead and fix the code quality check.<|||||>Since `run_language_modeling.py` now uses the `Trainer` class, I'll likely create a new PR that adds patience to `Trainer`.
transformers | 2,839 | closed | Fine-tuning the model using classification tasks | Hello All,
Could anyone tell me how I can fine-tune the language model on classification tasks without using any GLUE data, as I have my own custom dataset?
Is there any solution and/or method to do the classification using a custom dataset? | 02-13-2020 10:36:46 | 02-13-2020 10:36:46 | Hi, the `run_glue` example script was designed to showcase how to fine-tune any model on a classification task. It showcases many things you may not need, such as data parallelism, checkpointing, half precision, etc. You can adapt this script or study the training loop to create your own. A stripped-down sketch of such a loop (the model name and toy data are placeholders):
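```python
import torch
from transformers import AdamW, BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = AdamW(model.parameters(), lr=2e-5)

texts, labels = ["great movie", "terrible movie"], [1, 0]  # your custom dataset goes here
model.train()
for text, label in zip(texts, labels):
    input_ids = torch.tensor([tokenizer.encode(text)])
    loss = model(input_ids, labels=torch.tensor([label]))[0]
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.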
|
transformers | 2,838 | closed | A small model for CTRL | # 🚀 Feature request
A smaller version of the pre-trained model CTRL, related to this Stack Overflow question:
https://stackoverflow.com/questions/60142937/huggingface-transformers-for-text-generation-with-ctrl
## Motivation
I've been trying to generate text using CTRL and I run into memory insufficiency, since it is a large model. I was wondering whether there could be a small version of CTRL, like the distilled versions that some of the other transformer models have.
| 02-13-2020 01:25:24 | 02-13-2020 01:25:24 | cc'ing @keskarnitish on this issue just in case!<|||||>Thank you, @julien-c.<|||||>hi, any update on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,837 | closed | Pretrained TFAlbertForMaskedLM returns seemingly random token predictions | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT, ALBERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: toy data.
## To reproduce
```
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM, AlbertTokenizer, TFAlbertForMaskedLM
tf.random.set_seed(1)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
input_ids = tokenizer.encode(f"I {tokenizer.mask_token} you", return_tensors="tf")
outputs = model(input_ids)
prediction_scores = outputs[0]
predicted_ids = tf.reshape(tf.argmax(prediction_scores, -1), [-1])
predicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids)
print(predicted_tokens)
# ['.', 'i', 'love', 'you', '.']
tf.random.set_seed(1)
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2")
input_ids = tokenizer.encode(f"I {tokenizer.mask_token} you", return_tensors="tf")
outputs = model(input_ids)
prediction_scores = outputs[0]
predicted_ids = tf.reshape(tf.argmax(prediction_scores, -1), [-1])
predicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids)
print(predicted_tokens)
# ['_pawn', '_addressing', '_fundraising', '_george', '_hybrid']
```
## Expected behavior
I would expect both commands to return the same result, filling in the middle with "love" or some other word. BERT performs correctly, while ALBERT seems to return nonsense. Any idea why this is happening?
## Environment info
- `transformers` version: 2.4.1
- Platform: Linux
- Python version: 3.6.5
- PyTorch version (GPU?): not installed
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-13-2020 00:52:34 | 02-13-2020 00:52:34 | Hi, thank you for opening an issue, there was indeed an error with the way the `TFAlbertModel` was implemented! It was fixed with https://github.com/huggingface/transformers/commit/1abd53b1aa2f15953bbbbbfefda885d1d9c9d94b.
Even with the fix, the sequence `I <mask> you` is hard for ALBERT, but using your sample with a longer sequence yields satisfying results:
```py
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM, AlbertTokenizer, TFAlbertForMaskedLM
tf.random.set_seed(1)
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2")
input_ids = tokenizer.encode(f"This is the best thing I've {nlp.tokenizer.mask_token} in my life.", return_tensors="tf")
outputs = model(input_ids)
prediction_scores = outputs[0]
predicted_ids = tf.reshape(tf.argmax(prediction_scores, -1), [-1])
predicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids)
print(predicted_tokens)
# ['▁time', '▁this', '▁is', '▁the', '▁best', '▁thing', '▁i', "'", 've', '▁done', '▁in', '▁my', '▁life', '!!!', '▁your']
```
Let me know if the updated model works for you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,836 | closed | Getting value of [UNK] labels | # ❓ Questions & Help
I have created a NER model based on BERT with this library, but I have a problem when I run my model due to `[UNK]`. Sometimes there are entities that aren't in my vocab, so they are marked as unknown and I can't know what they are.
I know I can't revert the `[UNK]` label, so I would like to be able to identify the words that would be unknown before the sentence is processed.
## Details
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60192523/get-the-value-of-unk-in-bert | 02-12-2020 23:59:58 | 02-12-2020 23:59:58 | Did you try using the `add_tokens` method on the tokenizer alongside the `resize_token_embeddings` method on the model, to add your tokens to the vocabulary? They won't be marked as `[UNK]` this way, but will instead receive brand new embeddings (which need to be trained).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@LysandreJik Hello, I have the same problem.
I want to implement a [pointer-generator](https://arxiv.org/abs/1704.04368) with BertTokenizer. A pointer-generator can generate OOV tokens from the inputs by dynamically extending the vocab, and there is no way to add all the tokens directly in advance, especially for the test set.
Do you have any good solutions?<|||||>Have you tried using the `add_tokens` method on the tokenizer and the `resize_token_embeddings` method on your model? Roughly like this (the added tokens are illustrative):
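```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

num_added = tokenizer.add_tokens(["mynewword", "anotherrareword"])  # illustrative tokens
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
```
<|||||>I tried the dumbest solution: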
```python
import re

def recover_tokens(tokenizer, query):
    bert_tokens = tokenizer.tokenize(query)
    tokens = []
    pre_text = ""
    for i in range(len(bert_tokens)):
        bert_token = bert_tokens[i].replace("##", "")
        if i + 1 < len(bert_tokens):
            post_token = bert_tokens[i + 1].replace("##", "")
        else:
            post_token = ""
        if bert_token == '[UNK]':
            # recover the surface form of the [UNK] from the raw query string
            # (note: assumes pre_text contains no regex metacharacters)
            token = str(
                re.match(f"{pre_text}(.*){post_token}(.*)",
                         query).group(1))
            tokens.append(token)
            pre_text += token
        else:
            tokens.append(bert_token)
            pre_text += bert_token
    return tokens
``` |
transformers | 2,835 | closed | Failing slow RobertaModelIntegrationTest | ```
RUN_SLOW=1 pytest tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_masked_lm
```
Have not investigated at all, but wanted to record.
Traceback:
```
self = <tests.test_modeling_roberta.RobertaModelIntegrationTest testMethod=test_inference_masked_lm>
    @slow
    def test_inference_masked_lm(self):
        model = RobertaForMaskedLM.from_pretrained("roberta-base")
        input_ids = torch.tensor([[0, 31414, 232, 328, 740, 1140, 12695, 69, 46078, 1588, 2]])
        output = model(input_ids)[0]
        expected_shape = torch.Size((1, 11, 50265))
        self.assertEqual(output.shape, expected_shape)
        # compare the actual values for a slice.
        expected_slice = torch.Tensor(
            [[[33.8843, -4.3107, 22.7779], [4.6533, -2.8099, 13.6252], [1.8222, -3.6898, 8.8600]]]
        )
>       self.assertTrue(torch.allclose(output[:, :3, :3], expected_slice, atol=1e-3))
E AssertionError: False is not true
``` | 02-12-2020 22:44:51 | 02-12-2020 22:44:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,834 | closed | Failing slow AutoModelTest/BertForPreTraining | ```
RUN_SLOW=1 pytest tests/test_modeling_auto.py::AutoModelTest::test_model_for_pretraining_from_pretrained
```
Have not investigated at all, but wanted to record it, since the slow test failures are elusive :)
Clues:
model: `transformers.modeling_bert.BertForPreTraining`
```
loading_info = {'missing_keys': ['cls.predictions.decoder.bias'], 'unexpected_keys': [], 'error_msgs': []}
```
Likely related to this funkiness
https://github.com/huggingface/transformers/blob/ee5de0ba449d638da704e1c03ffcc20a930f5589/src/transformers/modeling_bert.py#L482-L483 | 02-12-2020 22:43:56 | 02-12-2020 22:43:56 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,833 | closed | add model_card flaubert-base-uncased-squad | A baseline model for question-answering in French (a [flaubert](https://github.com/getalp/Flaubert) model fine-tuned on the [French-translated SQuAD 1.1 dataset](https://github.com/Alikabbadj/French-SQuAD))
There is a small error when trying it with the pipeline, though:
```python-traceback
>>> nlp = pipeline('question-answering', model='fmikaelian/flaubert-base-uncased-squad', tokenizer='fmikaelian/flaubert-base-uncased-squad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
Model name 'fmikaelian/flaubert-base-uncased-squad' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased, openai-gpt, transfo-xl-wt103, gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2, ctrl, xlnet-base-cased, xlnet-large-cased, xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-17-1280, xlm-mlm-100-1280, roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector, distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased, distilbert-base-uncased-finetuned-sst-2-english, albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2, camembert-base, umberto-commoncrawl-cased-v1, umberto-wikipedia-uncased-v1, t5-small, t5-base, t5-large, t5-3b, t5-11b, xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german, flaubert-small-cased, flaubert-base-uncased, flaubert-base-cased, flaubert-large-cased). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/fmikaelian/flaubert-base-uncased-squad/modelcard.json' was a path or url to a model card file named modelcard.json or a directory containing such a file but couldn't find any such file at this path or url.
Creating an empty model card.
>>>
>>> nlp({
... 'question': "Qui est Claude Monet?",
... 'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
... })
convert squad examples to features: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3.25it/s]
add example index and unique id: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4181.76it/s]
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/usr/local/lib/python3.7/site-packages/transformers/pipelines.py", line 815, in __call__
start, end = self.model(**fw_args)
ValueError: too many values to unpack (expected 2)
``` | 02-12-2020 22:24:48 | 02-12-2020 22:24:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=h1) Report
> Merging [#2833](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f54a5bd37f99e3933a396836cb0be0b5a497c077?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2833 +/- ##
=======================================
Coverage 75.02% 75.02%
=======================================
Files 93 93
Lines 15275 15275
=======================================
Hits 11460 11460
Misses 3815 3815
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=footer). Last update [f54a5bd...286b4fa](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,832 | closed | 'distilbert-base-cased-distilled-squad' was not found error | # 🐛 Bug
## Information
Model I am using: distilbert-base-cased-distilled-squad
The problem arises when using: AutoTokenizer or AutoModelForQuestionAnswering
Steps to reproduce the behavior:
0. make sure you have everything on colab installed and imported
```
!pip install transformers
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
```
1. run the code on google colab:
```
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
```
and the error:
```
OSError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
224 if resolved_config_file is None:
--> 225 raise EnvironmentError
226 config_dict = cls._dict_from_json_file(resolved_config_file)
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
3 frames
/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
239 )
240 )
--> 241 raise EnvironmentError(msg)
242
243 except json.JSONDecodeError:
OSError: Model name 'distilbert-base-cased-distilled-squad' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
```
## Environment info
- `transformers` version: 2.4.1
- Platform: Google Colab
- PyTorch version: 1.4.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 02-12-2020 21:34:02 | 02-12-2020 21:34:02 | Hi! This checkpoint was added six days ago but our latest release was 13 days ago, so you would need to install the repository from source to use that model:
```
pip install git+https://github.com/huggingface/transformers
```
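To verify the source install picked up the new checkpoint, a quick check (a sketch; the exact version string will differ):
```python
import transformers
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

print(transformers.__version__)  # should report something newer than 2.4.1

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
```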
It'll be available in a pip install once we do a new release. |
transformers | 2,831 | closed | Installation Error - Failed building wheel for tokenizers | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): N/A
Language I am using the model on (English, Chinese ...): N/A
The problem arises when using:
* [X] the official example scripts: (give details below)
The problem arises during transformers installation on Microsoft Windows 10 Pro, version 10.0.17763.
After creating and activating the virtual environment, installing transformers is not possible, because the following error occurs:
"error: can not find Rust Compiler"
"ERROR: Failed building wheel for tokenizers"
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
The task I am working on is:
[X] transformers installation
## To reproduce
Steps to reproduce the behavior:
1. From command line interface, create and activate a virtual environment by following the steps in this URL: https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/
2. Install transformers from source, by following the example in the topic From Source on this URL: https://github.com/huggingface/transformers
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
-m pip --version
-m pip install --upgrade pip
-m pip install --user virtualenv
-m venv env
.\env\Scripts\activate
pip install transformers
ERROR: Command errored out with exit status 1:
command: 'c:\users\vbrandao\env\scripts\python.exe' 'c:\users\vbrandao\env\lib\site-packages\pip\_vendor\pep517\_in_process.py' build_wheel 'C:\Users\vbrandao\AppData\Local\Temp\tmpj6evjmze'
cwd: C:\Users\vbrandao\AppData\Local\Temp\pip-install-sza2_lmj\tokenizers
Complete output (10 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib
creating build\lib\tokenizers
copying tokenizers\__init__.py -> build\lib\tokenizers
running build_ext
running build_rust
error: Can not find Rust compiler
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Installation of transformers should complete successfully.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: N/A - installation step
- Platform: Command Line Interface / Virtual Env
- Python version: python 3.8
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): N/A
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A

| 02-12-2020 19:52:28 | 02-12-2020 19:52:28 | Having the exact same issue on a Linux machine!<|||||>Environment: macOS Mojave Ver 10.14.6
Tried installing both from pip and source. Same issue:
> Successfully built transformers
> Failed to build tokenizers
Result was that Transformers was not installed (not listed in pip freeze)
This however should work - seems like you just won't get the new tokenizers:
pip install transformers==2.4.1<|||||>@GDBSD I had the same issue on the same OS version and also tried pip and source. Your version specification worked. <|||||>Had the same issue on MacOS Mojave when doing pip3 install. Tried pip2 install, it worked but I got another error when running my script telling me I should really be using python 3.
I tried @GDBSD 's answer, but I got this error:
```
ERROR: Exception:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/py_compile.py", line 143, in compile
_optimize=optimize)
File "<frozen importlib._bootstrap_external>", line 791, in source_to_code
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/private/var/folders/g0/5zwy4mtx7579v5x6rxqb083r0000gn/T/pip-unpacked-wheel-k410h9s0/sacremoses/sent_tokenize.py", line 69
if re.search(IS_EOS, token)
^
SyntaxError: invalid syntax
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/compileall.py", line 159, in compile_file
invalidation_mode=invalidation_mode)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/py_compile.py", line 147, in compile
raise py_exc
py_compile.PyCompileError: File "/private/var/folders/g0/5zwy4mtx7579v5x6rxqb083r0000gn/T/pip-unpacked-wheel-k410h9s0/sacremoses/sent_tokenize.py", line 69
if re.search(IS_EOS, token)
^
SyntaxError: invalid syntax
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 186, in _main
status = self.run(options, args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 404, in run
use_user_site=options.use_user_site,
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/req/__init__.py", line 71, in install_given_reqs
**kwargs
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/req/req_install.py", line 815, in install
warn_script_location=warn_script_location,
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/operations/install/wheel.py", line 614, in install_wheel
warn_script_location=warn_script_location,
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/operations/install/wheel.py", line 338, in install_unpacked_wheel
compileall.compile_dir(source, force=True, quiet=True)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/compileall.py", line 97, in compile_dir
legacy, optimize, invalidation_mode):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/compileall.py", line 169, in compile_file
msg = err.msg.encode(sys.stdout.encoding,
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/utils/misc.py", line 554, in encoding
return self.orig_stream.encoding
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py", line 409, in __getattr__
return getattr(self.stream, name)
AttributeError: '_io.BufferedWriter' object has no attribute 'encoding'
```<|||||>yes I had the same issue with `pip3.6 install`<|||||>Can you all run `python transformers-cli env` and post the output here? It provides some useful information about your platform that might be helpful to debug.<|||||>Hi, I had the same problem and resolved it by installing rust.
"error: Can not find Rust compiler"
For MacOS, I used "curl https://sh.rustup.rs -sSf | sh". I also found that it needed a nightly version of rust, so you have to specify that in the install options. <|||||>Hi, I also had the same problem with my initial installation of the library. After some time, I realized that my anaconda version was on 32Bit. You can check your version with
`python -c "import struct;print( 8 * struct.calcsize('P'))"`
The output should be 64.
If it is 32 then you have to reinstall your IDE
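An equivalent check using only the standard library (a sketch; both lines report the interpreter's bitness):
```python
import platform
import struct

print(platform.architecture()[0])  # '64bit' on a 64-bit interpreter
print(8 * struct.calcsize("P"))    # 64: a pointer is 8 bytes on 64-bit builds
```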
<|||||>@Wild3d I can confirm after running your snippet that I am on a 64-bit version <|||||>@gardnerds after creating a new environment to try your solution, that also worked for me. I didn't have rust installed before. It successfully built the wheel for tokenizers (PEP 517). <|||||>@gardnerds also worked for me. Using python 3.7 and built from source using a clean conda env<|||||>Installing 64-bit Python instead of 32-bit solved my issue.<|||||>I was having the same issue on virtualenv over Mac OS Mojave. Managed to solve it and install Transformers 2.5.1 by manually installing the latest version of tokenizers (0.6.0) instead of the 0.5.2 that is required by the transformers package.
pip install tokenizers
Git clone the latest version of transformers:
git clone https://github.com/huggingface/transformers
Before running the installation, edit transformers/setup.py and change the tokenizers requirement to 0.6.0:
Line 93: install_requires=[
"numpy",
"tokenizers == 0.6.0",
Then run as usual:
cd transformers
pip install .
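To confirm the pin took effect afterwards, a quick sanity check (a sketch):
```python
import tokenizers
import transformers

# Both should import cleanly after the install from the edited source tree
print(transformers.__version__)
print(tokenizers.__version__)  # expect 0.6.0 after the setup.py edit
```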
I assume that you could also skip the first step and just collect the package as you run the install.
I'm quite new to this, so just wanted to share my take.<|||||>@dafraile That solves mine! Thank you very much!<|||||>@dafraile That helps, thanks a lot!<|||||>I managed to solve the issue by installing Rust compiler
- Install Rust [link](https://www.rust-lang.org/tools/install) `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
- Restart the terminal
- `pip install transformers==2.5.1`<|||||>> Environment: macOS Mojave Ver 10.14.6
> Tried installing both from pip and source. Same issue:
>
> > Successfully built transformers
> > Failed to build tokenizers
>
> Result was that Transformers was not installed (not listed in pip freeze)
>
> This however should work - seems like you just won't get the new tokenizers:
> pip install transformers==2.4.1
This solution is working for me<|||||>> I managed to solve the issue by installing Rust compiler
>
> * Install Rust [link](https://www.rust-lang.org/tools/install) `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
> * Restart the terminal
> * `pip install transformers==2.5.1`
It works for me, thanks!
You can do `source $HOME/.cargo/env` instead of restarting the terminal.<|||||>@gardnerds, adding `$HOME/.cargo/bin` to PATH after installing rust fixed my installation. Thank you. <|||||>@dafraile Thanks a lot. It solves my problem<|||||>@dafraile Thanks! It works!<|||||>@AvivNavon Thanks ! Solved my problem too. (MacOS Mojave)
I install latest version of transformers though (2.8.0)
`pip install transformers` instead of `pip install transformers==2.5.1`<|||||>resolved this issue by installing Rust <|||||>I resolved this issue by installing Rust - I initially did forget to restart the terminal first.
I'm using Mojave 10.14.5.
This thread is great! Btw I had no such issues on my Ubuntu 18.04 machine.<|||||>@phihung recommendation works. <|||||>Just installing rust compiler works for me too (Thanks @phihung ) I'm on Mac Mojave 10.14.6.
May be conda installation should be able to over come this? (don't know if pip can force install a 3rd party compiler)?<|||||>@dafraile Actually your solution is the closest one ! But now I saw that they just corrected that line in setup.py so it became tokenizers==0.7.0 now (and the newest tokenizers are 0.7.0).
So the real importance is that we should
1. always update the transformers from the source
2. (really important !) uninstall the old version before we reinstall the newest :p
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I am facing a similar issue trying to build on a PowerPC with RedHat
I am getting errors when trying to build tokenizers:
```
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /home/aarbelle/.conda/envs/gbs/bin/python3.6 /home/aarbelle/.conda/envs/gbs/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpd6q9xccz
cwd: /tmp/pip-install-ohxny31i/tokenizers
Complete output (136 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/tokenizers
copying tokenizers/__init__.py -> build/lib/tokenizers
creating build/lib/tokenizers/models
copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
creating build/lib/tokenizers/decoders
copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
creating build/lib/tokenizers/normalizers
copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
creating build/lib/tokenizers/pre_tokenizers
copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
creating build/lib/tokenizers/processors
copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
creating build/lib/tokenizers/trainers
copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
creating build/lib/tokenizers/implementations
copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/__init__.pyi -> build/lib/tokenizers
copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
running build_ext
running build_rust
Updating crates.io index
Updating git repository `https://github.com/n1t0/rayon-cond`
warning: unused manifest key: target.x86_64-apple-darwin.rustflags
Compiling proc-macro2 v1.0.21
Compiling unicode-xid v0.2.1
Compiling autocfg v1.0.1
Compiling syn v1.0.41
Compiling libc v0.2.77
Compiling lazy_static v1.4.0
Compiling cfg-if v0.1.10
Compiling memchr v2.3.3
Compiling serde_derive v1.0.116
Compiling scopeguard v1.1.0
Compiling serde v1.0.116
Compiling maybe-uninit v2.0.0
Compiling regex-syntax v0.6.18
Compiling ryu v1.0.5
Compiling rayon-core v1.8.1
Compiling getrandom v0.1.15
Compiling serde_json v1.0.57
Compiling smallvec v1.4.2
Compiling itoa v0.4.6
Compiling inventory v0.1.9
Compiling pkg-config v0.3.18
Compiling proc-macro-hack v0.5.18
Compiling bitflags v1.2.1
Compiling cc v1.0.60
Compiling unicode-width v0.1.8
Compiling either v1.6.1
Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.21/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=93385cb1e678e330 -C extra-filename=-93385cb1e678e330 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/proc-macro2-93385cb1e678e330 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_xid /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-xid-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' -C metadata=cac161967aa527e1 -C extra-filename=-cac161967aa527e1 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name autocfg /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/autocfg-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=ddb9624730d1e52a -C extra-filename=-ddb9624730d1e52a --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.41/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit"' -C metadata=9988fc7a157e69c9 -C extra-filename=-9988fc7a157e69c9 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/syn-9988fc7a157e69c9 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.77/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5a4798f2b06c36bd -C extra-filename=-5a4798f2b06c36bd --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/libc-5a4798f2b06c36bd -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name cfg_if --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/cfg-if-0.1.10/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a7dbefe7725970f6 -C extra-filename=-a7dbefe7725970f6 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name lazy_static /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=09f05f31cfc64306 -C extra-filename=-09f05f31cfc64306 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="use_std"' -C metadata=a8f56f28f9bbd928 -C extra-filename=-a8f56f28f9bbd928 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/memchr-a8f56f28f9bbd928 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_derive-1.0.116/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' -C metadata=d850080603f4774e -C extra-filename=-d850080603f4774e --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_derive-d850080603f4774e -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name scopeguard /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/scopeguard-1.1.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=91afa33e60eb09b1 -C extra-filename=-91afa33e60eb09b1 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.116/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="serde_derive"' --cfg 'feature="std"' -C metadata=1a02cab7c16e427d -C extra-filename=-1a02cab7c16e427d --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde-1a02cab7c16e427d -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=9f94ee50e1295f1f -C extra-filename=-9f94ee50e1295f1f --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/maybe-uninit-9f94ee50e1295f1f -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name regex_syntax /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-syntax-0.6.18/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=604baccf8464f333 -C extra-filename=-604baccf8464f333 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.5/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a40cc9c191e07da8 -C extra-filename=-a40cc9c191e07da8 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/ryu-a40cc9c191e07da8 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.15/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="std"' -C metadata=3134d02611660405 -C extra-filename=-3134d02611660405 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/getrandom-3134d02611660405 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.8.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=4f258883be84b941 -C extra-filename=-4f258883be84b941 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/rayon-core-4f258883be84b941 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.57/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=9c7f2a71de758875 -C extra-filename=-9c7f2a71de758875 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_json-9c7f2a71de758875 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name smallvec --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/smallvec-1.4.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=af516ba081f6df94 -C extra-filename=-af516ba081f6df94 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name itoa /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/itoa-0.4.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=def6b42508610d1c -C extra-filename=-def6b42508610d1c --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-0.1.9/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=55eb92d7e72d18d1 -C extra-filename=-55eb92d7e72d18d1 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/inventory-55eb92d7e72d18d1 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name proc_macro_hack --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro-hack-0.5.18/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -Cembed-bitcode=no -C metadata=24f8c9a7698fc568 -C extra-filename=-24f8c9a7698fc568 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern proc_macro --cap-lints allow`
Running `rustc --crate-name pkg_config /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/pkg-config-0.3.18/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a729ffec8f42b1bf -C extra-filename=-a729ffec8f42b1bf --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' -C metadata=86d2212697398c07 -C extra-filename=-86d2212697398c07 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/bitflags-86d2212697398c07 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name cc --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/cc-1.0.60/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=bd7ffcf8ae7a9c20 -C extra-filename=-bd7ffcf8ae7a9c20 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Compiling unindent v0.1.6
Compiling version_check v0.9.2
Compiling ppv-lite86 v0.2.9
Compiling number_prefix v0.3.0
Compiling strsim v0.8.0
Compiling vec_map v0.8.2
Compiling ansi_term v0.11.0
Compiling unicode_categories v0.1.1
Running `rustc --crate-name unicode_width /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-width-0.1.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' -C metadata=2ffe7097d8c6b666 -C extra-filename=-2ffe7097d8c6b666 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name either /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/either-1.6.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="use_std"' -C metadata=644a45e467402f81 -C extra-filename=-644a45e467402f81 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name version_check /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/version_check-0.9.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=aa50462cc4c9df50 -C extra-filename=-aa50462cc4c9df50 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unindent --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unindent-0.1.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=fdeaf6996f560ff0 -C extra-filename=-fdeaf6996f560ff0 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name ppv_lite86 --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/ppv-lite86-0.2.9/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="simd"' --cfg 'feature="std"' -C metadata=e3e8e9d2c7899d24 -C extra-filename=-e3e8e9d2c7899d24 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name number_prefix /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/number_prefix-0.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=a640ea83003307f7 -C extra-filename=-a640ea83003307f7 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name strsim /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/strsim-0.8.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=816b20067865d64c -C extra-filename=-816b20067865d64c --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name vec_map /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/vec_map-0.8.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a7a30dfbdcea21f0 -C extra-filename=-a7a30dfbdcea21f0 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name ansi_term /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/ansi_term-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=9c09db9f9cbc7749 -C extra-filename=-9c09db9f9cbc7749 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_categories /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode_categories-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=f5d72f9ccd926082 -C extra-filename=-f5d72f9ccd926082 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`
Compiling lock_api v0.3.4
Running `rustc --crate-name lock_api --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/lock_api-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature="nightly"' -C metadata=54cc9296368f9d0e -C extra-filename=-54cc9296368f9d0e --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern scopeguard=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps/libscopeguard-91afa33e60eb09b1.rmeta --cap-lints allow`
Compiling thread_local v1.0.1
Running `rustc --crate-name thread_local /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/thread_local-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=44b3f6e675105288 -C extra-filename=-44b3f6e675105288 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps/liblazy_static-09f05f31cfc64306.rmeta --cap-lints allow`
Compiling textwrap v0.11.0
Running `rustc --crate-name textwrap /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/textwrap-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=05dca2f2bb6ce7b5 -C extra-filename=-05dca2f2bb6ce7b5 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern unicode_width=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps/libunicode_width-2ffe7097d8c6b666.rmeta --cap-lints allow`
Running `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_json-9c7f2a71de758875/build-script-build`
Running `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/rayon-core-4f258883be84b941/build-script-build`
error: failed to run custom build command for `serde_json v1.0.57`
Caused by:
could not execute process `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_json-9c7f2a71de758875/build-script-build` (never executed)
Caused by:
No such file or directory (os error 2)
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `rayon-core v1.8.1`
Caused by:
could not execute process `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/rayon-core-4f258883be84b941/build-script-build` (never executed)
Caused by:
No such file or directory (os error 2)
warning: build failed, waiting for other jobs to finish...
error: build failed
/tmp/pip-build-env-7kdpvzfy/overlay/lib/python3.6/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2'
warnings.warn(tmpl.format(**locals()))
cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module --release --verbose -- --crate-type cdylib
error: cargo failed with code: 101
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```<|||||>@arbellea Please make an issue on the tokenizer page. https://github.com/huggingface/tokenizers<|||||>The solution is here: https://github.com/huggingface/tokenizers/issues/431<|||||>`sudo pip3 install transformers --no-binary :all:` worked for me<|||||>conda install -c huggingface transformers
Use this; it will work for sure (on M1 as well), with no need for rust. If unsure, try rust first and then this in your specific env.<|||||>Installing Rust via `homebrew` did it for me on a **_Mac OS Monterey M1 Silicon_**
- Python 3.9.9
- pip 21.3.1
Using a conda environment:
```
brew install rustup
rustup-init
source ~/.cargo/env
rustc --version
pip install tokenizers
```<|||||>> I was having the same issue on virtualenv over Mac OS Mojave. Managed to solve it and install Transformers 2.5.1 by manually installing the latest version of tokenizers (0.6.0) instead of the 0.5.2 that is required by the transformers package.
>
> pip install tokenizers
>
> Git clone latest version of transformers:
>
> git clone https://github.com/huggingface/transformers
>
> Before running the installation edit transformers/setup.py and change requirement of tokenizers to 0.6.0
>
> Line 93: install_requires=[ "numpy", "tokenizers == 0.6.0",
>
> Then run as usual:
>
> cd transformers pip install .
>
> I assume that you could also skip the first step and just collect the package as you run the install. I'm quite new to this, so just wanted to share my take.
This resolved my issue.<|||||>>
Solved this problem by using python=3.7.9 instead of python=3.6.7 in conda env<|||||>> > I managed to solve the issue by installing Rust compiler
> >
> > * Install Rust [link](https://www.rust-lang.org/tools/install) `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
> > * Restart the terminal
> > * `pip install transformers==2.5.1`
>
> It works for me, thanks! You can do `source $HOME/.cargo/env` instead of restarting the terminal.
Using an M1 macbook here. This solved the issue for me, thanks a ton!<|||||>> I managed to solve the issue by installing Rust compiler
>
> * Install Rust [link](https://www.rust-lang.org/tools/install) `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
> * Restart the terminal
> * `pip install transformers==2.5.1`
Successfully built tokenizers
Failed to build sentencepiece
Installing collected packages: tokenizers, sentencepiece, certifi, urllib3, tqdm, regex, jmespath, idna, filelock, click, charset-normalizer, sacremoses, requests, botocore, s3transfer, boto3, transformers
Running setup.py install for sentencepiece ... error<|||||>> I managed to solve the issue by installing Rust compiler
>
> * Install Rust [link](https://www.rust-lang.org/tools/install) `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
> * `pip install transformers==2.5.1`
Only solution you need<|||||>I'm facing the issue in Vercel serverless deployments, and the problem is I can't install rust there. Any other solution?
Python version is 3.9<|||||>> Hi, I had the same problem and resolved it by installing rust. "error: Can not find Rust compiler"
>
> For MacOS, I used "curl https://sh.rustup.rs -sSf | sh". I also found that it needed a nightly version of rust, so you have to specify that in the install options.
Same issue, missing Rust compiler. This command fixed it:<|||||>Installing the Rust compiler fixed this for me:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh<|||||>> >
>
> Solved this problem by using python=3.7.9 instead of python=3.6.7 in conda env
worked for me on windows10<|||||>> I'm facing the issue in Vercel serverless deployments, and the problem is I can't install rust there. Any other solution?
> Python version is 3.9
@akashp1712 facing the same issue in a deployment context, where it's not possible to install rust. Did you find a solution?<|||||>> > I'm facing the issue in Vercel serverless deployments, and the problem is I can't install rust there. Any other solution?
> > Python version is 3.9
>
> @akashp1712 facing the same issue in a deployment context, where it's not possible to install rust. Did you find a solution?
Yes @bfelbo, I tried the below in requirements.txt and it worked, but I couldn't use it since Vercel has a hard 150MB build-size limit.
**requirements.txt**
```
nltk
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu
transformers==2.8.0
```<|||||>Mac M1 Monterey.
`pip install transformers==**` does not work for my site.
`conda install transformers` works well.<|||||>I have the same problem; how do I solve it?
I have installed rust from here and now get a different error
windows 10
Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] on win32
https://github.com/xashru/punctuation-restoration/blob/master/requirements.txt
https://www.rust-lang.org/tools/install
```
C:\punctuation-restoration>pip install -r requirements.txt
Collecting transformers==v2.11.0
Using cached transformers-2.11.0-py3-none-any.whl (674 kB)
Collecting pytorch-crf
Using cached pytorch_crf-0.7.2-py3-none-any.whl (9.5 kB)
Requirement already satisfied: packaging in c:\python399\lib\site-packages (from transformers==v2.11.0->-r requirements.txt (line 1)) (21.3)
Requirement already satisfied: requests in c:\python399\lib\site-packages (from transformers==v2.11.0->-r requirements.txt (line 1)) (2.21.0)
Collecting sacremoses
Using cached sacremoses-0.0.53.tar.gz (880 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: sentencepiece in c:\python399\lib\site-packages (from transformers==v2.11.0->-r requirements.txt (line 1)) (0.1.97)
Requirement already satisfied: regex!=2019.12.17 in c:\python399\lib\site-packages (from transformers==v2.11.0->-r requirements.txt (line 1)) (2022.9.13)
Requirement already satisfied: tqdm>=4.27 in c:\python399\lib\site-packages (from transformers==v2.11.0->-r requirements.txt (line 1)) (4.64.1)
Requirement already satisfied: numpy in c:\python399\lib\site-packages (from transformers==v2.11.0->-r requirements.txt (line 1)) (1.23.3)
Collecting tokenizers==0.7.0
Using cached tokenizers-0.7.0.tar.gz (81 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: filelock in c:\python399\lib\site-packages (from transformers==v2.11.0->-r requirements.txt (line 1)) (3.8.0)
Requirement already satisfied: colorama in c:\python399\lib\site-packages (from tqdm>=4.27->transformers==v2.11.0->-r requirements.txt (line 1)) (0.4.5)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in c:\python399\lib\site-packages (from packaging->transformers==v2.11.0->-r requirements.txt (line 1)) (3.0.9)
Requirement already satisfied: idna<2.9,>=2.5 in c:\python399\lib\site-packages (from requests->transformers==v2.11.0->-r requirements.txt (line 1)) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in c:\python399\lib\site-packages (from requests->transformers==v2.11.0->-r requirements.txt (line 1)) (2022.9.14)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in c:\python399\lib\site-packages (from requests->transformers==v2.11.0->-r requirements.txt (line 1)) (1.24.3)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\python399\lib\site-packages (from requests->transformers==v2.11.0->-r requirements.txt (line 1)) (3.0.4)
Requirement already satisfied: six in c:\python399\lib\site-packages (from sacremoses->transformers==v2.11.0->-r requirements.txt (line 1)) (1.12.0)
Collecting click
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting joblib
Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for tokenizers (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [258 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
creating build\lib.win-amd64-cpython-39\tokenizers
copying tokenizers\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers
creating build\lib.win-amd64-cpython-39\tokenizers\models
copying tokenizers\models\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers\models
creating build\lib.win-amd64-cpython-39\tokenizers\decoders
copying tokenizers\decoders\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers\decoders
creating build\lib.win-amd64-cpython-39\tokenizers\normalizers
copying tokenizers\normalizers\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers\normalizers
creating build\lib.win-amd64-cpython-39\tokenizers\pre_tokenizers
copying tokenizers\pre_tokenizers\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers\pre_tokenizers
creating build\lib.win-amd64-cpython-39\tokenizers\processors
copying tokenizers\processors\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers\processors
creating build\lib.win-amd64-cpython-39\tokenizers\trainers
copying tokenizers\trainers\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers\trainers
creating build\lib.win-amd64-cpython-39\tokenizers\implementations
copying tokenizers\implementations\base_tokenizer.py -> build\lib.win-amd64-cpython-39\tokenizers\implementations
copying tokenizers\implementations\bert_wordpiece.py -> build\lib.win-amd64-cpython-39\tokenizers\implementations
copying tokenizers\implementations\byte_level_bpe.py -> build\lib.win-amd64-cpython-39\tokenizers\implementations
copying tokenizers\implementations\char_level_bpe.py -> build\lib.win-amd64-cpython-39\tokenizers\implementations
copying tokenizers\implementations\sentencepiece_bpe.py -> build\lib.win-amd64-cpython-39\tokenizers\implementations
copying tokenizers\implementations\__init__.py -> build\lib.win-amd64-cpython-39\tokenizers\implementations
copying tokenizers\__init__.pyi -> build\lib.win-amd64-cpython-39\tokenizers
copying tokenizers\models\__init__.pyi -> build\lib.win-amd64-cpython-39\tokenizers\models
copying tokenizers\decoders\__init__.pyi -> build\lib.win-amd64-cpython-39\tokenizers\decoders
copying tokenizers\normalizers\__init__.pyi -> build\lib.win-amd64-cpython-39\tokenizers\normalizers
copying tokenizers\pre_tokenizers\__init__.pyi -> build\lib.win-amd64-cpython-39\tokenizers\pre_tokenizers
copying tokenizers\processors\__init__.pyi -> build\lib.win-amd64-cpython-39\tokenizers\processors
copying tokenizers\trainers\__init__.pyi -> build\lib.win-amd64-cpython-39\tokenizers\trainers
running build_ext
running build_rust
info: syncing channel updates for 'nightly-x86_64-pc-windows-msvc'
info: latest update on 2022-10-18, rust version 1.66.0-nightly (06f049a35 2022-10-17)
info: downloading component 'cargo'
info: downloading component 'clippy'
info: downloading component 'rust-docs'
info: downloading component 'rust-std'
info: downloading component 'rustc'
info: downloading component 'rustfmt'
info: installing component 'cargo'
info: installing component 'clippy'
info: installing component 'rust-docs'
info: retrying renaming 'C:\Users\King\.rustup\tmp\qkopjzc7kj3elau0_dir\rust-docs\share/doc/rust/html' to 'C:\Users\King\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\share/doc/rust/html'
info: installing component 'rust-std'
info: installing component 'rustc'
info: installing component 'rustfmt'
cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module -- --crate-type cdylib
warning: unused manifest key: target.x86_64-apple-darwin.rustflags
Updating crates.io index
Downloading crates ...
Downloaded autocfg v1.1.0
Downloaded serde v1.0.145
Downloaded serde_json v1.0.86
Downloaded rand v0.7.3
Downloaded itoa v1.0.4
Downloaded memchr v2.5.0
Downloaded num_cpus v1.13.1
Downloaded inventory-impl v0.1.11
Downloaded parking_lot v0.10.2
Downloaded parking_lot_core v0.7.2
Downloaded number_prefix v0.3.0
Downloaded indoc-impl v0.3.6
Downloaded serde_derive v1.0.145
Downloaded ryu v1.0.11
Downloaded version_check v0.9.4
Downloaded unicode-width v0.1.10
Downloaded syn v1.0.102
Downloaded ghost v0.1.6
Downloaded lock_api v0.3.4
Downloaded inventory v0.1.11
Downloaded indicatif v0.14.0
Downloaded getrandom v0.1.16
Downloaded textwrap v0.11.0
Downloaded lazy_static v1.4.0
Downloaded ctor v0.1.24
Downloaded crossbeam-utils v0.8.12
Downloaded num-traits v0.2.15
Downloaded unindent v0.1.10
Downloaded unicode-normalization-alignments v0.1.12
Downloaded pyo3cls v0.9.2
Downloaded pyo3 v0.9.2
Downloaded paste-impl v0.1.18
Downloaded rand_core v0.5.1
Downloaded strsim v0.8.0
Downloaded scopeguard v1.1.0
Downloaded vec_map v0.8.2
Downloaded unicode-ident v1.0.5
Downloaded rayon-core v1.9.3
Downloaded rayon v1.5.3
Downloaded unicode_categories v0.1.1
Downloaded pyo3-derive-backend v0.9.2
Downloaded bitflags v1.3.2
Downloaded cfg-if v1.0.0
Downloaded cfg-if v0.1.10
Downloaded atty v0.2.14
Downloaded regex v1.6.0
Downloaded memoffset v0.6.5
Downloaded indoc v0.3.6
Downloaded either v1.8.0
Downloaded crossbeam-deque v0.8.2
Downloaded crossbeam-channel v0.5.6
Downloaded clap v2.34.0
Downloaded regex-syntax v0.6.27
Downloaded winapi v0.3.9
Downloaded libc v0.2.135
Downloaded aho-corasick v0.7.19
Downloaded console v0.15.2
Downloaded crossbeam-epoch v0.9.11
Downloaded encode_unicode v0.3.6
Downloaded smallvec v1.10.0
Downloaded terminal_size v0.1.17
Downloaded proc-macro2 v1.0.47
Downloaded quote v1.0.21
Downloaded rand_chacha v0.2.2
Downloaded proc-macro-hack v0.5.19
Downloaded ppv-lite86 v0.2.16
Downloaded paste v0.1.18
Compiling proc-macro2 v1.0.47
Compiling quote v1.0.21
Compiling unicode-ident v1.0.5
Compiling syn v1.0.102
Compiling autocfg v1.1.0
Compiling memchr v2.5.0
Compiling winapi v0.3.9
Compiling cfg-if v1.0.0
Compiling serde_derive v1.0.145
Compiling serde v1.0.145
Compiling serde_json v1.0.86
Compiling scopeguard v1.1.0
Compiling crossbeam-utils v0.8.12
Compiling proc-macro-hack v0.5.19
Compiling getrandom v0.1.16
Compiling libc v0.2.135
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\proc-macro2-1.0.47\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"proc-macro\"" -C metadata=5b0e58d159021849 -C extra-filename=-5b0e58d159021849 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\proc-macro2-5b0e58d159021849 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\quote-1.0.21\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"proc-macro\"" -C metadata=142419b9cedb1d17 -C extra-filename=-142419b9cedb1d17 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\quote-142419b9cedb1d17 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name unicode_ident --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\unicode-ident-1.0.5\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off -C metadata=1afecb6243c40eff -C extra-filename=-1afecb6243c40eff --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\syn-1.0.102\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"clone-impls\"" --cfg "feature=\"default\"" --cfg "feature=\"derive\"" --cfg "feature=\"extra-traits\"" --cfg "feature=\"full\"" --cfg "feature=\"parsing\"" --cfg "feature=\"printing\"" --cfg "feature=\"proc-macro\"" --cfg "feature=\"quote\"" -C metadata=b8e33a9e5e50652b -C extra-filename=-b8e33a9e5e50652b --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\syn-b8e33a9e5e50652b -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name autocfg C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\autocfg-1.1.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off -C metadata=572016c50f479faf -C extra-filename=-572016c50f479faf --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\memchr-2.5.0\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=4ba3537c9d483825 -C extra-filename=-4ba3537c9d483825 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\memchr-4ba3537c9d483825 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\winapi-0.3.9\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"consoleapi\"" --cfg "feature=\"errhandlingapi\"" --cfg "feature=\"fileapi\"" --cfg "feature=\"handleapi\"" --cfg "feature=\"minwinbase\"" --cfg "feature=\"minwindef\"" --cfg "feature=\"ntstatus\"" --cfg "feature=\"processenv\"" --cfg "feature=\"winbase\"" --cfg "feature=\"wincon\"" --cfg "feature=\"winerror\"" --cfg "feature=\"winnt\"" --cfg "feature=\"winuser\"" -C metadata=9ff0f262099ebbbd -C extra-filename=-9ff0f262099ebbbd --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\winapi-9ff0f262099ebbbd -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name cfg_if --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\cfg-if-1.0.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=360d109d20cdf1de -C extra-filename=-360d109d20cdf1de --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\serde_derive-1.0.145\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" -C metadata=84c8f4354cf6545a -C extra-filename=-84c8f4354cf6545a --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde_derive-84c8f4354cf6545a -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.145\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"derive\"" --cfg "feature=\"serde_derive\"" --cfg "feature=\"std\"" -C metadata=1080dbca61ce40eb -C extra-filename=-1080dbca61ce40eb --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde-1080dbca61ce40eb -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\serde_json-1.0.86\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=b3ba47c415a7892f -C extra-filename=-b3ba47c415a7892f --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde_json-b3ba47c415a7892f -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name scopeguard C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\scopeguard-1.1.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=38978501dffc7e67 -C extra-filename=-38978501dffc7e67 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\crossbeam-utils-0.8.12\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=66c260199ab3a6b1 -C extra-filename=-66c260199ab3a6b1 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\crossbeam-utils-66c260199ab3a6b1 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\proc-macro-hack-0.5.19\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off -C metadata=43fb356fc25df1f2 -C extra-filename=-43fb356fc25df1f2 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\proc-macro-hack-43fb356fc25df1f2 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\getrandom-0.1.16\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"std\"" -C metadata=c55ade8324d95812 -C extra-filename=-c55ade8324d95812 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\getrandom-c55ade8324d95812 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\libc-0.2.135\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=e99f7f26a77660cd -C extra-filename=-e99f7f26a77660cd --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\libc-e99f7f26a77660cd -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling unicode-width v0.1.10
Compiling rayon-core v1.9.3
Running `rustc --crate-name unicode_width C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\unicode-width-0.1.10\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" -C metadata=5f855d5352d382a0 -C extra-filename=-5f855d5352d382a0 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\rayon-core-1.9.3\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off -C metadata=e7c93bf382490bd0 -C extra-filename=-e7c93bf382490bd0 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\rayon-core-e7c93bf382490bd0 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling smallvec v1.10.0
Running `rustc --crate-name smallvec --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\smallvec-1.10.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=881054317b8f4d9c -C extra-filename=-881054317b8f4d9c --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling ryu v1.0.11
Running `rustc --crate-name ryu --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\ryu-1.0.11\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off -C metadata=1d25c68059b095ff -C extra-filename=-1d25c68059b095ff --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling memoffset v0.6.5
Compiling crossbeam-epoch v0.9.11
Running `rustc --crate-name build_script_build C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\memoffset-0.6.5\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" -C metadata=8550414a9ea9eb93 -C extra-filename=-8550414a9ea9eb93 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\memoffset-8550414a9ea9eb93 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern autocfg=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libautocfg-572016c50f479faf.rlib --cap-lints allow`
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\crossbeam-epoch-0.9.11\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"alloc\"" --cfg "feature=\"std\"" -C metadata=a5f96eb43b93974a -C extra-filename=-a5f96eb43b93974a --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\crossbeam-epoch-a5f96eb43b93974a -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern autocfg=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libautocfg-572016c50f479faf.rlib --cap-lints allow`
Compiling regex-syntax v0.6.27
Running `rustc --crate-name regex_syntax --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\regex-syntax-0.6.27\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"unicode\"" --cfg "feature=\"unicode-age\"" --cfg "feature=\"unicode-bool\"" --cfg "feature=\"unicode-case\"" --cfg "feature=\"unicode-gencat\"" --cfg "feature=\"unicode-perl\"" --cfg "feature=\"unicode-script\"" --cfg "feature=\"unicode-segment\"" -C metadata=ecd6baa27c468f0d -C extra-filename=-ecd6baa27c468f0d --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\syn-b8e33a9e5e50652b\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\proc-macro2-5b0e58d159021849\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\quote-142419b9cedb1d17\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\winapi-9ff0f262099ebbbd\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde_derive-84c8f4354cf6545a\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\memoffset-8550414a9ea9eb93\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\proc-macro-hack-43fb356fc25df1f2\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\getrandom-c55ade8324d95812\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\libc-e99f7f26a77660cd\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde-1080dbca61ce40eb\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\crossbeam-epoch-a5f96eb43b93974a\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\crossbeam-utils-66c260199ab3a6b1\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\memchr-4ba3537c9d483825\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\rayon-core-e7c93bf382490bd0\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde_json-b3ba47c415a7892f\build-script-build`
Running `rustc --crate-name proc_macro2 --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\proc-macro2-1.0.47\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"proc-macro\"" -C metadata=30944198976f606e -C extra-filename=-30944198976f606e --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern unicode_ident=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libunicode_ident-1afecb6243c40eff.rmeta --cap-lints allow --cfg use_proc_macro --cfg wrap_proc_macro --cfg proc_macro_span`
Compiling itoa v1.0.4
Running `rustc --crate-name itoa --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\itoa-1.0.4\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off -C metadata=c5e72f9360d1cff0 -C extra-filename=-c5e72f9360d1cff0 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name winapi C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\winapi-0.3.9\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"consoleapi\"" --cfg "feature=\"errhandlingapi\"" --cfg "feature=\"fileapi\"" --cfg "feature=\"handleapi\"" --cfg "feature=\"minwinbase\"" --cfg "feature=\"minwindef\"" --cfg "feature=\"ntstatus\"" --cfg "feature=\"processenv\"" --cfg "feature=\"winbase\"" --cfg "feature=\"wincon\"" --cfg "feature=\"winerror\"" --cfg "feature=\"winnt\"" --cfg "feature=\"winuser\"" -C metadata=c43802643be2e565 -C extra-filename=-c43802643be2e565 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow -l dylib=advapi32 -l dylib=cfgmgr32 -l dylib=gdi32 -l dylib=kernel32 -l dylib=msimg32 -l dylib=opengl32 -l dylib=user32 -l dylib=winspool --cfg "feature=\"vadefs\"" --cfg "feature=\"cfgmgr32\"" --cfg "feature=\"devpropdef\"" --cfg "feature=\"processthreadsapi\"" --cfg "feature=\"basetsd\"" --cfg "feature=\"winreg\"" --cfg "feature=\"ntdef\"" --cfg "feature=\"libloaderapi\"" --cfg "feature=\"wincontypes\"" --cfg "feature=\"vcruntime\"" --cfg "feature=\"excpt\"" --cfg "feature=\"wtypesbase\"" --cfg "feature=\"reason\"" --cfg "feature=\"limits\"" --cfg "feature=\"rpcndr\"" --cfg "feature=\"ktmtypes\"" --cfg "feature=\"guiddef\"" --cfg "feature=\"windef\"" --cfg "feature=\"wingdi\"" --cfg "feature=\"cfg\""`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde-1080dbca61ce40eb\build-script-build`
Running `rustc --crate-name memoffset C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\memoffset-0.6.5\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" -C metadata=d521e9d5ca1cfb74 -C extra-filename=-d521e9d5ca1cfb74 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow --cfg tuple_ty --cfg allow_clippy --cfg maybe_uninit --cfg doctests --cfg raw_ref_macros`
Running `rustc --crate-name proc_macro_hack --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\proc-macro-hack-0.5.19\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C embed-bitcode=no -C debug-assertions=off -C metadata=097737d8e2c954ab -C extra-filename=-097737d8e2c954ab --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern proc_macro --cap-lints allow`
Running `rustc --crate-name getrandom --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\getrandom-0.1.16\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"std\"" -C metadata=6c7f93dfc6444a9c -C extra-filename=-6c7f93dfc6444a9c --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern cfg_if=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcfg_if-360d109d20cdf1de.rmeta --cap-lints allow -l advapi32`
Running `rustc --crate-name libc C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\libc-0.2.135\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=86a1bab4b120f6a2 -C extra-filename=-86a1bab4b120f6a2 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow --cfg freebsd11 --cfg libc_priv_mod_use --cfg libc_union --cfg libc_const_size_of --cfg libc_align --cfg libc_int128 --cfg libc_core_cvoid --cfg libc_packedN --cfg libc_cfg_target_vendor --cfg libc_non_exhaustive --cfg libc_ptr_addr_of --cfg libc_underscore_const_names --cfg libc_const_extern_fn`
Compiling rayon v1.5.3
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\rayon-1.5.3\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off -C metadata=f6ea986aed560875 -C extra-filename=-f6ea986aed560875 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\rayon-f6ea986aed560875 -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern autocfg=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libautocfg-572016c50f479faf.rlib --cap-lints allow`
Compiling num-traits v0.2.15
Running `rustc --crate-name build_script_build C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\num-traits-0.2.15\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=286ff2921bdb81ee -C extra-filename=-286ff2921bdb81ee --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\num-traits-286ff2921bdb81ee -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern autocfg=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libautocfg-572016c50f479faf.rlib --cap-lints allow`
Running `rustc --crate-name crossbeam_utils --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\crossbeam-utils-0.8.12\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=3869c69ecfdeae93 -C extra-filename=-3869c69ecfdeae93 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern cfg_if=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcfg_if-360d109d20cdf1de.rmeta --cap-lints allow`
Running `rustc --crate-name memchr --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\memchr-2.5.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=055ae21494443b01 -C extra-filename=-055ae21494443b01 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow --cfg memchr_runtime_simd --cfg memchr_runtime_sse2 --cfg memchr_runtime_sse42 --cfg memchr_runtime_avx`
Running `rustc --crate-name memchr --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\memchr-2.5.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=f54a1ac40e150573 -C extra-filename=-f54a1ac40e150573 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow --cfg memchr_runtime_simd --cfg memchr_runtime_sse2 --cfg memchr_runtime_sse42 --cfg memchr_runtime_avx`
Compiling unindent v0.1.10
Running `rustc --crate-name unindent --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\unindent-0.1.10\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off -C metadata=fcf514872b9c9d8e -C extra-filename=-fcf514872b9c9d8e --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name regex_syntax --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\regex-syntax-0.6.27\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"unicode\"" --cfg "feature=\"unicode-age\"" --cfg "feature=\"unicode-bool\"" --cfg "feature=\"unicode-case\"" --cfg "feature=\"unicode-gencat\"" --cfg "feature=\"unicode-perl\"" --cfg "feature=\"unicode-script\"" --cfg "feature=\"unicode-segment\"" -C metadata=5a236c973584f460 -C extra-filename=-5a236c973584f460 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling cfg-if v0.1.10
Running `rustc --crate-name cfg_if --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\cfg-if-0.1.10\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=f3af8d70d946c761 -C extra-filename=-f3af8d70d946c761 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling encode_unicode v0.3.6
Running `rustc --crate-name encode_unicode C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\encode_unicode-0.3.6\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=3130f1c305b48f53 -C extra-filename=-3130f1c305b48f53 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling inventory v0.1.11
Compiling ppv-lite86 v0.2.16
Running `rustc --crate-name build_script_build --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\inventory-0.1.11\build.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debug-assertions=off -C metadata=2a48a1911251848c -C extra-filename=-2a48a1911251848c --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\inventory-2a48a1911251848c -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name ppv_lite86 --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\ppv-lite86-0.2.16\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"simd\"" --cfg "feature=\"std\"" -C metadata=94c9eb93ee689d3b -C extra-filename=-94c9eb93ee689d3b --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling rand_core v0.5.1
Running `rustc --crate-name rand_core --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\rand_core-0.5.1\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"alloc\"" --cfg "feature=\"getrandom\"" --cfg "feature=\"std\"" -C metadata=06e7d83191242dc6 -C extra-filename=-06e7d83191242dc6 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern getrandom=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libgetrandom-6c7f93dfc6444a9c.rmeta --cap-lints allow`
Compiling version_check v0.9.4
Running `rustc --crate-name version_check C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\version_check-0.9.4\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off -C metadata=0584bbe3f7491e0d -C extra-filename=-0584bbe3f7491e0d --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name quote --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\quote-1.0.21\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"proc-macro\"" -C metadata=dabbd39e3909793b -C extra-filename=-dabbd39e3909793b --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern proc_macro2=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libproc_macro2-30944198976f606e.rmeta --cap-lints allow`
Compiling num_cpus v1.13.1
Running `rustc --crate-name num_cpus C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\num_cpus-1.13.1\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=31ad845e15e2b15d -C extra-filename=-31ad845e15e2b15d --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Compiling lazy_static v1.4.0
Running `rustc --crate-name lazy_static C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\lazy_static-1.4.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=431f071606c587f4 -C extra-filename=-431f071606c587f4 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
Running `rustc --crate-name crossbeam_epoch --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\crossbeam-epoch-0.9.11\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"alloc\"" --cfg "feature=\"std\"" -C metadata=4897f65d76bcc39c -C extra-filename=-4897f65d76bcc39c --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern cfg_if=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcfg_if-360d109d20cdf1de.rmeta --extern crossbeam_utils=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcrossbeam_utils-3869c69ecfdeae93.rmeta --extern memoffset=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libmemoffset-d521e9d5ca1cfb74.rmeta --extern scopeguard=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libscopeguard-38978501dffc7e67.rmeta --cap-lints allow`
Compiling crossbeam-channel v0.5.6
Running `rustc --crate-name crossbeam_channel --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\crossbeam-channel-0.5.6\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"crossbeam-utils\"" --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=658fbd4d6c655a6f -C extra-filename=-658fbd4d6c655a6f --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern cfg_if=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcfg_if-360d109d20cdf1de.rmeta --extern crossbeam_utils=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcrossbeam_utils-3869c69ecfdeae93.rmeta --cap-lints allow`
Running `rustc --crate-name syn --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\syn-1.0.102\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"clone-impls\"" --cfg "feature=\"default\"" --cfg "feature=\"derive\"" --cfg "feature=\"extra-traits\"" --cfg "feature=\"full\"" --cfg "feature=\"parsing\"" --cfg "feature=\"printing\"" --cfg "feature=\"proc-macro\"" --cfg "feature=\"quote\"" -C metadata=ed20b2b04ea8255e -C extra-filename=-ed20b2b04ea8255e --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern proc_macro2=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libproc_macro2-30944198976f606e.rmeta --extern quote=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libquote-dabbd39e3909793b.rmeta --extern unicode_ident=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libunicode_ident-1afecb6243c40eff.rmeta --cap-lints allow`
Compiling aho-corasick v0.7.19
Running `rustc --crate-name aho_corasick --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\aho-corasick-0.7.19\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debug-assertions=off --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=f1529ca83e28da6f -C extra-filename=-f1529ca83e28da6f --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern memchr=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libmemchr-055ae21494443b01.rmeta --cap-lints allow`
Compiling rand_chacha v0.2.2
Running `rustc --crate-name aho_corasick --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\aho-corasick-0.7.19\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=dfcbc49062ee55e6 -C extra-filename=-dfcbc49062ee55e6 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern memchr=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libmemchr-f54a1ac40e150573.rmeta --cap-lints allow`
Running `rustc --crate-name rand_chacha --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\rand_chacha-0.2.2\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"std\"" -C metadata=42edb5e8f22d28e3 -C extra-filename=-42edb5e8f22d28e3 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern ppv_lite86=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libppv_lite86-94c9eb93ee689d3b.rmeta --extern rand_core=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\librand_core-06e7d83191242dc6.rmeta --cap-lints allow`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\inventory-2a48a1911251848c\build-script-build`
Compiling crossbeam-deque v0.8.2
Running `rustc --crate-name crossbeam_deque --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\crossbeam-deque-0.8.2\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"crossbeam-epoch\"" --cfg "feature=\"crossbeam-utils\"" --cfg "feature=\"default\"" --cfg "feature=\"std\"" -C metadata=b75d3846468b682e -C extra-filename=-b75d3846468b682e --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern cfg_if=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcfg_if-360d109d20cdf1de.rmeta --extern crossbeam_epoch=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcrossbeam_epoch-4897f65d76bcc39c.rmeta --extern crossbeam_utils=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcrossbeam_utils-3869c69ecfdeae93.rmeta --cap-lints allow`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\num-traits-286ff2921bdb81ee\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\rayon-f6ea986aed560875\build-script-build`
Running `C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\build\serde_json-b3ba47c415a7892f\build-script-build`
Compiling textwrap v0.11.0
Running `rustc --crate-name rayon_core --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\rayon-core-1.9.3\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=0dc7012c06ddf19e -C extra-filename=-0dc7012c06ddf19e --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern crossbeam_channel=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcrossbeam_channel-658fbd4d6c655a6f.rmeta --extern crossbeam_deque=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcrossbeam_deque-b75d3846468b682e.rmeta --extern crossbeam_utils=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libcrossbeam_utils-3869c69ecfdeae93.rmeta --extern num_cpus=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libnum_cpus-31ad845e15e2b15d.rmeta --cap-lints allow`
Running `rustc --crate-name textwrap C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\textwrap-0.11.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=65eec7bbb3cdf0f8 -C extra-filename=-65eec7bbb3cdf0f8 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern unicode_width=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libunicode_width-5f855d5352d382a0.rmeta --cap-lints allow`
Compiling paste-impl v0.1.18
Running `rustc --crate-name paste_impl --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\paste-impl-0.1.18\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C embed-bitcode=no -C debug-assertions=off -C metadata=d5e32c087e023b55 -C extra-filename=-d5e32c087e023b55 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern proc_macro_hack=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\proc_macro_hack-097737d8e2c954ab.dll --extern proc_macro --cap-lints allow`
Compiling lock_api v0.3.4
Compiling strsim v0.8.0
Running `rustc --crate-name lock_api --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\lock_api-0.3.4\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"nightly\"" -C metadata=2fe17ee22c58d967 -C extra-filename=-2fe17ee22c58d967 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern scopeguard=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libscopeguard-38978501dffc7e67.rmeta --cap-lints allow`
Running `rustc --crate-name strsim C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\strsim-0.8.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=8fb9b7eda8bf3ac7 -C extra-filename=-8fb9b7eda8bf3ac7 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --cap-lints allow`
error[E0557]: feature has been removed
--> C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\lock_api-0.3.4\src\lib.rs:91:42
|
91 | #![cfg_attr(feature = "nightly", feature(const_fn))]
| ^^^^^^^^ feature has been removed
|
= note: split into finer-grained feature gates
For more information about this error, try `rustc --explain E0557`.
error: could not compile `lock_api` due to previous error
Caused by:
process didn't exit successfully: `rustc --crate-name lock_api --edition=2018 C:\Users\King\.cargo\registry\src\github.com-1ecc6299db9ec823\lock_api-0.3.4\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"nightly\"" -C metadata=2fe17ee22c58d967 -C extra-filename=-2fe17ee22c58d967 --out-dir C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps -L dependency=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps --extern scopeguard=C:\Users\King\AppData\Local\Temp\pip-install-b7_bo2sd\tokenizers_1b6098a3d736487c996866dfd304f880\target\release\deps\libscopeguard-38978501dffc7e67.rmeta --cap-lints allow` (exit code: 1)
warning: build failed, waiting for other jobs to finish...
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module -- --crate-type cdylib` failed with code 101
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects
C:\punctuation-restoration>
```<|||||>`pip install transformers>=4.10.0` worked for me.
`transformers==4.11.3` caused the `Building wheel for tokenizers (pyproject.toml) did not run successfully` error.<|||||>I had the same problem when trying to create a Dockerfile. I was able to solve it using the solution from this issue.
```
# Get Rust; NOTE: using sh for better compatibility with other base images
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Add .cargo/bin to PATH
ENV PATH="/root/.cargo/bin:${PATH}"
RUN pip install --upgrade pip
```<|||||>```sudo apt install rustc cargo``` finally solved my problem<|||||>I had the same issue on Windows 7 with Python 3.6.8.
So I used pyenv to install Python 3.7.7:
`pyenv install 3.7.7`
Then I made a 'project' directory and set the local Python version there:
```
mkdir project
cd project
pyenv local 3.7.7
```
Then I just installed what I needed:
```
python3 -m pip install --upgrade pip
python3 -m pip install transformers
python3 -m pip install Pillow
python3 -m pip install torch torchvision  # the pip package is "torch"; "-c pytorch" is conda syntax, not pip
```
have a nice day! |
transformers | 2,830 | closed | Reusing states for sequential decoding in BERTForMaskedLM | # 🚀 Feature request
I am using Bert as a decoder (by setting is_decoder=True). However, during sequential decoding, there is no way of reusing the hidden states, so for every word to be generated we need to rerun the model on the ENTIRE decoded sequence, which renders decoding inefficient. Can you add something similar to the keyword `past=` in GPT2 model to BERT's forward function (https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L938)?
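For reference, here is roughly what that `past=` pattern looks like in GPT-2 (a hedged sketch following the transformers 2.x API, not code from this request):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = torch.tensor([tokenizer.encode("Hello")])
logits, past = model(input_ids)[:2]           # first pass caches the key/value states
next_id = logits[0, -1].argmax().view(1, 1)
logits, past = model(next_id, past=past)[:2]  # later passes feed only the new token
```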
## Generalization of my Issue
More generally, to the best of my knowledge, there's no model in this library that simultaneously supports 1) cross attention (by feeding `encoder_hidden_states=` or `memory=`), and 2) reusing decoder states during sequential decoding (by feeding `past=`). 1) rules out models like GPT-2 and XLNet which only support language modeling (although in theory we can just use a decoder to do translation, I want to use a separate encoder and decoder); and 2) rules out models like BERT and T5 which support 1) but not 2). For example, the point of T5 is to use it for text-to-text translation problems, but since we cannot reuse hidden states, sequential decoding (beam search) would be extremely inefficient.
## Example
In the provided summarization example, both 1) and 2) are supported. However, the decoder is defined in its own code (examples/summarization/modeling_bertabs.py) and cannot be used directly in the library. Besides, supporting incremental state update is a basic function that every decoder shall support.
## Relevant Issues
I checked the suggested similar issues and did not find the same issue. Please let me know if my issue duplicates others'. | 02-12-2020 18:28:29 | 02-12-2020 18:28:29 | That's a cool idea.<|||||>Closed by #3059 |
transformers | 2,829 | closed | BERT generating prediction in 120sec approx using squad 2.0 in prediction.json | I am using the below command to predict question answers using BERT with SQuAD, but it is taking too long to generate prediction.json (approx. 120 seconds). I want to reduce this time to about 10 seconds.
```bash
run_squad.py --vocab_file=uncased_L-12_H-768_A-12/vocab.txt \
  --bert_config_file=uncased_L-12_H-768_A-12/bert_config.json \
  --init_checkpoint=model.ckpt-21899 \
  --do_train=False --train_file=train-v1.1.json \
  --do_predict=True --train_batch_size=32 \
  --learning_rate=5e-5 --num_train_epochs=3.0 \
  --max_seq_length=384 --doc_stride=128 \
  --version_2_with_negative=True --output_dir=/ \
  --predict_file=input.json --use_tpu=False
```
Please suggest a solution. | 02-12-2020 17:28:08 | 02-12-2020 17:28:08 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,828 | closed | [WIP] Create a Trainer class to handle TF2 model training | **EDIT**
Close this PR to create a cleaner, and more on purpose one.
Hello,
I'm opening the pull request I was talking about in issue #2783. Here are the proposed features in this PR:
- [x] add checkpoint manager in order to make a training fault-tolerant
- [x] add custom fit method to take into account the specific training steps in distributed mode or not
- [x] add optimizer creation method depending of its name
- [ ] add loss method in order to be able to customize the loss computation
- [x] add a Tensorboard summary writer to make the logs available in Tensorboard
For now I have created the definition of the methods with their documentation but with a `raise NotImplementedError` body, as I would first like to have your opinion on the signatures of these methods. Also I know that @julien-c you have recently worked on a `TFModelUtilsMixin` class. Do you think that some of these methods should go into it instead of directly in `TFPreTrainedModel`?
ping also @sshleifer
| 02-12-2020 16:13:10 | 02-12-2020 16:13:10 | I'm not 100% sure which ones of those methods need to live on the model vs. in the training framework
For instance, over in #2816, @srush is implementing support for `pytorch-lightning`, which over in PyTorch world, handles a lot of those tasks. In PyTorch we wouldn't want to implement these in to the model themselves.
Thoughts?<|||||>This is a really good point indeed, because it is something I don't know myself and also why I wanted your opinion on this.
How I see the whole picture is like the following. Having a class that will handle the training, let's name it `Trainer` for example, we can imagine this class implementing:
- An LR finder
- A cyclic training
- And maybe other things
Then it looks like:
```python
class Trainer(object):
    def __init__(self, model_path, training_data, eval_data, **kwargs):
        # kwargs will contain the parameters of the TFPretrainedModel class,
        # such as distributed=True, optimizer="adam", etc...
        self.model = AutoModel.from_pretrained(model_path, **kwargs)
        self.tokenizer = AutoTokenizer.from_pretrained(model_path)
        self.training_data = training_data
        self.eval_data = eval_data

    def preprocess_data(self):
        # preprocessing the data with the tokenizer
        ...

    def lr_finder(self):
        # blabla implementation
        return best_lr_over_training_data

    def train(self, epochs):
        lr = self.lr_finder()
        self.model.create_optimizer(lr)  # Certainly need to modify the signature of this method in the file above
        self.create_checkpoint_manager("save")
        self.create_summary_writer("logs")
        self.model.fit(self.training_data, epochs)  # implementing the cyclic training here instead of just model.fit()
```
Then the code of the external user would be maybe something like:
```python
parameters = {...}
trainer = Trainer("bert-base-uncased", [training, data], [eval, data], parameters)
trainer.preprocess_data()
trainer.train(4)  # We can even imagine not giving the number of epochs, and use an EarlyStop callback and give a default number of epochs.
trainer.model.save_pretrained()
```
Of course this is just a first draft that came to my mind. There will certainly be several changes.<|||||>I have checked what `pytorch-lightning` is and it blows my mind, this is really awesome! So convenient, and it groups a lot of the things I want to add here indeed. Unfortunately, I don't know of such a lib for TF2. I will take some time to check if one exists, in parallel with what I'm doing here :)<|||||>Ok, I finally moved everything into a `Trainer` class. I think it was a bad idea to mix the pretrained model and the training features. I think it is much better now.
Also instead of the long list of keys in the `**kwargs` parameter we can imagine a config file specifically made for training, and one could custom the training just by updating the JSON file and not the code itself.<|||||>You now have a working example in `examples/run_tf_glue_with_trainer.py`. You can now see how simple it becomes to train a model, if we put apart the config dictionary, training a model takes 4 lines of code.
Of course there is still a lot of work to do, but now you can have a much better idea of where I wanna go. The next main focuses will be:
- how to select such or such data processor in order to have a trainer more generic for the dataprocessing part
- include metrics
- run an evaluation<|||||>This looks really great @jplu!<|||||>Thanks!<|||||>Close this PR to create a cleaner, and more on purpose one. |
transformers | 2,827 | closed | OOM risk in RobertaTokenizer/GPT2Tokenizer | # 🐛 Bug
## Information
Model I am using: Roberta (_roberta-base_)
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts
I am using a modified version of the [examples/distillation/scripts/binarized_data.py](https://github.com/huggingface/transformers/blob/master/examples/distillation/scripts/binarized_data.py) file.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset
I am tokenizing a gzipped JSON file that contains 2B inputs resulting in 250Gb of uncompressed data.
The tokenization function is divided across _n_ processes to make the tokenization part faster.
The resulting _token_ids_ are written as a list of integers in an output file.
While tokenization was done batch by batch, I noticed that my RAM was increasing.
It caused an OOM error (I have 64GB or RAM) while only 1.5B inputs had been processed.
I identified the problem to be the `cache` attribute of the `GPT2Tokenizer` ([link](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_gpt2.py#L191)) that is never flushed so its size will potentially grow infinitely.
Tokenizers inheriting from `GPT2Tokenizer` (such as `RobertaTokenizer` ) are thus also impacted.
## To reproduce
Steps to reproduce the behavior:
Run tokenization: `tokenizer.encode()` on a very big file using `GPT2Tokenizer` or `RobertaTokenizer`.
## Expected behavior
The memory footprint of the tokenizer should be constant while processing an infinite stream of inputs.
## Suggestion
I made a quick and dirty fix in my script by flushing the `cache` (tokenizer.cache.clear()) if its size reaches an arbitrarily set threshold (100k in my case) with no significant loss in terms of performance.
However I think that there are smarter solutions rather than flushing the whole cache content. One can use a LRU cache instead of a Python dict. You can also define a private method that checks if cache size reaches a threshold and perform flushing in an "elegant" way.
I know that for production purposes [Tokenizers](https://github.com/huggingface/tokenizers) lib would be more appropriate but I wanted to notice you this behavior.
## Environment info
- `transformers` version: 2.4.1
- Platform: Ubuntu 16.04 LTS
- Python version: 3.6.9
- PyTorch version: 1.4.0 (with GPU)
- Tensorflow version : 2.0.0 (with GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: using python 3.6 [multiprocessing](https://docs.python.org/3.6/library/multiprocessing.html) lib
| 02-12-2020 15:57:14 | 02-12-2020 15:57:14 | If lru_cache is used, the max size couldn't be configured at runtime or disabled completely. Trying to go around this with anonymous functions will cause pickling problems and is generally ugly.
A more elegant and straightforward solution is to use a custom cache with ordered dict and a max size checked at each insertion.
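A minimal sketch of such a bounded cache (my own illustration, not code from the library):
```python
from collections import OrderedDict

class BoundedCache(OrderedDict):
    def __init__(self, max_size=100_000):
        super().__init__()
        self.max_size = max_size

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.max_size:
            self.popitem(last=False)  # evict the oldest entry first
```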
Both have a considerable performance impact versus the current unbounded dict though, about 10% more processing time, which can really add up in high-throughput objects like tokenizers.
Pretty much any more advanced cache will come with a performance hit. Thoughts?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,826 | closed | Why only use the hidden state of last token of last layer is used for predicting the next word? | I was trying to generate text and also reading the code to understand how it is working.
I found that, after providing some text as context (first iteration), it goes through the transformer, and the output of the transformer (`output[0]` of `GPT2Model`) contains one vector for each token position. To my understanding, these vectors are the context-aware representations of each token position.
Now for generating the next word, the representation of the last token from the last layer is used.
This is the case for the first iteration.
Then for each subsequent iteration, only the representation of the last predicted word is used to predict the next word.
My question is: why is only the representation of the last word used to predict the next word?
This raises another question: does the last token-position representation hold the context of the whole sequence (like an LSTM's final state)? | 02-12-2020 14:28:32 | 02-12-2020 14:28:32 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
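For readers, a minimal illustration of the behavior being asked about (my own sketch, not from the thread):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = torch.tensor([tokenizer.encode("The cat sat on the")])
logits = model(input_ids)[0]             # shape: (batch, seq_len, vocab)
next_id = logits[0, -1].argmax().item()  # only the last position is consulted
print(tokenizer.decode([next_id]))
```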
|
transformers | 2,825 | closed | binarized_data.py in distillation uses incorrect type casting | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): possibly affected model is DistilBert(distilbert-base-multilingual-cased)
Language I am using the model on (English, Chinese ...): multiple
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open line [84 in examples/distillation/scripts/binarized_data.py](https://github.com/huggingface/transformers/blob/21da895013a95e60df645b7d6b95f4a38f604759/examples/distillation/scripts/binarized_data.py#L84)
2. See typecast into np.uint16 (possibly added to produce smaller output file size)
3. Realize that the multilingual model has a vocab size of 119547, so a large portion of tokens (54012, i.e. 45%), which have ids above the uint16 max value (65535), receive the wrong id after binarization
```python
# Some code to demonstrate the thing
import transformers as tr
import numpy as np
tok = tr.DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
print("UInt16 max value", np.iinfo(np.uint16).max) ## 65535
print("Vocab size:", tok.vocab_size) ## 119547
# code to produce the table I've included in the issue
def table_row(tok_id):
    print(f"|{tok_id:^15}|{tok.decode([tok_id]):^18}|{np.uint16(tok_id):^18}|{tok.decode([np.uint16(tok_id)]):^30}|")

print("|Actual token id|Actual token value|Token id in uint16|Token value by uint16 token id|")
print("|---------------|------------------|------------------|------------------------------|")
for i in range(65535, 65700):
    table_row(i)
```
## Examples
|Actual token id|Actual token value|Token id in uint16|Token value by uint16 token id|
|---------------|------------------|------------------|------------------------------|
| 65535 | PD | 65535 | PD |
| 65536 | ##्ग | 0 | [PAD] |
| 65537 | označava | 1 | [unused1] |
| 65538 | ##gården | 2 | [unused2] |
| 65539 | ##чном | 3 | [unused3] |
| .... | .... | .... | .... |
| 65635 | siege | 99 | [unused99] |
| 65636 | ##lën | 100 | [UNK] |
| 65637 | dotato | 101 | [CLS] |
| 65638 | madeira | 102 | [SEP] |
| 65639 | ##μίας | 103 | [MASK] |
| 65640 | ##muggen | 104 | <S> |
| 65641 | ##льним | 105 | <T> |
| 65642 | Crimea | 106 | ! |
| 65643 | altor | 107 | " |
| 65644 | chefo | 108 | # |
| 65645 | persoon | 109 | $ |
| 65646 | ##зія | 110 | % |
| 65647 | новое | 111 | & |
| 65648 | ##šť | 112 | ' |
| 65649 | ##황 | 113 | ( |
| 65650 | fisica | 114 | ) |
| 65651 | ##ținut | 115 | * |
| 65652 | Woche | 116 | + |
| 65653 | angesehen | 117 | , |
| 65654 | Mach | 118 | - |
| 65655 | TNT | 119 | . |
| 65656 | obiettivo | 120 | / |
| 65657 | ##ceno | 121 | 0 |
| 65658 | ##מכון | 122 | 1 |
| 65659 | Tallinnas | 123 | 2 |
| 65660 | graet | 124 | 3 |
| 65661 | straal | 125 | 4 |
| 65662 | Pulitzer | 126 | 5 |
| 65663 | прво | 127 | 6 |
| 65664 | ##laska | 128 | 7 |
| 65665 | Actors | 129 | 8 |
| 65666 | Daimler | 130 | 9 |
| 65667 | estadual | 131 | : |
| 65668 | ##ಃ | 132 | ; |
| 65669 | resultó | 133 | < |
| 65670 | Tokom | 134 | = |
| 65671 | Parliamentary | 135 | > |
| 65672 | Phật | 136 | ? |
| 65673 | liście | 137 | @ |
| 65674 | ##ерна | 138 | A |
## Expected behavior
binarized_data.py should use typecasting to at least int32 to avoid incorrect behavior.
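A minimal sketch of one possible fix (my illustration: choose the dtype from the vocab size instead of hard-coding uint16):
```python
import numpy as np

def id_dtype(vocab_size):
    # uint16 only holds ids up to 65535; fall back to int32 for larger vocabs
    return np.uint16 if vocab_size <= np.iinfo(np.uint16).max else np.int32

token_ids = np.array([65536, 119546], dtype=id_dtype(119547))
assert token_ids[0] == 65536  # no longer wraps around to 0
```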
## Environment info
- `transformers` version: 2.3.0 (distillation code from current master branch)
- Platform: GNU/Linux Fedora 5.4.13-201.fc31.x86_64
- Python version: Python 3.6.9 :: Anaconda, Inc.
- PyTorch version (GPU?): 1.4.0 GPU
- Tensorflow version (GPU?): not applicable
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no | 02-12-2020 14:14:08 | 02-12-2020 14:14:08 | Good catch @Rexhaif
I'll fix that. Thanks for pointing that out.
Victor |
transformers | 2,824 | closed | GPT-2 language model: multiplying decoder-transformer output with token embedding or another weight matrix | I was reading the code of the GPT-2 language model. The transformation of hidden states into the probability distribution over the vocabulary is done in the following line:
`lm_logits = self.lm_head(hidden_states)`
Here,
`self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)`
However, in the original paper they suggested multiplying the hidden states with the token embedding matrix, whereas the huggingface implementation uses another matrix.
Is there any advantage of this? Am I missing something? | 02-12-2020 13:21:34 | 02-12-2020 13:21:34 | Hi, the input embeddings are tied to the output embeddings -> The `lm_head` attribute essentially shares its weights with the embedding layer. Passing the output of the transformer through that layer is the same as multiplying this output (the hidden states) with the token embedding matrix.<|||||>@LysandreJik Where is the code that performs this weight tying?<|||||>I think I found it:
```python
output_embeddings = self.get_output_embeddings()
if output_embeddings is not None:
    self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())
```
in `PreTrainedModel.tie_weights()` in `modeling_utils.py`. |
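A quick way to verify the tying described above (my own sketch; attribute names as in the PyTorch GPT-2 implementation):
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
# The LM head shares its parameter tensor with the input token embeddings.
assert model.lm_head.weight.data_ptr() == model.transformer.wte.weight.data_ptr()
```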
transformers | 2,823 | closed | Update run_tf_squad.py | 02-12-2020 12:00:18 | 02-12-2020 12:00:18 | ||
transformers | 2,822 | closed | bugs in xlnet XLNetLMHeadModel | # 🐛 Bug
## Information
Model I am using: XLNet
Language I am using the model on: English
The problem arises when using:
* [True ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Dig into the modeling_xlnet.py code;
2. Browse https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlnet.py#L1057;
3. Compare with other LMHeads, such as https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L603;
4. You'll find that inputs and labels are not shifted.
## Expected behavior
The code should be refactored to the same style as the other LMHeadModels.
Also, because the XLNet tokenizer appends `<sep>` and `<cls>` at the end, the number of tokens shifted should be 2, so I think the code should be:
```python
if labels is not None:
    # Shift so that tokens < n predict n
    shift_logits = logits[..., :-2, :].contiguous()
    shift_labels = labels[..., 1:-1].contiguous()
    # Flatten the tokens
    loss_fct = CrossEntropyLoss()
    loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
    outputs = (loss,) + outputs
```
## Environment info
- `transformers` version: 2.4.1
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 02-12-2020 11:35:36 | 02-12-2020 11:35:36 | I took a closer look into the XLNet model. As I understand the [paper](https://arxiv.org/pdf/1906.08237.pdf) and how the input/target lm training data is created in the code base (look [here](https://github.com/zihangdai/xlnet/blob/bbaa3a6fa0b3a2ee694e8cf66167434f9eca9660/data_utils.py#L616)), the language modelling loss is calculated using a <mask> token for specific words similar to BERT. Opposite to BERT though, the model still performs some kind of auto-regressive training as the length input_ids and labels are regressively increased over time `T`. At each training step `t` though, the `output_ids` are equal to the `input_ids`, whereas some `input_ids` are masked and the `embeddings` corresponding to the different positions `[1,T]` only see the `input_ids` and `embeddings` of certain other positions according to a random permutation (I think Fig. 4 in the paper explains it quite well).
Long story short, in my opinion the labels should not be a shifted version of the input_ids.
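To illustrate that point, a rough sketch of the mask-based setup (along the lines of the XLNet docs of the time; the example string is my own):
```python
import torch
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)])
seq_len = input_ids.shape[1]
perm_mask = torch.zeros((1, seq_len, seq_len))
perm_mask[:, :, -1] = 1.0       # no token may attend to the masked position
target_mapping = torch.zeros((1, 1, seq_len))
target_mapping[0, 0, -1] = 1.0  # predict the token AT the masked position
logits = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)[0]
```
The target is the token at the same position as the mask, not a shifted copy of the inputs.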
Regarding the special tokens, it's true that the `XLNetTokenizer` adds two special tokens by default. I changed that in the examples provided in the `modeling_xlnet.py` file for the `XLNetModel` and `XLNetWithLMHeadModel`, as those tokens were mainly used for the two-sentence-input LM pre-training (similar to BERT) and might be confusing for simpler examples.
I added an examples in PR which gives a simple example how the `XLNetWithLMHeadModel` can be used for "standard" auto-regressive pretraining. Also when looking at the function `prepare_inputs_for_language_generation()` in `modeling_xlnet.py`, it can be seen that a <mask> token is added to the `input_ids` in order to perform language generation. This might make everything clearer as well.<|||||>Maybe @thomwolf can confirm before closing the issue? <|||||>Thanks ! Clear, Cool!<|||||>@patrickvonplaten As an aside, are there test available per-model that check that the output of a given model is identical to the output of the original model? Like the integration tests, but where the expected output is actually the same as the original implementation?<|||||>@BramVanroy We are working on those at the moment. So far only the new models (bart & roberta) have real IntegrationTests. Most of the LMHead models have some form of Integration Test that check whether reasonable language is generated.<|||||>@patrickvonplaten Reasonable output is indeed an important aspect, but comparing with original implementations might bring discrepancies to light quickly. I am not sure how feasible that is, so it's just a thought.<|||||>I agree 100%. We compare to the original implementations as best as we can! <|||||>@BramVanroy we compare to the original implementations when we initially convert the models. We make sure that the output of our models is the same as the output from the official models, given a small margin error. You can find an example of this in the [`convert_pytorch_checkpoint_to_tensorflow.py` script.](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L351)<|||||>> @BramVanroy we compare to the original implementations when we initially convert the models. We make sure that the output of our models is the same as the output from the official models, given a small margin error. You can find an example of this in the [`convert_pytorch_checkpoint_to_tensorflow.py` script.](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L351)
@LysandreJik From the looks of this, it seems that you are comparing the output of the imported and/or mapped weights between pt and tf. But it seems that this does not cover architectural difference (correct me if I'm wrong). For instance, the recent issue of bias being counted twice wouldn't have been caught in this test, I think? But if you have some example input case and hard-code a slice of its output from the original implementation (be that tensorflow, pytorch, or something else), then you can test that the transformer implementation (architecture + weights) behave the same.<|||||>Actually, the double bias would definitely have been caught with this! We load the original models' weights onto our models and compare the output of the two models given the same input. This usually results in a tensor of size `(batch_size, sequence_length, hidden_size)` for base models or `(batch_size, sequence_length, vocab_size)` for models with an LM head (that is a lot of values!) that we each compare individually to make sure the difference is under the defined threshold.
Where our tests failed us is that we did not have integration tests for this model at the time, which is something @patrickvonplaten is doing a great job at changing :).<|||||>@BramVanroy I think you're describing what we have in e.g. https://github.com/huggingface/transformers/blob/master/tests/test_modeling_roberta.py#L322 (and @patrickvonplaten indeed added others recently)
Here `expected_slice` is the output of the original (here, fairseq) implementation. I agree that it's a good way to ensure correctness (except in cases where the original implem is "incorrect" in some way!)
See the recently merged https://github.com/huggingface/transformers/pull/3014
|
transformers | 2,821 | closed | CUDA out of memory issue in the middle of training in run_language_modeling.py (say after 1000 steps). | # 🐛 Bug
CUDA OOM in run_language_modeling.py after many steps.
## Information
It seems strange to get an OOM so late in the training procedure.
Model I am using (Bert, XLNet ...):
roberta-large
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: run_language_modeling.py, with line_by_line parameter
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Wikitext after some filter of sentences (attached)
[wiki.train.raw.time_filter.normalized.text.txt](https://github.com/huggingface/transformers/files/4191154/wiki.train.raw.time_filter.normalized.text.txt)
## To reproduce
just run the script with the line_by_line parameter
Steps to reproduce the behavior:
- `transformers` version: latest
- Platform:
- Python version: 3.7.0
- PyTorch version (GPU?): latest
- Tensorflow version (GPU?):
- Using GPU in script?: yes - TITAN Xp
- Using distributed or parallel set-up in script?: no
| 02-12-2020 09:13:20 | 02-12-2020 09:13:20 | what is your GPU?<|||||>> what is your GPU?
TITAN Xp
<|||||>If I'm not mistaken the Titan XP has 12GB of VRAM? From my tests training RoBERTa-large with a batch size of 1 already requires 10GB of VRAM, so your GPU memory should be filled quickly. It is surprising that it crashes later though. Could you try with a smaller batch size and gradient accumulation?<|||||>Working with a smaller batch size and gradient accumulation works better. Thank you ! |
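For readers landing here, a rough sketch of the gradient-accumulation pattern suggested above (my own illustration; `model`, `optimizer` and `dataloader` are assumed to already exist):
```python
accumulation_steps = 8  # effective batch = per-step batch size * 8
optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(**batch)[0]
    (loss / accumulation_steps).backward()  # scale so gradients average correctly
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```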
transformers | 2,820 | closed | ImportError: cannot import name 'GradientAccumulator' | transformers==2.4.1;
tensorflow==2.1.0;
torch==1.4.0;
When I start with the following code I get an error; note that I don't have tensorflow-gpu installed.
**code**
```python
from transformers import (
    TF2_WEIGHTS_NAME,
    BertConfig,
    BertTokenizer,
    DistilBertConfig,
    DistilBertTokenizer,
    GradientAccumulator,
    RobertaConfig,
    RobertaTokenizer,
    TFBertForTokenClassification,
    TFDistilBertForTokenClassification,
    TFRobertaForTokenClassification,
    create_optimizer,
)
```
**error**
```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-2-4dd6bfff15e9> in <module>()
     12 from seqeval import metrics
     13
---> 14 from transformers import (
     15     TF2_WEIGHTS_NAME,
     16     BertConfig,
ImportError: cannot import name 'GradientAccumulator'
```
| 02-12-2020 07:48:51 | 02-12-2020 07:48:51 | Hi,
has anyone found a workaround for this issue?
Thanks<|||||>This shouldn't happen with transformers v2.4.1 and tensorflow >= 2.0.0.
I can't replicate this issue with the versions you mentioned.
Would you mind telling me what gets printed out when you run the following snippet?
```py
from transformers import __version__
import tensorflow as tf
print("Transformers version", __version__)
print("TensorFlow version", tf.__version__)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>well done |
transformers | 2,819 | closed | Create card for model bert-base-spanish-wwm-cased-finetuned-spa-squad2-es.md | 02-12-2020 00:46:18 | 02-12-2020 00:46:18 | (automated Codecov coverage report omitted) ||
transformers | 2,818 | closed | Albert multilingual | # 🚀 Feature request
Provide multilingual pre-trained albert model.
## Motivation
ALBERT is a lightweight BERT. It would be nice if it had a multilingual version.
## Your contribution
| 02-12-2020 00:15:09 | 02-12-2020 00:15:09 | As far as I know (from following https://github.com/google-research/ALBERT/issues/5 and https://github.com/google-research/ALBERT/issues/91), ALBERT multilingual is not yet released.
We'll make sure to support it once it's released.<|||||>https://github.com/google-research/ALBERT/pull/152/files
😱😱😱<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,817 | closed | GPT2LMHeadModel with variable length batch input | I'm trying to repurpose the GPT2LMHeadModel for a seq2seq-like task, where I have an input prompt sequence of length L and I'm trying to ask the model to output a sequence to match a target sequence/sentence.
For a single input-output pair, I simply change the original code of
`shift_logits = lm_logits[..., :-1, :].contiguous()`
to
`shift_logits = lm_logits[..., L-1:-1, :].contiguous()`
But I'm a bit lost on how I can do this for a batch of variable-length inputs. Even if I pad the shorter sequences, I would need to shift the logits by a different amount for each input. I'm also uncertain if I need to do something about the attention mask. Any tip is appreciated! | 02-11-2020 22:13:55 | 02-11-2020 22:13:55 | Have you tried concatenating the sequences into one long string and using a separator token without changing any of the code? You can then use a moving window of 1024 to train the model. You can make each step of the window start after an <|endoftext|> to ensure the primary sequence is not truncated.
You can then train on batches of these moving windows of 1024 (either moving a random # of tokens or to the next <|endoftext|> token)
E.g., An example input for 1024 may then look something like this: "Some sequence == Some other sequence <|endoftext|> Some sequence_2 == Some other sequence_2 <|endoftext|> Some sequence_3 == Some other sequence_3 <|endoftext|> Some sequence_4 == Some othe" (clipped on purpose to illustrate a point.)
Then you prompt with "Some sequence ==" and terminate generation/clip text on <|endoftext|>
The model is *very good* at learning like this. It is okay to have the window clip things off at the end.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
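An alternative not raised in the thread (my own sketch): keep the slicing out of the logits entirely and mask the prompt/padding positions in the labels with -100, which CrossEntropyLoss ignores; `prompt_lengths` here is a hypothetical list holding each example's prompt length.
```python
labels = input_ids.clone()
for i, prompt_len in enumerate(prompt_lengths):
    labels[i, :prompt_len] = -100   # don't score the prompt tokens
labels[attention_mask == 0] = -100  # don't score the padding either
loss = model(input_ids, attention_mask=attention_mask, labels=labels)[0]
```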
|
transformers | 2,816 | closed | Proposal: Update examples to utilize a new format. | This PR creates a new example coding style for the pytorch code.
* Uses pytorch-lightning for the underlying training.
* Separates out the base transformer loading from the individual training.
* Moves each individual example to its own directory.
* Move the code in the readme to bash scripts.
The only two new files are `run_pl_ner.py` and `transformers_base.py`.
The goal is to keep the same format as the original command-line. Most of the argument names are preserved. I have verified that for NER the results of the same on GPU.
There are several nice benefits of lightning -> somewhat nicer logging and library integration (e.g. wandb), auto-checkpointing. Mostly the goal though is code readability with identical functionality.
Todo:
* make sure that the output file format is identical.
* print test results after training.
* test multi-gpu and apex (in theory these should work) | 02-11-2020 20:07:41 | 02-11-2020 20:07:41 | Hi @srush, thanks for this PR :heart: Can't wait to test it!
One suggestion/RFC: could we rename it to something like `token_classification` instead of `ner`. I know PoS tagging is not really covered in recent papers, but I always test new models for this task with the "identical" implementation 😅 This requires only a little modification in the code: we then should report accuracy as well.
But I will be totally fine with `ner` here!<|||||>Token Classification sounds good to me. That is consistent with the internal naming. |
transformers | 2,815 | closed | Add more specific testing advice to Contributing.md | 02-11-2020 19:42:14 | 02-11-2020 19:42:14 | (automated Codecov coverage report omitted) ||
transformers | 2,814 | closed | Repository with recipes how to pretrain model from scratch on my own data | # 🚀 Feature request
It would be very useful to have documentation on how to train different models, not necessarily with transformers itself but with external libs (like the original BERT, fairseq, etc.).
Maybe another repository with readmes or docs containing recipes from those who have already pretrained their models, in order to make the procedure reproducible for other languages or domains.
There are many external resources (blogs, articles in arxiv) but without any details and very often they are not reproducible.
## Motivation
Have a proven recipe for training the model. Make it easy for others to train a custom model. The community will easily train language or domain-specific models.
More models available in transformers library.
There are many issues related to this:
* https://github.com/huggingface/transformers/issues/1283
* https://github.com/huggingface/transformers/issues/2301
* https://github.com/huggingface/transformers/issues/1672
* https://github.com/huggingface/transformers/issues/1714
* https://github.com/huggingface/transformers/issues/1638
* https://github.com/huggingface/transformers/issues/2279
* https://github.com/huggingface/transformers/issues/1108
* https://github.com/huggingface/transformers/issues/1175
* https://github.com/huggingface/transformers/issues/1381
* https://github.com/huggingface/transformers/issues/1547
* https://github.com/huggingface/transformers/issues/1999
* #1908
* #417
* #170
| 02-11-2020 16:04:52 | 02-11-2020 16:04:52 | Hi @ksopyla that's a great – but very broad – question.
We just wrote a blogpost that might be helpful: https://huggingface.co/blog/how-to-train
The post itself is on GitHub so feel free to improve/edit it too.<|||||>Thank you @julien-c. It will help to add new models to transformer model repository :)<|||||>Hi,
the blogpost is nice but it is NOT an end to end solution. I've been trying to learn how to use the huggingface "ecosystem" to build a LM model from scratch on a novel dataset, and the blogpost is not enough. Adding a jupyter notebook to the blog post would make it very easy for users to learn how to run things end to end. (VS "put in a Dataset type here" and "then run one of the scripts"). :) <|||||>@ddofer You are right, this is in process of being addressed at https://github.com/huggingface/blog/issues/3
Feel free to help :)<|||||>@julien-c Is it possible to do another example using bert to pretrain the LM instead of roberta? I followed the steps, but it doesn't seem to work when I changed the model_type to bert. <|||||>I am a new contributor and thought this might be a reasonable issue to start with.
I'm happy to add an additional example of using bert rather than roberta to pretrain the LM.
Please let me know if this would be helpful and/or if starting elsewhere would be better <|||||>> I am a new contributor and thought this might be a reasonable issue to start with.
>
> I'm happy to add an additional example of using bert rather than roberta to pretrain the LM.
>
> Please let me know if this would be helpful and/or if starting elsewhere would be better
Great that you want to contribute!; any help is welcome! Fine-tuning and pretraining BERT seems to be already covered in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) though. So your contribution should differ significantly from this functionality. Perhaps it can be written in a more educational rather than production-ready way? That would definitely be useful - explaining all concepts from scratch and such. (But not an easy task.)<|||||>First version of a notebook is up over at https://github.com/huggingface/blog/tree/master/notebooks
(thanks @aditya-malte for the help)<|||||>> > I am a new contributor and thought this might be a reasonable issue to start with.
> > I'm happy to add an additional example of using bert rather than roberta to pretrain the LM.
> > Please let me know if this would be helpful and/or if starting elsewhere would be better
>
> Great that you want to contribute!; any help is welcome! Fine-tuning and pretraining BERT seems to be already covered in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) though. So your contribution should differ significantly from this functionality. Perhaps it can be written in a more educational rather than production-ready way? That would definitely be useful - explaining all concepts from scratch and such. (But not an easy task.)
I'll give it a shot :) <|||||>hey @laurenmoos,
A general community request is to work on a keras like wrapper for Transformers. It would be great if you could do that.
```python
model = Roberta()
model.pretrain(lm_data)
model.finetune(final_data)
model.predict(XYZ)
```
<|||||>@aditya-malte I'd love to!
I will work on that and evaluate the request for additional documentation afterwards. Is there an issue to jump on?<|||||>Let me know if you’re interested. I’d be excited to collaborate!<|||||>@aditya-malte yes!<|||||>Hi,
Did we make any progress on the feature discussed above? A keras like wrapper sounds awesome for Transformers. I would like to contribute in the development.<|||||>> First version of a notebook is up over at https://github.com/huggingface/blog/tree/master/notebooks
> (thanks @aditya-malte for the help)
@julien-c Thanks for this. I have a question regarding `special_tokens_map.json` file. When I just use the `vocab.json` and `merges.txt` from the tokenizer, the `run_language_modeling.py` shows the following info message
```bash
05/01/2020 17:44:01 - INFO - transformers.tokenization_utils - Didn't find file /<path-to-my-output-dir>/special_tokens_map.json. We won't load it.
```
In the tutorial this has not been mentioned. Should we create this mapping file too?<|||||>Hi @dashayushman,
The message you’ve shown is not an error/warning as such but is just an INFO message.
As far as I remember, the BPE model should work just fine with the vocab and merges file. You can ignore the message.
Thanks
<|||||>@julien-c @aditya-malte
from blog post:
> If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step.
How can I do that? Also, can I save the tokenized data?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
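One way to do the on-the-fly part, sketched (my own illustration, not from the thread): wrap the corpus in an `IterableDataset` that tokenizes lazily, so nothing has to be pre-tokenized or held in memory.
```python
import torch

class LazyLineDataset(torch.utils.data.IterableDataset):
    def __init__(self, path, tokenizer, block_size=128):
        self.path, self.tokenizer, self.block_size = path, tokenizer, block_size

    def __iter__(self):
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                yield torch.tensor(
                    self.tokenizer.encode(line, max_length=self.block_size),
                    dtype=torch.long,
                )
```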
<|||||>Hi @BramVanroy @julien-c
Continuing #1999, it seems `run_language_modeling.py` is just for PyTorch and fine-tune a masked language model using Tensorflow doesn't have an example script yet. Any plan to make the Tensorflow version of the script or maybe how to modify the current`run_language_modeling.py` so it can be used for Tensorflow too? Thank you.<|||||>I would also like to see an example, how to train a language model (like BERT) from scratch with tensorflow on my own dataset, so i can finetune it later on a specific task. <|||||>> I would also like to see an example, how to train a language model (like BERT) from scratch with tensorflow on my own dataset, so i can finetune it later on a specific task.
ping @jplu ;)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,813 | closed | PreTrainedModel.generate do_sample default argument is wrong in the documentation | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2LMHeadModel
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create GPT2LMHeadModel and GPT2Tokenizer objects using the from_pretrained('gpt2') method
2. Use the generate function to generate sequences without any input argument multiple times, and then repeat by setting do_sample = True
3. When do_sample is set to False (or is not supplied at all), the generate method constantly generates the following string:
'!\n\nThe first thing I did was to make a list of all the things I would!'
When generating with do_sample set to True, varying results are produced. This is consistent with the behaviour described in the code, except for the default value of do_sample.
Code sample from a python shell:
```python
>>> model = transformers.GPT2LMHeadModel.from_pretrained('gpt2')
>>> g2t = transformers.GPT2Tokenizer.from_pretrained('gpt2')
>>> g2t.decode(model.generate()[0])
'!\n\nThe first thing I did was to make a list of all the things I would!'
>>> g2t.decode(model.generate()[0])
'!\n\nThe first thing I did was to make a list of all the things I would!'
>>> g2t.decode(model.generate()[0])
'!\n\nThe first thing I did was to make a list of all the things I would!'
>>> g2t.decode(model.generate(do_sample=True)[0])
"!, I can't help but wonder how she's doing. I really have no idea.!"
>>> g2t.decode(model.generate(do_sample=True)[0])
'!\n\nThe other guy is trying to take something away from the guy before you even start!'
>>> g2t.decode(model.generate(do_sample=True)[0])
'! are you kidding me?\n\n\nBut maybe you should wait for his own "last act!'
```
Similarly, you can do print((transformers.GPT2LMHeadModel.from_pretrained('gpt2')).config.do_sample) to verify that the 'default' argument is in fact False
## Expected behavior
The documentation should say do_sample is False by default OR the config should be updated to be in line with the documentation
## Environment info
- `transformers` version: 2.4.1
- Platform: Ubuntu GNU/Linux 18.04
- Python version: Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] on linux
- PyTorch version (GPU?): 1.4.0 GPU version with Nvidia RTX 2080Ti
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-11-2020 15:27:09 | 02-11-2020 15:27:09 | In the documentation for [version 2.4.1/2.4.0](https://huggingface.co/transformers/v2.4.0/main_classes/model.html#transformers.PreTrainedModel.generate), it does indicate it is `False` by default. In the [master documentation](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.generate) though, it is set to `True` by default because we've changed it on the current master.<|||||>I see, however the page title for the master documentation clearly indicates 2.4.1 for the version, which was the source of my confusion. Thank you very much for the clarification<|||||>Indeed, this is very misleading. I'll update it. |
transformers | 2,812 | closed | How can I finetune the BERTModel on my own corpus? | # ❓ Questions & Help
## Details
Thanks for your code!
I want to fine-tune the BERT model on my own corpus, which has a smaller vocabulary than the default size of 30522. My final goal is to obtain a fine-tuned, personalized BERT model which can provide proper word embeddings for future downstream tasks. In short, I need to fine-tune the BERTModel so it provides word embeddings based on my own corpus.
How can I build a new vocabulary and then fetch the embeddings from the provided pre-trained model, e.g., bert-base-uncased, and then fine-tune the model on my own corpus?
Have you provided functions for building vocabulary and further fine-tuning?
 | 02-11-2020 15:20:13 | 02-11-2020 15:20:13 | Take a look at the `resize_embeddings` function and `examples/run_language_modeling.py`.<|||||>sorry, where is the resize_embeddings function?<|||||>My bad, it's a method on `PreTrainedModel` called `resize_token_embeddings`. There is a call in `run_language_modeling.py`.
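For illustration, a minimal sketch of the vocabulary-extension and resizing step referenced above (the added tokens are hypothetical placeholders):
```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

num_added = tokenizer.add_tokens(["mydomainword1", "mydomainword2"])
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
```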
transformers | 2,811 | closed | How to use a batch size bigger than zero in Bert Sequence Classification | # ❓ Questions & Help
## Details
[Hugging Face documentation describes](https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification) how to do a sequence classification using a Bert model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
However, there is only an example for batch size 1. How do I implement this when we have a list of phrases and want to use a bigger batch size?
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60170037/how-to-use-a-batch-size-bigger-than-zero-in-bert-sequence-classification | 02-11-2020 13:33:01 | 02-11-2020 13:33:01 | I answered your question on stack overflow. |
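For completeness, a hedged batched sketch (my own, not the Stack Overflow answer; API as in transformers 2.x):
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

phrases = ["Hello, my dog is cute", "I dislike rainy days"]
batch = tokenizer.batch_encode_plus(phrases, pad_to_max_length=True, return_tensors='pt')
labels = torch.tensor([1, 0])  # batch size 2
outputs = model(batch['input_ids'], attention_mask=batch['attention_mask'], labels=labels)
loss, logits = outputs[:2]
```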
transformers | 2,810 | closed | How to get longer output for summary? | #Question
https://stackoverflow.com/questions/60157959/transformers-summarization-with-python-pytorch-how-to-get-longer-output
Should I train it myself to get summary output longer than what was used in the original training script?
```bash
python run_summarization.py \
    --documents_dir $DATA_PATH \
    --summaries_output_dir $SUMMARIES_PATH \ # optional
    --no_cuda false \
    --batch_size 4 \
    --min_length 50 \
    --max_length 200 \
    --beam_size 5 \
    --alpha 0.95 \
    --block_trigram true \
    --compute_rouge true
```
When I do inference with
```
--min_length 500 \
--max_length 600 \
```
I got a good output for the first ~200 tokens, but the rest of the text is:
. . . [unused7] [unused7] [unused7] [unused8] [unused4] [unused7] [unused7] [unused4] [unused7] [unused8]. [unused4] [unused7] . [unused4] [unused8] [unused4] [unused8]. [unused4] [unused4] [unused8] [unused4] . . [unused4] [unused6] [unused4] [unused7] [unused6] [unused4] [unused8] [unused5] [unused4] [unused7] [unused4] [unused4] [unused7]. [unused4] [unused6]. [unused4] [unused4] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused6] [unused4] [unused4] [unused4]. [unused4]. [unused5] [unused4] [unused8] [unused7] [unused4] [unused7] [unused9] [unused4] [unused7] [unused4] [unused7] [unused5] [unused4] [unused5] [unused4] [unused6] [unused4]. . . [unused5]. [unused4] [unused4] [unused4] [unused6] [unused5] [unused4] [unused4] [unused6] [unused4] [unused6] [unused4] [unused4] [unused5] [unused4]. [unused5] [unused4] . [unused4] [unused4] [unused8] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused8] [unused4] [unused6] | 02-11-2020 10:59:34 | 02-11-2020 10:59:34 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,809 | closed | Fix typo in src/transformers/data/processors/squad.py | end end -> and end | 02-11-2020 06:56:49 | 02-11-2020 06:56:49 | (automated Codecov coverage report omitted)<|||||>Cool, thanks!
transformers | 2,808 | closed | Multiple Choice BERT, SWAG task, failure to test | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on are:
* [x] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Download SWAG dataset and put it in some directory and set the path by `export SWAG_DIR=path/to/swag/dir`
2. Copy `run_multiple_choice.py` and `utils_multiple_choice.py`
3. Run the code only for testing with the following command
`./run_multiple_choice.py --model_type bert --task_name swag --model_name_or_path bert-base-uncased --do_lower_case --max_seq_length 80 --output_dir models_bert/swag_testing --data_dir $SWAG_DIR --do_test`
## Expected behavior
```
Traceback (most recent call last):
File "./run_multiple_choice.py", line 678, in <module>
main()
File "./run_multiple_choice.py", line 669, in main
result = evaluate(args, model, tokenizer, prefix=prefix, test=True)
File "./run_multiple_choice.py", line 248, in evaluate
eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=not test, test=test)
File "./run_multiple_choice.py", line 354, in load_and_cache_examples
examples = processor.get_test_examples(args.data_dir)
File "utils_multiple_choice.py", line 168, in get_test_examples
"For swag testing, the input file does not contain a label column. It can not be tested in current code"
ValueError: For swag testing, the input file does not contain a label column. It can not be tested in current codesetting!
```
The code says that no label column is needed for testing, but it doesn't work with or without one. It fails with the default `test.csv` file (which is picked up automatically for testing if it is in the directory), and it also fails with `val.csv` (which has a label column).
## Environment info
- `Transformers` version: current
- Platform: Linux
- Python version: 3.7.5
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
| 02-11-2020 05:44:36 | 02-11-2020 05:44:36 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
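Until the script supports label-free test files, one workaround consistent with the error above is to append a dummy label column to `test.csv` before running. This is a sketch only; the column name `label` is an assumption based on the SWAG CSV layout:

```python
import pandas as pd

# the processor refuses test files without a label column;
# a constant dummy label lets it build the examples (the predictions themselves are unaffected)
df = pd.read_csv("test.csv")
df["label"] = 0
df.to_csv("test_with_dummy_labels.csv", index=False)
```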
transformers | 2,807 | closed | get_activation('relu') provides a simple mapping from strings in configs to activation functions | 02-11-2020 04:48:49 | 02-11-2020 04:48:49 | Happy to do TF in a separate PR. I don't think worth breaking backwards compatibility over this.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=h1) Report
> Merging [#2807](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/70bbe4b1de298651a9665dc86ba9689bca1e080f?src=pr&el=desc) will **increase** coverage by `29.04%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2807 +/- ##
===========================================
+ Coverage 44.91% 73.96% +29.04%
===========================================
Files 94 94
Lines 15274 15274
===========================================
+ Hits 6860 11297 +4437
+ Misses 8414 3977 -4437
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `92.85% <ø> (ø)` | |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <ø> (+0.64%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <100%> (+70.86%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.62% <100%> (+97.62%)` | :arrow_up: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.37% <100%> (+86.37%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <100%> (+88.16%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.32% <100%> (+61.32%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `83.28% <100%> (+83.28%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <100%> (+80.2%)` | :arrow_up: |
| ... and [28 more](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=footer). Last update [70bbe4b...6879e76](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
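For readers unfamiliar with the change in this PR: the helper boils down to a name-to-callable lookup, roughly along these lines (a minimal sketch; the exact contents of the mapping in `activations.py` may differ):

```python
import torch
import torch.nn.functional as F

# map the strings stored in model configs to the actual activation callables
ACT2FN = {"relu": F.relu, "gelu": F.gelu, "tanh": torch.tanh}

def get_activation(activation_string):
    if activation_string not in ACT2FN:
        raise KeyError(f"function {activation_string} not found in ACT2FN mapping {list(ACT2FN.keys())}")
    return ACT2FN[activation_string]
```

Configs can then keep `hidden_act: "relu"` as a plain string, and models resolve it with `get_activation(config.hidden_act)`.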
transformers | 2,806 | closed | TFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased') -> TypeError | I just installed the library on a TensorFlow environment (2.0.0-rc1) and there is no `BertModel` in `transformers`.
Is `TFBertModel` equivalent? If so, then I get the error `TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType` when loading the model with `model = TFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased')`.
- `transformers` version: 2.4.1
- Platform: Windows 10
- Python version: 3.7.6
- Tensorflow version (GPU?): 2.0.0-rc1 (it automatically uses GPU now)
- Using GPU in script?: No, just importing.
- Using distributed or parallel set-up in script?: No.
| 02-10-2020 23:17:15 | 02-10-2020 23:17:15 | `BertModel` is the pytorch model, and is therefore only available if you have torch installed. As you correctly said, `TFBertModel` is the TensorFlow equivalent.
Importing with `from transformers import TFBertModel` raises the above error?<|||||>Loading the model gives me the error: `TFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased')`<|||||>This model is only available in PyTorch, Neuralmind has not provided a TensorFlow checkpoint for that model. You can see it on the [page](https://huggingface.co/neuralmind/bert-base-portuguese-cased), as it has the tag `PyTorch`, but no `TensorFlow` tag.
You can still load it in TensorFlow, but you have to add the `from_pt` flag:
```py
from transformers import TFBertModel
TFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased', from_pt=True)
```
This might require you to have PyTorch installed to do the conversion.<|||||>Thank you, but with that I get the error `OSError: Loading a TF model from a PyTorch checkpoint is not supported when using a model identifier name.`.
I did install PyTorch.<|||||>Hi, I too have a problem importing the BERT model. Error:
```
File "chatbot.py", line 54, in models
bert_model = TFBertModel.from_pretrained('bert-base-uncased')
File "C:\Users\CHENG\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\modeling_tf_utils.py", line 351, in from_pretrained
assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
File "C:\Users\CHENG\AppData\Local\Programs\Python\Python37\lib\genericpath.py", line 30, in isfile
st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
Sometimes it works, sometimes it throws this error; I don't know why. Any help will be appreciated!
<|||||>@rodrigoruiz, indeed, this functionality was added 12 days ago with https://github.com/huggingface/transformers/commit/961c69776f8a2c95b92407a086848ebca037de5d, so it wouldn't be available on the pip version of 2.4.1. My bad.
Would you try installing from source with `pip install git+https://github.com/huggingface/transformers` and let me know if it fixes your issue?<|||||>@LysandreJik Thank you, that worked!<|||||>> Hi, I too have a problem importing the BERT model. Error:
>
> ```
> File "chatbot.py", line 54, in models
> bert_model = TFBertModel.from_pretrained('bert-base-uncased')
> File "C:\Users\CHENG\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\modeling_tf_utils.py", line 351, in from_pretrained
> assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
> File "C:\Users\CHENG\AppData\Local\Programs\Python\Python37\lib\genericpath.py", line 30, in isfile
> st = os.stat(path)
> TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
> ```
>
> Sometimes it works, sometimes it throws this error; I don't know why. Any help will be appreciated!
I have the same problem with `TFXLMRobertaModel.from_pretrained("xlm-roberta-base")`, did you solve it?<|||||>Hi @Riccorl, my problem somehow just disappeared after restarting and upgrading TensorFlow to 2.1.0. I'm not sure how it was solved. Initially, the error popped up randomly, meaning sometimes it worked smoothly and sometimes not. But I have no error now at all.
Maybe do a `pip install -U transformers`
And then `pip install -U tensorflow-gpu`<|||||>> Hi @Riccorl, my problem somehow just disappeared after restarting and upgrading TensorFlow to 2.1.0. I'm not sure how it was solved. Initially, the error popped up randomly, meaning sometimes it worked smoothly and sometimes not. But I have no error now at all.
>
> Maybe do a `pip install -U transformers`
> And then `pip install -U tensorflow-gpu`
It seems like I have a problem only with `xlm-roberta` TensorFlow models. Other models work. Maybe I should open a new issue.<|||||>I had the same error with this
```
model = TFBertModel.from_pretrained('bert-base-uncased')
File "/home/cally/.local/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 403, in from_pretrained
assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
File "/usr/local/lib/python3.7/genericpath.py", line 30, in isfile
st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
This is my code:
```
model = TFBertModel.from_pretrained('bert-base-uncased')
```
Did anyone solve it?
<|||||>Sometimes it works, sometimes the error appears.<|||||>
> Hi @Riccorl, my problem somehow just disappeared after restarting and upgrading TensorFlow to 2.1.0. I'm not sure how it was solved. Initially, the error popped up randomly, meaning sometimes it worked smoothly and sometimes not. But I have no error now at all.
>
> Maybe do a `pip install -U transformers`
> And then `pip install -U tensorflow-gpu`
Installing the above packages solved this issue for me. It's working fine now. Thanks @nixon-nyx <|||||>I guess this can now be closed <|||||>@daraksha-shirin you’re welcome! Glad that I could help!<|||||>> I guess this can now be closed
Yep. <|||||>> I had the same error with this
>
> ```
> model = TFBertModel.from_pretrained('bert-base-uncased')
> File "/home/cally/.local/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 403, in from_pretrained
> assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
> File "/usr/local/lib/python3.7/genericpath.py", line 30, in isfile
> st = os.stat(path)
> TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
> ```
>
> This is my code:
>
> ```
> model = TFBertModel.from_pretrained('bert-base-uncased')
> ```
>
> Did anyone solve it?
I'm still having the exact same issue when fine-tuning a model with `TFAutoModel`, with the following package versions:
- `tensorflow`: 2.2.0
- `transformers`: 3.0.2 |
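Since the `TypeError: stat: path should be string ... not NoneType` reports in the thread above are intermittent, one plausible culprit is a failed or corrupted cache lookup that leaves the resolved archive path as `None`. A cheap thing to try first is forcing a fresh download; `force_download` is a real `from_pretrained` keyword, though whether it fixes this particular failure is an assumption:

```python
from transformers import TFBertModel

# bypass the local cache entirely and fetch the weights again
model = TFBertModel.from_pretrained("bert-base-uncased", force_download=True)
```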
transformers | 2,805 | closed | [model_cards] Add new German Europeana BERT models | Hi,
this PR adds the model cards for two new BERT models for Historic German.
The cased and uncased BERT models were trained on a huge corpus: newspapers from [Europeana](http://www.europeana-newspapers.eu/). Time period of these (noisy) OCRed newspapers is 18th - 20th century.
More information can be found [here](https://github.com/dbmdz/berts) and more detailed results on downstream tasks [here](https://github.com/stefan-it/europeana-bert). | 02-10-2020 22:25:57 | 02-10-2020 22:25:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=h1) Report
> Merging [#2805](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ccc04ee6c762183ff2b34b8b85d139f77cbf14?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2805 +/- ##
=======================================
Coverage 75.02% 75.02%
=======================================
Files 93 93
Lines 15275 15275
=======================================
Hits 11460 11460
Misses 3815 3815
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=footer). Last update [68ccc04...e1833f7](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,804 | closed | Fix a few issues regarding the language modeling script | The language modeling script currently has a few issues.
- in the line-by-line dataset, no special tokens are added (that's due to the fact that `batch_encode_plus` has the `add_special_tokens` flag set to `False` by default, which is misleading).
- the max length is ill computed in that same dataset, as it doesn't take into account the fact that `encode_plus` is aware of the special tokens and their impact on the sequence length. | 02-10-2020 21:53:17 | 02-10-2020 21:53:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=h1) Report
> Merging [#2804](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/539f601be712619dc8c428f0a0b5e8e15f82ac4c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2804 +/- ##
=======================================
Coverage 75.02% 75.02%
=======================================
Files 93 93
Lines 15275 15275
=======================================
Hits 11460 11460
Misses 3815 3815
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.11% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.78% <0%> (ø)` | :arrow_up: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=footer). Last update [539f601...98e2921](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
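The gist of the fix described above, as a sketch of the corrected line-by-line encoding (a fragment, with `file_path`, `tokenizer`, and `block_size` taken from the dataset's context; argument names follow the tokenizer API of that era and the actual diff may differ):

```python
# inside LineByLineTextDataset, simplified
with open(file_path, encoding="utf-8") as f:
    lines = [line for line in f.read().splitlines() if line.strip()]

batch = tokenizer.batch_encode_plus(
    lines,
    add_special_tokens=True,   # was implicitly False before, silently dropping [CLS]/[SEP]
    max_length=block_size,     # encode_plus budgets for the special tokens by itself
)
examples = batch["input_ids"]
```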
transformers | 2,803 | closed | Support DeepSpeed for language modeling finetuning | # 🚀 Feature request
https://github.com/microsoft/DeepSpeed
This was just released, and given the code flow in `run_language_modeling.py` it seems like it would not be too difficult to drop-in, and it has a permissible license (MIT).
However, given the dependencies and difficulty installing them, it would likely have to be done in a separate file.
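To make the ask concrete, a rough sketch of what the drop-in could look like in the training loop, based on DeepSpeed's documented entry point (exact signatures vary between versions, so treat the details as assumptions):

```python
import deepspeed

# wrap the existing model/optimizer; ds_config.json holds batch size, fp16, and ZeRO settings
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",
)

for batch in train_dataloader:
    loss = model_engine(**batch)[0]
    model_engine.backward(loss)   # handles fp16 loss scaling internally
    model_engine.step()           # optimizer step plus gradient clipping
```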
## Motivation
 | 02-10-2020 19:07:24 | 02-10-2020 19:07:24 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,802 | closed | FlauBERT lang embeddings only when n_langs > 1 | 02-10-2020 17:20:09 | 02-10-2020 17:20:09 | ||
transformers | 2,801 | closed | Can't load pre-trained Flaubert model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Flaubert
Language I am using the model on (English, Chinese ...): French
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a pre-trained model
I'm following the guide from https://huggingface.co/transformers/model_doc/flaubert.html#flaubertmodel:
```
import transformers
tokenizer = transformers.FlaubertTokenizer.from_pretrained('flaubert-base-cased')
```
```
Traceback (most recent call last):
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-05c64572fe39>", line 2, in <module>
tokenizer = transformers.FlaubertTokenizer.from_pretrained('flaubert-base-cased')
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\tokenization_utils.py", line 309, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\tokenization_utils.py", line 410, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name 'flaubert-base-cased' was not found in tokenizers model name list (flaubert-large-cased, flaubert-base-uncased, flaubert-small-cased, flaubert-base-cased). We assumed 'flaubert-base-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['merges.txt', 'vocab.json'] but couldn't find such vocabulary files at this path or url.
```
## Expected behavior
`tokenizer` should be a `FlaubertTokenizer` object
## Environment info
Well, calling `python transformers-cli env` gave me another error:
```
(venv) C:\Users\PLHT09191\Documents\work\dev\Classif_Annonces\venv\Scripts>python transformers-cli env
Traceback (most recent call last):
File "transformers-cli", line 4, in <module>
__import__('pkg_resources').run_script('transformers==2.4.1', 'transformers-cli')
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\setuptools-40.8.0-py3.5.egg\pkg_resources\__init__.py", line 666, in run_script
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\setuptools-40.8.0-py3.5.egg\pkg_resources\__init__.py", line 1446, in run_script
File "c:\users\myself\documents\work\dev\classif_annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\EGG-INFO\scripts\transformers-cli", line 6, in <module>
from transformers.commands.user import UserCommands
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\commands\user.py", line 163
entries: List[os.DirEntry] = list(os.scandir(rel_path))
^
SyntaxError: invalid syntax
```
- `transformers` version: 2.4.1
- Platform: Windows 64 bits
- Python version: Python 3.5.2
- PyTorch version (GPU?): torch.__version__ = 1.4.0+cpu
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-10-2020 16:31:11 | 02-10-2020 16:31:11 | On the second issue, the CLI is Python 3.6+ only. We'll document this better in the future, cc @LysandreJik <|||||>On the first issue, looks like your traceback might be truncated. Did you paste all of it?<|||||>>
>
> On the first issue, looks like your traceback might be truncated. Did you paste all of it?
Yes, indeed, I forgot the last lines; I don't know why... I edited my original post to include the full traceback:
> Traceback (most recent call last):
> File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
> exec(code_obj, self.user_global_ns, self.user_ns)
> File "<ipython-input-2-05c64572fe39>", line 2, in <module>
> tokenizer = transformers.FlaubertTokenizer.from_pretrained('flaubert-base-cased')
> File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\tokenization_utils.py", line 309, in from_pretrained
> return cls._from_pretrained(*inputs, **kwargs)
> File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\tokenization_utils.py", line 410, in _from_pretrained
> list(cls.vocab_files_names.values()),
> OSError: Model name 'flaubert-base-cased' was not found in tokenizers model name list (flaubert-large-cased, flaubert-base-uncased, flaubert-small-cased, flaubert-base-cased). We assumed 'flaubert-base-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['merges.txt', 'vocab.json'] but couldn't find such vocabulary files at this path or url.
> <|||||>I can't replicate this issue (FlaubertTokenizer) in either v2.4.0 or v2.4.1, does it arise when you simply do
```py
from transformers import FlaubertTokenizer
tokenizer = FlaubertTokenizer.from_pretrained("flaubert-base-cased")
```
?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,800 | closed | CircleCI doesn't run slow tests | circle_ci.cfg says `RUN_SLOW: yes`, but all my circleci runs have the slow tests skipped.
Is this expected behavior?
@LysandreJik
| 02-10-2020 15:32:00 | 02-10-2020 15:32:00 | When deploying on CircleCI, it runs the `build_and_test` job, which runs the [following suites](https://github.com/huggingface/transformers/blob/master/.circleci/config.yml#L126-L133).
The slow tests are run by the `run_all_tests_torch_and_tf` suite, which only triggers [weekly](https://github.com/huggingface/transformers/blob/master/.circleci/config.yml#L137).
The slow tests are especially slow, and currently fail on CircleCI because the machines can't run for so long. We're exploring options to run them on a specific machine cc @julien-c <|||||>Got it, thanks. Can I delete this line https://github.com/huggingface/transformers/blob/81d6841b4be25a164235975e5ebdcf99d7a26633/.circleci/config.yml#L23
it confused me.<|||||>If you remove this line the slow tests won't run during the weekly tests though<|||||>Oh I get it, was missing
`run_all_tests_torch_and_tf` vs `run_tests_torch_and_tf`<|||||>should we rename `run_all_tests_torch_and_tf` to `run_slow_tests_torch_and_tf`?<|||||>Well its purpose really is to run all tests, not only the slow tests but the custom tokenizers and soon the doc examples as well, so I feel that the current name is fitting. |
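For context on the thread above, `RUN_SLOW` is also consulted inside the test suite itself, not just in the CircleCI config. The gating pattern looks roughly like this; the decorator name is taken from the repo's test utilities, but the exact parsing is an assumption:

```python
import os
import unittest

def slow(test_case):
    """Skip a test unless the RUN_SLOW environment variable is truthy."""
    if os.environ.get("RUN_SLOW", "").lower() not in ("1", "y", "yes", "true"):
        return unittest.skip("test is slow; set RUN_SLOW=1 to run it")(test_case)
    return test_case
```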
transformers | 2,799 | closed | Add model readme for bert-base-german-cased | Adding a readme for our German BERT model. Not sure if the file location is correct, as the model was added before model hub / user name spaces were created. | 02-10-2020 15:15:20 | 02-10-2020 15:15:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=h1) Report
> Merging [#2799](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92e974196fc35eb826f64808ae82d20c4380e3eb?src=pr&el=desc) will **increase** coverage by `1.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2799 +/- ##
==========================================
+ Coverage 73.95% 75.03% +1.08%
==========================================
Files 93 93
Lines 15272 15272
==========================================
+ Hits 11295 11460 +165
+ Misses 3977 3812 -165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <0%> (+1.32%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (+2.2%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <0%> (+2.27%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (+9.85%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (+81.2%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=footer). Last update [92e9741...5e0a253](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Location is correct!
There's a small markup issue though (everything is italicized), I'll fix in next commit as it looks like I don't have push access on your fork.
Also will add metadata for language (will tag you in the commit)<|||||>Thanks for the fast merge :)
I couldn't find the issue with italics, but it seems that on the [website](https://huggingface.co/bert-base-german-cased) the unordered lists are not correctly rendered from the markdown. Any advice on how to get them correctly formatted there?<|||||>Re. the list styling, yes, we'll tweak! |
transformers | 2,798 | closed | Reduce the CamemBERT dimensions | I want to reduce the output dimension by adding a linear layer at the end of the CamemBERT model.
Code:
```python
from transformers import CamembertTokenizer, CamembertModel
import torch
from torch.nn import Sequential, Linear
tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertModel.from_pretrained('camembert-base')
input_ids = torch.tensor(tokenizer.encode("La pose d'un panneau stop.", add_special_tokens=True)).unsqueeze(0) # Batch size 1
# labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1
model = Sequential(model, Linear(768, 256))
outputs = model(input_ids)
print(input_ids)
print(outputs[1].size())
print(outputs[0].size())
```
I got this:
```shell
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1366 - Output: :math:`(N, *, out\_features)`
1367 """
-> 1368 if input.dim() == 2 and bias is not None:
1369 # fused op is marginally faster
1370 ret = torch.addmm(bias, input, weight.t())
AttributeError: 'tuple' object has no attribute 'dim'
```
Additionally, I want to do word-level embeddings; however, 768 dimensions is too big from my point of view.
Thanks for your help.
| 02-10-2020 14:53:44 | 02-10-2020 14:53:44 | Hi @AlafateABULIMITI sounds like a question that's better suited for Stack Overflow. Thanks! |
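For the record, the `AttributeError` above comes from `nn.Sequential` feeding the model's *tuple* output straight into `Linear`: transformers models return `(last_hidden_state, pooler_output, ...)`, not a plain tensor. A minimal sketch of one way around it (the wrapper class is illustrative, not part of the library):

```python
import torch
from torch import nn
from transformers import CamembertModel, CamembertTokenizer

class CamembertWithProjection(nn.Module):
    def __init__(self, encoder, out_dim=256):
        super().__init__()
        self.encoder = encoder
        self.proj = nn.Linear(encoder.config.hidden_size, out_dim)  # 768 -> 256

    def forward(self, input_ids):
        last_hidden = self.encoder(input_ids)[0]   # unpack the tuple first
        return self.proj(last_hidden)              # (batch, seq_len, 256)

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertWithProjection(CamembertModel.from_pretrained('camembert-base'))
input_ids = torch.tensor(tokenizer.encode("La pose d'un panneau stop.", add_special_tokens=True)).unsqueeze(0)
print(model(input_ids).size())  # torch.Size([1, seq_len, 256])
```

With the tuple unpacked first, the projection sees a regular `(batch, seq_len, 768)` tensor and 256-dimensional word-level embeddings come out directly.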
transformers | 2,797 | closed | Add model readme for deepset/roberta-base-squad2 | Adding a model readme for https://huggingface.co/deepset/roberta-base-squad2 | 02-10-2020 14:07:15 | 02-10-2020 14:07:15 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=h1) Report
> Merging [#2797](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92e974196fc35eb826f64808ae82d20c4380e3eb?src=pr&el=desc) will **increase** coverage by `1.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2797 +/- ##
==========================================
+ Coverage 73.95% 75.03% +1.08%
==========================================
Files 93 93
Lines 15272 15272
==========================================
+ Hits 11295 11460 +165
+ Misses 3977 3812 -165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <0%> (+1.32%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (+2.2%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <0%> (+2.27%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (+9.85%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (+81.2%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=footer). Last update [92e9741...ec005e3](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,796 | closed | output padding different to zero in hidden layers with attention mask | # 🐛 Bug
In the last layer, the tokens corresponding to the padding do not return 0, even when attention masking is used.
## Information
Model I am using (Bert, XLNet ...):
roberta and roberta XLM
Language I am using the model on (English, Chinese ...):
english
The problem arises when using:
* [x] the official example scripts: (give details below)
The task I am working on is:
looking at the output of the last layer of RobertaModel
## To reproduce
Steps to reproduce the behavior:
1. Use some padding in your input data.
2. Create the attention mask accordingly.
3. Run the model and inspect the hidden states at the padded positions.
Example code:
```
from transformers import RobertaModel, RobertaTokenizer
import torch

def tokenize_sentences_Bert(sentences, tokenizer, maxlen):
    tokens = []
    lengths = []
    for s in sentences:
        token = tokenizer.encode(s, add_special_tokens=True, max_length=maxlen)
        lengths.append(len(token))
        token = token + [tokenizer.pad_token_id] * (maxlen - len(token))
        tokens.append(token)
    return tokens, lengths

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaModel.from_pretrained('roberta-base',
                                     output_hidden_states=False,
                                     output_attentions=True)

max_length = 10
sequence = ["I eat a green apple", "I am playing football tomorrow"]
tokens, lengths = tokenize_sentences_Bert(sequence, tokenizer, maxlen=max_length)
lengths = torch.tensor(lengths)
tokens = torch.tensor(tokens)
attention_mask = (torch.arange(max_length).expand(len(lengths), max_length) < lengths.unsqueeze(1)).float()
print(attention_mask)
outputs = model(tokens, attention_mask=attention_mask)
print(outputs[0][:, :, :2])  # padded positions were expected to be 0 in the last hidden state
```
- `transformers` version:
- Platform: ubuntu
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script? no:
- Using distributed or parallel set-up in script?: no
| 02-10-2020 13:41:25 | 02-10-2020 13:41:25 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
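A note on the behaviour reported above: it is expected rather than a bug. The attention mask only stops other positions from attending to the padding; the padded positions themselves still flow through every layer and receive nonzero hidden states. If zeroed-out padding vectors are needed downstream, they have to be masked explicitly, e.g. (reusing `outputs` and `attention_mask` from the snippet above):

```python
# zero out hidden states at padded positions after the forward pass
last_hidden = outputs[0]                                  # (batch, seq_len, hidden)
masked_hidden = last_hidden * attention_mask.unsqueeze(-1)
```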
transformers | 2,795 | closed | Probably a bug in XLMRobertaTokenizer | (Everything went perfectly when I experimented with Multilingual BERT, but it seems only the base model is released.)
When using XLM-R, the corresponding tokenizer (XLMRobertaTokenizer) converts \<unk\> and every OOV token into id = 1. However, 1 should be the id of \<pad\>. (And the tokenizer can convert 1 to \<pad\> and 3 to \<unk\>.) | 02-10-2020 13:13:12 | 02-10-2020 13:13:12 | Hi, if I am not wrong, shouldn't the pad be 2?
At least, that is what the `tokenizer.pad_token_id` parameter suggests for XLM-R.
EDIT: 2 is eos sorry.<|||||>Indeed, this is a bug that will be fixed when #3198 is merged. Thanks for letting us know. |
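A quick sanity check of the mapping reported above (the expected ids follow the fairseq layout: 0 `<s>`, 1 `<pad>`, 2 `</s>`, 3 `<unk>`):

```python
from transformers import XLMRobertaTokenizer

tok = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
print(tok.pad_token_id, tok.unk_token_id)       # expected: 1 3
print(tok.convert_tokens_to_ids("<unk>"))       # the report says this wrongly yields 1
print(tok.convert_ids_to_tokens([1, 3]))        # ['<pad>', '<unk>'] per the report
```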
transformers | 2,794 | closed | You must specify an aggregation method to update a MirroredVariable in Replica Context. | # You must specify an aggregation method to update a MirroredVariable in Replica Context.
<ipython-input-28-7cf32baaf070>:52 step_fn *
    gradient_accumulator(grads)
/tensorflow-2.1.0/python3.6/tensorflow_core/python/distribute/distribute_lib.py:763 experimental_run_v2
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/transformers/optimization_tf.py:229 __call__ *
    accum_gradient.assign_add(gradient)
/tensorflow-2.1.0/python3.6/tensorflow_core/python/distribute/values.py:1124 assign_add
    return self._assign_func(f=assign_add_fn, *args, **kwargs)
/tensorflow-2.1.0/python3.6/tensorflow_core/python/distribute/values.py:1108 _assign_func
    variable_type="MirroredVariable"))
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
using GPU: Yes
The problem arises when using:
Training on multiple GPUs and accumulating gradients as given in `run_tf_ner.py`.
| 02-10-2020 12:59:07 | 02-10-2020 12:59:07 | Tried executing the test case: **test_optimization_tf.py**
The test case also fails when on GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
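For reference on the error above: TensorFlow only allows a per-replica `assign_add` on a mirrored variable when the variable declares how concurrent updates should be combined. A sketch of accumulator variables created with such a policy follows; `gradient` stands in for one of the per-replica gradients, and the actual fix in `optimization_tf.py` may differ:

```python
import tensorflow as tf

# accumulators that can legally be updated from within replica context
accum = tf.Variable(
    tf.zeros_like(gradient),
    trainable=False,
    synchronization=tf.VariableSynchronization.ON_READ,
    aggregation=tf.VariableAggregation.SUM,
)
accum.assign_add(gradient)  # no longer raises inside strategy.experimental_run_v2
```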
transformers | 2,793 | closed | Fix circleci cuInit error on Tensorflow >= 2.1.0. | TensorFlow 2.1.0 introduces a new dependency model where `pip install tensorflow` installs TF **with GPU support**. Before 2.1.0 it would install with CPU support only.
CircleCI machines run without GPU hardware, so at initialisation the TensorFlow tests look for the NVIDIA driver version but fail, as there is no NVIDIA driver running.
This PR introduces an extra (optional) dependency **tf-cpu** which explicitly requires **tensorflow-cpu** and makes sure the CI does `pip install tf-cpu` instead of `pip install tf` while running unit tests.
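Concretely, the change boils down to an extras group in `setup.py` along these lines (a sketch; the real entry may differ):

```python
# setup.py (excerpt, assumed layout)
extras = {}
extras["tf"] = ["tensorflow"]          # pulls in GPU support since TF 2.1.0
extras["tf-cpu"] = ["tensorflow-cpu"]  # CPU-only build for machines without NVIDIA drivers
```

CI can then run `pip install -e .[tf-cpu]` instead of `pip install -e .[tf]`.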
It should remove the following error on CircleCI:
```bash
tests/test_modeling_tf_bert.py::TFBertModelTest::test_attention_outputs 2020-02-10 11:14:08.280770: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-02-10 11:14:08.280808: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)
2020-02-10 11:14:08.280837: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (40403e139ccb): /proc/driver/nvidia/version does not exist
2020-02-10 11:14:08.281093: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
```
Signed-off-by: Morgan Funtowicz <[email protected]> | 02-10-2020 12:38:11 | 02-10-2020 12:38:11 | |
transformers | 2,792 | closed | tiny issue with distilbertconfig docs | # 🐛 Bug (barely)
Discrepancy in variable names between docs and code:
I presume [``intermediate_size``](https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/configuration_distilbert.py#L63) refers to [``hidden_dim``](https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/configuration_distilbert.py#L109)?
| 02-10-2020 09:28:24 | 02-10-2020 09:28:24 | You're correct! I updated it with 539f601. Thanks. |