Spaces:
Running
Running
Original BC scores: AI: 0.983885645866394, HUMAN: 0.01611432246863842
Calibration BC scores: AI: 0.5142857142857142, HUMAN: 0.48571428571428577
Input Text: <s>Operation Title was an unsuccessful 1942 Allied attack on the German battleship Tirpitz during World War II. The Allies considered Tirpitz to be a major threat to their shipping and after several Royal Air Force heavy bomber raids failed to inflict any damage it was decided to use Royal Navy midget submarines instead. </s>
correcting text..: 0%|          | 0/2 [00:00<?, ?it/s]
correcting text..: 100%|██████████| 2/2 [00:00<00:00, 29.39it/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/gradio/queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1856, in process_api
    data = await self.postprocess_data(fn_index, result["prediction"], state)
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1634, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1610, in validate_outputs
    raise ValueError(
ValueError: An event handler (update) didn't receive enough output values (needed: 2, received: 1).
Wanted outputs:
    [<gradio.components.textbox.Textbox object at 0x7f79abf202b0>, <gradio.components.textbox.Textbox object at 0x7f79abf20a60>]
Received outputs:
    ["Operation Title was an unsuccessful 1942 Allied attack on the German battleship Tirpitz during World War II. The Allies considered Tirpitz to be a major threat to their shipping and after several Royal Air Force heavy bomber raids failed to inflict any damage it was decided to use Royal Navy midget submarines instead."]
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.2.1) or chardet (4.0.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
2024-05-15 18:41:05.953508: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-15 18:41:11.449382: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Framework not specified. Using pt to export the model.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using the export variant default. Available variants are:
    - default: The default ONNX variant.
***** Exporting submodel 1/1: RobertaForSequenceClassification *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> False
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
    - default: The default ONNX variant.
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
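The transformers warning above says these four generation parameters should live in a generation_config.json next to the model's config.json rather than inside it. A minimal sketch of the migration, using only the standard library to show the target file layout; with transformers installed the idiomatic route would be `GenerationConfig(**gen_params).save_pretrained(model_dir)`:

```python
import json
import os
import tempfile

# The exact non-default generation parameters reported in the warning.
gen_params = {
    "max_length": 512,
    "min_length": 8,
    "num_beams": 2,
    "no_repeat_ngram_size": 4,
}

# Write them to generation_config.json, the file transformers looks for
# alongside config.json when loading a model.
model_dir = tempfile.mkdtemp()
with open(os.path.join(model_dir, "generation_config.json"), "w") as f:
    json.dump(gen_params, f, indent=2)

with open(os.path.join(model_dir, "generation_config.json")) as f:
    print(json.load(f)["num_beams"])  # → 2
```

Once the parameters are in generation_config.json (and removed from config.json), the warning, which becomes an exception in transformers v4.41, goes away.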
***** Exporting submodel 1/3: T5Stack *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> False
***** Exporting submodel 2/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] < attention_mask.shape[1]:
***** Exporting submodel 3/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  elif past_key_value.shape[2] != key_value_states.shape[1]:
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data] Package cmudict is already up-to-date!
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.2.1) or chardet (4.0.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Collecting en_core_web_sm==2.3.1
  Using cached en_core_web_sm-2.3.1-py3-none-any.whl
Requirement already satisfied: spacy<2.4.0,>=2.3.0 in /usr/local/lib/python3.9/dist-packages (from en_core_web_sm==2.3.1) (2.3.9)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (3.0.9)
Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.7.11)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (4.66.2)
Requirement already satisfied: srsly<1.1.0,>=1.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.7)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.25.1)
Requirement already satisfied: plac<1.2.0,>=0.9.6 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.1.3)
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (52.0.0)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.0.8)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.10)
Requirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.10.1)
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.26.4)
Requirement already satisfied: catalogue<1.1.0,>=0.0.7 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.2)
Requirement already satisfied: thinc<7.5.0,>=7.4.1 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (7.4.6)
✔ Download and installation successful
You can now load the model via spacy.load('en_core_web_sm')
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:953: UserWarning: Expected 1 arguments for function <function depth_analysis at 0x7f6df970eee0>, received 2.
  warnings.warn(
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:961: UserWarning: Expected maximum 1 arguments for function <function depth_analysis at 0x7f6df970eee0>, received 2.
  warnings.warn(
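The two UserWarnings above mean `depth_analysis` accepts one parameter while the Gradio event that calls it is wired to two input components, so Gradio passes two values. A hedged sketch of aligning the signature; the parameter names are hypothetical, and the right fix depends on which components the event is actually wired to in the app:

```python
def depth_analysis(text, _extra=None):
    # Gradio passes one positional value per input component; accepting a
    # second parameter (even if unused) absorbs the extra value that was
    # triggering "Expected 1 arguments ..., received 2".
    return f"analysis of {len(text)} characters"

print(depth_analysis("sample", "unused"))  # → analysis of 6 characters
```

Alternatively, remove the second component from the event's `inputs` list so the one-parameter signature matches what Gradio sends.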
IMPORTANT: You are using gradio version 4.28.3, however version 4.29.0 is available, please upgrade.
--------
Running on local URL: http://0.0.0.0:80
Running on public URL: https://1f9431205fb743687b.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
correcting text..: 100%|██████████| 2/2 [00:00<00:00, 3.32it/s]
correcting text..: 100%|██████████| 2/2 [00:00<00:00, 23.59it/s]
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
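The fork warning above can be silenced by setting the environment variable before any tokenizer is created, for example at the top of the app's entry script:

```python
import os

# Set before importing/using any Hugging Face tokenizer and before the
# process forks; "false" disables tokenizer parallelism, "true" keeps it.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

print(os.environ["TOKENIZERS_PARALLELISM"])  # → false
```

Disabling parallelism here only affects the tokenizers library's internal Rust thread pool, not model inference itself.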
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
/usr/local/lib/python3.9/dist-packages/optimum/bettertransformer/models/encoder_models.py:301: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:178.)
  hidden_states = torch._nested_tensor_from_mask(hidden_states, ~attention_mask)
Original BC scores: AI: 0.0012912281090393662, HUMAN: 0.9987087249755859
Calibration BC scores: AI: 0.09973753280839895, HUMAN: 0.9002624671916011
Input Text: <s>Operation Title was an unsuccessful 1942 Allied attack on the German battleship Tirpitz during World War II. The Allies considered Tirpitz to be a major threat to their shipping and after several Royal Air Force heavy bomber raids failed to inflict any damdage it was decided to use Royal Navy midget submarines instead. </s>
correcting text..: 100%|██████████| 2/2 [00:03<00:00, 1.74s/it]
correcting text..: 100%|██████████| 2/2 [00:02<00:00, 1.35s/it]
Original BC scores: AI: 1.946412595543734e-07, HUMAN: 0.9999997615814209
Calibration BC scores: AI: 0.0013484877672895396, HUMAN: 0.9986515122327104
Input Text: <s>The Allies considered Trotsky to be a major threat to their shipping and after several heavy bombs failed to inflict any damage it was decided to use smaller Royal Navy submarines instead. </s>
Original BC scores: AI: 7.88536635809578e-06, HUMAN: 0.9999921321868896
Calibration BC scores: AI: 0.008818342151675485, HUMAN: 0.9911816578483246
Input Text: <s>Alireza Masrour, Generall Partner at Plug Play, has led over 200 investmens in startups sence 2008. Notable unicorn investmens include CloudWalk, Flyr, FiscalNote, Shippo, Owkin, and Trulioo. He has also been involvd in sucsessful exits such as FiscalNote's IPO, HealthPocket's acqusition by Health Insurans Innovations, and Kustomer's acqusition by FaceBook. Alireza has receeved recognition for his acheivements, includng beeing named a Silicon Valley 40 under 40 in 2018 and a rising-star VC by BusinessInsider. He has had 13 unicorn portfollio companys and manages a B Portfollio Club with investmens in companys like N26, BigID, Shippo, and TrueBill, wich was acquried by RocketCo for 1. 3B. Other investmens include Flexiv, Owkin, VisbyMedikal, Animoca, and AutoX. </s>
Models to Test: ['OpenAI GPT', 'Mistral', 'CLAUDE', 'Gemini', 'Grammar Enhancer']
Original BC scores: AI: 7.88536635809578e-06, HUMAN: 0.9999921321868896
Calibration BC scores: AI: 0.008818342151675485, HUMAN: 0.9911816578483246
Starting MC
MC Score: {'OpenAI GPT': 1.1978447330533474e-12, 'Mistral': 2.7469434957703303e-13, 'CLAUDE': 8.578213092883691e-13, 'Gemini': 6.304846046418989e-13, 'Grammar Enhancer': 0.008818342148714584}
correcting text..: 100%|██████████| 5/5 [00:13<00:00, 2.73s/it]
Original BC scores: AI: 0.9980764389038086, HUMAN: 0.001923577394336462
Calibration BC scores: AI: 0.7272727272727273, HUMAN: 0.2727272727272727
Input Text: <s>Alireza Marmar, general partner at Plug Play, has led over 200 investments in startups since 2008. Notable unicorns include CloudWatch, Flyer, FiscalNote, Shippo, Owkin, and Trulio. He has also been involved in successful exits such as Microsoft's IPO, HealthPocket's acquisition by HealthInsuranceInc. , and Salesforce's acquisition of Facebook. Alireza has received praise for his achievements, including being named a Silicon Valley 40 under 40 in 2018 and a Rising Star by Business Insider. He has had 13 unicorn companies and manages a Billion Ponzi scheme with investments in companies like N26, BigID, Shippo, and TruBill, which was acquired by RocketCoop for 1. 3B. Other investments include Xerox, Owatu, Microsoft, Amazon, and AutoX. </s>
Models to Test: ['OpenAI GPT', 'Mistral', 'CLAUDE', 'Gemini', 'Grammar Enhancer']
Original BC scores: AI: 0.9980764389038086, HUMAN: 0.001923577394336462
Calibration BC scores: AI: 0.7272727272727273, HUMAN: 0.2727272727272727
Starting MC
MC Score: {'OpenAI GPT': 1.7068867157614812e-06, 'Mistral': 6.292188498138414e-10, 'CLAUDE': 8.175567903345952e-09, 'Gemini': 2.868823230740637e-08, 'Grammar Enhancer': 0.7272709828929925}
correcting text..: 100%|██████████| 5/5 [00:14<00:00, 2.96s/it]
correcting text..: 100%|██████████| 5/5 [00:12<00:00, 2.53s/it]
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.2.1) or chardet (4.0.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
2024-05-15 19:31:58.934498: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-15 19:32:05.107700: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Framework not specified. Using pt to export the model.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using the export variant default. Available variants are:
    - default: The default ONNX variant.
***** Exporting submodel 1/1: RobertaForSequenceClassification *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> False
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
    - default: The default ONNX variant.
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
***** Exporting submodel 1/3: T5Stack *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> False
***** Exporting submodel 2/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] < attention_mask.shape[1]:
***** Exporting submodel 3/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
	- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  elif past_key_value.shape[2] != key_value_states.shape[1]:
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data] Package cmudict is already up-to-date!
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.2.1) or chardet (4.0.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Collecting en_core_web_sm==2.3.1
  Using cached en_core_web_sm-2.3.1-py3-none-any.whl
Requirement already satisfied: spacy<2.4.0,>=2.3.0 in /usr/local/lib/python3.9/dist-packages (from en_core_web_sm==2.3.1) (2.3.9)
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.26.4)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (3.0.9)
Requirement already satisfied: thinc<7.5.0,>=7.4.1 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (7.4.6)
Requirement already satisfied: catalogue<1.1.0,>=0.0.7 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.2)
Requirement already satisfied: plac<1.2.0,>=0.9.6 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.1.3)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.25.1)
Requirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.10.1)
Requirement already satisfied: srsly<1.1.0,>=1.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.7)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (4.66.2)
Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.7.11)
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (52.0.0)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.10)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.0.8)
✔ Download and installation successful
You can now load the model via spacy.load('en_core_web_sm')
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:953: UserWarning: Expected 1 arguments for function <function depth_analysis at 0x7f137170dee0>, received 2.
  warnings.warn(
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:961: UserWarning: Expected maximum 1 arguments for function <function depth_analysis at 0x7f137170dee0>, received 2.
  warnings.warn(
WARNING: Invalid HTTP request received. | |
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... | |
To disable this warning, you can either: | |
- Avoid using `tokenizers` before the fork if possible | |
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) | |
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML | |
warnings.warn("Can't initialize NVML") | |
/usr/local/lib/python3.9/dist-packages/optimum/bettertransformer/models/encoder_models.py:301: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:178.) | |
hidden_states = torch._nested_tensor_from_mask(hidden_states, ~attention_mask) | |
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.2.1) or chardet (4.0.0) doesn't match a supported version! | |
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " | |
2024-05-15 22:08:54.473739: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. | |
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. | |
2024-05-15 22:09:00.121158: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT | |
[nltk_data] Downloading package punkt to /root/nltk_data... | |
[nltk_data] Package punkt is already up-to-date! | |
[nltk_data] Downloading package stopwords to /root/nltk_data... | |
[nltk_data] Package stopwords is already up-to-date! | |
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details. | |
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight'] | |
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). | |
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). | |
Framework not specified. Using pt to export the model. | |
Using the export variant default. Available variants are: | |
- default: The default ONNX variant. | |
***** Exporting submodel 1/1: RobertaForSequenceClassification ***** | |
Using framework PyTorch: 2.3.0+cu121 | |
Overriding 1 configuration item(s) | |
- use_cache -> False | |
Framework not specified. Using pt to export the model. | |
Using the export variant default. Available variants are: | |
- default: The default ONNX variant. | |
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41. | |
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4} | |
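The warning above asks for the listed non-default parameters to be moved into a `GenerationConfig` file. A minimal sketch using the `transformers` API, mirroring exactly the parameters the log reports (the output directory name is a placeholder):

```python
from transformers import GenerationConfig

# Reproduce the non-default parameters from the warning.
gen_config = GenerationConfig(
    max_length=512,
    min_length=8,
    num_beams=2,
    no_repeat_ngram_size=4,
)

# Writes generation_config.json into the given directory,
# alongside the model files it belongs to.
gen_config.save_pretrained("./model_dir")  # placeholder path
```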
***** Exporting submodel 1/3: T5Stack ***** | |
Using framework PyTorch: 2.3.0+cu121 | |
Overriding 1 configuration item(s) | |
- use_cache -> False | |
***** Exporting submodel 2/3: T5ForConditionalGeneration ***** | |
Using framework PyTorch: 2.3.0+cu121 | |
Overriding 1 configuration item(s) | |
- use_cache -> True | |
/usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! | |
if causal_mask.shape[1] < attention_mask.shape[1]: | |
***** Exporting submodel 3/3: T5ForConditionalGeneration ***** | |
Using framework PyTorch: 2.3.0+cu121 | |
Overriding 1 configuration item(s) | |
- use_cache -> True | |
/usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! | |
elif past_key_value.shape[2] != key_value_states.shape[1]: | |
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode | |
[nltk_data] Downloading package cmudict to /root/nltk_data... | |
[nltk_data] Package cmudict is already up-to-date! | |
[nltk_data] Downloading package wordnet to /root/nltk_data... | |
[nltk_data] Package wordnet is already up-to-date! | |
Collecting en_core_web_sm==2.3.1 | |
Using cached en_core_web_sm-2.3.1-py3-none-any.whl | |
Requirement already satisfied: spacy<2.4.0,>=2.3.0 in /usr/local/lib/python3.9/dist-packages (from en_core_web_sm==2.3.1) (2.3.9) | |
Requirement already satisfied: plac<1.2.0,>=0.9.6 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.1.3) | |
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.10) | |
Requirement already satisfied: catalogue<1.1.0,>=0.0.7 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.2) | |
Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.7.11) | |
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (52.0.0) | |
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.26.4) | |
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.25.1) | |
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (4.66.2) | |
Requirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.10.1) | |
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (3.0.9) | |
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.0.8) | |
Requirement already satisfied: thinc<7.5.0,>=7.4.1 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (7.4.6) | |
Requirement already satisfied: srsly<1.1.0,>=1.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.7) | |
✔ Download and installation successful
You can now load the model via spacy.load('en_core_web_sm') | |
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:953: UserWarning: Expected 1 arguments for function <function depth_analysis at 0x7f149d70dee0>, received 2. | |
warnings.warn( | |
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:961: UserWarning: Expected maximum 1 arguments for function <function depth_analysis at 0x7f149d70dee0>, received 2. | |
warnings.warn( | |
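The two warnings above say the Gradio event was wired with two input values while `depth_analysis` accepts only one. A common fix is to let the handler accept (and ignore) the extra value; this is a hedged sketch, with `depth_analysis` standing in for the app's real function body:

```python
def depth_analysis(text, _extra=None):
    """Hypothetical handler: takes the text input plus the second
    value Gradio passes (e.g. state), which is deliberately ignored."""
    return f"depth analysis of: {text}"
```

With the optional second parameter, the same function works whether Gradio invokes it with one argument or two.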
WARNING: Invalid HTTP request received. | |