<!-- WEASEL: AUTO-GENERATED DOCS START (do not remove) -->

# 🪐 Weasel Project: Citations of ECFR Banking Regulation in a spaCy pipeline

Custom text classification project for spaCy v3, adapted from the spaCy v3 demo projects.

## 📋 project.yml

The [`project.yml`](project.yml) defines the data assets required by the
project, as well as the available commands and workflows. For details, see the
[Weasel documentation](https://github.com/explosion/weasel).

### ⏯ Commands

The following commands are defined by the project. They
can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run).
Commands are only re-run if their inputs have changed.

#### `format-script`

Execute the Python script `firstStep-format.py`, which performs the initial formatting of a dataset file for the first step of the project. This script extracts text and labels from a dataset file in JSONL format and writes them to a new JSONL file in a specific format.

Usage:

```
spacy project run format-script
```

Explanation:

- The script `firstStep-format.py` reads data from the file specified in the `dataset_file` variable (`data/train200.jsonl` by default).
- It extracts text and labels from each JSON object in the dataset file.
- If both text and at least one label are available, it writes a new JSON object to the output file specified in the `output_file` variable (`data/firstStep_file.jsonl` by default) with the extracted text and label.
- If either text or label is missing in a JSON object, a warning message is printed.
- Upon completion, the script prints a message confirming the processing and the path to the output file.
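For reference, the step described above amounts to a small JSONL-to-JSONL transformation. The following is a minimal sketch of that logic, not the actual contents of `firstStep-format.py`; the `text` and `label` field names are assumptions about the dataset schema:

```python
import json

dataset_file = "data/train200.jsonl"        # input file, as described above
output_file = "data/firstStep_file.jsonl"   # output file, as described above

with open(dataset_file, encoding="utf8") as fin, open(output_file, "w", encoding="utf8") as fout:
    for line_no, line in enumerate(fin, start=1):
        record = json.loads(line)
        text = record.get("text")
        label = record.get("label")          # assumed field name for the label(s)
        if text and label:
            fout.write(json.dumps({"text": text, "label": label}) + "\n")
        else:
            print(f"Warning: record {line_no} is missing text or label, skipping.")

print(f"Done. Formatted data written to {output_file}")
```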
#### `train-text-classification-model`

Train the text classification model for the second step of the project using the `secondStep-score.py` script. This script loads a blank English spaCy model and adds a text classification pipeline to it. It then trains the model using the processed data from the first step.

Usage:

```
spacy project run train-text-classification-model
```

Explanation:

- The script `secondStep-score.py` loads a blank English spaCy model and adds a text classification pipeline to it.
- It reads processed data from the file specified in the `processed_data_file` variable (`data/firstStep_file.jsonl` by default).
- The processed data is converted to spaCy format for training the model.
- The model is trained using the converted data for a specified number of iterations (`n_iter`).
- Losses are printed for each iteration during training.
- Upon completion, the trained model is saved to the specified output directory (`./my_trained_model` by default).
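This step follows the standard spaCy v3 pattern for a `textcat_multilabel` component: create a blank pipeline, register the labels, build `Example` objects, and call `nlp.update` in a loop. A minimal sketch under the assumption that the first-step file stores labels in a `label` field and that `n_iter` is a small constant; the real `secondStep-score.py` may differ in details:

```python
import json
import random
import spacy
from spacy.training import Example
from spacy.util import minibatch

processed_data_file = "data/firstStep_file.jsonl"
output_dir = "./my_trained_model"
n_iter = 10  # assumed value

nlp = spacy.blank("en")
textcat = nlp.add_pipe("textcat_multilabel")

records = [json.loads(line) for line in open(processed_data_file, encoding="utf8")]
# Normalise labels to lists and register every label with the component.
for rec in records:
    rec["label"] = rec["label"] if isinstance(rec["label"], list) else [rec["label"]]
labels = {lab for rec in records for lab in rec["label"]}
for label in labels:
    textcat.add_label(label)

# Convert each record into a spaCy Example with a 0/1 score per label.
train_examples = [
    Example.from_dict(
        nlp.make_doc(rec["text"]),
        {"cats": {label: float(label in rec["label"]) for label in labels}},
    )
    for rec in records
]

optimizer = nlp.initialize(lambda: train_examples)
for i in range(n_iter):
    random.shuffle(train_examples)
    losses = {}
    for batch in minibatch(train_examples, size=8):
        nlp.update(batch, sgd=optimizer, losses=losses)
    print(f"Iteration {i + 1}, losses: {losses}")

nlp.to_disk(output_dir)
```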
#### `classify-unlabeled-data`

Classify the unlabeled data for the third step of the project using the `thirdStep-label.py` script. This script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset.

Usage:

```
spacy project run classify-unlabeled-data
```

Explanation:

- The script `thirdStep-label.py` loads the trained spaCy model from the specified model directory (`./my_trained_model` by default).
- It reads the unlabeled data from the file specified in the `unlabeled_data_file` variable (`data/train.jsonl` by default).
- Each record in the unlabeled data is classified using the loaded model.
- The predicted labels for each record are extracted and stored along with the text.
- The classified data is optionally saved to a file specified in the `output_file` variable (`data/thirdStep_file.jsonl` by default).
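Applying the trained model to unlabeled records is a load-and-predict loop. A sketch of what this step does, assuming the unlabeled file stores the text in a `text` field; the actual `thirdStep-label.py` may store its output differently:

```python
import json
import spacy

model_dir = "./my_trained_model"
unlabeled_data_file = "data/train.jsonl"
output_file = "data/thirdStep_file.jsonl"

nlp = spacy.load(model_dir)

with open(unlabeled_data_file, encoding="utf8") as fin, open(output_file, "w", encoding="utf8") as fout:
    for line in fin:
        record = json.loads(line)
        doc = nlp(record["text"])
        # doc.cats maps every label known to the model to a score between 0 and 1
        fout.write(json.dumps({"text": record["text"], "cats": doc.cats}) + "\n")

print(f"Classified data written to {output_file}")
```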
#### `format-labeled-data`

Format the labeled data for the final step of the project using the `finalStep-formatLabel.py` script. This script processes the classified data from the third step and transforms it into a specific format, considering a threshold for label acceptance.

Usage:

```
spacy project run format-labeled-data
```

Explanation:

- The script `finalStep-formatLabel.py` reads classified data from the file specified in the `input_file` variable (`data/thirdStep_file.jsonl` by default).
- For each record, it determines accepted categories based on a specified threshold.
- It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information.
- The transformed data is written to the file specified in the `output_file` variable (`data/train4465.jsonl` by default).
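Thresholding the predicted scores and building the accept/reject records can be sketched as follows. The 0.5 threshold, the `cats` field name, and the exact shape of the options and meta entries are assumptions for illustration; the real `finalStep-formatLabel.py` defines its own values:

```python
import json

input_file = "data/thirdStep_file.jsonl"
output_file = "data/train4465.jsonl"
threshold = 0.5  # assumed acceptance threshold

with open(input_file, encoding="utf8") as fin, open(output_file, "w", encoding="utf8") as fout:
    for line in fin:
        record = json.loads(line)
        cats = record["cats"]
        accepted = [label for label, score in cats.items() if score >= threshold]
        out = {
            "text": record["text"],
            "cats": cats,                                  # predicted labels with scores
            "accept": accepted,                            # categories above the threshold
            "answer": "accept" if accepted else "reject",
            # Prodigy-style options, carrying the score as meta information
            "options": [
                {"id": label, "text": label, "meta": f"score: {score:.2f}"}
                for label, score in cats.items()
            ],
        }
        fout.write(json.dumps(out) + "\n")

print(f"Formatted data written to {output_file}")
```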
#### `setup-environment`

Set up the Python virtual environment.

#### `review-evaluation-data`

Review the evaluation data in Prodigy and automatically accept annotations.

Usage:

```
spacy project run review-evaluation-data
```

Explanation:

- The command reviews the evaluation data in Prodigy.
- It automatically accepts annotations made during the review process.
- Only sessions allowed by the environment variable PRODIGY_ALLOWED_SESSIONS are permitted to review data. In this case, the session 'reviwer' is allowed.

#### `export-reviewed-evaluation-data`

Export the reviewed evaluation data from Prodigy to a JSONL file named 'goldenEval.jsonl'.

Usage:

```
spacy project run export-reviewed-evaluation-data
```

Explanation:

- The command exports the reviewed evaluation data from Prodigy to a JSONL file.
- The data is exported from the Prodigy database associated with the project named 'project3eval-review'.
- The exported data is saved to the file 'goldenEval.jsonl'.
- This command helps preserve the reviewed annotations for further analysis or processing.
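The same export can be done from Python with Prodigy's database API, which is essentially what the command wraps. A sketch, assuming a local Prodigy installation whose database helper exposes `connect()` and `get_dataset()` (newer Prodigy versions name the latter `get_dataset_examples()`):

```python
import json
from prodigy.components.db import connect

db = connect()  # uses the database settings from prodigy.json
examples = db.get_dataset("project3eval-review")  # reviewed evaluation annotations

with open("goldenEval.jsonl", "w", encoding="utf8") as fout:
    for eg in examples:
        fout.write(json.dumps(eg) + "\n")

print(f"Exported {len(examples)} annotations to goldenEval.jsonl")
```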
#### `import-training-data`

Import the training data into Prodigy from a JSONL file named 'train200.jsonl'.

Usage:

```
spacy project run import-training-data
```

Explanation:

- The command imports the training data into Prodigy from the specified JSONL file.
- The data is imported into the Prodigy database associated with the project named 'prodigy3train'.
- This command prepares the training data for annotation and model training in Prodigy.

#### `import-golden-evaluation-data`

Import the golden evaluation data into Prodigy from a JSONL file named 'goldenEval.jsonl'.

Usage:

```
spacy project run import-golden-evaluation-data
```

Explanation:

- The command imports the golden evaluation data into Prodigy from the specified JSONL file.
- The data is imported into the Prodigy database associated with the project named 'golden3'.
- This command prepares the golden evaluation data for further analysis and model evaluation in Prodigy.

#### `train-model-experiment1`

Train a text classification model using Prodigy with the 'prodigy3train' dataset, evaluating on 'golden3'.

Usage:

```
spacy project run train-model-experiment1
```

Explanation:

- The command trains a text classification model using Prodigy.
- It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset.
- The trained model is saved to the './output/experiment1' directory.

#### `download-model`

Download the English language model 'en_core_web_lg' from spaCy.

Usage:

```
spacy project run download-model
```

Explanation:

- The command downloads the English language model 'en_core_web_lg' from spaCy.
- This model is used as the base model for further data processing and training in the project.

#### `convert-data-to-spacy-format`

Convert the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets.

Usage:

```
spacy project run convert-data-to-spacy-format
```

Explanation:

- The command converts the annotated data from Prodigy to spaCy format.
- It uses the 'prodigy3train' and 'golden3' datasets for conversion.
- The converted data is saved to the './corpus' directory with the base model 'en_core_web_lg'.
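"spaCy format" here means binary `.spacy` files: each annotated example becomes a `Doc` whose `doc.cats` holds a 0/1 score per label, and the docs are serialized with `DocBin`. A conceptual sketch of that conversion (the input file, the `accept` field, and the output path are assumptions; the project command performs the real conversion from the Prodigy datasets):

```python
import json
import spacy
from spacy.tokens import DocBin

nlp = spacy.load("en_core_web_lg")  # base model named above

# Assumed input for illustration: Prodigy-style records with accepted labels under "accept"
records = [json.loads(line) for line in open("data/goldenEval.jsonl", encoding="utf8")]
label_set = {label for rec in records for label in rec.get("accept", [])}

doc_bin = DocBin()
for rec in records:
    doc = nlp.make_doc(rec["text"])
    # textcat_multilabel training data stores a 0/1 score per label on doc.cats
    doc.cats = {label: float(label in rec.get("accept", [])) for label in label_set}
    doc_bin.add(doc)

doc_bin.to_disk("corpus/dev.spacy")  # hypothetical output path under ./corpus
```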
#### `train-custom-model`

Train a custom text classification model using spaCy with the converted data in spaCy format.

Usage:

```
spacy project run train-custom-model
```

Explanation:

- The command trains a custom text classification model using spaCy.
- It uses the converted data in spaCy format located in the './corpus' directory.
- The model is trained using the configuration defined in 'corpus/config.cfg'.
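The same config-driven training can also be invoked from Python via spaCy's documented training helper rather than the project command, assuming a spaCy v3 release that exposes it. The output directory and path overrides below are illustrative; in this project the corpora live under `./corpus` and the configuration is `corpus/config.cfg`:

```python
from spacy.cli.train import train

# Roughly equivalent to: python -m spacy train corpus/config.cfg --output ./output/custom
train(
    "corpus/config.cfg",
    output_path="./output/custom",               # hypothetical output directory
    overrides={
        "paths.train": "corpus/train.spacy",     # assumed corpus file names
        "paths.dev": "corpus/dev.spacy",
    },
)
```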
### ⏭ Workflows

The following workflows are defined by the project. They
can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run)
and will run the specified commands in order. Commands are only re-run if their
inputs have changed.

| Workflow | Steps |
| --- | --- |
| `all` | `format-script` → `train-text-classification-model` → `classify-unlabeled-data` → `format-labeled-data` → `setup-environment` → `review-evaluation-data` → `export-reviewed-evaluation-data` → `import-training-data` → `import-golden-evaluation-data` → `train-model-experiment1` → `download-model` → `convert-data-to-spacy-format` → `train-custom-model` |

### 🗂 Assets

The following assets are defined by the project. They can
be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets)
in the project directory.

| File | Source | Description |
| --- | --- | --- |
| [`corpus/labels/ner.json`](corpus/labels/ner.json) | Local | JSON file containing NER labels |
| [`corpus/labels/parser.json`](corpus/labels/parser.json) | Local | JSON file containing parser labels |
| [`corpus/labels/tagger.json`](corpus/labels/tagger.json) | Local | JSON file containing tagger labels |
| [`corpus/labels/textcat_multilabel.json`](corpus/labels/textcat_multilabel.json) | Local | JSON file containing multilabel text classification labels |
| [`data/eval.jsonl`](data/eval.jsonl) | Local | JSONL file containing evaluation data |
| [`data/firstStep_file.jsonl`](data/firstStep_file.jsonl) | Local | JSONL file containing formatted data from the first step |
| `data/five_examples_annotated5.jsonl` | Local | JSONL file containing five annotated examples |
| [`data/goldenEval.jsonl`](data/goldenEval.jsonl) | Local | JSONL file containing golden evaluation data |
| [`data/thirdStep_file.jsonl`](data/thirdStep_file.jsonl) | Local | JSONL file containing classified data from the third step |
| [`data/train.jsonl`](data/train.jsonl) | Local | JSONL file containing training data |
| [`data/train200.jsonl`](data/train200.jsonl) | Local | JSONL file containing initial training data |
| [`data/train4465.jsonl`](data/train4465.jsonl) | Local | JSONL file containing formatted and labeled training data |
| [`my_trained_model/textcat_multilabel/cfg`](my_trained_model/textcat_multilabel/cfg) | Local | Configuration files for the text classification model |
| [`my_trained_model/textcat_multilabel/model`](my_trained_model/textcat_multilabel/model) | Local | Trained model files for the text classification model |
| [`my_trained_model/vocab/key2row`](my_trained_model/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary |
| [`my_trained_model/vocab/lookups.bin`](my_trained_model/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary |
| [`my_trained_model/vocab/strings.json`](my_trained_model/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary |
| [`my_trained_model/vocab/vectors`](my_trained_model/vocab/vectors) | Local | Directory containing vector files for the vocabulary |
| [`my_trained_model/vocab/vectors.cfg`](my_trained_model/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary |
| [`my_trained_model/config.cfg`](my_trained_model/config.cfg) | Local | Configuration file for the trained model |
| [`my_trained_model/meta.json`](my_trained_model/meta.json) | Local | JSON file containing metadata for the trained model |
| [`my_trained_model/tokenizer`](my_trained_model/tokenizer) | Local | Tokenizer files for the trained model |
| [`output/experiment1/model-best/textcat_multilabel/cfg`](output/experiment1/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 1 |
| [`output/experiment1/model-best/textcat_multilabel/model`](output/experiment1/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/key2row`](output/experiment1/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/lookups.bin`](output/experiment1/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/strings.json`](output/experiment1/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/vectors`](output/experiment1/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/vectors.cfg`](output/experiment1/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/config.cfg`](output/experiment1/model-best/config.cfg) | Local | Configuration file for the best model in experiment 1 |
| [`output/experiment1/model-best/meta.json`](output/experiment1/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 1 |
| [`output/experiment1/model-best/tokenizer`](output/experiment1/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 1 |
| [`output/experiment1/model-last/textcat_multilabel/cfg`](output/experiment1/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 1 |
| [`output/experiment1/model-last/textcat_multilabel/model`](output/experiment1/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/key2row`](output/experiment1/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/lookups.bin`](output/experiment1/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/strings.json`](output/experiment1/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/vectors`](output/experiment1/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/vectors.cfg`](output/experiment1/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/config.cfg`](output/experiment1/model-last/config.cfg) | Local | Configuration file for the last model in experiment 1 |
| [`output/experiment1/model-last/meta.json`](output/experiment1/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 1 |
| [`output/experiment1/model-last/tokenizer`](output/experiment1/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 1 |
| [`output/experiment3/model-best/textcat_multilabel/cfg`](output/experiment3/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 3 |
| [`output/experiment3/model-best/textcat_multilabel/model`](output/experiment3/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/key2row`](output/experiment3/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/lookups.bin`](output/experiment3/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/strings.json`](output/experiment3/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/vectors`](output/experiment3/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/vectors.cfg`](output/experiment3/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/config.cfg`](output/experiment3/model-best/config.cfg) | Local | Configuration file for the best model in experiment 3 |
| [`output/experiment3/model-best/meta.json`](output/experiment3/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 3 |
| [`output/experiment3/model-best/tokenizer`](output/experiment3/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 3 |
| [`output/experiment3/model-last/textcat_multilabel/cfg`](output/experiment3/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 3 |
| [`output/experiment3/model-last/textcat_multilabel/model`](output/experiment3/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/key2row`](output/experiment3/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/lookups.bin`](output/experiment3/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/strings.json`](output/experiment3/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/vectors`](output/experiment3/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/vectors.cfg`](output/experiment3/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/config.cfg`](output/experiment3/model-last/config.cfg) | Local | Configuration file for the last model in experiment 3 |
| [`output/experiment3/model-last/meta.json`](output/experiment3/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 3 |
| [`output/experiment3/model-last/tokenizer`](output/experiment3/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 3 |
| [`python_Code/finalStep-formatLabel.py`](python_Code/finalStep-formatLabel.py) | Local | Python script for formatting labeled data in the final step |
| [`python_Code/firstStep-format.py`](python_Code/firstStep-format.py) | Local | Python script for formatting data in the first step |
| [`python_Code/five_examples_annotated.ipynb`](python_Code/five_examples_annotated.ipynb) | Local | Jupyter notebook containing five annotated examples |
| [`python_Code/secondStep-score.py`](python_Code/secondStep-score.py) | Local | Python script for scoring data in the second step |
| [`python_Code/thirdStep-label.py`](python_Code/thirdStep-label.py) | Local | Python script for labeling data in the third step |
| [`python_Code/train_eval_split.ipynb`](python_Code/train_eval_split.ipynb) | Local | Jupyter notebook for training and evaluation data splitting |
| [`TerminalCode.txt`](TerminalCode.txt) | Local | Text file containing terminal code |
| [`README.md`](README.md) | Local | Markdown file containing project documentation |
| [`prodigy.json`](prodigy.json) | Local | JSON file containing Prodigy configuration |

<!-- WEASEL: AUTO-GENERATED DOCS END (do not remove) -->