hogru committed
Commit 06c754e · 1 Parent(s): dcb0e52

Update README.md

Files changed (1): README.md (+6 -3)
README.md CHANGED

@@ -3,6 +3,9 @@ license: mit
 tags:
 - chemistry
 - smarts
+widget:
+- text: "^"
+  example_title: "Sample reaction template | SMARTS"
 ---

 # Model Card for Model hogru/MolReactGen-USPTO50K-Reaction-Templates
@@ -63,11 +66,11 @@ The model generates reaction templates that are similar to the USPTO-50K trainin

 The default Hugging Face `Trainer()` has been used, with an `EarlyStoppingCallback()`.

-#### Preprocessing
+### Preprocessing

 The training data was pre-processed with a `PreTrainedTokenizerFast()` trained on the training data with a bespoke `RegEx` pre-tokenizer which "understands" the SMARTS syntax.

-#### Training Hyperparameters
+### Training Hyperparameters

 - **Batch size:** 8
 - **Gradient accumulation steps:** 4
@@ -86,7 +89,7 @@ More configuration (options) can be found in the [`conf`](https://github.com/hog

 Please see the slides / the poster mentioned above.

-#### Metrics
+### Metrics

 <!-- These are the evaluation metrics being used, ideally with a description of why. -->
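
The diff's context lines mention a `PreTrainedTokenizerFast()` trained with a bespoke regex pre-tokenizer that "understands" SMARTS. A minimal sketch of that idea using the `tokenizers` library — the regex, vocabulary, and toy reaction template below are illustrative stand-ins, not the author's actual pattern or data:

```python
# Sketch only: building a SMARTS-aware tokenizer with a regex pre-tokenizer.
# The pattern is a simplified stand-in for the model's bespoke pre-tokenizer.
from tokenizers import Regex, Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Split
from tokenizers.trainers import WordLevelTrainer

# Simplified SMARTS pattern: bracket atoms, the reaction arrow ">>",
# two-letter halogens, then single letters, digits, and bond/branch symbols.
SMARTS_PATTERN = Regex(r"\[[^\]]+\]|>>|Br|Cl|[A-Za-z]|[0-9]|[().:=#+@/\\~*-]")

tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
# "isolated" keeps each regex match as its own token
tokenizer.pre_tokenizer = Split(SMARTS_PATTERN, behavior="isolated")

# "^" matches the BOS token added to the model card's widget example
trainer = WordLevelTrainer(special_tokens=["[UNK]", "^", "_"])
tokenizer.train_from_iterator(["[C:1][OH:2]>>[C:1]=[O:2]"], trainer=trainer)

print(tokenizer.encode("[C:1][OH:2]>>[C:1]=[O:2]").tokens)
# ['[C:1]', '[OH:2]', '>>', '[C:1]', '=', '[O:2]']
```

The trained object could then be wrapped with `PreTrainedTokenizerFast(tokenizer_object=tokenizer)` from `transformers` for use with `Trainer()`.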