haneulpark committed
Commit 80ad6d2 · verified · 1 Parent(s): 720771b

Update README.md

Files changed (1)
  1. README.md +81 -3
README.md CHANGED
@@ -9,9 +9,9 @@ pretty_name: AggregatorAdvisor
  size_categories:
  - 10K<n<100K
  dataset_summary: >-
-  AggregatorAdvisor identifies molecules that are known to aggregate or may aggregate in biochemical assays.
-  The approach is based on the chemical similarity to known aggregators, and physical properties.
-  The AggregatorAdvisor dataset contains 12645 compounds from 20 different sources.
+  AggregatorAdvisor identifies molecules that are known to aggregate, or may aggregate, in biochemical assays.
+  The approach is based on chemical similarity to known aggregators and on physical properties.
+  The AggregatorAdvisor dataset contains 12,645 compounds from 20 different sources.
  citation: >-
   @article
   {Irwin2015, title = {An Aggregation Advisor for Ligand Discovery},
@@ -60,3 +60,81 @@ dataset_info:
 
  # Aggregator Advisor
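+
+ The advisor combines chemical similarity to known aggregators with physical properties (see the dataset summary above). Purely as an illustration, here is a minimal RDKit sketch of such a check; the `looks_like_aggregator` helper, the Morgan fingerprint choice, and the similarity and logP thresholds are all assumptions, not the published method:
+
+     from rdkit import Chem, DataStructs
+     from rdkit.Chem import AllChem, Crippen
+
+     def looks_like_aggregator(smiles, known_aggregator_smiles,
+                               sim_threshold = 0.85, logp_threshold = 3.0):
+         """Hypothetical helper: flag a molecule that is hydrophobic (Crippen logP)
+         and similar to a known aggregator (Morgan fingerprint Tanimoto)."""
+         mol = Chem.MolFromSmiles(smiles)
+         # hydrophobicity gate (threshold is an assumption)
+         if Crippen.MolLogP(mol) < logp_threshold:
+             return False
+         fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius = 2, nBits = 2048)
+         # similarity gate against each known aggregator (threshold is an assumption)
+         for ref_smiles in known_aggregator_smiles:
+             ref_fp = AllChem.GetMorganFingerprintAsBitVect(
+                 Chem.MolFromSmiles(ref_smiles), radius = 2, nBits = 2048)
+             if DataStructs.TanimotoSimilarity(fp, ref_fp) >= sim_threshold:
+                 return True
+         return False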
 
+
+ ## Quickstart Usage
+
+ ### Load a dataset in python
+ Each subset can be loaded into python using the Huggingface [datasets](https://huggingface.co/docs/datasets/index) library.
+ First, from the command line, install the `datasets` library
+
+     $ pip install datasets
+
+ then, from within python, load the `datasets` library
+
+     >>> import datasets
+
+ and load the `AggregatorAdvisor` dataset, e.g.,
+
+     >>> AggregatorAdvisor = datasets.load_dataset("maomlab/AggregatorAdvisor", name = "AggregatorAdvisor")
+     Downloading readme: 100%|██████████| 5.23k/5.23k [00:00<00:00, 35.1kB/s]
+     Downloading data: 100%|██████████| 34.5k/34.5k [00:00<00:00, 155kB/s]
+     Downloading data: 100%|██████████| 97.1k/97.1k [00:00<00:00, 587kB/s]
+     Generating test split: 100%|██████████| 594/594 [00:00<00:00, 12705.92 examples/s]
+     Generating train split: 100%|██████████| 1788/1788 [00:00<00:00, 43895.91 examples/s]
+
+ and inspecting the loaded dataset
+
+     >>> AggregatorAdvisor
+     DatasetDict({
+         test: Dataset({
+             features: ['new SMILES', 'label'],
+             num_rows: 594
+         })
+         train: Dataset({
+             features: ['new SMILES', 'label'],
+             num_rows: 1788
+         })
+     })
+
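+ For quick exploration, a split can also be converted to a pandas data frame with the standard `datasets` API (a minimal sketch; the columns are the features shown above):
+
+     >>> AggregatorAdvisor["train"].to_pandas().head()
+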
+ ### Use a dataset to train a model
+ One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia.
+ First, from the command line, install the `MolFlux` library with `catboost` and `rdkit` support
+
+     pip install 'molflux[catboost,rdkit]'
+
+ then, within python, import the pieces needed to load, featurise, fit, and evaluate a catboost model
+
+     import json
+     from datasets import load_dataset
+     from molflux.datasets import featurise_dataset
+     from molflux.features import load_from_dicts as load_representations_from_dicts
+     from molflux.modelzoo import load_from_dict as load_model_from_dict
+     from molflux.metrics import load_suite
+
+ The dataset ships with predefined train and test splits, so the model can be fit and evaluated directly
+
+     split_dataset = load_dataset('maomlab/AggregatorAdvisor', name = 'AggregatorAdvisor')
+
+     # featurise the SMILES column; each representation adds a column named '<column>::<representation>'
+     split_featurised_dataset = featurise_dataset(
+         split_dataset,
+         column = "new SMILES",
+         representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))
+
+     model = load_model_from_dict({
+         "name": "cat_boost_classifier",
+         "config": {
+             "x_features": ['new SMILES::morgan', 'new SMILES::maccs_rdkit'],
+             "y_features": ['label']}})
+
+     model.train(split_featurised_dataset["train"])
+     preds = model.predict(split_featurised_dataset["test"])
+
+     classification_suite = load_suite("classification")
+
+     scores = classification_suite.compute(
+         references = split_featurised_dataset["test"]['label'],
+         predictions = preds["cat_boost_classifier::label"])
+
+ ## Citation