markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127) |
---|---|---|---|---|---|
Squelching Line Output. You might have noticed the annoying line of the form `[]` before the plots. This is because the `.plot` function actually produces output. Sometimes we don't want to display this output; we can suppress it with a semicolon as follows. | plt.plot(X); | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Adding Axis Labels. No self-respecting quant leaves a graph without labeled axes. Here are some commands to help with that. | X = np.random.normal(0, 1, 100)
X2 = np.random.normal(0, 1, 100)
plt.plot(X);
plt.plot(X2);
plt.xlabel('Time') # The data we generated is unitless, but don't forget units in general.
plt.ylabel('Returns')
plt.legend(['X', 'X2']); | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Generating Statistics. Let's use `numpy` to take some simple statistics. | Y = np.mean(X)
Y
Y = np.std(X)
Y | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Getting Real Pricing Data. Randomly sampled data can be great for testing ideas, but let's get some real data. We can use the `yfinance` package to download it, as in the cell below. You can use the `?` syntax as discussed above to get more information on a function's arguments. | !pip install yfinance
!pip install yahoofinancials
import yfinance as yf
from yahoofinancials import YahooFinancials
# Reference: https://towardsdatascience.com/a-comprehensive-guide-to-downloading-stock-prices-in-python-2cd93ff821d4
data = yf.download('MSFT', start='2012-01-01', end='2015-06-01', progress=False) | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Our data is now a dataframe. You can see the datetime index and the columns with different pricing data. | data | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
This is a pandas dataframe, so we can index in to just get price like this. For more info on pandas, please [click here](http://pandas.pydata.org/pandas-docs/stable/10min.html). | X = data['Open']
X | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Because there is now also date information in our data, we provide two series to `.plot`. `X.index` gives us the datetime index, and `X.values` gives us the pricing values. These are used as the X and Y coordinates to make a graph. | plt.plot(X.index, X.values)
plt.ylabel('Price')
plt.legend(['MSFT']);
np.mean(X)
np.std(X) | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Getting Returns from Prices. We can use the `pct_change` function to get returns. Notice how we drop the first element after doing this, as it will be `NaN` (nothing -> something results in a NaN percent change). | R = X.pct_change()[1:] | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
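To see why that first element is `NaN`, here is a minimal toy example (the price values are made up purely for illustration):

```python
import pandas as pd

# Hypothetical three-day price series: there is no price before the first day,
# so the first percent change cannot be computed and comes out as NaN.
prices = pd.Series([100.0, 102.0, 99.96])
print(prices.pct_change())      # day 0 -> NaN, day 1 -> 0.02, day 2 -> -0.02
print(prices.pct_change()[1:])  # dropping the first element removes the NaN
```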
We can plot the returns distribution as a histogram. | plt.hist(R, bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['MSFT Returns']); | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Get statistics again. | np.mean(R)
np.std(R) | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Now let's go backwards and generate data out of a normal distribution using the statistics we estimated from Microsoft's returns. We'll see that we have good reason to suspect Microsoft's returns may not be normal, as the resulting normal distribution looks far different. | plt.hist(np.random.normal(np.mean(R), np.std(R), 10000), bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['Normally Distributed Returns']); | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
Generating a Moving Average. `pandas` has some nice tools to allow us to generate rolling statistics. Here's an example. Notice how there's no moving average for the first 60 days, as we don't have 60 days of data on which to generate the statistic. | # Take the average of the last 60 days at each timepoint.
MAVG = X.rolling(60).mean()
plt.plot(X.index, X.values)
plt.plot(MAVG.index, MAVG.values)
plt.ylabel('Price')
plt.legend(['MSFT', '60-day MAVG']); | _____no_output_____ | MIT | Lab01/lmbaeza-lecture1.ipynb | lmbaeza/numerical-methods-2021 |
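A small optional variation (a sketch, not part of the original notebook): if you would rather have the moving average defined from the very first day, pandas lets you relax the window requirement with `min_periods`, averaging over however many observations are available so far.

```python
# Optional: start the rolling mean at day 1 by allowing partial windows.
MAVG_partial = X.rolling(60, min_periods=1).mean()
plt.plot(X.index, X.values)
plt.plot(MAVG_partial.index, MAVG_partial.values)
plt.ylabel('Price')
plt.legend(['MSFT', '60-day MAVG (min_periods=1)']);
```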
#@markdown Before starting please save the notebook in your drive by clicking on `File -> Save a copy in drive`
#@markdown Check how many CPUs we have, you can choose a high memory instance to get 4.
import os
print(f"We have {os.cpu_count()} CPU cores.")
#@markdown Mount google drive
from google.colab import drive, output
drive.mount('/content/drive')
from pathlib import Path
if not Path("/content/drive/My Drive/IRCMS_GAN_collaborative_database").exists():
raise RuntimeError(
"Shortcut to our shared drive folder doesn't exits.\n\n"
"\t1. Go to the google drive web UI\n"
"\t2. Right click shared folder IRCMS_GAN_collaborative_database and click \"Add shortcut to Drive\""
)
def clear_on_success(msg="Ok!"):
if _exit_code == 0:
output.clear()
print(msg)
#@markdown Configuration
#@markdown Directories can be found via file explorer on the left by navigating into `drive` to the desired folders.
#@markdown Then right-click and `Copy path`.
audio_db_dir = "/content/drive/My Drive/AUDIO DATABASE/RAW Sessions/Roberto Studio Material" #@param {type:"string"}
resample_dir = "/content/drive/My Drive/AUDIO DATABASE/RAW Sessions/Roberto Studio Material 22050" #@param {type:"string"}
sample_rate = 22050 #@param {type:"string"}
sample_rate = int(sample_rate)
audio_db_dir = Path(audio_db_dir)
resample_dir = Path(resample_dir)
resample_dir.mkdir(parents=True, exist_ok=True)
if not audio_db_dir.exists():
raise RuntimeError("audio_db_dir {audio_db_dir} does not exists.")
#@markdown Install recent ffmpeg.
!add-apt-repository -y ppa:jonathonf/ffmpeg-4
!apt install ffmpeg
clear_on_success("ffmpeg installed.")
!ffmpeg -version
#@markdown Resample audio files.
import subprocess
from pathlib import Path
from joblib import Parallel, delayed
def convert(input, output, sample_rate):
command = ["ffmpeg", "-i", str(input), "-y", "-ar", str(sample_rate), str(output)]
try:
return subprocess.check_output(command, stderr=subprocess.STDOUT,)
except subprocess.CalledProcessError as exc:
print(f"Return code: {exc.returncode}\n", exc.output)
raise
def main(*, in_dir, out_dir, sample_rate):
in_dir = Path(in_dir)
out_dir = Path(out_dir)
in_paths = list(Path(in_dir).rglob("*.*"))
out_paths = [out_dir / in_path.relative_to(in_dir) for in_path in in_paths]
for sub_dir in set(out_path.parent for out_path in out_paths):
sub_dir.mkdir(exist_ok=True, parents=True)
Parallel(n_jobs=-1, backend='multiprocessing', verbose=2)(
delayed(convert)(in_path, out_path, sample_rate)
for in_path, out_path in zip(in_paths, out_paths)
)
main(in_dir=audio_db_dir, out_dir=resample_dir, sample_rate=sample_rate)
print('Done!')
| _____no_output_____ | MIT | Resample_Audio.ipynb | materialvision/melgan-neurips |
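Before launching the full parallel run, it can help to sanity-check `convert` on a single file. The cell below is a hypothetical usage example rather than part of the original notebook; it assumes the source folder contains `.wav` files and invents a `smoke_test.wav` output name.

```python
# Hypothetical smoke test: resample the first .wav file found in the source folder.
test_inputs = sorted(audio_db_dir.rglob("*.wav"))
if test_inputs:
    test_out = resample_dir / "smoke_test.wav"
    convert(test_inputs[0], test_out, sample_rate)
    print(f"Converted {test_inputs[0]} -> {test_out}")
else:
    print("No .wav files found to test with.")
```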
Talktorial 5 Compound clustering Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin Calvinna Caswara and Gizem Spriewald Aim of this talktorialSimilar compounds might bind to the same targets and show similar effects. Based on this similar property principle, compound similarity can be used to build chemical groups via clustering. From such a clustering, a diverse set of compounds can also be selected from a larger set of screening compounds for further experimental testing. Learning goalsIn this talktorial, we will learn more about:* How to group compounds and how to pick a diverse set of compounds* Short introduction to two clustering algorithms* Application of the Butina clustering algorithm to a sample compound set Theory* Introduction to clustering and Jarvis-Patrick algorithm* Detailed explanation of Butina clustering* Picking diverse compounds Practical* Examples for Butina clustering and compound picking References* Butina, D. Unsupervised Data Base Clustering Based on Daylight’s Fingerprint and Tanimoto Similarity: A Fast and Automated Way To Cluster Small and Large Data Set. J. Chem. Inf. Comput. Sci. 1999.* Leach, Andrew R., Gillet, Valerie J. An Introduction to Chemoinformatics. 2003* Jarvis-Patrick Clustering: http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/Jarvis-Patrick_Clustering_Overview.htm* TDT Tutorial: https://github.com/sriniker/TDT-tutorial-2014/blob/master/TDT_challenge_tutorial.ipynb* rdkit clustering documentation: http://rdkit.org/docs/Cookbook.htmlclustering-molecules _____________________________________________________________________________________________________________________ Theory Introduction to clustering and Jarvis-Patrick algorithm[Clustering](https://en.wikipedia.org/wiki/Cluster_analysis) can be defined as 'the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters)'.Compound clustering in pharmaceutical research is often based on chemical or structural similarity between compounds to find groups that share properties as well as to design a diverse and representative set for further analysis. General procedure: * Method are based on clustering data by similarity between neighboring points. * In cheminformatics, compounds are often encoded as molecular fingerprints and similarity can be described by the Tanimoto similarity (see **talktorial 4**). * As a quick reminder: Fingerprints are binary vectors where each bit indicates the presence or absence of a particular substructural fragment within a molecule. * Similarity (or distance) matrix: The similarity between each pair of molecules represented by binary fingerprints is most frequently quantified using the Tanimoto coefficient, which measures the number of common features (bits). * The value of the Tanimoto coefficient ranges from zero (no similarity) to one (high similarity).There are a number of clustering algorithms available, with the [Jarvis-Patrick clustering](http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/Jarvis-Patrick_Clustering_Overview.htm) being one of the most widely used algorithms in the pharmaceutical context.Jarvis-Patrick clustering algorithm is defined by two parameters K and Kmin:* Calculate the set of K nearest neighbors for each molecule. 
* Two molecules cluster together if * they are in each others list of nearest neighbors * they have at least Kmin of their K nearest neighbors in common.The Jarvis-Patrick clustering algorithm is deterministic and able to deal with large sets of molecules in a matter of a few hours. However, a downside lies in the fact that this method tends to produce large heterogeneous clusters (see ref. Butina clustering). More clustering algorithms can also be found in the [scikit-learn clustering module](http://scikit-learn.org/stable/modules/clustering.html). Detailed explanation of Butina clusteringButina clustering ([*J. Chem. Inf. Model.*(1999), 39(4), 747](https://pubs.acs.org/doi/abs/10.1021/ci9803381)) was developed to identify smaller but homogeneous clusters, with the prerequisite that (at least) the cluster centroid will be more similar than a given threshold to every other molecule in the cluster.These are the key steps in this clustering approach (see flowchart below): 1. Data preparation and compound encoding* To identify chemical similarities, the compounds in the input data (e.g. given as SMILES) will be encoded as molecular fingerprints, e.g., RDK5 fingerprint which is a subgraph-based fingerprint similar to the well known [Daylight Fingerprint](/http://www.daylight.com/dayhtml/doc/theory/theory.finger.html) (which was used in the original publication). 2. Tanimoto similarity (or distance) matrix* The similarity between two fingerprints is calculated using the Tanimoto coefficient.* Matrix with Tanimoto similarities between all possible molecule/fingerprint pairs (n*n similarity matrix with n=number of molecules, upper triangle matrix used only)* Equally, the distances matrix can be calculated (1 - similarity) 3. Clustering molecules: Centroids and exclusion spheres Note: Molecules will be clustered together, if they have a maximum distance below a specified cut-off from the cluster centroid (if distance matrix is used) or if they have a minimum similarity above the specified cut-off (if similarity matrix is used). * **Identification of potential cluster centroids** * The cluster centroid is the molecule within a given cluster which has the largest number of neighbors. * Annotate neighbors: For each molecule count all molecules with a Tanimoto distance below a given threshold. * Sort the molecules by their number of neighbors in descending order, so that potential cluster centroids (i.e. the compounds with the largest number of neighbors) are placed at the top of the file. * **Clustering based on the exclusion spheres** * Starting with the first molecule (centroid) in the sorted list * All molecules with a Tanimoto index above or equal to the cut-off value used for clustering then become members of that cluster (in case of similarity). * Each molecule that has been identified as a member of the given cluster is flagged and removed from further comparisons. Thus, flagged molecules cannot become either another cluster centroid or a member of another cluster. This process is like putting an exclusion sphere around the newly formed cluster. * Once the first compound in the list has found all its neighbors, the first available (i.e. not flagged) compound at the top of the list becomes the new cluster centroid. * The same process is repeated for all other unflagged molecules down the list. * Molecules that have not been flagged by the end of the clustering process become singletons. 
* Note that some molecules assigned as singletons can have neighbors at the given Tanimoto similarity index, but those neighbors have been excluded by a stronger cluster centroid. (A minimal pure-Python sketch of this exclusion-sphere procedure is shown right after the `ClusterFps` helper in the practical part below.) | from IPython.display import IFrame
IFrame('images/butina_full.pdf', width=600, height=300) | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
*Figure 1:* Theoretical example of the Butina clustering algorithm, drawn by Calvinna Caswara. Picking diverse compoundsFinding representative sets of compounds is a concept often used in pharmaceutical industry.* Let's say, we applied a virtual screening campaign but only have a limited amount of resources to experimentally test a few compounds in a confirmatory assay. * In order to obtain as much information as possible from this screen, we want to select a diverse set. Thus, we pick one representative of each chemical series in our list of potentially active compounds.Another scenario would be to select one series to gain information about the structure-activity relationship, i.e., how do small structural changes in the molecule affect the in vitro activity. Practical Example using the Butina Clustering AlgorithmApplication is following the example of [TDT tutorial notebook by S. Riniker and G. Landrum](https://github.com/sriniker/TDT-tutorial-2014/blob/master/TDT_challenge_tutorial.ipynb). 1. Load data and calculate fingerprintsIn this part the data is prepared and fingerprints are calculated. | # Import packages
import pandas as pd
import numpy
import matplotlib.pyplot as plt
import time
import random
from random import choices
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs
from rdkit.DataStructs import cDataStructs
from rdkit.ML.Cluster import Butina
from rdkit.Chem import Draw
from rdkit.Chem import rdFingerprintGenerator
from rdkit.Chem.Draw import IPythonConsole
# Load and have a look into data
# Filtered data taken from talktorial 2
compound_df= pd.read_csv('../data/T2/EGFR_compounds_lipinski.csv',sep=";", index_col=0)
print('data frame shape:',compound_df.shape)
compound_df.head()
# Create molecules from SMILES and store in array
mols = []
for i in compound_df.index:
chemblId = compound_df['molecule_chembl_id'][i]
smiles = compound_df['smiles'][i]
mols.append((Chem.MolFromSmiles(smiles), chemblId))
mols[0:5]
# Create fingerprints for all molecules
rdkit_gen = rdFingerprintGenerator.GetRDKitFPGenerator(maxPath=5)
fingerprints = [rdkit_gen.GetFingerprint(m) for m,idx in mols]
# How many compounds/fingerprints do we have?
print('Number of compounds converted:',len(fingerprints))
print('Fingerprint length per compound:',len(fingerprints[0])) | Number of compounds converted: 4925
Fingerprint length per compound: 2048
| CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
2. Tanimoto similarity and distance matrix. Now that we have generated fingerprints, we move on to the next step: the identification of potential cluster centroids. For this, we define functions to calculate the Tanimoto similarity and distance matrix. | # Calculate distance matrix for fingerprint list
def Tanimoto_distance_matrix(fp_list):
dissimilarity_matrix = []
for i in range(1,len(fp_list)):
similarities = DataStructs.BulkTanimotoSimilarity(fp_list[i],fp_list[:i])
# Since we need a distance matrix, calculate 1-x for every element in similarity matrix
dissimilarity_matrix.extend([1-x for x in similarities])
return dissimilarity_matrix | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
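For reference, the Tanimoto similarity that `BulkTanimotoSimilarity` computes for two binary fingerprints $A$ and $B$ is

$$T(A,B) = \frac{c}{a + b - c}$$

where $a$ and $b$ are the numbers of bits set in $A$ and $B$, and $c$ is the number of bits set in both; the distance used above is simply $1 - T(A,B)$.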
See also [rdkit Cookbook: Clustering molecules](http://rdkit.org/docs/Cookbook.htmlclustering-molecules). | # Example: Calculate single similarity of two fingerprints
sim = DataStructs.TanimotoSimilarity(fingerprints[0],fingerprints[1])
print ('Tanimoto similarity: %4.2f, distance: %4.2f' %(sim,1-sim))
# Example: Calculate distance matrix (distance = 1-similarity)
Tanimoto_distance_matrix(fingerprints)[0:5]
# Side note: That looked like a list and not a matrix.
# But it is a triangular similarity matrix in the form of a list
n = len(fingerprints)
# Calculate number of elements in triangular matrix via n*(n-1)/2
elem_triangular_matr = (n*(n-1))/2
print(int(elem_triangular_matr), len(Tanimoto_distance_matrix(fingerprints))) | 12125350 12125350
| CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
3. Clustering molecules: Centroids and exclusion spheres. In this part, we cluster the molecules and look at the results. Define a clustering function. | # Input: Fingerprints and a threshold for the clustering
def ClusterFps(fps,cutoff=0.2):
# Calculate Tanimoto distance matrix
distance_matr = Tanimoto_distance_matrix(fps)
# Now cluster the data with the implemented Butina algorithm:
clusters = Butina.ClusterData(distance_matr,len(fps),cutoff,isDistData=True)
return clusters | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
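For readers who want to see conceptually what `Butina.ClusterData` is doing, here is a minimal pure-Python sketch of the exclusion-sphere procedure described in the theory part. It is a simplified illustration (using a similarity cut-off and a full all-against-all comparison), not the RDKit implementation called in `ClusterFps` above.

```python
from rdkit import DataStructs

def butina_sketch(fps, cutoff=0.7):
    """Toy exclusion-sphere clustering; `cutoff` is a *similarity* threshold here."""
    n = len(fps)
    # 1. For every molecule, collect the neighbors above the similarity cut-off.
    neighbors = []
    for i in range(n):
        sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps)
        neighbors.append([j for j, s in enumerate(sims) if s >= cutoff and j != i])
    # 2. Potential centroids: molecules sorted by neighbor count (descending).
    order = sorted(range(n), key=lambda i: len(neighbors[i]), reverse=True)
    flagged, sketch_clusters = set(), []
    # 3. Exclusion spheres: each unflagged centroid claims its unflagged neighbors,
    #    which are then removed from all further comparisons.
    for i in order:
        if i in flagged:
            continue
        members = [i] + [j for j in neighbors[i] if j not in flagged]
        flagged.update(members)
        sketch_clusters.append(members)
    return sketch_clusters  # unclaimed molecules end up as singleton clusters
```

For example, `butina_sketch(fingerprints[:100], cutoff=0.7)` groups the first 100 fingerprints into exclusion-sphere clusters, with the centroid as the first index of each cluster, mirroring the ordering convention used below.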
Cluster the molecules based on their fingerprint similarity. | # Run the clustering procedure for the dataset
clusters = ClusterFps(fingerprints,cutoff=0.3)
# Give a short report about the numbers of clusters and their sizes
num_clust_g1 = len([c for c in clusters if len(c) == 1])
num_clust_g5 = len([c for c in clusters if len(c) > 5])
num_clust_g25 = len([c for c in clusters if len(c) > 25])
num_clust_g100 = len([c for c in clusters if len(c) > 100])
print("total # clusters: ", len(clusters))
print("# clusters with only 1 compound: ", num_clust_g1)
print("# clusters with >5 compounds: ", num_clust_g5)
print("# clusters with >25 compounds: ", num_clust_g25)
print("# clusters with >100 compounds: ", num_clust_g100)
# Plot the size of the clusters
fig = plt.figure(1, figsize=(10, 4))
plt1 = plt.subplot(111)
plt.axis([0, len(clusters), 0, len(clusters[0])+1])
plt.xlabel('Cluster index', fontsize=20)
plt.ylabel('Number of molecules', fontsize=20)
plt.tick_params(labelsize=16)
plt1.bar(range(1, len(clusters)), [len(c) for c in clusters[:len(clusters)-1]], lw=0)
plt.show() | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
How to pick a reasonable cutoff? Since the clustering result depends on the threshold chosen by the user, we will have a closer look at the choice of a cutoff. | for i in numpy.arange(0., 1.0, 0.1):
clusters = ClusterFps(fingerprints,cutoff=i)
fig = plt.figure(1, figsize=(10, 4))
plt1 = plt.subplot(111)
plt.axis([0, len(clusters), 0, len(clusters[0])+1])
plt.xlabel('Cluster index', fontsize=20)
plt.ylabel('Number of molecules', fontsize=20)
plt.tick_params(labelsize=16)
plt.title('Threshold: '+str('%3.1f' %i), fontsize=20)
plt1.bar(range(1, len(clusters)), [len(c) for c in clusters[:len(clusters)-1]], lw=0)
plt.show() | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
As you can see, the higher the threshold (distance cutoff), the more molecules are considered similar and, therefore, grouped into fewer clusters. The lower the threshold, the more small clusters and "singletons" appear. * The smaller the distance cut-off, the more similar the compounds are required to be to belong to one cluster. Looking at the plots above, we decided to choose a distance threshold of 0.2. There are not many singletons, and the cluster size distribution is smooth rather than extreme. | dist_co = 0.2
clusters = ClusterFps(fingerprints,cutoff=dist_co)
# Plot the size of the clusters - save plot
fig = plt.figure(1, figsize=(8, 2.5))
plt1 = plt.subplot(111)
plt.axis([0, len(clusters), 0, len(clusters[0])+1])
plt.xlabel('Cluster index', fontsize=20)
plt.ylabel('# molecules', fontsize=20)
plt.tick_params(labelsize=16)
plt1.bar(range(1, len(clusters)), [len(c) for c in clusters[:len(clusters)-1]], lw=0)
plt.title('Threshold: '+str('%3.1f' %dist_co), fontsize=20)
plt.savefig("../data/T5/cluster_dist_cutoff_%4.2f.png" %dist_co, dpi=300, bbox_inches="tight", transparent=True)
print('Number of clusters %d from %d molecules at distance cut-off %4.2f' %(len(clusters), len(mols), dist_co))
print('Number of molecules in largest cluster:', len(clusters[0]))
print('Similarity between two random points in same cluster %4.2f'%DataStructs.TanimotoSimilarity(fingerprints[clusters[0][0]],fingerprints[clusters[0][1]]))
print('Similarity between two random points in different cluster %4.2f'%DataStructs.TanimotoSimilarity(fingerprints[clusters[0][0]],fingerprints[clusters[1][0]])) | Number of clusters 1225 from 4925 molecules at distance cut-off 0.20
Number of molecules in largest cluster: 146
Similarity between two random points in same cluster 0.82
Similarity between two random points in different cluster 0.22
| CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
Cluster visualization: 10 examples from the largest cluster. Now, let's have a closer look at the first 10 molecular structures of the largest cluster. | print ('Ten molecules from largest cluster:')
# Draw molecules
Draw.MolsToGridImage([mols[i][0] for i in clusters[0][:10]],
legends=[mols[i][1] for i in clusters[0][:10]],
molsPerRow=5)
# Save molecules from largest cluster for MCS analysis in Talktorial 9
w = Chem.SDWriter('../data/T5/molSet_largestCluster.sdf')
# Prepare data
tmp_mols=[]
for i in clusters[0]:
tmp = mols[i][0]
tmp.SetProp("_Name",mols[i][1])
tmp_mols.append(tmp)
# Write data
for m in tmp_mols: w.write(m) | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
10 examples from second largest cluster | print ('Ten molecules from second largest cluster:')
# Draw molecules
Draw.MolsToGridImage([mols[i][0] for i in clusters[1][:10]],
legends=[mols[i][1] for i in clusters[1][:10]],
molsPerRow=5) | Ten molecules from second largest cluster:
| CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
The first ten molecules in the respective clusters do indeed look similar to each other, and many share a common scaffold (visually detected). See **talktorial 6** for more information on how to calculate the maximum common substructure (MCS) of a set of molecules. Examples from the first 10 clusters. For comparison, we have a look at the cluster centers of the first 10 clusters. | print ('Ten molecules from first 10 clusters:')
# Draw molecules
Draw.MolsToGridImage([mols[clusters[i][0]][0] for i in range(10)],
legends=[mols[clusters[i][0]][1] for i in range(10)],
molsPerRow=5) | Ten molecules from first 10 clusters:
| CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
Save cluster centers from first 3 clusters as SVG file. | # Generate image
img = Draw.MolsToGridImage([mols[clusters[i][0]][0] for i in range(0,3)],
legends=["Cluster "+str(i) for i in range(1,4)],
subImgSize=(200,200), useSVG=True)
# Get SVG data
molsvg = img.data
# Replace non-transparent to transparent background and set font size
molsvg = molsvg.replace("opacity:1.0", "opacity:0.0");
molsvg = molsvg.replace("12px", "20px");
# Save altered SVG data to file
f = open("../data/T5/cluster_representatives.svg", "w")
f.write(molsvg)
f.close() | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
While some similarity is still visible, the centroids from the different clusters clearly look more dissimilar than the compounds within one cluster. Intra-cluster Tanimoto similarities. We can also have a look at the intra-cluster Tanimoto similarities. | # Function to compute Tanimoto similarity for all pairs of fingerprints in each cluster
def IntraTanimoto(fps_clusters):
intra_similarity =[]
# Calculate intra similarity per cluster
for k in range(0,len(fps_clusters)):
# Tanimoto distance matrix function converted to similarity matrix (1-distance)
intra_similarity.append([1-x for x in Tanimoto_distance_matrix(fps_clusters[k])])
return intra_similarity
# Recompute fingerprints for 10 first clusters
mol_fps_per_cluster=[]
for c in clusters[:10]:
mol_fps_per_cluster.append([rdkit_gen.GetFingerprint(mols[i][0]) for i in c])
# Compute intra-cluster similarity
intra_sim = IntraTanimoto(mol_fps_per_cluster)
# Violin plot with intra-cluster similarity
pos = list(range(10))
labels = pos
plt.figure(1, figsize=(10, 5))
ax = plt.subplot(111)
r = plt.violinplot(intra_sim, pos, showmeans=True, showmedians=True, showextrema=False)
ax.set_xticks(pos)
ax.set_xticklabels(labels)
ax.set_yticks(numpy.arange(0.6, 1., 0.1))
ax.set_title('Intra-cluster Tanimoto similarity', fontsize=13)
r['cmeans'].set_color('red')
# mean=red, median=blue | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
Compound picking. In the following, we are going to pick a final list of **max. 1000 compounds** as a **diverse** subset. For this, we take the cluster centroid from each cluster (i.e. the first molecule of each cluster) and then, starting with the largest cluster, we take from each cluster the 10 molecules (or 50% if fewer than 10 molecules are left in the cluster) most similar to the centroid, until we have selected max. 1000 compounds. Thus, we have representatives of each cluster. The aim of this compound picking is to ensure diversity in a smaller set of compounds which are proposed for testing in a confirmatory assay. The picking procedure was adapted from the [TDT tutorial notebook by S. Riniker and G. Landrum](https://github.com/sriniker/TDT-tutorial-2014/blob/master/TDT_challenge_tutorial.ipynb). As described there: the idea behind this approach is to ensure diversity (representatives of each cluster) while getting some SAR from the results of the confirmatory assay (groups of quite similar molecules from larger clusters retained). Get cluster centers. | # Get the cluster center of each cluster (first molecule in each cluster)
clus_center = [mols[c[0]] for c in clusters]
# How many cluster centers/clusters do we have?
print('Number of cluster centers: ', len(clus_center)) | Number of cluster centers: 1225
| CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
Sort clusters by size and molecules in each cluster by similarity. | # Sort the molecules within a cluster based on their similarity
# to the cluster center and sort the clusters based on their size
clusters_sort = []
for c in clusters:
if len(c) < 2: continue # Singletons
else:
# Compute fingerprints for each cluster element
fps_clust = [rdkit_gen.GetFingerprint(mols[i][0]) for i in c]
# Similarity of all cluster members to the cluster center
simils = DataStructs.BulkTanimotoSimilarity(fps_clust[0],fps_clust[1:])
# Add index of the molecule to its similarity (centroid excluded!)
simils = [(s,index) for s,index in zip(simils, c[1:])]
# Sort in descending order by similarity
simils.sort(reverse=True)
# Save cluster size and index of molecules in clusters_sort
clusters_sort.append((len(simils), [i for s,i in simils]))
# Sort in descending order by cluster size
clusters_sort.sort(reverse=True) | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
Pick a maximum of 1000 compounds. | # Count selected molecules, pick cluster centers first
sel_molecules = clus_center.copy()
# Take 10 molecules (or a maximum of 50%) of each cluster starting with the largest one
index = 0
diff = 1000 - len(sel_molecules)
while diff > 0 and index < len(clusters_sort):
# Take indices of sorted clusters
tmp_cluster = clusters_sort[index][1]
# If the first cluster is > 10 big then take exactly 10 compounds
if clusters_sort[index][0] > 10:
num_compounds = 10
# If smaller, take half of the molecules
else:
        num_compounds = int(0.5*len(tmp_cluster))+1
if num_compounds > diff:
num_compounds = diff
# Write picked molecules and their structures into list of lists called picked_fps
sel_molecules += [mols[i] for i in tmp_cluster[:num_compounds]]
index += 1
diff = 1000 - len(sel_molecules)
print('# Selected molecules: '+str(len(sel_molecules))) | # Selected molecules: 1225
| CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
This set of diverse molecules could now be used for experimental testing. (Additional information: run times.) At the end of the talktorial, we can play with the size of the dataset and see how the Butina clustering run time changes. | # Reuse old dataset
sampled_mols = mols.copy() | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
Note that you can try out larger datasets, but data sizes larger than 10000 data points already start to consume quite some memory and time (that's why we stopped there). | # Helper function for time computation
def MeasureRuntime(sampled_mols):
start_time = time.time()
sampled_fingerprints = [rdkit_gen.GetFingerprint(m) for m,idx in sampled_mols]
# Run the clustering with the dataset
sampled_clusters = ClusterFps(sampled_fingerprints,cutoff=0.3)
return(time.time() - start_time)
dsize=[100, 500, 1000, 2000, 4000, 6000, 8000, 10000]
runtimes=[]
# Take random samples with replacement
for s in dsize:
tmp_set = [sampled_mols[i] for i in sorted(numpy.random.choice(range(len(sampled_mols)), size=s))]
tmp_t= MeasureRuntime(tmp_set)
print('Dataset size %d, time %4.2f seconds' %(s, tmp_t))
runtimes.append(tmp_t)
plt.plot(dsize, runtimes, 'g^')
plt.title('Runtime measurement of Butina Clustering with different dataset sizes')
plt.xlabel('# Molecules in data set')
plt.ylabel('Runtime in seconds')
plt.show() | _____no_output_____ | CC-BY-4.0 | talktorials/5_compound_clustering/T5_compound_clustering.ipynb | caramirezs/TeachOpenCADD |
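One way to read this plot: Butina clustering computes all pairwise Tanimoto distances, and the number of pairs grows as

$$\frac{n(n-1)}{2} \approx \frac{n^{2}}{2}$$

so both the memory use and the runtime are expected to scale roughly quadratically with the number of molecules, which is why the larger data set sizes above become expensive.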
Copyright 2018 The TensorFlow Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Load text with tf.data. This tutorial provides an example of how to use `tf.data.TextLineDataset` to load examples from text files. `TextLineDataset` is designed to create a dataset from a text file, in which each example is a line of text from the original file. This is potentially useful for any text data that is primarily line-based (for example, poetry or error logs). In this tutorial, we'll use three different English translations of the same work, Homer's Iliad, and train a model to identify the translator given a single line of text. Setup | from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import tensorflow_datasets as tfds
import os | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
The texts of the three translations are by: - [William Cowper](https://en.wikipedia.org/wiki/William_Cowper) — [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/cowper.txt) - [Edward, Earl of Derby](https://en.wikipedia.org/wiki/Edward_Smith-Stanley,_14th_Earl_of_Derby) — [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/derby.txt)- [Samuel Butler](https://en.wikipedia.org/wiki/Samuel_Butler_%28novelist%29) — [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/butler.txt)The text files used in this tutorial have undergone some typical preprocessing tasks, mostly removing stuff — document header and footer, line numbers, chapter titles. Download these lightly munged files locally. | DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']
for name in FILE_NAMES:
text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL+name)
parent_dir = os.path.dirname(text_dir)
parent_dir | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Load text into datasets. Iterate through the files, loading each one into its own dataset. Each example needs to be individually labeled, so use `tf.data.Dataset.map` to apply a labeler function to each one. This will iterate over every example in the dataset, returning (`example, label`) pairs. | def labeler(example, index):
return example, tf.cast(index, tf.int64)
labeled_data_sets = []
for i, file_name in enumerate(FILE_NAMES):
lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name))
labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
labeled_data_sets.append(labeled_dataset) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Combine these labeled datasets into a single dataset, and shuffle it. | BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
all_labeled_data = all_labeled_data.concatenate(labeled_dataset)
all_labeled_data = all_labeled_data.shuffle(
BUFFER_SIZE, reshuffle_each_iteration=False) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
You can use `tf.data.Dataset.take` and `print` to see what the `(example, label)` pairs look like. The `numpy` property shows each Tensor's value. | for ex in all_labeled_data.take(5):
print(ex) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Encode text lines as numbers. Machine learning models work on numbers, not words, so the string values need to be converted into lists of numbers. To do that, map each unique word to a unique integer. Build vocabulary. First, build a vocabulary by tokenizing the text into a collection of individual unique words. There are a few ways to do this in both TensorFlow and Python. For this tutorial: 1. Iterate over each example's `numpy` value. 2. Use `tfds.features.text.Tokenizer` to split it into tokens. 3. Collect these tokens into a Python set, to remove duplicates. 4. Get the size of the vocabulary for later use. | tokenizer = tfds.features.text.Tokenizer()
vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
some_tokens = tokenizer.tokenize(text_tensor.numpy())
vocabulary_set.update(some_tokens)
vocab_size = len(vocabulary_set)
vocab_size | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Encode examplesCreate an encoder by passing the `vocabulary_set` to `tfds.features.text.TokenTextEncoder`. The encoder's `encode` method takes in a string of text and returns a list of integers. | encoder = tfds.features.text.TokenTextEncoder(vocabulary_set) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
You can try this on a single line to see what the output looks like. | example_text = next(iter(all_labeled_data))[0].numpy()
print(example_text)
encoded_example = encoder.encode(example_text)
print(encoded_example) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Now run the encoder on the dataset by wrapping it in `tf.py_function` and passing that to the dataset's `map` method. | def encode(text_tensor, label):
encoded_text = encoder.encode(text_tensor.numpy())
return encoded_text, label
def encode_map_fn(text, label):
return tf.py_function(encode, inp=[text, label], Tout=(tf.int64, tf.int64))
all_encoded_data = all_labeled_data.map(encode_map_fn) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Split the dataset into test and train batches. Use `tf.data.Dataset.take` and `tf.data.Dataset.skip` to create a small test dataset and a larger training set. Before being passed into the model, the datasets need to be batched. Typically, the examples inside of a batch need to be the same size and shape. But the examples in these datasets are not all the same size — each line of text has a different number of words. So use `tf.data.Dataset.padded_batch` (instead of `batch`) to pad the examples to the same size. | train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE, padded_shapes=([-1],[]))
test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE, padded_shapes=([-1],[])) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Now, `test_data` and `train_data` are not collections of (`example, label`) pairs, but collections of batches. Each batch is a pair of (*many examples*, *many labels*) represented as arrays.To illustrate: | sample_text, sample_labels = next(iter(test_data))
sample_text[0], sample_labels[0] | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Since we have introduced a new token encoding (the zero used for padding), the vocabulary size has increased by one. | vocab_size += 1 | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Build the model | model = tf.keras.Sequential() | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
The first layer converts integer representations to dense vector embeddings. See the [Word Embeddings](../../tutorials/sequences/word_embeddings) tutorial for more details. | model.add(tf.keras.layers.Embedding(vocab_size, 64)) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
The next layer is a [Long Short-Term Memory](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) layer, which lets the model understand words in their context with other words. A bidirectional wrapper on the LSTM helps it to learn about the datapoints in relationship to the datapoints that came before it and after it. | model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Finally, we'll have a series of one or more densely connected layers, with the last one being the output layer. The output layer produces a probability for each of the labels. The one with the highest probability is the model's prediction of an example's label. | # One or more dense layers.
# Edit the list in the `for` line to experiment with layer sizes.
for units in [64, 64]:
model.add(tf.keras.layers.Dense(units, activation='relu'))
# Output layer. The first argument is the number of labels.
model.add(tf.keras.layers.Dense(3, activation='softmax')) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Finally, compile the model. For a softmax categorization model, use `sparse_categorical_crossentropy` as the loss function. You can try other optimizers, but `adam` is very common. | model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Train the model. This model running on this data produces decent results (about 83%). | model.fit(train_data, epochs=3, validation_data=test_data)
eval_loss, eval_acc = model.evaluate(test_data)
print('\nEval loss: {:.3f}, Eval accuracy: {:.3f}'.format(eval_loss, eval_acc))
| _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/load_data/text.ipynb | crypdra/docs |
Introduction. This project continues the analysis of the web-crawled [most-watched](https://fmovies.to/most-watched) section of the fmovies website. This is the second part. In part one we crawled the pages and extracted the information. In part two we will tidy and clean the data for analysis in the third part. | import pandas as pd
import numpy as np
movie_df = pd.read_csv('../Data/final_movies_df.csv')
tv_df = pd.read_csv('../Data/final_tvs_df.csv')
print(movie_df.columns)
print(tv_df.columns)
movie_df.head() | _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
Columns- 'movie_name / tv_name': Name of the movie / TV show - 'watch_link': URL of the page to watch the movie/TV show - 'date_added': Date added to the dataframe (not in fmovies) - 'site_rank': Ranking on fmovies by order of most watched, starting from 1 - 'Genre': Genres - 'Stars': Cast - 'IMDb': IMDb rating - 'Director': Director - 'Release': Release date of the movie/TV show - 'Country': Country of origin (can be more than one) - 'Rating': Average review by viewers on the fmovies.to website - 'season': Which season (only for TV shows) - 'episodes': Number of episodes available (only for TV shows). Rename Columns All Uppercase | movie_df.columns = movie_df.columns.str.upper().tolist()
tv_df.columns = tv_df.columns.str.upper().tolist()
tv_df.head(2)
movie_df.head(2) | _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
Tidying 1. The Genre column has a list of values in one row; let's make one value per row. 2. The release date can be converted to datetime and then used as the index of the dataframe. 3. Rating has two values: the first is the site rating and the second is the number of reviews by viewers. Let's separate them into different columns. Genre Split and Date Column. Let's make a function that splits and stacks the genre into multiple rows, like [this](https://stackoverflow.com/questions/17116814/pandas-how-do-i-split-text-in-a-column-into-multiple-rows/21032532). Also, let's set the release date as the index. | def split_genre(df):
cp= df.copy()
    # Split the genre by "," and stack to make multiple rows, each with its own unique genre
# this will return a new df with genres only
genre= cp.GENRE.str.split(',').apply(pd.Series, 1).stack()
    # Drop the extra index level created by the stack
genre.index = genre.index.droplevel(-1)
# Provide name to series
genre.name= "GENRE"
#delete the original genre from original df
cp.drop("GENRE", axis=True, inplace=True)
# Create a new df
new_df = cp.copy().join(genre)
# change release date from string to datetime and drop release column
new_df['Date'] = pd.to_datetime(new_df['RELEASE'], format="%Y-%m-%d")
new_df.drop('RELEASE', axis=1, inplace=True)
# Reset index
new_df.set_index('Date',drop=True, inplace=True)
return new_df
movie_df_tidy_1 = split_genre(movie_df)
tv_df_tidy_1 = split_genre(tv_df) | _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
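To illustrate what the split-and-stack step inside `split_genre` does, here is a small self-contained toy example; the titles and genres are made up purely for illustration.

```python
# Toy illustration of the split/stack idiom used in split_genre():
# one row per title with a comma-separated genre string becomes
# one row per (title, genre) pair.
toy = pd.DataFrame({'MOVIE_NAME': ['A', 'B'],
                    'GENRE': ['Action,Comedy', 'Drama']})
genre = toy.GENRE.str.split(',').apply(pd.Series, 1).stack()
genre.index = genre.index.droplevel(-1)  # drop the level added by stack
genre.name = 'GENRE'
print(toy.drop('GENRE', axis=1).join(genre))
# Title A now appears twice, once for 'Action' and once for 'Comedy'.
```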
Ratings Columns Split | site_user_rating_4movie = movie_df_tidy_1.RATING.str.split("/").str[0]
site_number_user_rated_4movie = movie_df_tidy_1.RATING.str.split("/").str[1].str.split(" ").str[0]
site_user_rating_4tv = tv_df_tidy_1.RATING.str.split("/").str[0]
site_number_user_rated_4tv = tv_df_tidy_1.RATING.str.split("/").str[1].str.split(" ").str[0]
| _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
Assign new columns and drop the old ones | tv_df_tidy_2 = tv_df_tidy_1.copy()
movie_df_tidy_2= movie_df_tidy_1.copy()
movie_df_tidy_2['User_Reviews_local'] = site_user_rating_4movie
movie_df_tidy_2['Number_Reviews_local'] = site_number_user_rated_4movie
tv_df_tidy_2['User_Reviews_local'] = site_user_rating_4tv
tv_df_tidy_2['Number_Reviews_local'] = site_number_user_rated_4tv
tv_df_tidy_2.drop('RATING', inplace=True,axis=1)
movie_df_tidy_2.drop('RATING', inplace=True,axis=1) | _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
Missing Values | print(movie_df_tidy_2.info())
print("**"*20)
print(tv_df_tidy_2.info()) | <class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 3790 entries, 2019-04-22 to 2007-02-09
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 MOVIE_NAME 3790 non-null object
1 WATCH_LINK 3790 non-null object
2 DATE_ADDED 3790 non-null object
3 SITE_RANK 3790 non-null int64
4 STARS 3788 non-null object
5 IMDB 3788 non-null float64
6 DIRECTOR 3788 non-null object
7 COUNTRY 3788 non-null object
8 GENRE 3788 non-null object
9 User_Reviews_local 3788 non-null object
10 Number_Reviews_local 3788 non-null object
dtypes: float64(1), int64(1), object(9)
memory usage: 355.3+ KB
None
****************************************
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 764 entries, 2011-04-17 to 2019-03-28
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 TV_NAME 764 non-null object
1 WATCH_LINK 764 non-null object
2 SEASON 764 non-null int64
3 EPISODES 764 non-null int64
4 DATE_ADDED 764 non-null object
5 SITE_RANK 764 non-null int64
6 STARS 764 non-null object
7 IMDB 764 non-null float64
8 DIRECTOR 764 non-null object
9 COUNTRY 764 non-null object
10 GENRE 764 non-null object
11 User_Reviews_local 764 non-null object
12 Number_Reviews_local 764 non-null object
dtypes: float64(1), int64(3), object(9)
memory usage: 83.6+ KB
None
| MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
It seems only the movies dataframe has null values; let's dive deeper. | movie_df_tidy_2[movie_df_tidy_2.GENRE.isnull()] | _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
Earlier, to keep the crawl from taking too long, we returned NaN for bad requests. We could go through each link individually to recover the values, but let's drop these rows for now. | movie_df_tidy_2.dropna(inplace=True,axis=0) | _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
Write files for the analysis part. Passing index=False on write would remove the date index, so let's not do that. | movie_df_tidy_2.to_csv('../Data/Movie.csv')
tv_df_tidy_2.to_csv('../Data/TV.csv') | _____no_output_____ | MIT | Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb | nibukdk/web-scrapping-fmovie.to |
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Configuration_**Setting up your Azure Machine Learning services workspace and configuring your notebook library**_------ Table of Contents1. [Introduction](Introduction) 1. What is an Azure Machine Learning workspace1. [Setup](Setup) 1. Azure subscription 1. Azure ML SDK and other library installation 1. Azure Container Instance registration1. [Configure your Azure ML Workspace](Configure%20your%20Azure%20ML%20workspace) 1. Workspace parameters 1. Access your workspace 1. Create a new workspace 1. Create compute resources1. [Next steps](Next%20steps)--- IntroductionThis notebook configures your library of notebooks to connect to an Azure Machine Learning (ML) workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook library to use an existing workspace or create a new workspace.Typically you will need to run this notebook only once per notebook library as all other notebooks will use connection information that is written here. If you want to redirect your notebook library to work with a different workspace, then you should re-run this notebook.In this notebook you will* Learn about getting an Azure subscription* Specify your workspace parameters* Access or create your workspace* Add a default compute cluster for your workspace What is an Azure Machine Learning workspaceAn Azure ML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models. SetupThis section describes activities required before you can access any Azure ML services functionality. 1. Azure SubscriptionIn order to create an Azure ML Workspace, first you need access to an Azure subscription. An Azure subscription allows you to manage storage, compute, and other assets in the Azure cloud. You can [create a new subscription](https://azure.microsoft.com/en-us/free/) or access existing subscription information from the [Azure portal](https://portal.azure.com). Later in this notebook you will need information such as your subscription ID in order to create and access AML workspaces. 2. Azure ML SDK and other library installationIf you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.Also install following libraries to your environment. Many of the example notebooks depend on them```(myenv) $ conda install -y matplotlib tqdm scikit-learn```Once installation is complete, the following cell checks the Azure ML SDK version: | import azureml.core
print("This notebook was created using version 1.0.48
of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") | _____no_output_____ | MIT | configuration.ipynb | mesameki/MachineLearningNotebooks |
If you are using an older version of the SDK then this notebook was created using, you should upgrade your SDK. 3. Azure Container Instance registrationAzure Machine Learning uses of [Azure Container Instance (ACI)](https://azure.microsoft.com/services/container-instances) to deploy dev/test web services. An Azure subscription needs to be registered to use ACI. If you or the subscription owner have not yet registered ACI on your subscription, you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands. Note that if you ran through the AML [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) you have already registered ACI. ```shell check to see if ACI is already registered(myenv) $ az provider show -n Microsoft.ContainerInstance -o table if ACI is not registered, run this command. note you need to be the subscription owner in order to execute this command successfully.(myenv) $ az provider register -n Microsoft.ContainerInstance```--- Configure your Azure ML workspace Workspace parametersTo use an AML Workspace, you will need to import the Azure ML SDK and supply the following information:* Your subscription id* A resource group name* (optional) The region that will host your workspace* A name for your workspaceYou can get your subscription ID from the [Azure portal](https://portal.azure.com).You will also need access to a [_resource group_](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overviewresource-groups), which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.The region to host your workspace will be used if you are creating a new workspace. You do not need to specify this if you are using an existing workspace. You can find the list of supported regions [here](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=machine-learning-service). You should pick a region that is close to your location or that contains your data.The name for your workspace is unique within the subscription and should be descriptive enough to discern among other AML Workspaces. The subscription may be used only by you, or it may be used by your department or your entire enterprise, so choose a name that makes sense for your situation.The following cell allows you to specify your workspace parameters. This cell uses the python method `os.getenv` to read values from environment variables which is useful for automation. If no environment variable exists, the parameters will be set to the specified default values. If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.Replace the default values in the cell below with your workspace parameters | import os
subscription_id = os.getenv("SUBSCRIPTION_ID", default="<my-subscription-id>")
resource_group = os.getenv("RESOURCE_GROUP", default="<my-resource-group>")
workspace_name = os.getenv("WORKSPACE_NAME", default="<my-workspace-name>")
workspace_region = os.getenv("WORKSPACE_REGION", default="eastus2") | _____no_output_____ | MIT | configuration.ipynb | mesameki/MachineLearningNotebooks |
Access your workspaceThe following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified workspace doesn't exist or you don't have permissions to access it. | from azureml.core import Workspace
try:
ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)
# write the details of the workspace to a configuration file to the notebook library
ws.write_config()
print("Workspace configuration succeeded. Skip the workspace creation steps below")
except:
print("Workspace not accessible. Change your parameters or create a new workspace below") | _____no_output_____ | MIT | configuration.ipynb | mesameki/MachineLearningNotebooks |
Create a new workspaceIf you don't have an existing workspace and are the owner of the subscription or resource group, you can create a new workspace. If you don't have a resource group, the create workspace command will create one for you using the name you provide.**Note**: As with other Azure services, there are limits on certain resources (for example AmlCompute quota) associated with the Azure ML service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.This cell will create an Azure ML workspace for you in a subscription provided you have the correct permissions.This will fail if:* You do not have permission to create a workspace in the resource group* You do not have permission to create a resource group if it's non-existing.* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscriptionIf workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources. | from azureml.core import Workspace
# Create the workspace using the specified parameters
ws = Workspace.create(name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region,
create_resource_group = True,
exist_ok = True)
ws.get_details()
# write the details of the workspace to a configuration file to the notebook library
ws.write_config() | _____no_output_____ | MIT | configuration.ipynb | mesameki/MachineLearningNotebooks |
Create compute resources for your training experimentsMany of the sample notebooks use Azure ML managed compute (AmlCompute) to train models using a dynamically scalable pool of compute. In this section you will create default compute clusters for use by the other notebooks and any other operations you choose.To create a cluster, you need to specify a compute configuration that specifies the type of machine to be used and the scalability behaviors. Then you choose a name for the cluster that is unique within the workspace and can be used to address the cluster later.The cluster parameters are:* vm_size - this describes the virtual machine type and size used in the cluster. All machines in the cluster are the same type. You can get the list of vm sizes available in your region by using the CLI command ```shell az vm list-skus -o tsv```* min_nodes - this sets the minimum size of the cluster. If you set the minimum to 0 the cluster will shut down all nodes while not in use. Setting this number to a value higher than 0 will allow for faster start-up times, but you will also be billed when the cluster is not in use.* max_nodes - this sets the maximum size of the cluster. Setting this to a larger number allows for more concurrency and greater distributed processing of scale-out jobs.To create a **CPU** cluster now, run the cell below. The autoscale settings mean that the cluster will scale down to 0 nodes when inactive and up to 4 nodes when busy. | from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print("Found existing cpu-cluster")
except ComputeTargetException:
print("Creating new cpu-cluster")
# Specify the configuration for the new cluster
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
min_nodes=0,
max_nodes=4)
# Create the cluster with the specified name and configuration
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
# Wait for the cluster to complete, show the output log
cpu_cluster.wait_for_completion(show_output=True) | _____no_output_____ | MIT | configuration.ipynb | mesameki/MachineLearningNotebooks |
To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request). | from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your GPU cluster
gpu_cluster_name = "gpu-cluster"
# Verify that cluster does not exist already
try:
gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)
print("Found existing gpu cluster")
except ComputeTargetException:
print("Creating new gpu-cluster")
# Specify the configuration for the new cluster
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6",
min_nodes=0,
max_nodes=4)
# Create the cluster with the specified name and configuration
gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
# Wait for the cluster to complete, show the output log
gpu_cluster.wait_for_completion(show_output=True) | _____no_output_____ | MIT | configuration.ipynb | mesameki/MachineLearningNotebooks |
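After the clusters are provisioned, you can sanity-check them from the workspace object. A short sketch, assuming the clusters above were created successfully; `ws.compute_targets` and `provisioning_state` are standard SDK attributes, but verify them against your SDK version:

```python
# List every compute target attached to the workspace and its state
for name, target in ws.compute_targets.items():
    print(name, type(target).__name__, target.provisioning_state)
```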
Importing the images into this script | import os
import numpy as np
directory = 'C:/Users/joaovitor/Desktop/Meu_Canal/DINO/'
jump_img = os.listdir(os.path.join(directory, 'jump'))
nojump_img = os.listdir(os.path.join(directory, 'no_jump'))
# checking whether the numbers of images in the two directories are equal
print(len(jump_img) == len(nojump_img))
print(len(jump_img)) | False
81
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Storing the image arrays in lists | import cv2
imgs_list_jump = []
imgs_list_nojump = []
for img in jump_img:
images = cv2.imread(os.path.join(directory, 'jump', img), 0) #0 to convert the image to grayscale
imgs_list_jump.append(images)
for img in nojump_img:
images = cv2.imread(os.path.join(directory, 'no_jump', img), 0) #0 to convert the image to grayscale
imgs_list_nojump.append(images)
#Taking a look at the first img of array_imgs_jump list
print(imgs_list_jump[0])
print(50*'=')
print('Images Dimensions:', imgs_list_jump[0].shape) | [[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
==================================================
Images Dimensions: (480, 640)
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Let's display the first image | import matplotlib.pyplot as plt
img = cv2.cvtColor(imgs_list_jump[0], cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show() | _____no_output_____ | MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
The images are 480 pixels high and 640 pixels wide | print(imgs_list_jump[0].shape) | (480, 640)
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
The image size is still very big, so we are going to resize all the images to make them smaller | print('Original size:', imgs_list_jump[0].size) #original size | Original size: 307200
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
We will apply the code below to all images | scale_percent = 20 #20 percent of original size
width = int(imgs_list_jump[0].shape[1] * scale_percent / 100)
height = int(imgs_list_jump[0].shape[0] * scale_percent / 100)
dim = (width, height)
#resize image
resized = cv2.resize(imgs_list_jump[0], dim, interpolation = cv2.INTER_AREA)
print('Original Dimensions:', imgs_list_jump[0].shape)
print('Resized Dimensions:', resized.shape)
img = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show() | Original Dimensions: (480, 640)
Resized Dimensions: (96, 128)
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Applying to all images | scale_percent = 20 # 20 percent of original size
resized_jump_list = []
resized_nojump_list = []
for img in imgs_list_jump:
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
#resize image
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
resized_jump_list.append(resized)
for img in imgs_list_nojump:
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
#resize image
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
resized_nojump_list.append(resized)
#Checking if it worked:
print(resized_jump_list[0].shape)
print(resized_nojump_list[0].shape)
img = cv2.cvtColor(resized_nojump_list[10], cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
cv2.imwrite('imagem_resized.png', resized_nojump_list[10]) | (96, 128)
(96, 128)
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
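The two resize loops above are identical except for the list they operate on. A small helper, a sketch not in the original notebook, removes the duplication:

```python
def resize_images(images, scale_percent=20):
    """Resize every grayscale image to scale_percent of its original size."""
    resized = []
    for img in images:
        dim = (int(img.shape[1] * scale_percent / 100),   # width
               int(img.shape[0] * scale_percent / 100))   # height
        resized.append(cv2.resize(img, dim, interpolation=cv2.INTER_AREA))
    return resized

resized_jump_list = resize_images(imgs_list_jump)
resized_nojump_list = resize_images(imgs_list_nojump)
```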
Creating my X dataset | nojump_list_reshaped = []
jump_list_reshaped = []
for img in resized_nojump_list:
nojump_list_reshaped.append(img.reshape(-1, img.size))
for img in resized_jump_list:
jump_list_reshaped.append(img.reshape(-1, img.size))
X_nojump = np.array(nojump_list_reshaped).reshape(len(nojump_list_reshaped), nojump_list_reshaped[0].size)
X_jump = np.array(jump_list_reshaped).reshape(len(jump_list_reshaped), jump_list_reshaped[0].size)
print(X_nojump.shape)
print(X_jump.shape) | (386, 12288)
(81, 12288)
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Joining both X's | X = np.vstack([X_nojump, X_jump])
print(X.shape) | (467, 12288)
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Creating my Y dataset | y_nojump = np.array([0 for i in range(len(nojump_list_reshaped))]).reshape(len(nojump_list_reshaped),-1)
y_jump = np.array([1 for i in range(len(jump_list_reshaped))]).reshape(len(jump_list_reshaped),-1) | _____no_output_____ | MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Joining both Y's | y = np.vstack([y_nojump, y_jump])
print(y.shape) | (467, 1)
| MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Shuffling both datasets | shuffle_index = np.random.permutation(y.shape[0])
#print(shuffle_index)
X, y = X[shuffle_index], y[shuffle_index] | _____no_output_____ | MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Creating a X_train and y_train dataset | X_train = X
y_train = y | _____no_output_____ | MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Choosing SVM (Support Vector Machine) as our Machine Learning model | from sklearn.svm import SVC
svm_clf = SVC(kernel='linear')
svm_clf.fit(X_train, y_train.ravel()) | _____no_output_____ | MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
Creating a confusion matrix to evaluate the model performance | from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
y_train_pred = cross_val_predict(svm_clf, X_train, y_train.ravel(), cv=3) # sgd_clf in the first parameter
confusion_matrix(y_train.ravel(), y_train_pred) | _____no_output_____ | MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos- |
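The confusion matrix can be summarized into per-class precision and recall with the same cross-validated predictions. A short sketch using scikit-learn's `classification_report`:

```python
from sklearn.metrics import classification_report

# Precision/recall/F1 for the "no jump" (0) and "jump" (1) classes
print(classification_report(y_train.ravel(), y_train_pred, target_names=['no_jump', 'jump']))
```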
Saving the model | import joblib
joblib.dump(svm_clf, 'jump_model.pkl') # sgd_clf in the first parameter | _____no_output_____ | MIT | Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb | professorjar/curso-de-jogos-
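To use the saved model inside the game loop, each captured frame must go through the same preprocessing as the training images: grayscale, resize to 20% (128x96), and flatten. A hedged sketch of that inference step; `frame.png` is a hypothetical screenshot path:

```python
import cv2
import joblib

clf = joblib.load('jump_model.pkl')

frame = cv2.imread('frame.png', 0)  # hypothetical screenshot, read as grayscale like the training data
frame = cv2.resize(frame, (128, 96), interpolation=cv2.INTER_AREA)  # (width, height) at 20% scale
should_jump = clf.predict(frame.reshape(1, -1))[0]  # 1 -> jump, 0 -> no jump
print(should_jump)
```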
Reflect Tables into SQLAlchemy ORM | # Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine) | _____no_output_____ | ADSL | climate_starter.ipynb | tanmayrp/sqlalchemy-challenge |
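To double-check what the reflection picked up, SQLAlchemy's inspector can list tables and columns directly from the engine. A small sketch:

```python
from sqlalchemy import inspect

inspector = inspect(engine)
for table in inspector.get_table_names():
    columns = [column['name'] for column in inspector.get_columns(table)]
    print(table, columns)
```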
Exploratory Precipitation Analysis | # Find the most recent date in the data set.
most_recent_date_str = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(f"The most recent date in the data set: {most_recent_date_str[0]}")
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
most_recent_date = dt.datetime.strptime(most_recent_date_str[0], '%Y-%m-%d')
# Calculate the date one year from the last date in data set.
recent_date_one_year_past = dt.date(most_recent_date.year -1, most_recent_date.month, most_recent_date.day)
# Perform a query to retrieve the data and precipitation scores
sel = [Measurement.date, Measurement.prcp]
result = session.query(*sel).\
filter(Measurement.date >= recent_date_one_year_past).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
precipitation_df = pd.DataFrame(result, columns=["Date", "Precipitation"])
precipitation_df = precipitation_df.set_index("Date")
# Sort the dataframe by date
precipitation_df = precipitation_df.sort_values(["Date"], ascending=True)
precipitation_df.head()
# Use Pandas Plotting with Matplotlib to plot the data
x_axis = precipitation_df.index.tolist()
y_axis = precipitation_df['Precipitation'].tolist()
plt.figure(figsize=(10,7))
plt.bar(x_axis, y_axis, width = 5, align="center",label='precipitation')
major_ticks = np.arange(0,400,45)
plt.xticks(major_ticks, rotation=90)
plt.xlabel("Date")
plt.ylabel("Inches")
plt.legend()
plt.show()
# Use Pandas to calcualte the summary statistics for the precipitation data
precipitation_df.describe() | _____no_output_____ | ADSL | climate_starter.ipynb | tanmayrp/sqlalchemy-challenge |
Exploratory Station Analysis | # Design a query to calculate the total number stations in the dataset
print(f"The number of stations in the dataset: {session.query(Station.id).count()} ");
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
most_active_stations = session.query(Measurement.station, func.count(Measurement.id)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.id).desc()).all()
most_active_station = most_active_stations[0][0]
print(f"The most active station is: {most_active_station}")
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
sel = [func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)]
most_active_station_summary_stats = session.query(*sel).\
filter(Measurement.station == most_active_station).all()
most_active_station_summary_stats
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
sel = [Measurement.date, Measurement.prcp]
result = session.query(Measurement.tobs).\
filter(Measurement.date >= recent_date_one_year_past).\
filter(Measurement.station == most_active_station).all()
fig, ax = plt.subplots()
plt.hist(list(np.ravel(result)), bins=12, label="tobs")
plt.xlabel("Temperature")
plt.ylabel("Frequency")
plt.legend()
plt.show() | _____no_output_____ | ADSL | climate_starter.ipynb | tanmayrp/sqlalchemy-challenge |
Close session | # Close Session
session.close() | _____no_output_____ | ADSL | climate_starter.ipynb | tanmayrp/sqlalchemy-challenge |
Binary Logistic RegressionLet $X$ be the training input of size $n \times p$. It contains $n$ examples, each with $p$ features. Let $y$ be the training target of size $n$. Each input $X_i$, a vector of size $p$, is associated with its target $y_i$, which is $0$ or $1$. Logistic regression tries to fit a linear model to predict the target $y$ of a new input vector $x$. The predictions of the model are denoted $\hat{y}$.$$o_i = X_i\beta = \sum_{j=1}^{p} X_{ij}\beta_j$$$$P(y_i = 1 | X_i) = \hat{y_i} = \sigma(o_i)$$$$\sigma(x) = \frac{1}{1 + e^{-x}}$$ Cross EntropyThe cost function is the cross-entropy. $$J(\beta) = - \sum_{i=1}^n (y_i \log(\hat{y_i}) + (1 - y_i) \log(1 - \hat{y_i}))$$ $$\frac{\partial J(\beta)}{\partial \hat{y_i}} = \frac{\hat{y_i} - y_i}{\hat{y_i}(1 - \hat{y_i})}$$$$\frac{\partial J(\beta)}{\partial \hat{y}} = \frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}$$ | def sigmoid(x):
return 1 / (1 + np.exp(-x))
y_out = np.random.randn(13).astype(np.float32)
y_true = np.random.randint(0, 2, (13)).astype(np.float32)
y_pred = sigmoid(y_out)
j = - np.sum(y_true * np.log(y_pred) + (1-y_true) * np.log(1-y_pred))
ty_true = torch.tensor(y_true, requires_grad=False)
ty_pred = torch.tensor(y_pred, requires_grad=True)
criterion = torch.nn.BCELoss(reduction='sum')
tj = criterion(ty_pred, ty_true)
tj.backward()
print(j)
print(tj.data.numpy())
print(metrics.tdist(j, tj.data.numpy()))
dy_pred = (y_pred - y_true) / (y_pred * (1 - y_pred))
tdy_pred_sol = ty_pred.grad.data.numpy()
print(dy_pred)
print(tdy_pred_sol)
print(metrics.tdist(dy_pred, tdy_pred_sol)) | [-1.6231388 -2.9766939 2.274354 -6.4779763 -1.4708843 1.2155157
-1.9948862 1.8867183 1.4462028 18.669147 1.5500078 -1.6234685
-1.3342199]
[-1.6231389 -2.976694 2.274354 -6.477976 -1.4708843 1.2155157
-1.9948862 1.8867184 1.4462028 18.669147 1.5500077 -1.6234685
-1.3342199]
5.717077e-07
| MIT | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks |
$$\frac{\partial J(\beta)}{\partial o_i} = \hat{y_i} - y_i$$$$\frac{\partial J(\beta)}{\partial o} = \hat{y} - y$$ | y_out = np.random.randn(13).astype(np.float32)
y_true = np.random.randint(0, 2, (13)).astype(np.float32)
y_pred = sigmoid(y_out)
j = - np.sum(y_true * np.log(y_pred) + (1-y_true) * np.log(1-y_pred))
ty_true = torch.tensor(y_true, requires_grad=False)
ty_out = torch.tensor(y_out, requires_grad=True)
criterion = torch.nn.BCEWithLogitsLoss(reduction='sum')
tj = criterion(ty_out, ty_true)
tj.backward()
print(j)
print(tj.data.numpy())
print(metrics.tdist(j, tj.data.numpy()))
dy_out = y_pred - y_true
dy_out_sol = ty_out.grad.data.numpy()
print(dy_out)
print(dy_out_sol)
print(metrics.tdist(dy_out, dy_out_sol)) | [-0.7712122 0.5310385 -0.7378207 -0.13447696 0.20648097 0.28622478
-0.7465389 0.5608791 0.53383535 -0.75912154 -0.4418677 0.6848638
0.35961235]
[-0.7712122 0.5310385 -0.7378207 -0.13447696 0.20648097 0.28622478
-0.7465389 0.5608791 0.53383535 -0.75912154 -0.4418677 0.6848638
0.35961235]
0.0
| MIT | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks |
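The identity verified numerically above follows from the chain rule together with the sigmoid derivative $\sigma'(x) = \sigma(x)(1 - \sigma(x))$:

$$\frac{\partial J(\beta)}{\partial o_i} = \frac{\partial J(\beta)}{\partial \hat{y_i}} \cdot \frac{\partial \hat{y_i}}{\partial o_i} = \frac{\hat{y_i} - y_i}{\hat{y_i}(1 - \hat{y_i})} \cdot \hat{y_i}(1 - \hat{y_i}) = \hat{y_i} - y_i$$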
Can be trained with gradient descent | def log_reg_sk(X, y):
m = LogisticRegression(fit_intercept=False)
m.fit(X, y)
return m.coef_
def get_error(X, y, w):
y_pred = sigmoid(X @ w)
err = - np.sum(y * np.log(y_pred) + (1-y) * np.log(1-y_pred))
return err
def log_reg(X, y):
w = np.random.randn(X.shape[1])
for epoch in range(10000):
y_pred = sigmoid(X @ w)
dy_out = y_pred - y
dw = X.T @ dy_out
w -= 0.001 * dw
if epoch % 100 == 0:
err = get_error(X, y, w)
print('SGD Error = {}'.format(err))
return w
X = np.random.randn(73, 4).astype(np.float32)
y = np.random.randint(0, 2, (73)).astype(np.float32)
w1 = log_reg_sk(X, y)[0]
w2 = log_reg(X, y)
print('SK Error = {}'.format(get_error(X, y, w1)))
print('SGD Error = {}'.format(get_error(X, y, w2)))
print(w1)
print(w2) | SGD Error = 71.14744133609668
SGD Error = 49.65028785288255
SGD Error = 48.91772028291884
SGD Error = 48.888462052036814
SGD Error = 48.88680421514018
SGD Error = 48.88669058552164
SGD Error = 48.88668168135676
SGD Error = 48.886680916022215
SGD Error = 48.88668084643879
SGD Error = 48.88668083991474
SGD Error = 48.886680839293305
SGD Error = 48.88668083923365
SGD Error = 48.8866808392279
SGD Error = 48.886680839227346
SGD Error = 48.88668083922729
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922729
SGD Error = 48.88668083922729
SGD Error = 48.88668083922729
SGD Error = 48.88668083922729
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922729
SGD Error = 48.88668083922728
SGD Error = 48.88668083922729
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
SGD Error = 48.88668083922728
| MIT | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks |
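Cross-entropy is the training objective, but accuracy is often easier to compare across the two solutions. A short sketch using the weights found above, thresholding the predicted probability at 0.5:

```python
def accuracy(X, y, w):
    y_pred = (sigmoid(X @ w) >= 0.5).astype(np.float32)
    return np.mean(y_pred == y)

print('SK accuracy = {}'.format(accuracy(X, y, w1)))
print('GD accuracy = {}'.format(accuracy(X, y, w2)))
```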
Multiclass Logistic Regression | def softmax(x):
x_e = np.exp(x)
return x_e / np.sum(x_e, axis=1, keepdims=True)
y_out = np.random.randn(93, 4).astype(np.float32)
y_true = np.zeros((93, 4)).astype(np.float32)
for i in range(y_true.shape[0]):
y_true[i][np.random.randint(0, y_true.shape[1])] = 1
y_pred = softmax(y_out)
j = - np.sum(y_true * np.log(y_pred))
ty_true = torch.tensor(y_true, requires_grad=False)
ty_true = torch.argmax(ty_true, dim=1)
ty_out = torch.tensor(y_out, requires_grad=True)
criterion = torch.nn.CrossEntropyLoss(reduction='sum')
tj = criterion(ty_out, ty_true)
tj.backward()
print(j)
print(tj.data.numpy())
print(metrics.tdist(j, tj.data.numpy()))
y_out = np.random.randn(7, 4).astype(np.float32)
y_true = np.zeros((7, 4)).astype(np.float32)
for i in range(y_true.shape[0]):
y_true[i][np.random.randint(0, y_true.shape[1])] = 1
y_pred = softmax(y_out)
j = - np.sum(y_true * np.log(y_pred))
ty_true = torch.tensor(y_true, requires_grad=False)
ty_pred = torch.tensor(y_pred, requires_grad=True)
tj = - torch.sum(ty_true * torch.log(ty_pred))
tj.backward()
print(j)
print(tj.data.numpy())
print(metrics.tdist(j, tj.data.numpy()))
dy_pred = - y_true / y_pred
dy_pred_sol = ty_pred.grad.data.numpy()
print(dy_pred)
print(dy_pred_sol)
print(metrics.tdist(dy_pred, dy_pred_sol)) | [[ -0. -10.283339 -0. -0. ]
[-10.58094 -0. -0. -0. ]
[ -0. -0. -2.7528124 -0. ]
[-46.90987 -0. -0. -0. ]
[ -0. -0. -1.3170731 -0. ]
[ -7.9531765 -0. -0. -0. ]
[ -0. -10.990683 -0. -0. ]]
[[ -0. -10.283339 -0. -0. ]
[-10.58094 -0. -0. -0. ]
[ -0. -0. -2.7528124 -0. ]
[-46.90987 -0. -0. -0. ]
[ -0. -0. -1.3170731 -0. ]
[ -7.9531765 -0. -0. -0. ]
[ -0. -10.990683 -0. -0. ]]
0.0
| MIT | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks |
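One caveat with the `softmax` defined above: `np.exp` can overflow for large logits. A common numerically stable variant, a sketch rather than what the notebook uses, subtracts the row-wise maximum, which leaves the result unchanged:

```python
def softmax_stable(x):
    # Subtracting the row max does not change the softmax output,
    # but keeps np.exp from overflowing.
    x_shifted = x - np.max(x, axis=1, keepdims=True)
    x_e = np.exp(x_shifted)
    return x_e / np.sum(x_e, axis=1, keepdims=True)
```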
$$\frac{\partial J(\beta)}{\partial o_{ij}} = \hat{y_{ij}} - y_{ij}$$$$\frac{\partial J(\beta)}{\partial o} = \hat{y} - y$$ | y_out = np.random.randn(7, 4).astype(np.float32)
y_true = np.zeros((7, 4)).astype(np.float32)
for i in range(y_true.shape[0]):
y_true[i][np.random.randint(0, y_true.shape[1])] = 1
y_pred = softmax(y_out)
j = - np.sum(y_true * np.log(y_pred))
ty_true = torch.tensor(y_true, requires_grad=False)
ty_true = torch.argmax(ty_true, dim=1)
ty_out = torch.tensor(y_out, requires_grad=True)
criterion = torch.nn.CrossEntropyLoss(reduction='sum')
tj = criterion(ty_out, ty_true)
tj.backward()
print(j)
print(tj.data.numpy())
print(metrics.tdist(j, tj.data.numpy()))
dy_out = y_pred - y_true
dy_out_sol = ty_out.grad.data.numpy()
print(dy_out)
print(dy_out_sol)
print(metrics.tdist(dy_out, dy_out_sol)) | [[-0.71088123 0.25399554 0.31700996 0.13987577]
[ 0.02140404 0.3097546 0.29681578 -0.6279745 ]
[ 0.60384715 0.03253903 0.0066169 -0.6430031 ]
[ 0.22169167 -0.88766754 0.03120301 0.63477284]
[ 0.05100057 -0.38170385 0.10363309 0.22707026]
[ 0.02778155 0.6928965 -0.8194856 0.09880757]
[ 0.03780703 0.9247614 0.02876937 -0.99133784]]
[[-0.71088123 0.2539955 0.31700993 0.13987575]
[ 0.02140405 0.30975467 0.29681584 -0.6279744 ]
[ 0.60384715 0.03253903 0.0066169 -0.6430031 ]
[ 0.22169165 -0.88766754 0.03120301 0.6347728 ]
[ 0.05100057 -0.38170385 0.10363309 0.22707026]
[ 0.02778155 0.6928965 -0.8194856 0.09880759]
[ 0.03780702 0.9247613 0.02876936 -0.99133784]]
2.0499465e-07
| MIT | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks |
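As in the binary case, the simple form of this gradient comes from combining the softmax Jacobian $\frac{\partial \hat{y}_{ik}}{\partial o_{ij}} = \hat{y}_{ik}(\delta_{kj} - \hat{y}_{ij})$ with the cross-entropy derivative, using that each row $y_i$ is one-hot:

$$\frac{\partial J(\beta)}{\partial o_{ij}} = \sum_{k} \frac{\partial J(\beta)}{\partial \hat{y}_{ik}} \frac{\partial \hat{y}_{ik}}{\partial o_{ij}} = -\sum_{k} \frac{y_{ik}}{\hat{y}_{ik}} \hat{y}_{ik}(\delta_{kj} - \hat{y}_{ij}) = \hat{y}_{ij}\sum_{k} y_{ik} - y_{ij} = \hat{y}_{ij} - y_{ij}$$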
Can be trained with gradient descent | def get_error_multi(X, y, w):
y_pred = softmax(X @ w)
err = - np.sum(y * np.log(y_pred))
return err
def multilog_reg(X, y):
w = np.random.randn(X.shape[1], y.shape[1])
for epoch in range(10000):
y_pred = softmax(X @ w)
dy_out = y_pred - y
dw = X.T @ dy_out
w -= 0.001 * dw
if epoch % 100 == 0:
err = get_error_multi(X, y, w)
print('SGD Error = {}'.format(err))
return w
X = np.random.randn(93, 4).astype(np.float32)
y_true = np.zeros((93, 4)).astype(np.float32)
for i in range(y_true.shape[0]):
y_true[i][np.random.randint(0, y_true.shape[1])] = 1
y_true_sk = np.argmax(y_true, axis=1)
w1 = log_reg_sk(X, y_true_sk)
w2 = multilog_reg(X, y_true)
print('SK Error = {}'.format(get_error_multi(X, y_true, w1)))
print('SGD Error = {}'.format(get_error_multi(X, y_true, w2)))
print(w1)
print(w2) | SGD Error = 264.5967568728954
SGD Error = 124.52928999771657
SGD Error = 120.69338069535253
SGD Error = 120.60511291188504
SGD Error = 120.60208822782775
SGD Error = 120.60195961583351
SGD Error = 120.60195360857097
SGD Error = 120.60195331813674
SGD Error = 120.60195330392729
SGD Error = 120.60195330322918
SGD Error = 120.60195330319483
SGD Error = 120.60195330319314
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
SGD Error = 120.60195330319306
| MIT | courses/ml/logistic_regression.ipynb | obs145628/ml-notebooks |
Scaffolds of Keck_Pria_FP_data | Target_name = 'Keck_Pria_FP_data'
smiles_list = []
for i in range(k):
smiles_list.extend(data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist())
scaffold_set = set()
for smiles in smiles_list:
mol = Chem.MolFromSmiles(smiles)
core = MurckoScaffold.GetScaffoldForMol(mol)
scaffold = Chem.MolToSmiles(core)
scaffold_set.add(scaffold)
print 'Original SMILES is \t{}'.format(smiles)
print 'The Scaffold is \t{}'.format(scaffold)
print
print '{} total smiles'.format(len(smiles_list))
print '{} different scaffolds'.format(len(scaffold_set)) | Original SMILES is c1cc(cc2c1CCCN2CCOC)NS(=O)(=O)c3c(c(c(c(c3C)C)C)C)C
The Scaffold is O=S(=O)(Nc1ccc2c(c1)NCCC2)c1ccccc1
Original SMILES is c1cc(ccc1CC)NC(=O)CSc2ncc(c(=O)[nH]2)S(=O)(=O)c3ccc(cc3C)C
The Scaffold is O=C(CSc1ncc(S(=O)(=O)c2ccccc2)c(=O)[nH]1)Nc1ccccc1
Original SMILES is c1ccc2c(c1)c(c[nH]2)CCNC(=O)Cc3csc(n3)Nc4cccc(c4)Cl
The Scaffold is O=C(Cc1csc(Nc2ccccc2)n1)NCCc1c[nH]c2ccccc12
Original SMILES is c1cc(ccc1c2nnc3n2CC(=C)S3)Br
The Scaffold is C=C1Cn2c(nnc2-c2ccccc2)S1
Original SMILES is c1cc(cc2c1CCCN2CCOC)N
The Scaffold is c1ccc2c(c1)CCCN2
Original SMILES is c1cc(cc(c1)Cl)Nc2nc(cs2)CC(=O)Nc3ccc4c(c3)OCCO4
The Scaffold is O=C(Cc1csc(Nc2ccccc2)n1)Nc1ccc2c(c1)OCCO2
Original SMILES is c1cc(cc(c1NC(=O)c2c(nns2)C)[N+](=O)[O-])OCC
The Scaffold is O=C(Nc1ccccc1)c1cnns1
Original SMILES is c1ccc2c(c1)ccn2CCNC(=S)NCCc3cc4ccc(cc4[nH]c3=O)C
The Scaffold is O=c1[nH]c2ccccc2cc1CCNC(=S)NCCn1ccc2ccccc21
Original SMILES is c1ccc(cc1)OCC(=O)Nc2nc-3c(s2)-c4cccc5c4c3ccc5
The Scaffold is O=C(COc1ccccc1)Nc1nc2c(s1)-c1cccc3cccc-2c13
Original SMILES is c1ccc(c(c1)C(=O)Nc2nnc(o2)Cc3cccs3)SCC
The Scaffold is O=C(Nc1nnc(Cc2cccs2)o1)c1ccccc1
Original SMILES is c1cc(ccc1n2ccnc2SCC(=O)Nc3ccc(cc3)Br)F
The Scaffold is O=C(CSc1nccn1-c1ccccc1)Nc1ccccc1
Original SMILES is c1cc2c(cc1C(=O)NCc3ccc4c(c3)cc(n4C)C)OCO2
The Scaffold is O=C(NCc1ccc2[nH]ccc2c1)c1ccc2c(c1)OCO2
Original SMILES is c1ccc2c(c1)ccc(c2C=Nc3c(cccn3)O)O
The Scaffold is C(=Nc1ccccn1)c1cccc2ccccc12
Original SMILES is c1cc(oc1)C(=O)Nc2ccc(cc2)Nc3ccc(nn3)n4cccn4
The Scaffold is O=C(Nc1ccc(Nc2ccc(-n3cccn3)nn2)cc1)c1ccco1
Original SMILES is c1ccc(c(c1)C(=O)Nc2nc(cs2)c3ccccn3)Br
The Scaffold is O=C(Nc1nc(-c2ccccn2)cs1)c1ccccc1
Original SMILES is c1ccc(cc1)C2=NN(C(C2)c3ccc4c(c3)nccn4)C(=O)c5cccs5
The Scaffold is O=C(c1cccs1)N1N=C(c2ccccc2)CC1c1ccc2nccnc2c1
Original SMILES is c1cc(cc(c1)CS(=O)(=O)Nc2ccc3c(c2)N(CCC3)CCOC)C
The Scaffold is O=S(=O)(Cc1ccccc1)Nc1ccc2c(c1)NCCC2
Original SMILES is c1c(onc1NC(=O)Cn2cccc(c2=O)c3nc(no3)C4CC4)C
The Scaffold is O=C(Cn1cccc(-c2nc(C3CC3)no2)c1=O)Nc1ccon1
Original SMILES is c1cc(sc1)Cc2nnc(o2)NC(=O)c3ccc(cc3)S(=O)(=O)N(C)CCCC
The Scaffold is O=C(Nc1nnc(Cc2cccs2)o1)c1ccccc1
Original SMILES is c1cc2cccnc2c(c1)SCC(=O)NCCc3ccc(cc3)Cl
The Scaffold is O=C(CSc1cccc2cccnc12)NCCc1ccccc1
Original SMILES is c1cc(cc(c1)F)NC(=O)Nc2ccc(cc2)Nc3ccc(nn3)n4cccn4
The Scaffold is O=C(Nc1ccccc1)Nc1ccc(Nc2ccc(-n3cccn3)nn2)cc1
Original SMILES is c1cc2cccc3c2c(c1)C(=O)N(C3=O)CCN4CCN(CC4)CC(=O)c5ccc(cc5)OC
The Scaffold is O=C(CN1CCN(CCN2C(=O)c3cccc4cccc(c34)C2=O)CC1)c1ccccc1
Original SMILES is c1ccnc(c1)CN2Cc3c(ccc4c3OC(=Cc5cccc(c5)F)C4=O)OC2
The Scaffold is O=C1C(=Cc2ccccc2)Oc2c1ccc1c2CN(Cc2ccccn2)CO1
Original SMILES is c1ccc(c(c1)c2c(c(on2)C)C(=O)NCCn3c4c(cn3)c(nc(n4)SCC)NCCC)Cl
The Scaffold is O=C(NCCn1ncc2cncnc21)c1conc1-c1ccccc1
24 total smiles
23 different scaffolds
| MIT | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem |
Below are the scaffolds for each fold. Scaffold for fold 0 | i = 0
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()
scaffold_set = set()
for smiles in smiles_list:
mol = Chem.MolFromSmiles(smiles)
core = MurckoScaffold.GetScaffoldForMol(mol)
scaffold = Chem.MolToSmiles(core)
scaffold_set.add(scaffold)
print 'Original SMILES is \t{}'.format(smiles)
print 'The Scaffold is \t{}'.format(scaffold)
print
print '{} total smiles'.format(len(smiles_list))
print '{} different scaffolds'.format(len(scaffold_set)) | Original SMILES is c1cc(cc2c1CCCN2CCOC)NS(=O)(=O)c3c(c(c(c(c3C)C)C)C)C
The Scaffold is O=S(=O)(Nc1ccc2c(c1)NCCC2)c1ccccc1
Original SMILES is c1cc(ccc1CC)NC(=O)CSc2ncc(c(=O)[nH]2)S(=O)(=O)c3ccc(cc3C)C
The Scaffold is O=C(CSc1ncc(S(=O)(=O)c2ccccc2)c(=O)[nH]1)Nc1ccccc1
Original SMILES is c1ccc2c(c1)c(c[nH]2)CCNC(=O)Cc3csc(n3)Nc4cccc(c4)Cl
The Scaffold is O=C(Cc1csc(Nc2ccccc2)n1)NCCc1c[nH]c2ccccc12
Original SMILES is c1cc(ccc1c2nnc3n2CC(=C)S3)Br
The Scaffold is C=C1Cn2c(nnc2-c2ccccc2)S1
Original SMILES is c1cc(cc2c1CCCN2CCOC)N
The Scaffold is c1ccc2c(c1)CCCN2
5 total smiles
5 different scaffolds
| MIT | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem |
Scaffold for fold 1 | i = 1
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()
scaffold_set = set()
for smiles in smiles_list:
mol = Chem.MolFromSmiles(smiles)
core = MurckoScaffold.GetScaffoldForMol(mol)
scaffold = Chem.MolToSmiles(core)
scaffold_set.add(scaffold)
print 'Original SMILES is \t{}'.format(smiles)
print 'The Scaffold is \t{}'.format(scaffold)
print
print '{} total smiles'.format(len(smiles_list))
print '{} different scaffolds'.format(len(scaffold_set)) | Original SMILES is c1cc(cc(c1)Cl)Nc2nc(cs2)CC(=O)Nc3ccc4c(c3)OCCO4
The Scaffold is O=C(Cc1csc(Nc2ccccc2)n1)Nc1ccc2c(c1)OCCO2
Original SMILES is c1cc(cc(c1NC(=O)c2c(nns2)C)[N+](=O)[O-])OCC
The Scaffold is O=C(Nc1ccccc1)c1cnns1
Original SMILES is c1ccc2c(c1)ccn2CCNC(=S)NCCc3cc4ccc(cc4[nH]c3=O)C
The Scaffold is O=c1[nH]c2ccccc2cc1CCNC(=S)NCCn1ccc2ccccc21
Original SMILES is c1ccc(cc1)OCC(=O)Nc2nc-3c(s2)-c4cccc5c4c3ccc5
The Scaffold is O=C(COc1ccccc1)Nc1nc2c(s1)-c1cccc3cccc-2c13
4 total smiles
4 different scaffolds
| MIT | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem |
Scaffold for fold 2 | i = 2
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()
scaffold_set = set()
for smiles in smiles_list:
mol = Chem.MolFromSmiles(smiles)
core = MurckoScaffold.GetScaffoldForMol(mol)
scaffold = Chem.MolToSmiles(core)
scaffold_set.add(scaffold)
print 'Original SMILES is \t{}'.format(smiles)
print 'The Scaffold is \t{}'.format(scaffold)
print
print '{} total smiles'.format(len(smiles_list))
print '{} different scaffolds'.format(len(scaffold_set)) | Original SMILES is c1ccc(c(c1)C(=O)Nc2nnc(o2)Cc3cccs3)SCC
The Scaffold is O=C(Nc1nnc(Cc2cccs2)o1)c1ccccc1
Original SMILES is c1cc(ccc1n2ccnc2SCC(=O)Nc3ccc(cc3)Br)F
The Scaffold is O=C(CSc1nccn1-c1ccccc1)Nc1ccccc1
Original SMILES is c1cc2c(cc1C(=O)NCc3ccc4c(c3)cc(n4C)C)OCO2
The Scaffold is O=C(NCc1ccc2[nH]ccc2c1)c1ccc2c(c1)OCO2
Original SMILES is c1ccc2c(c1)ccc(c2C=Nc3c(cccn3)O)O
The Scaffold is C(=Nc1ccccn1)c1cccc2ccccc12
4 total smiles
4 different scaffolds
| MIT | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem |
Scaffold for fold 3 | i = 3
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()
scaffold_set = set()
for smiles in smiles_list:
mol = Chem.MolFromSmiles(smiles)
core = MurckoScaffold.GetScaffoldForMol(mol)
scaffold = Chem.MolToSmiles(core)
scaffold_set.add(scaffold)
print 'Original SMILES is \t{}'.format(smiles)
print 'The Scaffold is \t{}'.format(scaffold)
print
print '{} total smiles'.format(len(smiles_list))
print '{} different scaffolds'.format(len(scaffold_set)) | Original SMILES is c1cc(oc1)C(=O)Nc2ccc(cc2)Nc3ccc(nn3)n4cccn4
The Scaffold is O=C(Nc1ccc(Nc2ccc(-n3cccn3)nn2)cc1)c1ccco1
Original SMILES is c1ccc(c(c1)C(=O)Nc2nc(cs2)c3ccccn3)Br
The Scaffold is O=C(Nc1nc(-c2ccccn2)cs1)c1ccccc1
Original SMILES is c1ccc(cc1)C2=NN(C(C2)c3ccc4c(c3)nccn4)C(=O)c5cccs5
The Scaffold is O=C(c1cccs1)N1N=C(c2ccccc2)CC1c1ccc2nccnc2c1
Original SMILES is c1cc(cc(c1)CS(=O)(=O)Nc2ccc3c(c2)N(CCC3)CCOC)C
The Scaffold is O=S(=O)(Cc1ccccc1)Nc1ccc2c(c1)NCCC2
Original SMILES is c1c(onc1NC(=O)Cn2cccc(c2=O)c3nc(no3)C4CC4)C
The Scaffold is O=C(Cn1cccc(-c2nc(C3CC3)no2)c1=O)Nc1ccon1
5 total smiles
5 different scaffolds
| MIT | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem |
Scaffold for fold 4 | i = 4
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()
scaffold_set = set()
for smiles in smiles_list:
mol = Chem.MolFromSmiles(smiles)
core = MurckoScaffold.GetScaffoldForMol(mol)
scaffold = Chem.MolToSmiles(core)
scaffold_set.add(scaffold)
print 'Original SMILES is \t{}'.format(smiles)
print 'The Scaffold is \t{}'.format(scaffold)
print
print '{} total smiles'.format(len(smiles_list))
print '{} different scaffolds'.format(len(scaffold_set)) | Original SMILES is c1cc(sc1)Cc2nnc(o2)NC(=O)c3ccc(cc3)S(=O)(=O)N(C)CCCC
The Scaffold is O=C(Nc1nnc(Cc2cccs2)o1)c1ccccc1
Original SMILES is c1cc2cccnc2c(c1)SCC(=O)NCCc3ccc(cc3)Cl
The Scaffold is O=C(CSc1cccc2cccnc12)NCCc1ccccc1
Original SMILES is c1cc(cc(c1)F)NC(=O)Nc2ccc(cc2)Nc3ccc(nn3)n4cccn4
The Scaffold is O=C(Nc1ccccc1)Nc1ccc(Nc2ccc(-n3cccn3)nn2)cc1
Original SMILES is c1cc2cccc3c2c(c1)C(=O)N(C3=O)CCN4CCN(CC4)CC(=O)c5ccc(cc5)OC
The Scaffold is O=C(CN1CCN(CCN2C(=O)c3cccc4cccc(c34)C2=O)CC1)c1ccccc1
Original SMILES is c1ccnc(c1)CN2Cc3c(ccc4c3OC(=Cc5cccc(c5)F)C4=O)OC2
The Scaffold is O=C1C(=Cc2ccccc2)Oc2c1ccc1c2CN(Cc2ccccn2)CO1
Original SMILES is c1ccc(c(c1)c2c(c(on2)C)C(=O)NCCn3c4c(cn3)c(nc(n4)SCC)NCCC)Cl
The Scaffold is O=C(NCCn1ncc2cncnc21)c1conc1-c1ccccc1
6 total smiles
6 different scaffolds
| MIT | pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb | chao1224/pria_lifechem |
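The five per-fold cells above repeat the same loop. A compact helper, sketched in the same Python 2 style as the notebook, prints the scaffolds for any fold:

```python
def print_fold_scaffolds(fold_index):
    smiles_list = data_pd_list[fold_index][data_pd_list[fold_index][Target_name] == 1]['SMILES'].tolist()
    scaffold_set = set()
    for smiles in smiles_list:
        mol = Chem.MolFromSmiles(smiles)
        scaffold = Chem.MolToSmiles(MurckoScaffold.GetScaffoldForMol(mol))
        scaffold_set.add(scaffold)
        print 'Original SMILES is \t{}'.format(smiles)
        print 'The Scaffold is \t{}'.format(scaffold)
    print '{} total smiles, {} different scaffolds'.format(len(smiles_list), len(scaffold_set))

for i in range(k):
    print_fold_scaffolds(i)
```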
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorchWe will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where you can use a free GPU! The course authors thank Google and hope the party will not end. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (You don't need to install Keras; our notebook will install PyTorch itself) | # Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available! | _____no_output_____ | MIT | assignments/assignment3/PyTorch_CNN.ipynb | pavel2805/my_dlcoarse_ai |
Loading the data | # First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])) | _____no_output_____ | MIT | assignments/assignment3/PyTorch_CNN.ipynb | pavel2805/my_dlcoarse_ai |
Splitting the data into training and validation.Just in case, for more details see - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html | batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1) | _____no_output_____ | MIT | assignments/assignment3/PyTorch_CNN.ipynb | pavel2805/my_dlcoarse_ai |
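With the loaders and the `Flattener` helper in place, a model can be assembled with `nn.Sequential`. This is only a sketch of one possible starting architecture for the 3x32x32 SVHN images (10 digit classes), not the architecture the assignment requires:

```python
nn_model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),   # 3x32x32 -> 64x32x32
    nn.ReLU(inplace=True),
    nn.MaxPool2d(4),                  # -> 64x8x8
    nn.Conv2d(64, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(4),                  # -> 64x2x2
    Flattener(),
    nn.Linear(64 * 2 * 2, 10),        # 10 digit classes
).to(device)

loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
```

Batches from `train_loader` can then be moved to `device` and passed straight through `nn_model` in the usual training loop.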