**Predict the Big Mountain resort `Adult Weekend` price and print it out.** This is the expected price to present to management, based on our model and the characteristics of the resort in comparison to other ski resorts. | price=model4.predict(features)
price | _____no_output_____ | MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
**Print the Big Mountain resort's actual `Adult Weekend` price.** | ac=df[df['Name'].str.contains('Big Mountain')]
print ("The actual Big Mountain Resort adult weekend price is $%s " % ' '.join(map(str, ac.AdultWeekend))) | The actual Big Mountain Resort adult weekend price is $81.0
| MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
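To put the predicted and actual prices side by side (a sketch; it assumes `price` is the length-1 array returned by `model4.predict` above):
print("Modelled minus actual price: $%.2f" % (float(price) - float(ac.AdultWeekend)))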
**As part of reviewing the results, it is important to generate figures that visualize the data story. We can use the clusters we added to our data frame to create scatter plots comparing the Adult Weekend values to other characteristics. Run the example below to get started, then build two or three more figures to include in your data storytelling.** | plt.scatter(df['summit_elev'], df['vertical_drop'], c=df['clusters'], s=50, cmap='viridis', label ='clusters',edgecolors='white')
plt.scatter(ac['summit_elev'], ac['vertical_drop'], c='white', s=200,edgecolors='black')
sns.despine()
plt.xlabel('Summit Elevation (feet)')
plt.ylabel('Vertical Elevation Drop (feet)')
#plt.title('summit_elev by vertical_drop by cluster')
plt.savefig('figures/fig1.png',bbox_inches='tight')
sns.regplot(x="AdultWeekend", y="SkiableTerrain_ac", data=df[(df['SkiableTerrain_ac']<25000)], color ="#440154FF",scatter_kws={"s": 25})
plt.scatter(x="AdultWeekend", y="SkiableTerrain_ac", data=ac, c='white',s=200,edgecolors='black')
sns.despine()
plt.xlabel('Lift Ticket Price ($)')
plt.ylabel('Skiable Area (acres)')
plt.savefig('figures/fig2.png',bbox_inches='tight')
sns.regplot(x="AdultWeekend", y="daysOpenLastYear", data=df,color ="#21908CFF",scatter_kws={"s": 25})
sns.despine()
plt.scatter(x="AdultWeekend", y="daysOpenLastYear", data=ac, c='white',s=200,edgecolors='black')
plt.xlabel('Lift Ticket Price ($)')
plt.ylabel('Days Open Last Year')
plt.savefig('figures/fig3.png',bbox_inches='tight')
sns.set(style="ticks")
sns.jointplot(x=df['AdultWeekend'], y=df['daysOpenLastYear'], kind="hex", color="#FDE725FF")
sns.despine()
plt.xlabel('Lift Ticket Price ($)')
plt.ylabel('Days Open Last Year')
plt.savefig('figures/fig4.png',bbox_inches='tight')
plt.scatter(x="AdultWeekend", y="averageSnowfall", data=df_1, c='blue',s=200,edgecolors='black')
plt.xlabel('Lift Ticket Price ($)')
plt.ylabel('Average Snowfall')
plt.savefig('figures/fig3.png',bbox_inches='tight') | _____no_output_____ | MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
Finalize Code Making sure our code is well organized and easy to follow is an important step. This is the time to review the notebooks and Python scripts you've created and clean them up so they are easy to follow and succinct. Additionally, we will save our final model as a callable object using Pickle for future use in a data pipeline. Pickle is a module that serializes (and de-serializes) Python objects so that they can be stored and later restored as callable objects. It's used extensively in production environments where machine learning models are deployed at industrial scale! **Run the example code below to save out your callable model. Notice that we save it in the models folder we created in our previous guided capstone step.** | import pickle
s = pickle.dumps(model4)
from joblib import dump, load
dump(model4, 'models/regression_model_adultweekend.joblib') | _____no_output_____ | MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
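To reuse the saved model later, it can be loaded back with joblib (a quick sketch):
model_restored = load('models/regression_model_adultweekend.joblib')
# model_restored.predict(...) now behaves exactly like model4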
Finalize Documentation For model documentation, we want to save the model performance metrics as well as the features included in the final model. You could also save the performance metrics and coefficients of the other models you tried, in case you want to refer to them later. **Create a dataframe containing the coefficients and the model performance metrics and save it out as a csv file, then upload it to your github repository.** | performance_metrics=pd.DataFrame(abs(model4.coef_), X.columns, columns=['Coefficient'])
performance_metrics['Mean Absolute Error']= mean_absolute_error(y_test, ypred)
performance_metrics['Root Mean Squared Error']=np.sqrt(mean_squared_error(y_test, ypred))
performance_metrics['r2-testscore']=model4.score(X_test,y_test)
performance_metrics['r2-trainscore']=model4.score(X_train,y_train)
performance_metrics.to_csv(r'/Users/ajesh_mahto/Desktop/capstone_project/data/performance_metrics_model4.csv') | _____no_output_____ | MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
2d. Distributed training and monitoring In this notebook, we refactor to use the Experiment class instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in the failure handling that is necessary for distributed training. We also use TensorBoard to monitor the training. | import google.datalab.ml as ml
import tensorflow as tf
from tensorflow.contrib import layers
print(tf.__version__)
# print(ml.sdk_location)
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil | _____no_output_____ | Apache-2.0 | courses/machine_learning/tensorflow/d_experiment.ipynb | AmirQureshi/code-to-run- |
Input Read data created in Lab1a, but this time make it more general, so that we read in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph. | CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, num_epochs=None, batch_size=512, mode=tf.contrib.learn.ModeKeys.TRAIN):
def _input_fn():
filename_queue = tf.train.string_input_producer(
[filename], num_epochs=num_epochs, shuffle=True)
reader = tf.TextLineReader()
_, value = reader.read_up_to(filename_queue, num_records=batch_size)
value_column = tf.expand_dims(value, -1)
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
return _input_fn
def get_train():
return read_dataset('./taxi-train.csv', num_epochs=100, mode=tf.contrib.learn.ModeKeys.TRAIN)
def get_valid():
return read_dataset('./taxi-valid.csv', num_epochs=1, mode=tf.contrib.learn.ModeKeys.EVAL)
def get_test():
return read_dataset('./taxi-test.csv', num_epochs=1, mode=tf.contrib.learn.ModeKeys.EVAL) | _____no_output_____ | Apache-2.0 | courses/machine_learning/tensorflow/d_experiment.ipynb | AmirQureshi/code-to-run- |
Create features out of input data For now, pass these through (same as in the previous lab). | INPUT_COLUMNS = [
layers.real_valued_column('pickuplon'),
layers.real_valued_column('pickuplat'),
layers.real_valued_column('dropofflat'),
layers.real_valued_column('dropofflon'),
layers.real_valued_column('passengers'),
]
feature_cols = INPUT_COLUMNS | _____no_output_____ | Apache-2.0 | courses/machine_learning/tensorflow/d_experiment.ipynb | AmirQureshi/code-to-run- |
Experiment framework | import tensorflow.contrib.learn as tflearn
from tensorflow.contrib.learn.python.learn import learn_runner
import tensorflow.contrib.metrics as metrics
def experiment_fn(output_dir):
return tflearn.Experiment(
tflearn.LinearRegressor(feature_columns=feature_cols, model_dir=output_dir),
train_input_fn=get_train(),
eval_input_fn=get_valid(),
eval_metrics={
'rmse': tflearn.MetricSpec(
metric_fn=metrics.streaming_root_mean_squared_error
)
}
)
shutil.rmtree('taxi_trained', ignore_errors=True) # start fresh each time
learn_runner.run(experiment_fn, 'taxi_trained') | _____no_output_____ | Apache-2.0 | courses/machine_learning/tensorflow/d_experiment.ipynb | AmirQureshi/code-to-run- |
Monitoring with TensorBoard | from google.datalab.ml import TensorBoard
TensorBoard().start('./taxi_trained')
TensorBoard().list()
# to stop TensorBoard
TensorBoard().stop(23002)  # pass the pid reported by TensorBoard().list()
print('stopped TensorBoard')
TensorBoard().list() | _____no_output_____ | Apache-2.0 | courses/machine_learning/tensorflow/d_experiment.ipynb | AmirQureshi/code-to-run- |
Actor and Critic Method Preparing the packages | %load_ext autoreload
%autoreload 2
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
import sys
import os
HOME_PATH = '/content/drive/MyDrive/Colab Notebooks/baby-steps-of-rl-ja/exercise/day_3'
sys.path.append(HOME_PATH)
import numpy as np
import gym
from el_agent import ELAgent
from frozen_lake_util import show_q_value | _____no_output_____ | Apache-2.0 | exercise/day_3/actor_and_critic_method.ipynb | masatoomori/baby-steps-of-rl-ja |
Defining the Actor | class Actor(ELAgent):
def __init__(self, env):
super().__init__(epsilon=-1)
n_row = env.observation_space.n
n_col = env.action_space.n
self.actions = list(range(env.action_space.n))
self.Q = np.random.uniform(0, 1, n_row * n_col).reshape((n_row, n_col))
def softmax(self, x):
return np.exp(x) / np.sum(np.exp(x), axis=0)
def policy(self, s):
a = np.random.choice(self.actions, 1, p=self.softmax(self.Q[s]))
return a[0] | _____no_output_____ | Apache-2.0 | exercise/day_3/actor_and_critic_method.ipynb | masatoomori/baby-steps-of-rl-ja |
Defining the Critic | class Critic():
def __init__(self, env):
n_state = env.observation_space.n
self.V = np.zeros(n_state) | _____no_output_____ | Apache-2.0 | exercise/day_3/actor_and_critic_method.ipynb | masatoomori/baby-steps-of-rl-ja |
Actor & Critic 学習プロセスの定義 | class ActorCritic():
def __init__(self, actor_class, critic_class):
self.actor_class = actor_class
self.critic_class = critic_class
def train(self, env, episode_count=1000, gamma=0.9, learning_rate=0.1, render=False, report_interval=50):
actor = self.actor_class(env)
critic = self.critic_class(env)
actor.init_log()
for e in range(episode_count):
s = env.reset()
is_done = False
while not is_done:
if render:
env.render()
a = actor.policy(s)
state, reward, is_done, info = env.step(a)
gain = reward + gamma * critic.V[state]
estimated = critic.V[s]
td = gain - estimated
actor.Q[s][a] += learning_rate * td
critic.V[s] += learning_rate * td
s = state
else:
actor.log(reward)
if e != 0 and e % report_interval == 0:
actor.show_reward_log(episode=e)
return actor, critic | _____no_output_____ | Apache-2.0 | exercise/day_3/actor_and_critic_method.ipynb | masatoomori/baby-steps-of-rl-ja |
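For reference, the updates in `train` implement one-step temporal-difference (TD) learning: with learning rate $\alpha$ and discount $\gamma$, the TD error is

$$\delta = r + \gamma V(s') - V(s),$$

and both the actor's action preferences and the critic's state values move along it: $Q(s,a) \leftarrow Q(s,a) + \alpha\delta$ and $V(s) \leftarrow V(s) + \alpha\delta$, exactly as in the two `+= learning_rate * td` lines above.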
Training the Agent | def train():
trainer = ActorCritic(Actor, Critic)
env = gym.make("FrozenLakeEasy-v0")
actor, critic = trainer.train(env, episode_count=3000)
show_q_value(actor.Q)
actor.show_reward_log()
agent = train() | At Episode 50 average reward is 0.02 (+/-0.14).
At Episode 100 average reward is 0.0 (+/-0.0).
At Episode 150 average reward is 0.0 (+/-0.0).
At Episode 200 average reward is 0.06 (+/-0.237).
At Episode 250 average reward is 0.04 (+/-0.196).
At Episode 300 average reward is 0.02 (+/-0.14).
At Episode 350 average reward is 0.0 (+/-0.0).
At Episode 400 average reward is 0.02 (+/-0.14).
At Episode 450 average reward is 0.02 (+/-0.14).
At Episode 500 average reward is 0.0 (+/-0.0).
At Episode 550 average reward is 0.06 (+/-0.237).
At Episode 600 average reward is 0.08 (+/-0.271).
At Episode 650 average reward is 0.04 (+/-0.196).
At Episode 700 average reward is 0.06 (+/-0.237).
At Episode 750 average reward is 0.04 (+/-0.196).
At Episode 800 average reward is 0.1 (+/-0.3).
At Episode 850 average reward is 0.08 (+/-0.271).
At Episode 900 average reward is 0.14 (+/-0.347).
At Episode 950 average reward is 0.12 (+/-0.325).
At Episode 1000 average reward is 0.3 (+/-0.458).
At Episode 1050 average reward is 0.44 (+/-0.496).
At Episode 1100 average reward is 0.46 (+/-0.498).
At Episode 1150 average reward is 0.72 (+/-0.449).
At Episode 1200 average reward is 0.84 (+/-0.367).
At Episode 1250 average reward is 0.84 (+/-0.367).
At Episode 1300 average reward is 0.88 (+/-0.325).
At Episode 1350 average reward is 0.88 (+/-0.325).
At Episode 1400 average reward is 0.94 (+/-0.237).
At Episode 1450 average reward is 0.9 (+/-0.3).
At Episode 1500 average reward is 0.9 (+/-0.3).
At Episode 1550 average reward is 0.94 (+/-0.237).
At Episode 1600 average reward is 0.92 (+/-0.271).
At Episode 1650 average reward is 0.98 (+/-0.14).
At Episode 1700 average reward is 0.92 (+/-0.271).
At Episode 1750 average reward is 0.96 (+/-0.196).
At Episode 1800 average reward is 0.94 (+/-0.237).
At Episode 1850 average reward is 0.98 (+/-0.14).
At Episode 1900 average reward is 0.98 (+/-0.14).
At Episode 1950 average reward is 0.98 (+/-0.14).
At Episode 2000 average reward is 1.0 (+/-0.0).
At Episode 2050 average reward is 0.98 (+/-0.14).
At Episode 2100 average reward is 0.98 (+/-0.14).
At Episode 2150 average reward is 0.96 (+/-0.196).
At Episode 2200 average reward is 1.0 (+/-0.0).
At Episode 2250 average reward is 1.0 (+/-0.0).
At Episode 2300 average reward is 1.0 (+/-0.0).
At Episode 2350 average reward is 0.98 (+/-0.14).
At Episode 2400 average reward is 0.98 (+/-0.14).
At Episode 2450 average reward is 0.98 (+/-0.14).
At Episode 2500 average reward is 0.96 (+/-0.196).
At Episode 2550 average reward is 1.0 (+/-0.0).
At Episode 2600 average reward is 0.98 (+/-0.14).
At Episode 2650 average reward is 1.0 (+/-0.0).
At Episode 2700 average reward is 0.98 (+/-0.14).
At Episode 2750 average reward is 1.0 (+/-0.0).
At Episode 2800 average reward is 1.0 (+/-0.0).
At Episode 2850 average reward is 0.98 (+/-0.14).
At Episode 2900 average reward is 0.98 (+/-0.14).
At Episode 2950 average reward is 0.98 (+/-0.14).
| Apache-2.0 | exercise/day_3/actor_and_critic_method.ipynb | masatoomori/baby-steps-of-rl-ja |
[Oregon Curriculum Network](http://www.4dsolutions.net/ocn) [Discovering Math with Python](Introduction.ipynb) Quadrays and Graphene (Figure: graphene lattice, by AlexanderAlUS, own work, CC BY-SA 3.0.) "Graphene" refers to a hexagonal grid of cells, the vertexes being carbon atoms. However, any hexagonal mesh, such as for game boards, might be referred to as a "graphene pattern". Quadrays are explained [in other Notebooks](QuadraysJN.ipynb). Four basis vectors emanate to the corners of a volume 1 tetrahedron of edges 2R or 1D, in the canonical version, where R and D refer respectively to the Radius and Diameter of imaginary spheres packed together, giving this home base tetrahedron. The Quadrays {2, 1, 1, 0}, meaning all 12 permutations of those numbers, fan out from (0,0,0,0) to the corners of a cuboctahedron. | from itertools import permutations
g = permutations((2,1,1,0))
unique = {p for p in g} # set comprehension
print(unique) | {(0, 1, 1, 2), (1, 0, 1, 2), (2, 0, 1, 1), (0, 2, 1, 1), (0, 1, 2, 1), (1, 2, 1, 0), (1, 1, 2, 0), (2, 1, 1, 0), (1, 0, 2, 1), (1, 2, 0, 1), (2, 1, 0, 1), (1, 1, 0, 2)}
| MIT | GrapheneWithQrays.ipynb | 4dsolutions/Python5 |
I have [elsewhere](Generating%20the%20FCC.ipynb) used this fact to algorithmically generate consecutive shells of 12, 42, 92, 162... spheres (balls) respectively; a growing cuboctahedron of $10 S^{2} + 2$ balls per shell S = 1, 2, 3... (1 when S = 0). However, suppose we don't want to grow the grid omni-directionally, but only in a plane. Each ball will then be surrounded by six neighbors, meaning it sits at the center of a hexagon. The cuboctahedron supplies four such hexagons, i.e. its 24 edges comprise four hexagons orbiting the center. We may use any one of them.

The Algorithm The algorithm begins with a planar subset of the vectors {2, 1, 1, 0} used to compute the six vertexes surrounding (0,0,0,0). We may call these six vertexes "carbons". Then hop to neighboring hexagon centers (where no carbon is located) using an additional set of vectors. From these new centers, compute the six surrounding carbons again, some of which will have already been found, as neighbors share fences, with three faces (centers) sharing each fence post (carbon). Using the Python set object, the algorithm filters to keep only unique carbons. Keep track of hexagon centers, a dual mesh, in a separate set. (0,0,0,0) will be the first center (ring0). If qrays r, s are 60 degrees apart on the same hexagon, pointing to neighboring carbons, then r + s will be the "hop" vector over the fence (edge) to the neighboring "yard" (face), or center. Once we have six vertex vectors from a center, computing the six hop vectors (for jumping over the fences) will be a matter of summing pairs of adjacent (60 degree separated) vectors. We only keep new centers, i.e. those of the next ring (see below).

What about edges? As we go around a hexagon in 60 degree increments, say in a clockwise direction, we will be finding edges in terms of adjacent ball pairs. To avoid redundancy, (ball_a, ball_b) -- any edge -- [will be sorted](https://github.com/4dsolutions/SAISOFT/blob/master/OrderingPolys.ipynb). Any two quadrays may be ordered as 4-tuples, e.g. (2, 1, 1, 0) is "greater than" (2, 1, 0, 1). With unique representations of any edge, in the form of sorted tuples of qray namedtuples, we will be able to employ the same general technique employed with vertexes (carbons) and face centers: check the existing database for uniqueness and throw away (filter) anything already in the database. Sets will not allow duplicates.

The first step is to isolate six of the twelve from {2, 1, 1, 0} that define a hexagon.

| Shape | Volume | Vertex Inventory (sum of Quadrays) |
|---|---|---|
| Tetrahedron | 1 | A, B, C, D |
| Inverse Tetrahedron | 1 | E, F, G, H = B+C+D, A+C+D, A+B+D, A+B+C |
| Duo-Tet Cube | 3 | A through H |
| Octahedron | 4 | I, J, K, L, M, N = A+B, A+C, A+D, B+C, B+D, C+D |
| Rhombic Dodecahedron | 6 | A through N |
| Cuboctahedron | 20 | O,P,Q,R,S,T = I+J, I+K, I+L, I+M, N+J, N+K; U,V,W,X,Y,Z = N+L, N+M, J+L, L+M, M+K, K+J |

One of the hexagons is TZOQXV. Do you see it in the above graphic? Another one is TYRQWS. If we regenerate all of the vectors A-Z mentioned above, we'll have a vocabulary suitable for graphene grid development, and then some. | from qrays import Qvector, IVM
A, B, C, D = Qvector((1,0,0,0)), Qvector((0,1,0,0)), Qvector((0,0,1,0)), Qvector((0,0,0,1))
E,F,G,H = B+C+D, A+C+D, A+B+D, A+B+C
I,J,K,L,M,N = A+B, A+C, A+D, B+C, B+D, C+D
O,P,Q,R,S,T = I+J, I+K, I+L, I+M, N+J, N+K; U,V,W,X,Y,Z = N+L, N+M, J+L, L+M, M+K, K+J
# two "beacons" of six spokes
hexrays = [T, Z, O, Q, X, V] # to surrounding carbon atoms
hoprays = [T+Z, Z+O, O+Q, Q+X, X+V, V+T] # to neighboring (vacant) hex centers
(T.angle(Z), Z.angle(O), O.angle(Q), Q.angle(X), X.angle(V), V.angle(T)) | _____no_output_____ | MIT | GrapheneWithQrays.ipynb | 4dsolutions/Python5 |
Let's verify that, going around the hexagon, each pair of consecutive hexrays is 60 degrees apart, and likewise for the hoprays, the vectors we'll use to jump over the fence to neighboring hexagon centers. | (hoprays[0].angle(hoprays[1]),
hoprays[1].angle(hoprays[2]),
hoprays[2].angle(hoprays[3]),
hoprays[3].angle(hoprays[4]),
hoprays[4].angle(hoprays[5]),
hoprays[5].angle(hoprays[0])) | _____no_output_____ | MIT | GrapheneWithQrays.ipynb | 4dsolutions/Python5 |
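As a quick illustration of the sorted-tuple convention for edges described above (a sketch; `.coords` gives the namedtuple form of a Qvector, just as the algorithm below uses it):
edge = tuple(sorted([T.coords, Z.coords]))
print(edge)  # an order-independent, hashable representation of the T-Z edge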
Looks like we're in business!As with the growing cuboctahedron and the CCP packing, it makes sense to think in terms of consecutive rings.The [hexagonal coordination sequence](https://oeis.org/A008458) is generated by: | def A008458(n):
# OEIS number
if n == 0:
return 1
return 6 * n
[A008458(x) for x in range(10)] | _____no_output_____ | MIT | GrapheneWithQrays.ipynb | 4dsolutions/Python5 |
I will use this as a check while the algorithm generates multiple rings. | centers = {IVM(0,0,0,0)} # center face
edges = set() # no duplicates permitted
carbons = set()
ring0 = [Qvector((0,0,0,0))]
def next_ring(ring):
"""
Use only the most recently added hexagonal ring
of face centers to compute the next ring, moving
outward: 1, 6, 12, 18, 24...
"""
new_faces = []
for face in ring:
verts = []
# CARBONS
for spoke in hexrays:
v = face + spoke
carbons.add(v.coords) # just the namedtuple is added to the set
verts.append(v)
# EDGES
for bond in zip(verts, verts[1:] + [verts[0]]):
# adding carbon-to-carbon bonds if not already in the set
edge = tuple(sorted([bond[0].coords, bond[1].coords]))
edges.add(edge)
# CENTERS
for jump in hoprays:
neighbor = face + jump
previous = len(centers)
centers.add(neighbor.coords)
if len(centers) > previous: # if True, face is new
new_faces.append(neighbor)
return new_faces
def rings(n):
prev = ring0
for ring in range(n):
print("Ring: {:3} Number: {:4}".format(ring, len(prev)))
nxt = next_ring(prev)
prev = nxt
rings(12) | Ring: 0 Number: 1
Ring: 1 Number: 6
Ring: 2 Number: 12
Ring: 3 Number: 18
Ring: 4 Number: 24
Ring: 5 Number: 30
Ring: 6 Number: 36
Ring: 7 Number: 42
Ring: 8 Number: 48
Ring: 9 Number: 54
Ring: 10 Number: 60
Ring: 11 Number: 66
| MIT | GrapheneWithQrays.ipynb | 4dsolutions/Python5 |
Note these are the expected numbers for consecutive rings.Now that we have our database, it's time to generate some graphical output. As with the FCC, I'll use [POV-Ray's scene description language](http://www.4dsolutions.net/ocn/numeracy0.html) and then render in [POV-Ray](http://www.povray.org). We just want to look at the edges and carbon atom vertexes. | sph = """sphere { %s 0.1 texture { pigment { color rgb <1,0,0> } } }"""
cyl = """cylinder { %s %s 0.05 texture { pigment { color rgb <1.0, 0.65, 0.0> } } }"""
def make_graphene(fname="../c6xty/graphene.pov", append=True):
"""
Scan through carbons, edges, converting to XYZ and embedding
in POV-Ray Scene Description Language
"""
if append:
pov = open(fname, "a")
else:
pov = open(fname, "w")
# graphene will be included as a single object in the
# parent povray script, where lighting, camera position,
# and background are defined
print("#declare graphene = union{", file=pov)
for atom in carbons:
v = Qvector(atom).xyz()
s = sph % "<{0.x}, {0.y}, {0.z}>".format(v)
print(s, file=pov)
for bond in edges:
v0, v1 = bond
v0 = Qvector(v0).xyz()
v1 = Qvector(v1).xyz()
c = cyl % ("<{0.x}, {0.y}, {0.z}>".format(v0), "<{0.x}, {0.y}, {0.z}>".format(v1))
print(c, file=pov)
print("}\n", file=pov)
make_graphene(append=False) | _____no_output_____ | MIT | GrapheneWithQrays.ipynb | 4dsolutions/Python5 |
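As a final sanity check (a sketch), the sizes of the three sets summarize what was just written out to POV-Ray:
print(len(carbons), "carbons,", len(edges), "edges,", len(centers), "hex centers")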
(image-segmentation:relabel-sequential)= Sequential object (re-)labeling As mentioned above, depending on the use case it might be important that objects in an image are labeled sequentially. For example, a post-processing algorithm for label images might crash if we pass it a label image with missing labels. Hence, we should know how to relabel an image sequentially. | import numpy as np
from skimage.io import imread
from skimage.segmentation import relabel_sequential
import pyclesperanto_prototype as cle | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
Our starting point is a label image with labels 1-8, where some labels are not present: | label_image = imread("../../data/label_map_with_index_gaps.tif")
cle.imshow(label_image, labels=True) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
When measuring the maximum intensity in the image, we can see that this label image containing 4 labels is obviously not sequentially labeled. | np.max(label_image) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
We can use the `unique` function to figure out which labels are present: | np.unique(label_image) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
Sequential labeling We can now relabel this image and remove these gaps using [scikit-image's `relabel_sequential()` function](https://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.relabel_sequential). We assign the two additional return values to `_` since we're not interested in them here; `relabel_sequential` returns three things, but we only need the first. | relabeled, _, _ = relabel_sequential(label_image)
cle.imshow(relabeled, labels=True) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
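The two discarded return values are the forward and inverse label maps, which record how the original labels correspond to the new sequential ones. A sketch, in case you need to trace identities:
relabeled2, forward_map, inverse_map = relabel_sequential(label_image)
print(forward_map)  # maps original label values to their new sequential values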
Afterwards, the unique labels should be sequential: | np.unique(relabeled) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
Also pyclesperanto has a function for relabeling label images sequentially. The result is supposed identical to the result in scikit-image. It just doesn't return the additional values. | relabeled1 = cle.relabel_sequential(label_image)
cle.imshow(relabeled1, labels=True) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
Reverting sequential labelingIn some cases we apply an operation to a label image that returns a new label image with less labels that are sequentially labeled but the label-identity is lost. This happens for example when excluding labels from the label image that are too small. | large_labels = cle.exclude_small_labels(relabeled, maximum_size=260)
cle.imshow(large_labels, labels=True, max_display_intensity=4)
np.unique(large_labels) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
To restore the original label identities, we need to multiply a binary image representing the remaining labels with the original label image. | binary_remaining_labels = large_labels > 0
cle.imshow(binary_remaining_labels)
large_labels_with_original_identity = binary_remaining_labels * relabeled
cle.imshow(large_labels_with_original_identity, labels=True, max_display_intensity=4)
np.unique(large_labels_with_original_identity) | _____no_output_____ | CC-BY-4.0 | docs/20_image_segmentation/15_sequential_labeling.ipynb | rayanirban/BioImageAnalysisNotebooks |
Multiple single-step forecast models studied in Zoumpekas et al. (2020) | import random
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import Callback
from tensorflow.keras.layers import Dense, Input, Conv1D, LSTM, GRU, Bidirectional, Dropout, Flatten
from tensorflow.keras import Model, Sequential
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
l2reg = l2(l2=0.0001)
cnn2 = Sequential([
Input(shape=(6,2), name="conv1d_1_input"),
Conv1D(80, kernel_size=3, strides=1, activation="relu", kernel_regularizer=l2reg, bias_regularizer=l2reg, name="conv1d_1"),
Dropout(0.20, name="dropout_1"),
Conv1D(2, kernel_size=3, strides=2, activation="linear", kernel_regularizer=l2reg, bias_regularizer=l2reg, name="conv1d_2"),
Flatten(),
Dense(1, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="dense_1")
])
cnn3 = Sequential([
Input(shape=(6,2), name="conv1d_1_input"),
Conv1D(80, kernel_size=3, strides=1, activation="relu", kernel_regularizer=l2reg, bias_regularizer=l2reg, name="conv1d_1"),
Dropout(0.20, name="dropout_1"),
Conv1D(40, kernel_size=2, strides=1, activation="relu", kernel_regularizer=l2reg, bias_regularizer=l2reg, name="conv1d_2"),
Dropout(0.20, name="dropout_2"),
Conv1D(2, kernel_size=2, strides=2, activation="linear", kernel_regularizer=l2reg, bias_regularizer=l2reg, name="conv1d_3"),
Flatten(),
Dense(1, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="dense_1")
])
lstm = Sequential([
Input(shape=(6,2), name="lstm_1_input"),
LSTM(50, activation="tanh", recurrent_dropout=0.2, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="lstm_1"),
Dropout(0.20, name="dropout_1"),
Dense(1, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="dense_1")
])
slstm = Sequential([
Input(shape=(6,2), name="lstm_1_input"),
LSTM(50, activation="tanh", return_sequences=True, recurrent_dropout=0.2, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="lstm_1"),
Dropout(0.20, name="dropout_1"),
LSTM(50, activation="tanh", recurrent_dropout=0.2, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="lstm_2"),
Dropout(0.20, name="dropout_2"),
Dense(1, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="dense_1")
])
bilstm = Sequential([
Input(shape=(6,2), name="lstm_1_input"),
Bidirectional(LSTM(50, activation="tanh", recurrent_dropout=0.2, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="lstm_1")),
Dropout(0.20, name="dropout_1"),
Dense(1, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="dense_1")
])
gru = Sequential([
Input(shape=(6,2), name="gru_1_input"),
GRU(50, activation="tanh", recurrent_dropout=0.2, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="gru_1"),
Dropout(0.20, name="dropout_1"),
Dense(1, kernel_regularizer=l2reg, bias_regularizer=l2reg, name="dense_1")
])
from tensorflow.keras.utils import plot_model
plot_model(cnn2, show_shapes=True, show_layer_names=True, dpi=64)
plot_model(cnn3, show_shapes=True, show_layer_names=True, dpi=64)
plot_model(lstm, show_shapes=True, show_layer_names=True, dpi=64)
plot_model(slstm, show_shapes=True, show_layer_names=True, dpi=64)
plot_model(bilstm, show_shapes=True, show_layer_names=True, dpi=64)
plot_model(gru, show_shapes=True, show_layer_names=True, dpi=64)
# Build data generator
def datagen(data, seq_len, batch_size, targetcol):
"As a generator to produce samples for Keras model"
# Learn about the data's features and time axis
input_cols = [c for c in data.columns if c != targetcol]
# Infinite loop to generate a batch
batch = []
while True:
# Pick one position, then clip a sequence length
while True:
t = random.choice(data.index)
n = (data.index == t).argmax()
if n-seq_len+1 < 0:
continue # this sample is not enough for one sequence length
frame = data.iloc[n-seq_len+1:n+1][input_cols].T
target = data.iloc[n+1][targetcol]
# extract 2D array
batch.append([frame, data.iloc[n+1][targetcol]])
break
# if we get enough for a batch, dispatch
if len(batch) == batch_size:
X, y = zip(*batch)
yield np.array(X, dtype="float32"), np.array(y, dtype="float32")
batch = []
def read_data(filename):
# Read data into pandas DataFrames
X = pd.read_csv(filename, index_col="Timestamp")
X.index = pd.to_datetime(X.index, unit="s")
# target is next day closing price
cols = X.columns
X["Target"] = X["Close"].shift(-1)
X.dropna(inplace=True)
return X
# Read data
TRAINFILE = "dataset/Ethereum_price_data_train.csv"
VALIDFILE = "dataset/EThereum_price_data_test_29_May_2018-30_December_2018.csv"
df_train = read_data(TRAINFILE)
df_valid = read_data(VALIDFILE)
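# Quick smoke test of the generator (a sketch): draw one batch and check shapes.
# The dimensions after the batch axis should match the models' Input(shape=(6, 2)),
# i.e. (number of feature columns, seq_len).
Xb, yb = next(datagen(df_train, seq_len=2, batch_size=4, targetcol="Target"))
print(Xb.shape, yb.shape)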
# Training with mini-batches of 128 for up to 50 epochs (Adam optimizer, early stopping)
seq_len = 2
batch_size = 128
n_epochs = 50
n_steps = 400
checkpoint_path = "cnn2-{epoch}-{val_loss:.0f}.h5"
callbacks = [
ModelCheckpoint(checkpoint_path,
monitor='val_loss', mode="max",
verbose=0, save_best_only=True, save_weights_only=False, save_freq="epoch"),
EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
]
cnn2.compile(optimizer="adam", loss="mse")
cnn2.fit(datagen(df_train, seq_len, batch_size, "Target"),
validation_data=datagen(df_valid, seq_len, batch_size, "Target"),
epochs=n_epochs, steps_per_epoch=n_steps, validation_steps=10, verbose=1, callbacks=callbacks) | Epoch 1/50
400/400 [==============================] - 52s 129ms/step - loss: 5134155.0000 - val_loss: 36836.4336
Epoch 2/50
400/400 [==============================] - 51s 128ms/step - loss: 784477.2500 - val_loss: 21840.3535
Epoch 3/50
400/400 [==============================] - 51s 129ms/step - loss: 224073.4531 - val_loss: 6584.0576
Epoch 4/50
400/400 [==============================] - 51s 129ms/step - loss: 84045.1719 - val_loss: 5507.8584
Epoch 5/50
400/400 [==============================] - 52s 129ms/step - loss: 38750.0430 - val_loss: 1258.1412
Epoch 6/50
400/400 [==============================] - 52s 130ms/step - loss: 20047.4297 - val_loss: 407.4286
Epoch 7/50
400/400 [==============================] - 51s 129ms/step - loss: 14825.2490 - val_loss: 1907.7725
Epoch 8/50
400/400 [==============================] - 51s 129ms/step - loss: 7697.8579 - val_loss: 310.8632
Epoch 9/50
400/400 [==============================] - 52s 129ms/step - loss: 3882.0569 - val_loss: 84.9826
Epoch 10/50
400/400 [==============================] - 51s 129ms/step - loss: 4043.2842 - val_loss: 1336.3011
Epoch 11/50
400/400 [==============================] - 51s 129ms/step - loss: 2533.0771 - val_loss: 73.1278
Epoch 12/50
400/400 [==============================] - 52s 129ms/step - loss: 1831.2122 - val_loss: 44.2584
Epoch 13/50
400/400 [==============================] - 51s 129ms/step - loss: 1814.2236 - val_loss: 39.1109
Epoch 14/50
400/400 [==============================] - 52s 129ms/step - loss: 1474.9673 - val_loss: 124.2770
Epoch 15/50
400/400 [==============================] - 51s 129ms/step - loss: 1204.8274 - val_loss: 39.5159
Epoch 16/50
400/400 [==============================] - 51s 129ms/step - loss: 3227.2451 - val_loss: 2031.0986
| MIT | multimodel-1obs-1step.ipynb | righthandabacus/market_notebooks |
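After training, we can sanity-check the fitted network on held-out data, reusing the same generator (a sketch):
Xb, yb = next(datagen(df_valid, seq_len, batch_size, "Target"))
print("validation-batch MSE:", np.mean((cnn2.predict(Xb).ravel() - yb) ** 2))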
Scaling Criteo: Triton Inference with HugeCTR Overview The last step is to deploy the ETL workflow and saved model to production. In the production setting, we want to transform the input data as during training (ETL). We need to apply the same mean/std to continuous features and use the same categorical mapping to convert categories to contiguous integers before we use the deep learning model for a prediction. Therefore, we deploy the NVTabular workflow together with the HugeCTR model as an ensemble model to Triton Inference Server. The ensemble model guarantees that the same transformations are applied to the raw inputs. Learning objectives In this notebook, we learn how to deploy our models to production:

- Use **NVTabular** to generate config and model files for Triton Inference Server
- Deploy an ensemble of the NVTabular workflow and the HugeCTR model
- Send an example request to Triton Inference Server

Inference with Triton and HugeCTR First, we need to generate the Triton Inference Server configurations and save the models in the correct format. In the previous notebooks [02-ETL-with-NVTabular](./02-ETL-with-NVTabular.ipynb) and [03-Training-with-HugeCTR](./03-Training-with-HugeCTR.ipynb) we saved the NVTabular workflow and HugeCTR model to disk. We will load them here. Saving Ensemble Model for Triton Inference Server After training terminates, we can see that two `.model` files are generated. We need to move them inside a temporary folder, like `criteo_hugectr/1`. Let's create these folders. | import os
import numpy as np | _____no_output_____ | Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
Now we move our saved `.model` files inside 1 folder. We use only the last snapshot after `9600` iterations. | os.system("mv *9600.model ./criteo_hugectr/1/") | _____no_output_____ | Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
Now we can save our models to be deployed at the inference stage. To do so we will use export_hugectr_ensemble method below. With this method, we can generate the config.pbtxt files automatically for each model. In doing so, we should also create a hugectr_params dictionary, and define the parameters like where the amazonreview.json file will be read, slots which corresponds to number of categorical features, `embedding_vector_size`, `max_nnz`, and `n_outputs` which is number of outputs.The script below creates an ensemble triton server model where- workflow is the the nvtabular workflow used in preprocessing,- hugectr_model_path is the HugeCTR model that should be served. - This path includes the .model files.name is the base name of the various triton models- output_path is the path where is model will be saved to.- cats are the categorical column names- conts are the continuous column names We need to load the NVTabular workflow first | import nvtabular as nvt
BASE_DIR = os.environ.get("BASE_DIR", "/raid/data/criteo")
input_path = os.path.join(BASE_DIR, "test_dask/output")
workflow = nvt.Workflow.load(os.path.join(input_path, "workflow")) | _____no_output_____ | Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
Let's clear the directory | os.system("rm -rf /model/*")
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/criteo/1/criteo.json"
hugectr_params["slots"] = 26
hugectr_params["max_nnz"] = 1
hugectr_params["embedding_vector_size"] = 128
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(
workflow=workflow,
hugectr_model_path="./criteo_hugectr/1/",
hugectr_params=hugectr_params,
name="criteo",
output_path="/model/",
label_columns=["label"],
cats=["C" + str(x) for x in range(1, 27)],
conts=["I" + str(x) for x in range(1, 14)],
max_batch_size=64,
) | _____no_output_____ | Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We can take a look at the generated files. | !tree /model | [01;34m/model[00m
├── [01;34mcriteo[00m
│ ├── [01;34m1[00m
│ │ ├── 0_opt_sparse_9600.model
│ │ ├── [01;34m0_sparse_9600.model[00m
│ │ │ ├── emb_vector
│ │ │ ├── key
│ │ │ └── slot_id
│ │ ├── _dense_9600.model
│ │ ├── _opt_dense_9600.model
│ │ └── criteo.json
│ └── config.pbtxt
├── [01;34mcriteo_ens[00m
│ ├── [01;34m1[00m
│ └── config.pbtxt
└── [01;34mcriteo_nvt[00m
├── [01;34m1[00m
│ ├── model.py
│ └── [01;34mworkflow[00m
│ ├── [01;34mcategories[00m
│ │ ├── unique.C1.parquet
│ │ ├── unique.C10.parquet
│ │ ├── unique.C11.parquet
│ │ ├── unique.C12.parquet
│ │ ├── unique.C13.parquet
│ │ ├── unique.C14.parquet
│ │ ├── unique.C15.parquet
│ │ ├── unique.C16.parquet
│ │ ├── unique.C17.parquet
│ │ ├── unique.C18.parquet
│ │ ├── unique.C19.parquet
│ │ ├── unique.C2.parquet
│ │ ├── unique.C20.parquet
│ │ ├── unique.C21.parquet
│ │ ├── unique.C22.parquet
│ │ ├── unique.C23.parquet
│ │ ├── unique.C24.parquet
│ │ ├── unique.C25.parquet
│ │ ├── unique.C26.parquet
│ │ ├── unique.C3.parquet
│ │ ├── unique.C4.parquet
│ │ ├── unique.C5.parquet
│ │ ├── unique.C6.parquet
│ │ ├── unique.C7.parquet
│ │ ├── unique.C8.parquet
│ │ └── unique.C9.parquet
│ ├── column_types.json
│ ├── metadata.json
│ └── workflow.pkl
└── config.pbtxt
9 directories, 40 files
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We need to write a configuration file with the stored model weights and model configuration. | %%writefile '/model/ps.json'
{
"supportlonglong": true,
"models": [
{
"model": "criteo",
"sparse_files": ["/model/criteo/1/0_sparse_9600.model"],
"dense_file": "/model/criteo/1/_dense_9600.model",
"network_file": "/model/criteo/1/criteo.json",
"max_batch_size": "64",
"gpucache": "true",
"hit_rate_threshold": "0.9",
"gpucacheper": "0.5",
"num_of_worker_buffer_in_pool": "4",
"num_of_refresher_buffer_in_pool": "1",
"cache_refresh_percentage_per_iteration": 0.2,
"deployed_device_list": ["0"],
"default_value_for_each_table": ["0.0", "0.0"],
}
],
} | Overwriting /model/ps.json
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
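As a quick check (a sketch), the file should now parse as strict JSON:
import json
with open("/model/ps.json") as f:
    ps_conf = json.load(f)
print(ps_conf["models"][0]["model"])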
Loading Ensemble Model with Triton Inference Server So far we have only saved the models for Triton Inference Server. We started Triton Inference Server in explicit mode, meaning that we need to send explicit requests for Triton to load each model. We connect to the Triton Inference Server. | import tritonhttpclient
try:
triton_client = tritonhttpclient.InferenceServerClient(url="localhost:8000", verbose=True)
print("client created.")
except Exception as e:
print("channel creation failed: " + str(e)) | client created.
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We deactivate warnings. | import warnings
warnings.filterwarnings("ignore") | _____no_output_____ | Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We check if the server is alive. | triton_client.is_server_live() | GET /v2/health/live, headers None
<HTTPSocketPoolResponse status=200 headers={'content-length': '0', 'content-type': 'text/plain'}>
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We check the available models in the repositories:- criteo_ens: Ensemble - criteo_nvt: NVTabular - criteo: HugeCTR model | triton_client.get_model_repository_index() | POST /v2/repository/index, headers None
<HTTPSocketPoolResponse status=200 headers={'content-type': 'application/json', 'content-length': '93'}>
bytearray(b'[{"name":".ipynb_checkpoints"},{"name":"criteo"},{"name":"criteo_ens"},{"name":"criteo_nvt"}]')
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We load the models individually. | %%time
triton_client.load_model(model_name="criteo_nvt")
%%time
triton_client.load_model(model_name="criteo")
%%time
triton_client.load_model(model_name="criteo_ens") | POST /v2/repository/models/criteo_ens/load, headers None
<HTTPSocketPoolResponse status=200 headers={'content-type': 'application/json', 'content-length': '0'}>
Loaded model 'criteo_ens'
CPU times: user 4.7 ms, sys: 0 ns, total: 4.7 ms
Wall time: 20.2 s
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
Example Request to Triton Inference ServerNow, the models are loaded and we can create a sample request. We read an example **raw batch** for inference. | # Get dataframe library - cudf or pandas
from merlin.core.dispatch import get_lib
df_lib = get_lib()
# read in the workflow (to get input/output schema to call triton with)
batch_path = os.path.join(BASE_DIR, "converted/criteo")
batch = df_lib.read_parquet(os.path.join(batch_path, "*.parquet"), num_rows=3)
batch = batch[[x for x in batch.columns if x != "label"]]
print(batch) | I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 ... C17 \
0 5 110 <NA> 16 <NA> 1 0 14 7 1 ... -771205462
1 32 3 5 <NA> 1 0 0 61 5 0 ... -771205462
2 <NA> 233 1 146 1 0 0 99 7 0 ... -771205462
C18 C19 C20 C21 C22 C23 \
0 -1206449222 -1793932789 -1014091992 351689309 632402057 -675152885
1 -1578429167 -1793932789 -20981661 -1556988767 -924717482 391309800
2 1653545869 -1793932789 -1014091992 351689309 632402057 -675152885
C24 C25 C26
0 2091868316 809724924 -317696227
1 1966410890 -1726799382 -1218975401
2 883538181 -10139646 -317696227
[3 rows x 39 columns]
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We prepare the batch for inference by using correct column names and data types. We use the same datatypes as defined in our dataframe. | batch.dtypes
import tritonclient.http as httpclient
from tritonclient.utils import np_to_triton_dtype
inputs = []
col_names = list(batch.columns)
col_dtypes = [np.int32] * len(col_names)
for i, col in enumerate(batch.columns):
d = batch[col].fillna(0).values_host.astype(col_dtypes[i])
d = d.reshape(len(d), 1)
inputs.append(httpclient.InferInput(col_names[i], d.shape, np_to_triton_dtype(col_dtypes[i])))
inputs[i].set_data_from_numpy(d) | _____no_output_____ | Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
We send the request to the triton server and collect the last output. | # placeholder variables for the output
outputs = [httpclient.InferRequestedOutput("OUTPUT0")]
# build a client to connect to our server.
# This InferenceServerClient object is what we'll be using to talk to Triton.
# make the request with tritonclient.http.InferInput object
response = triton_client.infer("criteo_ens", inputs, request_id="1", outputs=outputs)
print("predicted sigmoid result:\n", response.as_numpy("OUTPUT0")) | POST /v2/models/criteo_ens/infer, headers {'Inference-Header-Content-Length': 3383}
b'{"id":"1","inputs":[{"name":"I1","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I2","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I3","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I4","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I5","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I6","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I7","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I8","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I9","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I10","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I11","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I12","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"I13","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C1","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C2","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C3","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C4","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C5","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C6","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C7","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C8","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C9","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C10","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C11","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C12","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C13","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C14","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C15","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C16","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C17","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C18","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C19","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C20","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C21","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C22","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C23","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C24","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C25","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}},{"name":"C26","shape":[3,1],"datatype":"INT32","parameters":{"binary_data_size":12}}],"outputs":[{"name":"OUTPUT0","parameters":{"binary_data":true}}]}\x05\x00\x00\x00 
\x00\x00\x00\x00\x00\x00\x00n\x00\x00\x00\x03\x00\x00\x00\xe9\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x92\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00=\x00\x00\x00c\x00\x00\x00\x07\x00\x00\x00\x05\x00\x00\x00\x07\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x002\x01\x00\x00U\x0c\x00\x00\x1d\x0c\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00y\rwb\x8d\xfd\xf3\xe5y\rwbX]\x1f\xe2\xa6\xff\xaa\xa0\x03B\x98\xad/D\xea\xaf\xd5\x15\xaao\r\xc6\xbeb\xcf\x7f\\\x94!4\x8a\xda\xeeIl8H\'\xb08#\x9f\xd6<M\x06U\xe7\xcbm\xcdo\xcbm\xcdo\xcbm\xcdo!\xaa\x805\x81\xed\x16\xabb\xeb\xf5\xb5\x03\x89\x80()lBC\x8b\xcc\xf2\xd1\xa6\xdf\xdeFT\xe1\xf5\x1d\x1f\x82N.\xc1}\x02.\xa9\xc0\xe9}\xc1}\x02.1B|\x0cd\xdcRf1B|\x0c\x1f\x1d\x98\x95\'N\xeb\x99\x84aq\x12\xb7\xff\xc5\x00\xb7\xff\xc5\x00\xb7\xff\xc5\x007\xe5N\xbe7\xe5N\xbe7\xe5N\xbe\xcct\x0b\x8a\x99\xfe\xbb\xf3\x0b\r\x0f\xf7\xfa>\xdcL\xfa>\xdcL\xfa>\xdcL\xaaV\x08\xd2\xaaV\x08\xd2\xaaV\x08\xd2\xba\x0b\x17\xb8\x11\x15\xeb\xa1\x8d\x1b\x8fb\x0b\xc2\x12\x95\x0b\xc2\x12\x95\x0b\xc2\x12\x95(/\x8e\xc3c\xd8\xbf\xfe(/\x8e\xc3]Z\xf6\x14\xa1<2\xa3]Z\xf6\x14\x89\xb0\xb1%V\xee\xe1\xc8\x89\xb0\xb1%\x0b\xfc\xc1\xd7\xe8\xe9R\x17\x0b\xfc\xc1\xd7\x9c`\xaf|\x8a\x0c5u\x05\xb9\xa94\xfckC0\xea!\x13\x99\x02He\xff\x1dW\x10\xedW\xe9W\xb7\x1dW\x10\xed'
<HTTPSocketPoolResponse status=400 headers={'content-length': '122', 'content-type': 'text/plain'}>
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
Let's unload the models; each model needs to be unloaded individually. | triton_client.unload_model(model_name="criteo_ens")
triton_client.unload_model(model_name="criteo_nvt")
triton_client.unload_model(model_name="criteo") | POST /v2/repository/models/criteo_ens/unload, headers None
{"parameters":{"unload_dependents":false}}
<HTTPSocketPoolResponse status=200 headers={'content-type': 'application/json', 'content-length': '0'}>
Loaded model 'criteo_ens'
POST /v2/repository/models/criteo_nvt/unload, headers None
{"parameters":{"unload_dependents":false}}
<HTTPSocketPoolResponse status=200 headers={'content-type': 'application/json', 'content-length': '0'}>
Loaded model 'criteo_nvt'
POST /v2/repository/models/criteo/unload, headers None
{"parameters":{"unload_dependents":false}}
<HTTPSocketPoolResponse status=200 headers={'content-type': 'application/json', 'content-length': '0'}>
Loaded model 'criteo'
| Apache-2.0 | examples/scaling-criteo/04-Triton-Inference-with-HugeCTR.ipynb | mikemckiernan/NVTabular |
NNCLR
* Nearest-Neighbor Contrastive Learning of visual Representations (NNCLR) samples the nearest neighbors from the dataset in the latent space and treats them as positives. This provides more semantic variation than pre-defined transformations.
* NNCLR was formulated by Google Research and DeepMind | # !pip install lightly av torch-summary
from torch import nn
import torchvision
from lightly.data import LightlyDataset
from lightly.data import SimCLRCollateFunction
from lightly.loss import NTXentLoss
from lightly.models.modules import NNCLRProjectionHead
from lightly.models.modules import NNCLRPredictionHead
from lightly.models.modules import NNMemoryBankModule
class NNCLR(nn.Module):
def __init__(self, backbone):
super().__init__()
self.backbone = backbone
self.projection_head = NNCLRProjectionHead(512, 512, 128)
self.prediction_head = NNCLRPredictionHead(128, 512, 128)
def forward(self, x):
y = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(y)
p = self.prediction_head(z)
z = z.detach()
return z, p
resnet = torchvision.models.resnet18()
backbone = nn.Sequential(*list(resnet.children())[:-1])
model = NNCLR(backbone)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
memory_bank = NNMemoryBankModule(size=4096)
memory_bank.to(device)
cifar10 = torchvision.datasets.CIFAR10("datasets/cifar10", download=True)
dataset = LightlyDataset.from_torch_dataset(cifar10)
collate_fn = SimCLRCollateFunction(input_size=32)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=256,
collate_fn=collate_fn,
shuffle=True,
drop_last=True,
num_workers=8,
)
criterion = NTXentLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.06)
print("Starting Training")
for epoch in range(5):
total_loss = 0
for (x0, x1), _, _ in dataloader:
x0 = x0.to(device)
x1 = x1.to(device)
z0, p0 = model(x0)
z1, p1 = model(x1)
z0 = memory_bank(z0, update=False)
z1 = memory_bank(z1, update=True)
loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))
total_loss += loss.detach()
loss.backward()
optimizer.step()
optimizer.zero_grad()
avg_loss = total_loss / len(dataloader)
print(f"epoch: {epoch:>02}, loss: {avg_loss:.5f}") | Starting Training
| MIT | Pytorch/NNCLR.ipynb | ashishpatel26/Self-Supervisedd-Learning |
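For reference, the training loop above computes a symmetrized nearest-neighbor contrastive loss: with $NN(z)$ the memory-bank nearest neighbor of embedding $z$ and $p$ the prediction-head output,

$$\mathcal{L} = \tfrac{1}{2}\big(\ell_{\mathrm{NTXent}}(NN(z_0),\, p_1) + \ell_{\mathrm{NTXent}}(NN(z_1),\, p_0)\big),$$

which is exactly the `0.5 * (criterion(z0, p1) + criterion(z1, p0))` term in the code, with `z0`/`z1` swapped for their nearest neighbors by the memory bank.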
Predictive Modelling: XGBoost Imports | %load_ext autoreload
%autoreload 2
# Pandas and numpy
import pandas as pd
import numpy as np
#
from IPython.display import display, clear_output
import sys
import time
# Libraries for Visualization
import matplotlib.pyplot as plt
import seaborn as sns
from src.visualization.visualize import plot_corr_matrix, plot_multi, plot_norm_dist, plot_feature_importances
# Some custom tools
from src.data.tools import check_for_missing_vals
#
from src.models.predict_model import avg_model, run_combinations
#from src.models.train_model import run_combinations
# Alpaca API
import alpaca_trade_api as tradeapi
# Pickle
import pickle
import os
from pathlib import Path
# To load variables from .env file into system environment
from dotenv import find_dotenv, load_dotenv
from atomm.Indicators import MomentumIndicators
from atomm.DataManager.main import MSDataManager
from atomm.Tools import calc_open_position, calc_returns
from src.visualization.visualize import plot_confusion_matrix
from atomm.Methods import BlockingTimeSeriesSplit, PurgedKFold
import time
# scikit-learn
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, TimeSeriesSplit
from xgboost import XGBClassifier
from sklearn.metrics import classification_report, confusion_matrix, plot_confusion_matrix  # note: shadows the custom plot_confusion_matrix imported above
from sklearn.metrics import accuracy_score, recall_score, f1_score, precision_score
from sklearn.model_selection import train_test_split, TimeSeriesSplit
from sklearn.ensemble import BaggingClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
# For BayesianHyperparameter Optimization
from atomm.Models.Tuning import search_space, BayesianSearch
from hyperopt import space_eval
# Visualization libraries
import seaborn as sns
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
import matplotlib.gridspec as gridspec
#import matplotlib.style as style
from scipy import stats
# Load environment variables
load_dotenv(find_dotenv()) | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
Loading the data | data_base_dir = os.environ.get('DATA_DIR_BASE_PATH')
data_base_dir
!pwd
fname = os.path.join(data_base_dir, 'processed', 'index.h5')
fname = Path(fname)
#fname = '../data/processed/index.h5'
# Load dataset from HDF storage
with pd.HDFStore(fname) as storage:
djia = storage.get('nyse/cleaned/rand_symbols')
y_2c = storage.get('nyse/engineered/target_two_class')
y_3c = storage.get('nyse/engineered/target_three_class')
df_moments = storage.get('nyse/engineered/features')
#print(storage.info())
# Create copies of the pristine data
X = df_moments.copy()
y = y_3c.copy()
y2 = y_2c.copy()
prices = djia.copy()
forecast_horizon = [1, 3, 5, 7, 10, 15, 20, 25, 30]
input_window_size = [3, 5, 7, 10, 15, 20, 25, 30]
ti_list = ['macd', 'rsi', 'stoc', 'roc', 'bbu', 'bbl', 'ema', 'atr', 'adx', 'cci', 'williamsr', 'stocd']
symbol_list = df_moments.columns.get_level_values(0).unique()
df_moments.columns.get_level_values(1).unique() | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
Checking for missing values | X.shape
check_for_missing_vals(X) | No missing values found in dataframe
| MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
Price data | prices.shape
check_for_missing_vals(prices)
y_3c.shape
check_for_missing_vals(y_3c)
y2.shape
check_for_missing_vals(y2) | No missing values found in dataframe
| MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
No missing values, and the sizes of ```y.shape[0]``` and ```X.shape[0]``` match. Scaling the features | from sklearn.preprocessing import MinMaxScaler, StandardScaler
#scale = MinMaxScaler()
scale = StandardScaler()
scaled = scale.fit_transform(X)
scaled.shape
#X_scaled = pd.DataFrame(data=scaled, columns=X.columns)
# NOTE: the scaled array is not reattached to a frame; X_scaled deliberately keeps the raw features
X_scaled = X | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
Train-Test Split | # Use 70/30 train/test splits
test_p = .3
# NOTE: each block below overwrites the previous split; only the last one
# (scaled features, two-class labels) is live downstream. Also note that
# `test_size` actually holds the train/test boundary index (the number of
# training rows), not the size of the test set.
# Scaled, three-class
test_size = int((1 - test_p) * X_scaled.shape[0])
X_train, X_test, y_train, y_test = X_scaled[:test_size], X_scaled[test_size:], y_3c[:test_size], y_3c[test_size:]
prices_train, prices_test = djia[:test_size], djia[test_size:]
# Unscaled, two-class
test_size = int((1 - test_p) * X.shape[0])
X_train, X_test, y_train, y_test = X[:test_size], X[test_size:], y2[:test_size], y2[test_size:]
prices_train, prices_test = djia[:test_size], djia[test_size:]
# Scaled, two-class
test_size = int((1 - test_p) * X.shape[0])
X_train, X_test, y_train, y_test = X_scaled[:test_size], X_scaled[test_size:], y2[:test_size], y2[test_size:]
prices_train, prices_test = djia[:test_size], djia[test_size:]
#test_size = test_p
#X_train, X_test, y_train, y_test = train_test_split(X_scaled, y_3c, test_size=test_size, random_state=101) | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
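A quick sanity check that the live split is chronological; a sketch, assuming the frame index is time-sorted:

```python
# the boundary index equals the number of training rows (70% of the data)
assert len(X_train) == int((1 - test_p) * len(X_scaled))
# chronological split: every training timestamp precedes every test timestamp
assert X_train.index.max() <= X_test.index.min()
```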
Model | symbol_list
symbol = 'T'
n1 = 15
n2 = 15
n_estimators = 10
# set up cross validation splits
tscv = TimeSeriesSplit(n_splits=5)
btscv = BlockingTimeSeriesSplit(n_splits=5)
#ppcv = PurgedKFold(n_splits=5)
# Creates a list of features for a given lookback window (n1)
features = [f'{x}_{n1}' for x in ti_list]
# Creates a list of all features
all_features = [f'{x}_{n}' for x in ti_list for n in input_window_size] | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
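For orientation, the two lists expand to plain indicator/window column names; the counts follow directly from the list sizes above:

```python
# features     -> ['macd_15', 'rsi_15', ..., 'stocd_15']   (one name per indicator)
# all_features -> ['macd_3', 'macd_5', ..., 'stocd_30']    (indicator x window pairs)
assert len(features) == len(ti_list)                               # 12
assert len(all_features) == len(ti_list) * len(input_window_size)  # 12 * 8 = 96
```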
Single lookback/lookahead combination | clf_svc1 = OneVsRestClassifier(
BaggingClassifier(
SVC(
kernel='rbf',
class_weight='balanced'
),
max_samples=.4,
n_estimators=n_estimators,
n_jobs=-1)
)
# NOTE: fixed an undefined loop variable `n` in the original - using the
# n1-window feature names and the signal_{n2} forecast target instead
clf_svc1.fit(X_train[symbol][features], y_train[symbol][f'signal_{n2}'])
y_pred_svc1 = clf_svc1.predict(X_test[symbol][features])
print('Accuracy Score: ', accuracy_score(y_pred_svc1, y_test[symbol][f'signal_{n2}']))
print(classification_report(y_pred_svc1, y_test[symbol][f'signal_{n2}']))
plot_confusion_matrix(
clf_svc1,
X_test[symbol][features],
y_test[symbol][f'signal_{n2}'],
normalize='all'
) | Accuracy Score: 0.5400340715502555
precision recall f1-score support
0 0.91 0.52 0.66 505
1 0.19 0.68 0.29 82
accuracy 0.54 587
macro avg 0.55 0.60 0.48 587
weighted avg 0.81 0.54 0.61 587
| MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
All combinations: averaging across all 50 randomly selected stocks | avg_results, scores_dict, preds_dict, params_dict, returns_dict = avg_model(
symbol_list,
forecast_horizon,
input_window_size,
X_train,
X_test,
y_train,
y_test,
prices_test,
model=clf_svc1,
silent = False
) | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
Hyperparameter Optimization: GridSearch | gsearch_xgb.best_score_ | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
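The cell that builds `gsearch_xgb` is not shown in this notebook; a minimal sketch of how it could be produced, assuming an illustrative parameter grid (the grid actually used is unknown):

```python
from sklearn.model_selection import GridSearchCV

# hypothetical grid - the values are illustrative, not the author's
param_grid = {
    'max_depth': [3, 5, 7],
    'learning_rate': [0.01, 0.1],
    'n_estimators': [100, 300],
}
gsearch_xgb = GridSearchCV(
    XGBClassifier(),
    param_grid,
    scoring='accuracy',
    cv=TimeSeriesSplit(n_splits=5),  # keep folds chronological
    n_jobs=-1,
)
gsearch_xgb.fit(X_train[symbol][features], y_train[symbol][f'signal_{n2}'])
```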
Hyperparameter Optimization: Bayesian Optimization XGBoost | n1=15
n2=15
symbol='T'
y_train[symbol][f'signal_{n2}'].value_counts()
symbol_list
# Optimizing for accuracy_score
model = XGBClassifier
bsearch_xgba, clf_bsearch_xgba, params_bsearch_xgba = BayesianSearch(
search_space(model),
model,
X_train[symbol][features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='accuracy_score'
)
y_pred_bsearch_xgba = clf_bsearch_xgba.predict(X_test[symbol][features])
print('Accuracy Score: ', accuracy_score(y_pred_bsearch_xgba, y_test[symbol][f'signal_{n2}']))
print(classification_report(y_pred_bsearch_xgba, y_test[symbol][f'signal_{n2}']))
plot_confusion_matrix(
clf_bsearch_xgba,
X_test[symbol][features], y_test[symbol][f'signal_{n2}'],
)
calc_returns(y_pred_bsearch_xgba, djia[symbol][test_size:])
# Optimizing for recall_score
model = XGBClassifier
bsearch_xgbb, clf_bsearch_xgbb, params_bsearch_xgbb = BayesianSearch(
search_space(model),
model,
X_train[symbol][features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='recall_score'
)
y_pred_bsearch_xgbb = clf_bsearch_xgbb.predict(X_test[symbol][features])
print('Recall Score: ', recall_score(y_pred_bsearch_xgbb, y_test[symbol][f'signal_{n2}']))
print(classification_report(y_pred_bsearch_xgbb, y_test[symbol][f'signal_{n2}']))
plot_confusion_matrix(
clf_bsearch_xgbb,
X_test[symbol][features], y_test[symbol][f'signal_{n2}'],
)
calc_returns(y_pred_bsearch_xgbb, djia[symbol][test_size:])
# f1_score as scoring metric
model = XGBClassifier
bsearch_xgbc, clf_bsearch_xgbc, params_bsearch_xgbc = BayesianSearch(
search_space(model),
model,
X_train[symbol][features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='f1_score'
)
y_pred_bsearch_xgbc = clf_bsearch_xgbc.predict(X_test[symbol][features])
print('F1 Score: ', f1_score(y_pred_bsearch_xgbc, y_test[symbol][f'signal_{n2}']))
print(classification_report(y_pred_bsearch_xgbc, y_test[symbol][f'signal_{n2}']))
plot_confusion_matrix(
clf_bsearch_xgbc,
X_test[symbol][features], y_test[symbol][f'signal_{n2}'],
)
calc_returns(y_pred_bsearch_xgbc, djia[symbol][test_size:])
# Precision as scoring metric
model = XGBClassifier
bsearch_xgbd, clf_bsearch_xgbd, params_bsearch_xgbd = BayesianSearch(
search_space(model),
model,
X_train[symbol][features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='precision_score'
)
y_pred_bsearch_xgbd = clf_bsearch_xgbd.predict(X_test[symbol][features])
print('Precision Score: ', precision_score(y_pred_bsearch_xgbd, y_test[symbol][f'signal_{n2}']))
print(classification_report(y_pred_bsearch_xgbd, y_test[symbol][f'signal_{n2}']))
plot_confusion_matrix(
clf_bsearch_xgbd,
X_test[symbol][features], y_test[symbol][f'signal_{n2}'],
)
calc_returns(y_pred_bsearch_xgbd, djia[symbol][test_size:]) | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
XGBoost with all features | # Accuracy as scoring metric
n1=15
n2=15
symbol='T'
model = XGBClassifier
bsearch_xgb1, clf_bsearch_xgb1, params_bsearch_xgb1 = BayesianSearch(
search_space(model),
model,
X_train[symbol][all_features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][all_features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='accuracy_score'
)
y_pred_xgb1 = clf_bsearch_xgb1.predict(X_test[symbol][all_features])
print('Accuracy Score: ', accuracy_score(y_pred_xgb1, y_test[symbol][f'signal_{n1}']))
print(classification_report(y_pred_xgb1, y_test[symbol][f'signal_{n1}']))
plot_confusion_matrix(
clf_bsearch_xgb1,
X_test[symbol][all_features],
y_test[symbol][f'signal_{n1}'],
)
calc_returns(y_pred_xgb1, djia[symbol][test_size:])
# Recall as scoring metric
model = XGBClassifier
bsearch_xgb2, clf_bsearch_xgb2, params_bsearch_xgb2 = BayesianSearch(
search_space(model),
model,
X_train[symbol][all_features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][all_features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='recall_score'
)
y_pred_xgb2 = clf_bsearch_xgb2.predict(X_test[symbol][all_features])
print('Recall Score: ', recall_score(y_pred_xgb2, y_test[symbol][f'signal_{n1}']))
print(classification_report(y_pred_xgb2, y_test[symbol][f'signal_{n1}']))
plot_confusion_matrix(
clf_bsearch_xgb2,
X_test[symbol][all_features], y_test[symbol][f'signal_{n1}']
)
calc_returns(y_pred_xgb2, djia[symbol][test_size:])
# f1_score as scoring metric
model = XGBClassifier
bsearch_xgb3, clf_bsearch_xgb3, params_bsearch_xgb3 = BayesianSearch(
search_space(model),
model,
X_train[symbol][all_features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][all_features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='f1_score'
)
y_pred_xgb3 = clf_bsearch_xgb3.predict(X_test[symbol][all_features])
print('F1 Score: ', f1_score(y_pred_xgb3, y_test[symbol][f'signal_{n1}']))
print(classification_report(y_pred_xgb3, y_test[symbol][f'signal_{n1}']))
plot_confusion_matrix(
clf_bsearch_xgb3,
X_test[symbol][all_features],
y_test[symbol][f'signal_{n1}']
)
calc_returns(y_pred_xgb3, djia[symbol][test_size:])
# precision_score as scoring metric
model = XGBClassifier
bsearch_xgb4, clf_bsearch_xgb4, params_bsearch_xgb4 = BayesianSearch(
search_space(model),
model,
X_train[symbol][all_features],
y_train[symbol][f'signal_{n2}'],
X_test[symbol][all_features],
y_test[symbol][f'signal_{n2}'],
num_eval=100,
scoring_metric='precision_score'
)
y_pred_xgb4 = clf_bsearch_xgb4.predict(X_test[symbol][all_features])
print('Precision Score: ', precision_score(y_pred_xgb4, y_test[symbol][f'signal_{n1}'], average='weighted'))
print(classification_report(y_pred_xgb4, y_test[symbol][f'signal_{n1}']))
plot_confusion_matrix(
clf_bsearch_xgb4,
X_test[symbol][all_features],
y_test[symbol][f'signal_{n1}']
)
calc_returns(y_pred_xgb4, djia[symbol][test_size:]) | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
Running on all 50 stocks on best model | #best_params = {'bootstrap': False, 'criterion': 'gini', 'max_depth': 218, 'max_features': 1, 'min_samples_leaf': 19, 'n_estimators': 423}
#model_2a = (n_jobs=-1, **params_rf4)
avg_results, scores_dict, preds_dict, params_dict, returns_dict = avg_model(
symbol_list,
forecast_horizon,
input_window_size,
ti_list,
X_train,
X_test,
y_train,
y_test,
prices_test,
model=clf_rf4,
silent = False
)
#best_params = {'bootstrap': False, 'criterion': 'gini', 'max_depth': 218, 'max_features': 1, 'min_samples_leaf': 19, 'n_estimators': 423}
#model_2a = (n_jobs=-1, **params_rf4)
avg_results, scores_dict, preds_dict, params_dict, returns_dict = avg_model(
symbol_list,
forecast_horizon,
input_window_size,
ti_list,
X_train,
X_test,
y_train,
y_test,
prices_test,
model=RandomForestClassifier,
silent = False,
hyper_optimize=True,
n_eval=10,
) | _____no_output_____ | MIT | notebooks/06e_Predictive_Modeling-XGBoost-Copy1.ipynb | robindoering86/capstone_nf |
Settings | %env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__) | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Dataset loading | dataset_name='pokemon'
images_dir = 'C:\\Users\\Khalid\\Documents\\projects\\pokemon\\DS06\\'
validation_percentage = 20
valid_format = 'png'
from training.generators.file_image_generator import create_image_lists, get_generators
imgs_list = create_image_lists(
image_dir=images_dir,
validation_pct=validation_percentage,
valid_imgae_formats=valid_format
)
inputs_shape = image_size = (200, 200, 3)
batch_size = 32
latents_dim = 32
intermediate_dim = 50
training_generator, testing_generator = get_generators(
images_list=imgs_list,
image_dir=images_dir,
image_size=image_size,
batch_size=batch_size,
class_mode=None
)
import tensorflow as tf
train_ds = tf.data.Dataset.from_generator(
lambda: training_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
lambda: testing_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale=1.0
for data in train_ds:
_instance_scale = float(data[0].numpy().max())
break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(inputs_shape, Iterable):
_outputs_shape = np.prod(inputs_shape)
_outputs_shape | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Model's Layers definition | units=20
c=50
enc_lays = [
tf.keras.layers.Conv2D(filters=units, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(filters=units*9, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latents_dim)
]
dec_lays = [
tf.keras.layers.Dense(units=c*c*units, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(c , c, units)),
tf.keras.layers.Conv2DTranspose(filters=units, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
tf.keras.layers.Conv2DTranspose(filters=units*3, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=(1, 1), padding="SAME")
] | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
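A rough shape walkthrough of the two stacks, assuming 200x200x3 inputs and the Keras defaults ('valid' padding for the encoder convolutions):

```python
# encoder: 200x200x3 -> Conv s2 -> 99x99x20 -> Conv s2 -> 49x49x180 -> Flatten -> Dense(32)
# decoder: Dense(50*50*20) -> Reshape -> 50x50x20 -> Deconv s2 -> 100x100x20
#          -> Deconv s2 -> 200x200x60 -> Deconv s1 -> 200x200x3 (logits)
```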
Model definition | model_name = dataset_name+'AE_Convolutional_reconst_1ell_01psnr'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.autoencoder import autoencoder as AE
inputs_shape=image_size
variables_params = \
[
{
'name': 'inference',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': enc_lays
}
,
{
'name': 'generative',
'inputs_shape':latents_dim,
'outputs_shape':inputs_shape,
'layers':dec_lays
}
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None
)
from evaluation.quantitive_metrics.peak_signal_to_noise_ratio import prepare_psnr
from statistical.losses_utilities import similarty_to_distance  # (sic - spelling as exported by the library)
from statistical.ae_losses import expected_loglikelihood_with_lower_bound as ellwlb
ae.compile(loss={'x_logits': lambda x_true, x_logits: ellwlb(x_true, x_logits) + 0.1 * similarty_to_distance(prepare_psnr([ae.batch_size] + ae.get_inputs_shape()))(x_true, x_logits)}) | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
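Schematically, the compiled objective is the reconstruction likelihood plus a small PSNR-derived distance penalty (weight 0.1):

```python
# loss(x, x_hat) = ELLWLB(x, x_hat) + 0.1 * distance_from_PSNR(x, x_hat)
# higher PSNR -> smaller distance -> smaller penalty on the reconstruction
```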
Callbacks |
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False) | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Model Training | ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[ es, ms, csv_log, sg],
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
) | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Model Evaluation inception_score | from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}') | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Frechet_inception_distance | from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}') | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
perceptual_path_length_score | from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}') | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
precision score | from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}') | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
recall score | from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}') | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Image Generation image reconstruction Training dataset | %load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir) | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
with Randomness | from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir) | _____no_output_____ | MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Complete Randomness | from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir) | 100%|██████████| 15/15 [00:00<00:00, 19.90it/s]
| MIT | notebooks/pokemon/basic/convolutional/AE/pokemonAE_Convolutional_reconst_1ellwlb_01psnr.ipynb | Fidan13/Generative_Models |
Compute ICA on MEG data and remove artifacts============================================ICA is fit to MEG raw data.The sources matching the ECG and EOG are automatically found and displayed.Subsequently, artifact detection and rejection quality are assessed. | # Authors: Denis Engemann <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
from mne.datasets import sample | _____no_output_____ | BSD-3-Clause | 0.16/_downloads/plot_ica_from_raw.ipynb | drammock/mne-tools.github.io |
Setup paths and prepare raw data. | data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, None, fir_design='firwin') # already lowpassed @ 40
# quick visual check of an annotation overlay (this first annotation is replaced below)
raw.annotations = mne.Annotations([1], [10], 'BAD')
raw.plot(block=True)
# For the sake of example we annotate the first 10 seconds of the recording as
# 'BAD'. This part of the data is excluded from the ICA decomposition by default.
# To turn this behavior off, pass ``reject_by_annotation=False`` to
# :meth:`mne.preprocessing.ICA.fit`.
raw.annotations = mne.Annotations([0], [10], 'BAD') | _____no_output_____ | BSD-3-Clause | 0.16/_downloads/plot_ica_from_raw.ipynb | drammock/mne-tools.github.io |
1) Fit ICA model using the FastICA algorithm.Other available choices are ``picard``, ``infomax`` or ``extended-infomax``.NoteThe default method in MNE is FastICA, which along with Infomax is one of the most widely used ICA algorithm. Picard is a new algorithm that is expected to converge faster than FastICA and Infomax, especially when the aim is to recover accurate maps with a low tolerance parameter, see [1]_ for more information.We pass a float value between 0 and 1 to select n_components based on thepercentage of variance explained by the PCA components. | ica = ICA(n_components=0.95, method='fastica', random_state=0, max_iter=100)
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
ica.fit(raw, picks=picks, decim=3, reject=dict(mag=4e-12, grad=4000e-13),
verbose='warning') # low iterations -> does not fully converge
# maximum number of components to reject
n_max_ecg, n_max_eog = 3, 1 # here we don't expect horizontal EOG components | _____no_output_____ | BSD-3-Clause | 0.16/_downloads/plot_ica_from_raw.ipynb | drammock/mne-tools.github.io |
2) Identify bad components by analyzing latent sources. | title = 'Sources related to %s artifacts (red)'
# generate ECG epochs use detection via phase statistics
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=ecg_inds, title=title % 'ecg')
ica.plot_components(ecg_inds, title=title % 'ecg', colorbar=True)
ecg_inds = ecg_inds[:n_max_ecg]
ica.exclude += ecg_inds
# detect EOG by correlation
eog_inds, scores = ica.find_bads_eog(raw)
ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=eog_inds, title=title % 'eog')
ica.plot_components(eog_inds, title=title % 'eog', colorbar=True)
eog_inds = eog_inds[:n_max_eog]
ica.exclude += eog_inds | _____no_output_____ | BSD-3-Clause | 0.16/_downloads/plot_ica_from_raw.ipynb | drammock/mne-tools.github.io |
3) Assess component selection and unmixing quality. | # estimate average artifact
ecg_evoked = ecg_epochs.average()
ica.plot_sources(ecg_evoked, exclude=ecg_inds) # plot ECG sources + selection
ica.plot_overlay(ecg_evoked, exclude=ecg_inds) # plot ECG cleaning
eog_evoked = create_eog_epochs(raw, tmin=-.5, tmax=.5, picks=picks).average()
ica.plot_sources(eog_evoked, exclude=eog_inds) # plot EOG sources + selection
ica.plot_overlay(eog_evoked, exclude=eog_inds) # plot EOG cleaning
# check the amplitudes do not change
ica.plot_overlay(raw) # EOG artifacts remain
# To save an ICA solution you can say:
# ica.save('my_ica.fif')
# You can later load the solution by saying:
# from mne.preprocessing import read_ica
# read_ica('my_ica.fif')
# Apply the solution to Raw, Epochs or Evoked like this:
# ica.apply(epochs) | _____no_output_____ | BSD-3-Clause | 0.16/_downloads/plot_ica_from_raw.ipynb | drammock/mne-tools.github.io |
Torch Hub Inference TutorialIn this tutorial you'll learn:- how to load a pretrained model using Torch Hub - run inference to classify the action in a demo video Install and Import modules If `torch`, `torchvision` and `pytorchvideo` are not installed, run the following cell: | try:
import torch
except ModuleNotFoundError:
!pip install torch torchvision
import os
import sys
import torch
need_pytorchvideo=False  # defined up front so the final check below never hits a NameError
if torch.__version__=='1.6.0+cu101' and sys.platform.startswith('linux'):
!pip install pytorchvideo
else:
try:
# Running notebook locally
import pytorchvideo
except ModuleNotFoundError:
need_pytorchvideo=True
if need_pytorchvideo:
# Install from GitHub
!pip install "git+https://github.com/facebookresearch/pytorchvideo.git"
import json
from torchvision.transforms import Compose, Lambda
from torchvision.transforms._transforms_video import (
CenterCropVideo,
NormalizeVideo,
)
from pytorchvideo.data.encoded_video import EncodedVideo
from pytorchvideo.transforms import (
ApplyTransformToKey,
ShortSideScale,
UniformTemporalSubsample,
UniformCropVideo
)
from typing import Dict | _____no_output_____ | Apache-2.0 | tutorials/torchhub_inference_tutorial.ipynb | Spencer551/pytorchvideo |
Setup Download the id to label mapping for the Kinetics 400 dataset on which the Torch Hub models were trained. This will be used to get the category label names from the predicted class ids. | !wget https://dl.fbaipublicfiles.com/pyslowfast/dataset/class_names/kinetics_classnames.json
with open("kinetics_classnames.json", "r") as f:
kinetics_classnames = json.load(f)
# Create an id to label name mapping
kinetics_id_to_classname = {}
for k, v in kinetics_classnames.items():
kinetics_id_to_classname[v] = str(k).replace('"', "") | _____no_output_____ | Apache-2.0 | tutorials/torchhub_inference_tutorial.ipynb | Spencer551/pytorchvideo |
Load Model using Torch Hub APIPyTorchVideo provides several pretrained models through Torch Hub. Available models are described in [model zoo documentation](https://github.com/facebookresearch/pytorchvideo/blob/main/docs/source/model_zoo.mdkinetics-400). Here we are selecting the `slowfast_r50` model which was trained using a 8x8 setting on the Kinetics 400 dataset. NOTE: to run on GPU in Google Colab, in the menu bar selet: Runtime -> Change runtime type -> Harware Accelerator -> GPU | # Device on which to run the model
# Set to cuda to load on GPU
device = "cpu"
# Pick a pretrained model
model_name = "slowfast_r50"
model = torch.hub.load("facebookresearch/pytorchvideo:main", model=model_name, pretrained=True)
# Set to eval mode and move to desired device
model = model.to(device)
model = model.eval() | _____no_output_____ | Apache-2.0 | tutorials/torchhub_inference_tutorial.ipynb | Spencer551/pytorchvideo |
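Other pretrained entrypoints can be discovered from the same hub repo; a quick check (assumes network access, and the exact list depends on the repo revision):

```python
# enumerate every model entrypoint exposed by the PyTorchVideo hub repo
print(torch.hub.list("facebookresearch/pytorchvideo:main"))
```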
Define the transformations for the input required by the modelBefore passing the video into the model we need to apply some input transforms and sample a clip of the correct duration.NOTE: The input transforms are specific to the model. If you choose a different model than the example in this tutorial, please refer to the code provided in the Torch Hub documentation and copy over the relevant transforms:- [SlowFast](https://pytorch.org/hub/facebookresearch_pytorchvideo_slowfast/)- [X3D](https://pytorch.org/hub/facebookresearch_pytorchvideo_x3d/)- [Slow](https://pytorch.org/hub/facebookresearch_pytorchvideo_resnet/) | ####################
# SlowFast transform
####################
side_size = 256
mean = [0.45, 0.45, 0.45]
std = [0.225, 0.225, 0.225]
crop_size = 256
num_frames = 32
sampling_rate = 2
frames_per_second = 30
alpha = 4
class PackPathway(torch.nn.Module):
"""
Transform for converting video frames as a list of tensors.
"""
def __init__(self):
super().__init__()
def forward(self, frames: torch.Tensor):
fast_pathway = frames
# Perform temporal sampling from the fast pathway.
slow_pathway = torch.index_select(
frames,
1,
torch.linspace(
0, frames.shape[1] - 1, frames.shape[1] // alpha
).long(),
)
frame_list = [slow_pathway, fast_pathway]
return frame_list
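With the settings above (num_frames = 32, alpha = 4, 256 crop), each transformed clip becomes a two-tensor list; expected shapes, assuming channel-first (C, T, H, W) video tensors:

```python
# fast pathway: (3, 32, 256, 256)  - all sampled frames
# slow pathway: (3,  8, 256, 256)  - every alpha-th frame (32 // 4)
```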
transform = ApplyTransformToKey(
key="video",
transform=Compose(
[
UniformTemporalSubsample(num_frames),
Lambda(lambda x: x/255.0),
NormalizeVideo(mean, std),
ShortSideScale(
size=side_size
),
CenterCropVideo(crop_size),
PackPathway()
]
),
)
# The duration of the input clip is also specific to the model.
clip_duration = (num_frames * sampling_rate)/frames_per_second  # = 32 * 2 / 30 ≈ 2.13 s of video | _____no_output_____ | Apache-2.0 | tutorials/torchhub_inference_tutorial.ipynb | Spencer551/pytorchvideo |
Load an example videoWe can test the classification of an example video from the kinetics validation set such as this [archery video](https://www.youtube.com/watch?v=3and4vWkW4s). | # Download the example video file
!wget https://dl.fbaipublicfiles.com/pytorchvideo/projects/archery.mp4
# Load the example video
video_path = "archery.mp4"
# Select the duration of the clip to load by specifying the start and end duration
# The start_sec should correspond to where the action occurs in the video
start_sec = 0
end_sec = start_sec + clip_duration
# Initialize an EncodedVideo helper class
video = EncodedVideo.from_path(video_path)
# Load the desired clip
video_data = video.get_clip(start_sec=start_sec, end_sec=end_sec)
# Apply a transform to normalize the video input
video_data = transform(video_data)
# Move the inputs to the desired device
inputs = video_data["video"]
inputs = [i.to(device)[None, ...] for i in inputs] | _____no_output_____ | Apache-2.0 | tutorials/torchhub_inference_tutorial.ipynb | Spencer551/pytorchvideo |
Get model predictions | # Pass the input clip through the model
preds = model(inputs)
# Get the predicted classes
post_act = torch.nn.Softmax(dim=1)
preds = post_act(preds)
pred_classes = preds.topk(k=5).indices
# Map the predicted classes to the label names
pred_class_names = [kinetics_id_to_classname[int(i)] for i in pred_classes[0]]
print("Predicted labels: %s" % ", ".join(pred_class_names)) | _____no_output_____ | Apache-2.0 | tutorials/torchhub_inference_tutorial.ipynb | Spencer551/pytorchvideo |
ANN Metrics | # assumed backend import (not shown in the original notebook); swap for plain `keras` if that is the installed stack
from tensorflow.keras import backend as K

def recall(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1(y_true, y_pred):
myPrecision = precision(y_true, y_pred)
myRecall = recall(y_true, y_pred)
return 2*((myPrecision*myRecall)/(myPrecision+myRecall+K.epsilon())) | _____no_output_____ | MIT | Boda/ensemble/NN.ipynb | UVA-DSI-2019-Capstones/UVACyber |
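A quick toy check of the three metrics; a sketch, assuming a TF2 / tf.keras backend running eagerly:

```python
import tensorflow as tf

y_true = tf.constant([1., 0., 1., 1.])
y_pred = tf.constant([1., 0., 0., 1.])
# tp = 2, predicted positives = 2, possible positives = 3
print(float(precision(y_true, y_pred)))  # ~1.0
print(float(recall(y_true, y_pred)))     # ~0.667
print(float(f1(y_true, y_pred)))         # ~0.8
```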
ANN Model | # assumed imports for this notebook (standard stack; the original omits them)
import numpy as np
import pandas as pd
from tensorflow.keras import models, optimizers
from tensorflow.keras.layers import Dense, Dropout
from sklearn.metrics import confusion_matrix
tests[tests.label == 1]  # NOTE: `tests` is defined elsewhere in the project
df = pd.read_csv('/scratch/by8jj/stratified samples/ensemble model/file.csv')
len(df)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df.drop('label', axis = 1), df.label, test_size=0.2)
X_train['label'] = y_train
X_train
# balance classes by undersampling the majority (benign) class to the malicious count
df_mal = X_train[X_train['label'] == 1]
df_ben = X_train[X_train['label'] == 0].sample(frac = 1)[:len(df_mal)]
df_bal = pd.concat([df_mal, df_ben]).sample(frac = 1)
df_bal
y = df_bal.label.tolist()
X = np.matrix(df_bal.drop(labels = ['label'], axis = 1)).astype(np.float)
print(X.shape)
model = models.Sequential()
model.add(Dense(2, input_dim=4, kernel_initializer='uniform', activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0000002, amsgrad=False)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy',f1,recall,precision])
result = model.fit(X, y, epochs=20, batch_size=256, verbose=1, validation_split=0.3)
y_pred = model.predict(np.matrix(X_test).astype(np.float))
y_pred = [x[0] for x in y_pred]
pred = pd.DataFrame({'test': y_test, 'pred': y_pred})
pred
# binarize the predicted probabilities at a 0.8 decision threshold
temp = [1 if x > 0.8 else 0 for x in y_pred]
cm= confusion_matrix(y_test, temp)
tn, fp, fn, tp = cm.ravel()
precision=tp/(tp+fp)
recall=tp/(tp+fn)
fpr = fp/(fp+ tn)
accuracy = (tp + tn)/(tn + tp + fn + fp)
F1 = 2 * (precision * recall) / (precision + recall)
print("precision:", precision*100)
print("recall:", recall*100)
print("false positive rate:", fpr*100)
print("accuracy", accuracy*100)
print("F1-score", F1) | precision: 81.6491971891807
recall: 89.35790853042899
false positive rate: 1.0073213698881054
accuracy 98.5325078797032
F1-score 0.8532980501964162
| MIT | Boda/ensemble/NN.ipynb | UVA-DSI-2019-Capstones/UVACyber |
Submitting various things for end of grant. | import os
import sys
import requests
import pandas
import paramiko
import json
from IPython import display
from curation_common import *
from htsworkflow.submission.encoded import DCCValidator
PANDAS_ODF = os.path.expanduser('~/src/pandasodf')
if PANDAS_ODF not in sys.path:
sys.path.append(PANDAS_ODF)
from pandasodf import ODFReader
from htsworkflow.submission.encoded import Document
from htsworkflow.submission.aws_submission import run_aws_cp
# live server & control file
server = ENCODED('www.encodeproject.org')
spreadsheet_name = os.path.expanduser('~diane/woldlab/ENCODE/10x_mouse_limb_20181219.ods')
# test server & datafile
#server = ENCODED('test.encodedcc.org')
#spreadsheet_name = os.path.expanduser('~diane/woldlab/ENCODE/10x_mouse_limb_20181219-testserver.ods')
server.load_netrc()
validator = DCCValidator(server)
award = 'UM1HG009443' | _____no_output_____ | BSD-3-Clause | 10x-3-to-13-submission.ipynb | detrout/encode4-curation |
Submit Documents Example Document submission | #atac_uuid = '0fc44318-b802-474e-8199-f3b6d708eb6f'
#atac = Document(os.path.expanduser('~/proj/encode3-curation/Wold_Lab_ATAC_Seq_protocol_December_2016.pdf'),
# 'general protocol',
# 'ATAC-Seq experiment protocol for Wold lab',
# )
#body = atac.create_if_needed(server, atac_uuid)
#print(body['@id']) | _____no_output_____ | BSD-3-Clause | 10x-3-to-13-submission.ipynb | detrout/encode4-curation |
Submit Annotations | #sheet = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
#annotations = sheet.parse('Annotations', header=0)
#created = server.post_sheet('/annotations/', annotations, verbose=True, dry_run=True)
#print(len(created))
#if created:
# annotations.to_excel('/tmp/annotations.xlsx', index=False) | _____no_output_____ | BSD-3-Clause | 10x-3-to-13-submission.ipynb | detrout/encode4-curation |
Register Biosamples | book = ODFReader(spreadsheet_name)
biosample = book.parse('Biosample', header=0)
created = server.post_sheet('/biosamples/', biosample,
verbose=True,
dry_run=True,
validator=validator)
print(len(created))
if created:
biosample.to_excel('/dev/shm/biosamples.xlsx', index=False) | _____no_output_____ | BSD-3-Clause | 10x-3-to-13-submission.ipynb | detrout/encode4-curation |
Register Libraries | print(spreadsheet_name)
book = ODFReader(spreadsheet_name)
libraries = book.parse('Library', header=0)
created = server.post_sheet('/libraries/',
libraries,
verbose=True,
dry_run=True,
validator=validator)
print(len(created))
if created:
libraries.to_excel('/dev/shm/libraries.xlsx', index=False) | _____no_output_____ | BSD-3-Clause | 10x-3-to-13-submission.ipynb | detrout/encode4-curation |
Register Experiments | book = ODFReader(spreadsheet_name)
experiments = book.parse('Experiment', header=0)
created = server.post_sheet('/experiments/',
experiments,
verbose=True,
dry_run=False,
validator=validator)
print(len(created))
if created:
experiments.to_excel('/dev/shm/experiments.xlsx', index=False) | _____no_output_____ | BSD-3-Clause | 10x-3-to-13-submission.ipynb | detrout/encode4-curation |
Register Replicates | book = ODFReader(spreadsheet_name)
replicates = book.parse('Replicate', header=0)
created = server.post_sheet('/replicates/',
replicates,
verbose=True,
dry_run=True,
validator=validator)
print(len(created))
if created:
replicates.to_excel('/dev/shm/replicates.xlsx', index=False) | _____no_output_____ | BSD-3-Clause | 10x-3-to-13-submission.ipynb | detrout/encode4-curation |
Image extraction from folders and creating image set | def CreateTrainSet(positive_path, negative_path, IMAGE_WIDTH, IMAGE_HEIGHT, Positive_Images=1200):
# getting all file names from positive path
positives = os.listdir(positive_path)
positive_files = [os.path.join(positive_path, file_name) for file_name in positives if file_name.endswith('.jpg')]
positive_files.sort()
# getting all file names from negative path
negatives = os.listdir(negative_path)
negative_files = [os.path.join(negative_path, file_name) for file_name in negatives if file_name.endswith('.jpg')]
negative_files.sort()
# creating train label np array for pos=0 and neg=1
pos_labels = np.zeros(Positive_Images)
neg_labels = np.ones(len(negative_files))
train_labels = np.concatenate((pos_labels, neg_labels), axis=0).astype(int)
# add positive images to train_image np array
pos_images = np.zeros((Positive_Images, IMAGE_HEIGHT, IMAGE_WIDTH))
for filename in positive_files[0: Positive_Images]:
img = cv2.imread(filename, 0)
img = cv2.resize(img, (IMAGE_WIDTH, IMAGE_HEIGHT))
pos_images[positive_files.index(filename)] = img
# add negative images to train_image np array
neg_images = np.zeros((len(negative_files), IMAGE_HEIGHT, IMAGE_WIDTH))
for filename in negative_files:
img = cv2.imread(filename, 0)
img = cv2.resize(img, (IMAGE_WIDTH, IMAGE_HEIGHT))
neg_images[negative_files.index(filename)] = img
# concatenate directly (the earlier zero-preallocation was dead code and used the wrong row count)
train_images = np.concatenate((pos_images, neg_images), axis=0).astype(int)
return train_images, train_labels
Positive_Images = 1200
print(f"Path exists: {os.path.isdir(positive_path) and os.path.isdir(negative_path)}")
IMAGE_WIDTH = 64
IMAGE_HEIGHT = 128
R_train_images, train_labels = CreateTrainSet(positive_path, negative_path, IMAGE_WIDTH, IMAGE_HEIGHT, Positive_Images)
print("train_images: ", R_train_images.shape)
print("train_labels: ", train_labels.shape)
print(R_train_images[0])
print(train_labels[0])
print(R_train_images[-1])
print(train_labels[-1]) | Path exists: True
train_images: (1480, 128, 64)
train_labels: (1480,)
[[196 197 201 ... 124 119 117]
[195 197 200 ... 125 118 115]
[195 196 200 ... 126 116 111]
...
[181 181 181 ... 182 181 181]
[178 178 177 ... 184 184 184]
[176 176 175 ... 186 186 186]]
0
[[198 198 197 ... 123 113 106]
[196 196 196 ... 121 115 113]
[194 194 194 ... 121 123 128]
...
[247 247 246 ... 246 245 245]
[250 250 248 ... 244 244 244]
[247 247 246 ... 246 247 247]]
1
| MIT | Part2.ipynb | ismailfaruk/ECSE415-Final-Project |
Getting Hog features and creating training feature set | # returns HoG features, and orderd features
def HoG_features(images):
cell_size = (8,8)
block_size = (4,4)
nbins = 4
# all images have same shape
img_size = images[0].shape
# creating HoG object
hog = cv2.HOGDescriptor(_winSize=(img_size[1] // cell_size[1] * cell_size[1],
img_size[0] // cell_size[0] * cell_size[0]),
_blockSize=(block_size[1] * cell_size[1],
block_size[0] * cell_size[0]),
_blockStride=(cell_size[1], cell_size[0]),
_cellSize=(cell_size[1], cell_size[0]),
_nbins=nbins)
features = []
for i in range(images.shape[0]):
# Compute HoG features
features.append(hog.compute((images[i]).astype(np.uint8)).reshape(1, -1))
# Stack arrays in sequence vertically
features = np.vstack(features)
return features
# getting HoG features
train_features = HoG_features(R_train_images)
print("trained_features_reshaped: ", train_features.shape)
print("trained_features_reshaped[0]: ", train_features[0]) | trained_features_reshaped: (1480, 4160)
trained_features_reshaped[0]: [0.03915166 0.0065741 0.00676362 ... 0.0232183 0.02239115 0.00087363]
| MIT | Part2.ipynb | ismailfaruk/ECSE415-Final-Project |
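The 4160-dimensional descriptor follows directly from the window geometry above; a quick derivation:

```python
# 128x64 window with 8x8 cells -> 16x8 cell grid
# 4x4-cell blocks with a 1-cell stride -> (16-4+1) * (8-4+1) = 13 * 5 = 65 block positions
# each block contributes 4*4 cells * 4 bins = 64 values -> 65 * 64 = 4160
assert (16 - 4 + 1) * (8 - 4 + 1) * (4 * 4 * 4) == 4160
```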
Non-linear SVM Classifier | def NonLinear_SVM(train_features, train_labels, gamma, C, random_state=None):
# creating non-linear svc object, RBF kernel is default
clf = svm.SVC(C=C, gamma=gamma, random_state=random_state)
# fit and predict
clf.fit(train_features, train_labels)
return clf
def predict(clf, test_features, test_labels):
predict = clf.predict(test_features)
# using accruacy score from metrics lib and multiply 100 to get precentage
accuracy = accuracy_score(test_labels, predict)*100
return accuracy | _____no_output_____ | MIT | Part2.ipynb | ismailfaruk/ECSE415-Final-Project |
1 Fold Validation | k_fold = 5
pos_count = Positive_Images
neg_count = 280
pos_train_split = int(pos_count*4/k_fold)
neg_train_split = int(pos_count+neg_count*4/k_fold)
print(f"train_size: {pos_train_split+neg_train_split-pos_count}")
print(f"test_size: {pos_count-pos_train_split+neg_count-neg_train_split+pos_count}")
# splitting all pos and neg into 4/5 for train and 1/5 split for test
train_features_split = np.concatenate((train_features[: pos_train_split], train_features[pos_count: neg_train_split]), axis=0)
train_labels_split = np.concatenate((train_labels[: pos_train_split], train_labels[pos_count: neg_train_split]), axis=0)
val_features_split = np.concatenate((train_features[pos_train_split:pos_count], train_features[neg_train_split:]), axis=0)
val_labels_split = np.concatenate((train_labels[pos_train_split:pos_count], train_labels[neg_train_split:]), axis=0)
print(f"train_split: {train_features_split.shape} and {train_labels_split.shape}")
print(f"val_split: {val_features_split.shape} and {val_labels_split.shape}")
MIN_ACCURACY = 50
GammaList = ['auto', 'scale']
C_List = [0.01, 0.1, 1, 10, 100, 1000]
Best_SVM = {"gamma":None, "C":None, "accuracy":0}
for gamma in GammaList:
for C in C_List:
clf = NonLinear_SVM(train_features_split, train_labels_split, gamma, C)
accuracy = predict(clf, val_features_split, val_labels_split)
if round(accuracy, 2) > MIN_ACCURACY:
print(f"Gamma: {gamma}, C: {C}, Accuracy: {round(accuracy, 2)}%")
if round(accuracy, 2) > Best_SVM["accuracy"]:
Best_SVM["gamma"] = gamma
Best_SVM["C"] = C
Best_SVM["accuracy"] = round(accuracy, 2)
print("Best parameters: ", Best_SVM) | _____no_output_____ | MIT | Part2.ipynb | ismailfaruk/ECSE415-Final-Project |
5 Fold Cross Validation | def k_fold_SVC(train_features, train_labels, train_index, val_index, gamma, C):
# train and evaluate on a single fold split, returning its validation accuracy
# (the original inner loop retrained the identical split k times, which only inflated runtime)
x_train, x_val = train_features[train_index], train_features[val_index]
y_train, y_val = train_labels[train_index], train_labels[val_index]
clf = NonLinear_SVM(x_train, y_train, gamma, C)
return predict(clf, x_val, y_val)
# 5 fold cross validation dataset
k_folds = 5
kf = KFold(n_splits=k_folds, shuffle=True)
kf.get_n_splits(train_features)
MIN_ACCURACY = 50
GammaList = ['auto', 'scale']
C_List = [0.01, 0.1, 1, 10, 100, 1000]
Best_SVM = {"gamma":None, "C":None, "accuracy":0}
for gamma in GammaList:
for C in C_List:
start_time = time.time()
total_accuracy = 0
for train_index, val_index in kf.split(train_features):
total_accuracy += k_fold_SVC(train_features, train_labels, train_index, val_index, gamma, C)
accuracy = total_accuracy / k_folds  # average validation accuracy over the 5 folds
time_taken = (time.time() - start_time) / k_folds
if round(accuracy, 2) > MIN_ACCURACY:
print(f"Gamma: {gamma}, C: {C}, Accuracy: {round(accuracy, 2)}%, time taken to train/test: {round(time_taken, 2)}")
if round(accuracy, 2) > Best_SVM["accuracy"]:
Best_SVM["gamma"] = gamma
Best_SVM["C"] = C
Best_SVM["accuracy"] = round(accuracy, 2)
print("Best parameters: ", Best_SVM) | _____no_output_____ | MIT | Part2.ipynb | ismailfaruk/ECSE415-Final-Project |
Using Optimal Parameters for SVM Classifier | # Optimal SVM Classifier
gamma = "scale"
C = 10
Optimal_Clf = NonLinear_SVM(train_features_split, train_labels_split, gamma, C)
accuracy = predict(Optimal_Clf, val_features_split, val_labels_split)  # evaluate the newly trained model, not the stale `clf` from the search loop
print(f"Gamma: {gamma}, C: {C}, Accuracy: {round(accuracy, 2)}%") | Gamma: scale, C: 10, Accuracy: 96.62%
| MIT | Part2.ipynb | ismailfaruk/ECSE415-Final-Project |