Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars).
Visualising the clusters on the first two feature columns
plt.figure(figsize=(14, 10))
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s=100, c='tab:orange', label='Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s=100, c='tab:blue', label='Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s=100, c='tab:green', label='Iris-virginica')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=100, c='black', label='Centroids')
plt.title('Clusters K = 3')
plt.legend(loc='upper left', ncol=2)
plt.show()
_____no_output_____
Apache-2.0
Task_2_Clustering.ipynb
BakkeshAS/GRIP_Task_2_Predict_Optimum_Clusters
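The plotting cell above assumes `x`, `kmeans`, and `y_kmeans` already exist. A minimal sketch (not from the original notebook) of how they might have been produced; the file name and column layout are assumptions:

import pandas as pd
from sklearn.cluster import KMeans

iris = pd.read_csv('Iris.csv')         # hypothetical file name
x = iris.iloc[:, [1, 2, 3, 4]].values  # assumed: the four numeric feature columns
kmeans = KMeans(n_clusters=3, init='k-means++', random_state=42)
y_kmeans = kmeans.fit_predict(x)       # one cluster label per sample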
Welcome to an example Binder

Test markup

This notebook uses a Python environment with a few libraries, including `dask`, all of which were specified using a `conda` [environment.yml](../edit/environment.yml) file. To demo the environment, we'll show a simplified example of using `dask` to analyze time series data, adapted from Matthew Rocklin's excellent repo of [dask examples](https://github.com/blaze/dask-examples) — check out that repo for the full version (and many other examples).

Setup plotting
%matplotlib inline
_____no_output_____
BSD-3-Clause
index.ipynb
trailmarkerlib/dataExplore
Turn on a global progress bar
from dask.diagnostics import ProgressBar

progress_bar = ProgressBar()
progress_bar.register()
_____no_output_____
BSD-3-Clause
index.ipynb
trailmarkerlib/dataExplore
Generate fake data
import dask.dataframe as dd

df = dd.demo.make_timeseries(start='2000', end='2015',
                             dtypes={'A': float, 'B': int},
                             freq='5s', partition_freq='3M',
                             seed=1234)
_____no_output_____
BSD-3-Clause
index.ipynb
trailmarkerlib/dataExplore
Compute and plot a cumulative sum
df.A.cumsum().resample('1w').mean().compute().plot();
[########################################] | 100% Completed | 16.5s
BSD-3-Clause
index.ipynb
trailmarkerlib/dataExplore
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.

3.7. Processing webcam images in real-time from the notebook

In this recipe, we show how to communicate data in both directions from the notebook to the Python kernel, and conversely. Specifically, we will retrieve the webcam feed from the browser using HTML5's `<video>` element, and pass it to Python in real time using the interactive capabilities of the IPython notebook 2.0+. This way, we can process the image in Python with an edge detector (implemented in scikit-image), and display it in the notebook in real time. Most of the code for this recipe comes from [Jason Grout's example](https://github.com/jasongrout/ipywidgets).

1. We need to import quite a few modules.
from IPython.html.widgets import DOMWidget
from IPython.utils.traitlets import Unicode, Bytes, Instance
from IPython.display import display
from skimage import io, filter, color
import urllib
import base64
from PIL import Image
import StringIO
import numpy as np
from numpy import array, ndarray
import matplotlib.pyplot as plt
_____no_output_____
BSD-2-Clause
notebooks/chapter03_notebook/07_webcam_py2.ipynb
khanparwaz/PythonProjects
2. We define two functions to convert images from and to base64 strings. This conversion is a common way to pass binary data between processes (here, the browser and Python).
def to_b64(img):
    imgdata = StringIO.StringIO()
    pil = Image.fromarray(img)
    pil.save(imgdata, format='PNG')
    imgdata.seek(0)
    return base64.b64encode(imgdata.getvalue())

def from_b64(b64):
    im = Image.open(StringIO.StringIO(base64.b64decode(b64)))
    return array(im)
_____no_output_____
BSD-2-Clause
notebooks/chapter03_notebook/07_webcam_py2.ipynb
khanparwaz/PythonProjects
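A quick round-trip sanity check for these helpers (an addition, not part of the original recipe); the 4x4 test image is made up:

# Hypothetical round-trip check: encode a tiny image to base64 and decode it back.
test_img = np.zeros((4, 4, 3), dtype=np.uint8)
test_img[0, 0] = [255, 0, 0]  # one red pixel
assert (from_b64(to_b64(test_img)) == test_img).all()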
3. We define a Python function that will process the webcam image in real time. It accepts and returns a NumPy array. This function applies an edge detector with the `roberts()` function in scikit-image.
def process_image(image):
    img = filter.roberts(image[:, :, 0] / 255.)
    return (255 - img * 255).astype(np.uint8)
_____no_output_____
BSD-2-Clause
notebooks/chapter03_notebook/07_webcam_py2.ipynb
khanparwaz/PythonProjects
4. Now, we create a custom widget to handle the bidirectional communication of the video stream between the browser and Python.
class Camera(DOMWidget):
    _view_name = Unicode('CameraView', sync=True)

    # This string contains the base64-encoded raw
    # webcam image (browser -> Python).
    imageurl = Unicode('', sync=True)

    # This string contains the base64-encoded processed
    # webcam image (Python -> browser).
    imageurl2 = Unicode('', sync=True)

    # This function is called whenever the raw webcam
    # image is changed.
    def _imageurl_changed(self, name, new):
        head, data = new.split(',', 1)
        if not data:
            return
        # We convert the base64-encoded string
        # to a NumPy array.
        image = from_b64(data)
        # We process the image.
        image = process_image(image)
        # We convert the processed image
        # to a base64-encoded string.
        b64 = to_b64(image)
        self.imageurl2 = 'data:image/png;base64,' + b64
_____no_output_____
BSD-2-Clause
notebooks/chapter03_notebook/07_webcam_py2.ipynb
khanparwaz/PythonProjects
5. The next step is to write the JavaScript code for the widget.
%%javascript
var video = $('<video>')[0];
var canvas = $('<canvas>')[0];
var canvas2 = $('<img>')[0];
var width = 320;
var height = 0;

require(["widgets/js/widget"], function(WidgetManager){
    var CameraView = IPython.DOMWidgetView.extend({
        render: function(){
            var that = this;
            // We append the HTML elements.
            setTimeout(function() {
                that.$el.append(video).
                         append(canvas).
                         append(canvas2);}, 200);
            // We initialize the webcam.
            var streaming = false;
            navigator.getMedia = (navigator.getUserMedia ||
                                  navigator.webkitGetUserMedia ||
                                  navigator.mozGetUserMedia ||
                                  navigator.msGetUserMedia);
            navigator.getMedia({video: true, audio: false},
                function(stream) {
                    if (navigator.mozGetUserMedia) {
                        video.mozSrcObject = stream;
                    } else {
                        var vendorURL = (window.URL || window.webkitURL);
                        video.src = vendorURL.createObjectURL(stream);
                    }
                    video.controls = true;
                    video.play();
                },
                function(err) {
                    console.log("An error occurred! " + err);
                }
            );
            // We initialize the size of the canvas.
            video.addEventListener('canplay', function(ev){
                if (!streaming) {
                    height = video.videoHeight / (video.videoWidth / width);
                    video.setAttribute('width', width);
                    video.setAttribute('height', height);
                    canvas.setAttribute('width', width);
                    canvas.setAttribute('height', height);
                    canvas2.setAttribute('width', width);
                    canvas2.setAttribute('height', height);
                    streaming = true;
                }
            }, false);
            // Play/Pause functionality.
            var interval;
            video.addEventListener('play', function(ev){
                // We get the picture every 100ms.
                interval = setInterval(takepicture, 100);
            })
            video.addEventListener('pause', function(ev){
                clearInterval(interval);
            })
            // This function is called at each time step.
            // It takes a picture and sends it to the model.
            function takepicture() {
                canvas.width = width;
                canvas.height = height;
                canvas2.width = width;
                canvas2.height = height;
                video.style.display = 'none';
                canvas.style.display = 'none';
                // We take a screenshot from the webcam feed and
                // we put the image in the first canvas.
                canvas.getContext('2d').drawImage(video, 0, 0, width, height);
                // We export the canvas image to the model.
                that.model.set('imageurl', canvas.toDataURL('image/png'));
                that.touch();
            }
        },
        update: function(){
            // This function is called whenever Python modifies
            // the second (processed) image. We retrieve it and
            // we display it in the second canvas.
            var img = this.model.get('imageurl2');
            canvas2.src = img;
            return CameraView.__super__.update.apply(this);
        }
    });
    // Register the view with the widget manager.
    WidgetManager.register_widget_view('CameraView', CameraView);
});
_____no_output_____
BSD-2-Clause
notebooks/chapter03_notebook/07_webcam_py2.ipynb
khanparwaz/PythonProjects
6. Finally, we create and display the widget.
c = Camera()
display(c)
_____no_output_____
BSD-2-Clause
notebooks/chapter03_notebook/07_webcam_py2.ipynb
khanparwaz/PythonProjects
raytracer-imaging

For PET image reconstruction
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
import math
import numpy as np
from numba import cuda, types, from_dtype

import raytracer.cudaOptions
from raytracer.rotation.quaternion import *
from raytracer.raytracer.voxel import *
from raytracer.raytracer.raytrace import *

data_rays = 0.1 * np.loadtxt(rayOptions.data_directory).T  # x,y,z [mm,ns]
data_rays[0, :]

voxel_size = np.array([10, 10, 100])  # odd so we have center point
raytracerA = raytracer(voxel_size, method="ART")

# Reconstruction:
projection_error = raytracerA.reconstruct(data_rays[0:100], iterations=8)

# rayNHit,rayHits,rayWeights = raytracerA.norm_raytrace(0.0*np.pi/180.0, 90.0*np.pi/180.0)
# projection = raytracerA.rayproject(rayNHit, rayHits)
raytracerA.make_projection(phi=0.0*np.pi/180.0, alpha=90.0*np.pi/180.0)

print(rayNHit)
print("---")
print(nvoxels, camera_nrays)
print(rayHits.shape)
print("---")
count = 0
# for r in range(100):
#     if rayNHit[r] > 0:
#         print(rayHits[r, 0:rayNHit[r]])
#         print("---")
# for r in range(100):
#     if rayNHit[r] > 0:
#         print(rayWeights[r, 0:rayNHit[r]])
#         print("---")
print(np.sum(rayNHit > 0))

quaternion.rotate(verts, 20.*np.pi/180.0, 40.*np.pi/180.0)
print(verts)

# set the colors of each object
colors = np.empty(verts.shape, dtype=object)
colors = 'red'

# and plot everything
ax = plt.figure().add_subplot(projection='3d')
ax.voxels(verts, facecolors=colors, edgecolor='k')
plt.show()

ax = plt.figure().add_subplot()
ax.scatter(verts[:, 1], verts[:, 2], c=voxels.flatten(), cmap='inferno', s=2, alpha=0.5)  # project onto x
plt.show()

ax = plt.figure().add_subplot()
ax.scatter(verts[:, 1].astype(int), verts[:, 2].astype(int), c=voxels.flatten(), cmap='inferno', s=6, alpha=0.5)  # project onto x
plt.show()

H, xedges, yedges = np.histogram2d(verts[:, 1], verts[:, 2], bins=camera_size, range=camera_range)
plt.imshow(H.T, cmap='binary', extent=np.array(camera_range).flatten())
xedges
rayverts
_____no_output_____
MIT
examples/raytracer_data.ipynb
akhilsadam/raytracer-imaging
2.1
df = pd.read_csv('week2.csv')
df = df.sort_values(by=['Date'])
df = df.drop(['Unnamed: 2', 'Month.1', 'Year.1'], axis=1)
df.head()
df.dtypes
df['Date'] = pd.to_datetime(df['Date'])
df.dtypes
df.set_index("Date", inplace=True)
df.head()
df.reset_index().plot(x='Date', y='Close_Price');
_____no_output_____
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.2
plt.stem(df.index.values,df.Day_Perc_Change, bottom=0);
C:\Users\Dell\AppData\Roaming\Python\Python37\site-packages\ipykernel_launcher.py:1: UserWarning: In Matplotlib 3.3 individual lines on a stem plot will be added as a LineCollection instead of individual lines. This significantly improves the performance of a stem plot. To remove this warning and switch to the new behaviour, set the "use_line_collection" keyword argument to True. """Entry point for launching an IPython kernel.
MIT
Module 2.ipynb
parth2608/Stock-Analysis
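The warning above (and the identical one in 2.3) comes from older stem-plot behavior; as the message itself suggests, passing `use_line_collection=True` silences it on Matplotlib versions before 3.3. A sketch of the warning-free call:

# Same plot as above; use_line_collection=True is what the warning recommends.
plt.stem(df.index.values, df.Day_Perc_Change, bottom=0,
         use_line_collection=True);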
2.3
df.reset_index().plot(x='Date', y='Total_Traded_Quantity');

sns.set(style="darkgrid")
scaledvolume = df["Total_Traded_Quantity"] - df["Total_Traded_Quantity"].min()
scaledvolume = scaledvolume / scaledvolume.max() * df.Day_Perc_Change.max()

fig, ax = plt.subplots(figsize=(12, 6))
ax.stem(df.index, df.Day_Perc_Change, 'b', markerfmt='bo', label='Daily Percent Change')
ax.plot(df.index, scaledvolume, 'k', label='Volume')
ax.set_xlabel('Date')
plt.legend(loc=2)
plt.tight_layout()
plt.xticks(plt.xticks()[0], df.index.date, rotation=45)
plt.show()
C:\Users\Dell\AppData\Roaming\Python\Python37\site-packages\ipykernel_launcher.py:8: UserWarning: In Matplotlib 3.3 individual lines on a stem plot will be added as a LineCollection instead of individual lines. This significantly improves the performance of a stem plot. To remove this warning and switch to the new behaviour, set the "use_line_collection" keyword argument to True.
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.4
df = df.reset_index()
df.head()

df.Trend.groupby(df.Trend).count().plot(kind='pie', autopct='%.1f%%')
plt.axis('equal')
plt.show()

avg = df.groupby(df['Trend'])['Total_Traded_Quantity'].mean()
med = df.groupby(df['Trend'])['Total_Traded_Quantity'].median()

plt.subplot(2, 1, 1)
avg.plot.bar(color='Blue', label='mean').label_outer()
plt.title('mean')
plt.legend()
plt.show()

plt.subplot(2, 1, 2)
med.plot.bar(color='Orange', label='median')
plt.title('median')
plt.legend()
plt.show()
_____no_output_____
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.5
df['Day_Perc_Change'].plot.hist(bins=20);
_____no_output_____
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.6
jublfood = pd.read_csv('JUBLFOOD.csv')
jublfood = jublfood.drop(jublfood[jublfood.Series != 'EQ'].index)
jublfood.reset_index(inplace=True)
print(jublfood.shape)
jublfood.head()

godrejind = pd.read_csv('GODREJIND.csv')
godrejind = godrejind.drop(godrejind[godrejind.Series != 'EQ'].index)
godrejind.reset_index(inplace=True)
print(godrejind.shape)
godrejind.head()

maruti = pd.read_csv('MARUTI.csv')
maruti = maruti.drop(maruti[maruti.Series != 'EQ'].index)
maruti.reset_index(inplace=True)
print(maruti.shape)
maruti.head()

pvr = pd.read_csv('PVR.csv')
pvr = pvr.drop(pvr[pvr.Series != 'EQ'].index)
pvr.reset_index(inplace=True)
print(pvr.shape)
pvr.head()

tcs = pd.read_csv('TCS.csv')
tcs = tcs.drop(tcs[tcs.Series != 'EQ'].index)
tcs.reset_index(inplace=True)
print(tcs.shape)
tcs.head()

combined = pd.concat([godrejind['Close Price'], jublfood['Close Price'],
                      maruti['Close Price'], pvr['Close Price'], tcs['Close Price']],
                     join='inner', axis=1,
                     keys=['GODREJIND', 'JUBLFOOD', 'MARUTI', 'PVR', 'TCS'])
combined.head()

perc_change = pd.DataFrame()
perc_change['GODREJIND'] = combined['GODREJIND'].pct_change()*100
perc_change['GODREJIND'][0] = 0
perc_change['JUBLFOOD'] = combined['JUBLFOOD'].pct_change()*100
perc_change['JUBLFOOD'][0] = 0
perc_change['MARUTI'] = combined['MARUTI'].pct_change()*100
perc_change['MARUTI'][0] = 0
perc_change['PVR'] = combined['PVR'].pct_change()*100
perc_change['PVR'][0] = 0
perc_change['TCS'] = combined['TCS'].pct_change()*100
perc_change['TCS'][0] = 0
perc_change.head()

sns.pairplot(perc_change);
_____no_output_____
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.7
perc_change = perc_change.drop([0])
perc_change.reset_index(inplace=True)
perc_change.head()

print(perc_change[['GODREJIND', 'JUBLFOOD', 'MARUTI', 'PVR', 'TCS']].rolling(7, min_periods=1).std(ddof=0).head())
plt.plot(perc_change[['GODREJIND', 'JUBLFOOD', 'MARUTI', 'PVR', 'TCS']].rolling(7, min_periods=1).std(ddof=0))
plt.legend(["GODREJIND", "JUBLFOOD", "MARUTI", "PVR", "TCS"])
plt.title('Volatility');
   GODREJIND  JUBLFOOD    MARUTI       PVR       TCS
0   0.000000  0.000000  0.000000  0.000000  0.000000
1   0.215246  1.304872  0.922343  0.743322  0.814782
2   1.539099  2.159120  1.524084  0.821539  0.936910
3   1.378038  1.869992  1.348622  0.713198  1.721081
4   1.484981  1.758872  1.297517  1.012500  1.553279
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.8
nifty = pd.read_csv('Nifty50.csv')
nifty['PCT'] = nifty['Close'].pct_change()*100
nifty = nifty.drop([0])
nifty.reset_index(inplace=True)
nifty.head()

print(nifty[['PCT']].rolling(7, min_periods=1).std(ddof=0).head())
plt.plot(nifty[['PCT']].rolling(7, min_periods=1).std(ddof=0), color='Black');
plt.legend(["Nifty"]);
plt.title('Nifty Volatility');

plt.plot(nifty[['PCT']].rolling(7, min_periods=1).std(ddof=0), color='Black');
plt.plot(perc_change[['GODREJIND', 'JUBLFOOD', 'MARUTI', 'PVR', 'TCS']].rolling(7, min_periods=1).std(ddof=0));
plt.legend(["Nifty", "GODREJIND", "JUBLFOOD", "MARUTI", "PVR", "TCS"]);
plt.title('Volatility Comparison');
_____no_output_____
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.9
short_window = 21
long_window = 34
tcs_roll_21 = combined['TCS'].rolling(short_window, min_periods=1).mean()
tcs_roll_34 = combined['TCS'].rolling(long_window, min_periods=1).mean()

date = pd.to_datetime(tcs['Date'])
signals = pd.DataFrame(index=date)
signals['signal'] = 0
signals['signal'][short_window:] = np.where(tcs_roll_21[short_window:] > tcs_roll_34[short_window:], 1, 0)
signals['position'] = signals['signal'].diff().fillna(0)
signals['short_mavg'] = tcs_roll_21.tolist()
signals['long_mavg'] = tcs_roll_34.tolist()
signals.head()

plt.figure(figsize=(16, 8))
plt.plot(date, combined['TCS'].rolling(7, min_periods=1).mean(), color='red', label='TCS')
plt.plot(date, tcs_roll_21, color='blue', label='21_SMA')
plt.plot(date, tcs_roll_34, color='green', label='34_SMA')
plt.plot(signals.loc[signals.position == 1].index,
         signals['short_mavg'][signals.position == 1],
         '^', markersize=10, color='green', label='Buy')
plt.plot(signals.loc[signals.position == -1].index,
         signals['short_mavg'][signals.position == -1],
         'v', markersize=10, color='red', label='Sell')
plt.legend(loc='best')
plt.xlabel('Date')
plt.ylabel('Price in Rupees')
plt.show()
_____no_output_____
MIT
Module 2.ipynb
parth2608/Stock-Analysis
2.10
tcs_mean_14 = combined.TCS.rolling(14, min_periods=1).mean()
tcs_std_14 = combined.TCS.rolling(14, min_periods=1).std()
upper = tcs_mean_14 + 2*tcs_std_14
lower = tcs_mean_14 - 2*tcs_std_14

plt.figure(figsize=(16, 8))
plt.plot(date, tcs_mean_14, color='black', label='TCS')
plt.plot(date, tcs['Average Price'], color='red', label='Average Price')
plt.plot(date, upper, color='blue', label='Upper Bound')
plt.plot(date, lower, color='green', label='Lower Bound')
plt.xlabel('Date')
plt.legend()
plt.show()
_____no_output_____
MIT
Module 2.ipynb
parth2608/Stock-Analysis
Beam finite element matrices

Based on: [1] Neto, M. A., Amaro, A., Roseiro, L., Cirne, J., & Leal, R. (2015). Finite Element Method for Beams. In Engineering Computation of Structures: The Finite Element Method (pp. 115–156). Springer International Publishing. http://doi.org/10.1007/978-3-319-17710-6_4

The beam finite element, its nodes and degrees-of-freedom (DOFs) can be seen below:

![alt text](beam_DOFs.svg)
from sympy import *
init_printing()

def symb(x, y):
    return symbols('{0}_{1}'.format(x, y), type=float)

E, A, L, G, I_1, I_2, I_3, rho = symbols('E A L G I_1 I_2 I_3 rho', type=float)
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
The finite element matrices should have order 12, accounting for each of the element's DOFs, shown below:
u = Matrix(12, 1, [symb('u', v + 1) for v in range(12)])
transpose(u)
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Axial deformation along $x_1$

In terms of generic coordinates $v_i$:
v_a = Matrix(2, 1, [symb('v', v + 1) for v in range(2)])
transpose(v_a)
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
which are equivalent to the $u_i$ coordinates in the following way:

$$ \mathbf{v} = \mathbf{R} \mathbf{u}, $$

where:

$$ v_1 = u_1, \\ v_2 = u_7, $$

with the following coordinate transformation matrix:
R = zeros(12)
R[1 - 1, 1 - 1] = 1
R[2 - 1, 7 - 1] = 1
R[:2, :]
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Stiffness matrix

Eq. (3.15) of [1]:
K_a = (E * A / L) * Matrix([[ 1, -1],
                            [-1,  1]])
K_a
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Inertia matrix

Eq. (3.16) of [1]:
M_a = (rho * A * L / 6) * Matrix([[2, 1],
                                  [1, 2]])
M_a
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Torsional deformation around $x_1$

According to [1], one can obtain the matrices for the torsional case from the axial case by replacing the elasticity modulus $E$ and the cross-sectional area $A$ by the shear modulus $G$ and the polar area moment of inertia $I_1$. In terms of generic coordinates $v_i$:
v_t = Matrix(2, 1, [symb('v', v + 3) for v in range(2)])
transpose(v_t)
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
which are equivalent to the $u_i$ coordinates in the following way:

$$ v_3 = u_4, \\ v_4 = u_{10}, $$

with the following coordinate transformation matrix:
R[3 - 1, 4 - 1] = 1
R[4 - 1, 10 - 1] = 1
R[0:4, :]
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Stiffness matrix
K_t = K_a.subs([(E, G), (A, I_1)])
K_t
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Inertia matrix
M_t = M_a.subs([(E, G), (A, I_1)])
M_t
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Bending on the plane $x_1-x_3$

In this case the bending rotation occurs around the $x_2$ axis. In terms of generic coordinates $v_i$:
v_b13 = Matrix(4, 1, [symb('v', v + 9) for v in range(4)])
transpose(v_b13)
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
which are equivalent to the $u_i$ coordinates in the following way:

$$ v_9 = u_3, \\ v_{10} = u_5, \\ v_{11} = u_9, \\ v_{12} = u_{11}, $$

with the following coordinate transformation matrix:
R[9 - 1, 3 - 1] = 1
R[10 - 1, 5 - 1] = 1
R[11 - 1, 9 - 1] = 1
R[12 - 1, 11 - 1] = 1
R
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Stiffness matrix
K_b13 = (E * I_2 / L**3) * Matrix([[ 12,     6 * L,    -12,     6 * L   ],
                                   [  6 * L, 4 * L**2, - 6 * L, 2 * L**2],
                                   [-12,    -6 * L,     12,    -6 * L   ],
                                   [  6 * L, 2 * L**2, - 6 * L, 4 * L**2]])
if (not K_b13.is_symmetric()):
    print('Error in K_b13.')
K_b13
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Inertia matrix
M_b13 = (rho * L * A / 420) * Matrix([[ 156,      22 * L,      54,     -13 * L   ],
                                      [  22 * L,   4 * L**2,   13 * L, - 3 * L**2],
                                      [  54,      13 * L,     156,     -22 * L   ],
                                      [- 13 * L, - 3 * L**2, - 22 * L,   4 * L**2]])
if (not M_b13.is_symmetric()):
    print('Error in M_b13.')
M_b13
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Bending on the plane $x_1-x_2$

In this case the bending rotation occurs around the $x_3$ axis, but in the opposite direction. The matrices are similar to the case of bending on the $x_1-x_3$ plane, needing proper coordinate transformation and replacing the index of the area moment of inertia from 2 to 3. Written in terms of generic coordinates $v_i$:
v_b12 = Matrix(4, 1, [symb('v', v + 5) for v in range(4)])
transpose(v_b12)
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
which are equivalent to the $u_i$ coordinates in the following way:

$$ v_5 = u_2, \\ v_6 = -u_6, \\ v_7 = u_8, \\ v_8 = -u_{12}, $$

with the following coordinate transformation matrix:
R[5 - 1, 2 - 1] = 1
R[6 - 1, 6 - 1] = -1
R[7 - 1, 8 - 1] = 1
R[8 - 1, 12 - 1] = -1
R
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Stiffness matrix
K_b12 = K_b13.subs(I_2, I_3)
K_b12
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Inertia matrix
M_b12 = M_b13
if (not M_b12.is_symmetric()):
    print('Error in M_b12.')
M_b12
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Assembly of the full matrices

Accounting for axial loads, torques and bending in both planes.
RAR = lambda A: transpose(R) * A * R
transpose(R**-1 * u)

K_f = diag(K_a, K_t, K_b12, K_b13)
K = RAR(K_f)
if (not K.is_symmetric()):
    print('Error in K.')
K

M_f = diag(M_a, M_t, M_b12, M_b13)
M = RAR(M_f)
if (not M.is_symmetric()):
    print('Error in M.')
M
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
Dynamic matrices for Lin and Parker

See: Lin, J., & Parker, R. G. (1999). Analytical characterization of the unique properties of planetary gear free vibration. Journal of Vibration and Acoustics, Transactions of the ASME, 121(3), 316–321. http://doi.org/10.1115/1.2893982

Considering translation along $x_2$ and $x_3$ and rotation around $x_1$:
# DOFs kept: translations along x_2, x_3 and rotation about x_1, at both nodes
ids = [2, 3, 4, 8, 9, 10]
u_LP = [symb('u', i) for i in ids]
Matrix(u_LP)

T = zeros(12, 6)
T[2 - 1, 1 - 1] = 1
T[3 - 1, 2 - 1] = 1
T[4 - 1, 3 - 1] = 1
T[8 - 1, 4 - 1] = 1
T[9 - 1, 5 - 1] = 1
T[10 - 1, 6 - 1] = 1
(T.T) * K * T
_____no_output_____
MIT
notes/FEM_beam.ipynb
gfsReboucas/Drivetrain-python
🚜 Predicting the Sale Price of Bulldozers using Machine Learning

In this notebook, we're going to go through an example machine learning project with the goal of predicting the sale price of bulldozers.

1. Problem definition

> How well can we predict the future sale price of a bulldozer, given its characteristics and previous examples of how much similar bulldozers have been sold for?

2. Data

The data is downloaded from the Kaggle Bluebook for Bulldozers competition: https://www.kaggle.com/c/bluebook-for-bulldozers/data

There are 3 main datasets:
* Train.csv is the training set, which contains data through the end of 2011.
* Valid.csv is the validation set, which contains data from January 1, 2012 - April 30, 2012. You make predictions on this set throughout the majority of the competition. Your score on this set is used to create the public leaderboard.
* Test.csv is the test set, which won't be released until the last week of the competition. It contains data from May 1, 2012 - November 2012. Your score on the test set determines your final rank for the competition.

3. Evaluation

The evaluation metric for this competition is the RMSLE (root mean squared log error) between the actual and predicted auction prices. For more on the evaluation of this project check: https://www.kaggle.com/c/bluebook-for-bulldozers/overview/evaluation

**Note:** The goal for most regression evaluation metrics is to minimize the error. For example, our goal for this project will be to build a machine learning model which minimises RMSLE.

4. Features

Kaggle provides a data dictionary detailing all of the features of the dataset. You can view this data dictionary on Google Sheets: https://docs.google.com/spreadsheets/d/18ly-bLR8sbDJLITkWG7ozKm8l3RyieQ2Fpgix-beSYI/edit?usp=sharing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn

# Import training and validation sets
df = pd.read_csv("data/bluebook-for-bulldozers/TrainAndValid.csv",
                 low_memory=False)
df.info()
df.isna().sum()
df.columns

fig, ax = plt.subplots()
ax.scatter(df["saledate"][:1000], df["SalePrice"][:1000])

df.saledate[:1000]
df.saledate.dtype
df.SalePrice.plot.hist()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
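For reference, the RMSLE metric named above can be written out explicitly (this is the standard definition, and it matches the `rmsle` helper built later in this notebook):

$$ \mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(p_i + 1) - \log(a_i + 1)\right)^2} $$

where $p_i$ is the predicted price, $a_i$ the actual price, and $n$ the number of samples.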
Parsing dates

When we work with time series data, we want to enrich the time & date component as much as possible. We can do that by telling pandas which of our columns has dates in it using the `parse_dates` parameter.
# Import data again but this time parse dates
df = pd.read_csv("data/bluebook-for-bulldozers/TrainAndValid.csv",
                 low_memory=False,
                 parse_dates=["saledate"])
df.saledate.dtype
df.saledate[:1000]

fig, ax = plt.subplots()
ax.scatter(df["saledate"][:1000], df["SalePrice"][:1000])

df.head()
df.head().T
df.saledate.head(20)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Sort DataFrame by saledate

When working with time series data, it's a good idea to sort it by date.
# Sort DataFrame in date order
df.sort_values(by=["saledate"], inplace=True, ascending=True)
df.saledate.head(20)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Make a copy of the original DataFrame

We make a copy of the original dataframe so when we manipulate the copy, we've still got our original data.
# Make a copy of the original DataFrame to perform edits on
df_tmp = df.copy()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Add datetime parameters for `saledate` column
df_tmp["saleYear"] = df_tmp.saledate.dt.year df_tmp["saleMonth"] = df_tmp.saledate.dt.month df_tmp["saleDay"] = df_tmp.saledate.dt.day df_tmp["saleDayOfWeek"] = df_tmp.saledate.dt.dayofweek df_tmp["saleDayOfYear"] = df_tmp.saledate.dt.dayofyear df_tmp.head().T # Now we've enriched our DataFrame with date time features, we can remove 'saledate' df_tmp.drop("saledate", axis=1, inplace=True) # Check the values of different columns df_tmp.state.value_counts() df_tmp.head() len(df_tmp)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
5. Modelling

We've done enough EDA (we could always do more), but let's start to do some model-driven EDA.
# Let's build a machine learning model
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_jobs=-1, random_state=42)
model.fit(df_tmp.drop("SalePrice", axis=1), df_tmp["SalePrice"])

df_tmp.info()
df_tmp["UsageBand"].dtype
df_tmp.isna().sum()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Convert strings to categories

One way we can turn all of our data into numbers is by converting them into pandas categories. We can check the different datatypes compatible with pandas here: https://pandas.pydata.org/pandas-docs/stable/reference/general_utility_functions.html#data-types-related-functionality
df_tmp.head().T
pd.api.types.is_string_dtype(df_tmp["UsageBand"])

# Find the columns which contain strings
for label, content in df_tmp.items():
    if pd.api.types.is_string_dtype(content):
        print(label)

# If you're wondering what df.items() does, here's an example
random_dict = {"key1": "hello", "key2": "world!"}
for key, value in random_dict.items():
    print(f"this is a key: {key}", f"this is a value: {value}")

# This will turn all of the string values into category values
for label, content in df_tmp.items():
    if pd.api.types.is_string_dtype(content):
        df_tmp[label] = content.astype("category").cat.as_ordered()

df_tmp.info()
df_tmp.state.cat.categories
df_tmp.state.cat.codes
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Thanks to pandas Categories we now have a way to access all of our data in the form of numbers.But we still have a bunch of missing data...
# Check missing data
df_tmp.isnull().sum() / len(df_tmp)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Save preprocessed data
# Export current tmp dataframe
df_tmp.to_csv("data/bluebook-for-bulldozers/train_tmp.csv", index=False)

# Import preprocessed data
df_tmp = pd.read_csv("data/bluebook-for-bulldozers/train_tmp.csv", low_memory=False)
df_tmp.head().T
df_tmp.isna().sum()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Fill missing values

Fill numerical missing values first.
for label, content in df_tmp.items():
    if pd.api.types.is_numeric_dtype(content):
        print(label)

df_tmp.ModelID

# Check for which numeric columns have null values
for label, content in df_tmp.items():
    if pd.api.types.is_numeric_dtype(content):
        if pd.isnull(content).sum():
            print(label)

# Fill numeric rows with the median
for label, content in df_tmp.items():
    if pd.api.types.is_numeric_dtype(content):
        if pd.isnull(content).sum():
            # Add a binary column which tells us if the data was missing or not
            df_tmp[label+"_is_missing"] = pd.isnull(content)
            # Fill missing numeric values with median
            df_tmp[label] = content.fillna(content.median())

# Demonstrate how median is more robust than mean
hundreds = np.full((1000,), 100)
hundreds_billion = np.append(hundreds, 1000000000)
np.mean(hundreds), np.mean(hundreds_billion), np.median(hundreds), np.median(hundreds_billion)

# Check if there's any null numeric values
for label, content in df_tmp.items():
    if pd.api.types.is_numeric_dtype(content):
        if pd.isnull(content).sum():
            print(label)

# Check to see how many examples were missing
df_tmp.auctioneerID_is_missing.value_counts()
df_tmp.isna().sum()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Filling and turning categorical variables into numbers
# Check for columns which aren't numeric
for label, content in df_tmp.items():
    if not pd.api.types.is_numeric_dtype(content):
        print(label)

# Turn categorical variables into numbers and fill missing
for label, content in df_tmp.items():
    if not pd.api.types.is_numeric_dtype(content):
        # Add binary column to indicate whether sample had missing value
        df_tmp[label+"_is_missing"] = pd.isnull(content)
        # Turn categories into numbers and add +1
        df_tmp[label] = pd.Categorical(content).codes + 1

pd.Categorical(df_tmp["state"]).codes + 1
df_tmp.info()
df_tmp.head().T
df_tmp.isna().sum()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Now that all of our data is numeric and our dataframe has no missing values, we should be able to build a machine learning model.
df_tmp.head()
len(df_tmp)

%%time
# Instantiate model
model = RandomForestRegressor(n_jobs=-1, random_state=42)
# Fit the model
model.fit(df_tmp.drop("SalePrice", axis=1), df_tmp["SalePrice"])

# Score the model
model.score(df_tmp.drop("SalePrice", axis=1), df_tmp["SalePrice"])
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
**Question:** Why doesn't the above metric hold water? (Why isn't the metric reliable? Hint: the model was scored on the same data it was trained on.)

Splitting data into train/validation sets
df_tmp.saleYear
df_tmp.saleYear.value_counts()

# Split data into training and validation
df_val = df_tmp[df_tmp.saleYear == 2012]
df_train = df_tmp[df_tmp.saleYear != 2012]
len(df_val), len(df_train)

# Split data into X & y
X_train, y_train = df_train.drop("SalePrice", axis=1), df_train.SalePrice
X_valid, y_valid = df_val.drop("SalePrice", axis=1), df_val.SalePrice
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
y_train
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Building an evaluation function
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score

def rmsle(y_test, y_preds):
    """
    Calculates root mean squared log error between predictions and true labels.
    """
    return np.sqrt(mean_squared_log_error(y_test, y_preds))

# Create function to evaluate model on a few different levels
def show_scores(model):
    train_preds = model.predict(X_train)
    val_preds = model.predict(X_valid)
    scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
              "Valid MAE": mean_absolute_error(y_valid, val_preds),
              "Training RMSLE": rmsle(y_train, train_preds),
              "Valid RMSLE": rmsle(y_valid, val_preds),
              "Training R^2": r2_score(y_train, train_preds),
              "Valid R^2": r2_score(y_valid, val_preds)}
    return scores
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Testing our model on a subset (to tune the hyperparameters)
# # This takes far too long... for experimenting
# %%time
# model = RandomForestRegressor(n_jobs=-1,
#                               random_state=42)
# model.fit(X_train, y_train)

len(X_train)

# Change max_samples value
model = RandomForestRegressor(n_jobs=-1,
                              random_state=42,
                              max_samples=10000)

%%time
# Cutting down on the max number of samples each estimator can see improves training time
model.fit(X_train, y_train)

(X_train.shape[0] * 100) / 1000000
10000 * 100
show_scores(model)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Hyperparameter tuning with RandomizedSearchCV
%%time
from sklearn.model_selection import RandomizedSearchCV

# Different RandomForestRegressor hyperparameters
rf_grid = {"n_estimators": np.arange(10, 100, 10),
           "max_depth": [None, 3, 5, 10],
           "min_samples_split": np.arange(2, 20, 2),
           "min_samples_leaf": np.arange(1, 20, 2),
           "max_features": [0.5, 1, "sqrt", "auto"],
           "max_samples": [10000]}

# Instantiate RandomizedSearchCV model
rs_model = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1,
                                                    random_state=42),
                              param_distributions=rf_grid,
                              n_iter=2,
                              cv=5,
                              verbose=True)

# Fit the RandomizedSearchCV model
rs_model.fit(X_train, y_train)

# Find the best model hyperparameters
rs_model.best_params_

# Evaluate the RandomizedSearch model
show_scores(rs_model)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Train a model with the best hyperparameters

**Note:** These were found after 100 iterations of `RandomizedSearchCV`.
%%time

# Most ideal hyperparameters
ideal_model = RandomForestRegressor(n_estimators=40,
                                    min_samples_leaf=1,
                                    min_samples_split=14,
                                    max_features=0.5,
                                    n_jobs=-1,
                                    max_samples=None,
                                    random_state=42)  # random state so our results are reproducible

# Fit the ideal model
ideal_model.fit(X_train, y_train)

# Scores for ideal_model (trained on all the data)
show_scores(ideal_model)

# Scores on rs_model (only trained on ~10,000 examples)
show_scores(rs_model)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Make predictions on test data
# Import the test data
df_test = pd.read_csv("data/bluebook-for-bulldozers/Test.csv",
                      low_memory=False,
                      parse_dates=["saledate"])
df_test.head()

# Make predictions on the test dataset
test_preds = ideal_model.predict(df_test)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Preprocessing the data (getting the test dataset in the same format as our training dataset)
def preprocess_data(df):
    """
    Performs transformations on df and returns transformed df.
    """
    df["saleYear"] = df.saledate.dt.year
    df["saleMonth"] = df.saledate.dt.month
    df["saleDay"] = df.saledate.dt.day
    df["saleDayOfWeek"] = df.saledate.dt.dayofweek
    df["saleDayOfYear"] = df.saledate.dt.dayofyear
    df.drop("saledate", axis=1, inplace=True)

    for label, content in df.items():
        # Fill the numeric rows with median
        if pd.api.types.is_numeric_dtype(content):
            if pd.isnull(content).sum():
                # Add a binary column which tells us if the data was missing or not
                df[label+"_is_missing"] = pd.isnull(content)
                # Fill missing numeric values with median
                df[label] = content.fillna(content.median())

        # Fill categorical missing data and turn categories into numbers
        if not pd.api.types.is_numeric_dtype(content):
            df[label+"_is_missing"] = pd.isnull(content)
            # We add +1 to the category code because pandas encodes missing categories as -1
            df[label] = pd.Categorical(content).codes + 1

    return df

# Process the test data
df_test = preprocess_data(df_test)
df_test.head()

# Make predictions on updated test data
test_preds = ideal_model.predict(df_test)

X_train.head()

# We can find how the columns differ using sets
set(X_train.columns) - set(df_test.columns)

# Manually adjust df_test to have auctioneerID_is_missing column
df_test["auctioneerID_is_missing"] = False
df_test.head()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Finally, now that our test dataframe has the same features as our training dataframe, we can make predictions!
# Make predictions on the test data
test_preds = ideal_model.predict(df_test)
test_preds
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
We've made some predictions but they're not in the same format Kaggle is asking for: https://www.kaggle.com/c/bluebook-for-bulldozers/overview/evaluation
# Format predictions into the same format Kaggle is after
df_preds = pd.DataFrame()
df_preds["SalesID"] = df_test["SalesID"]
df_preds["SalesPrice"] = test_preds
df_preds

# Export prediction data
df_preds.to_csv("data/bluebook-for-bulldozers/test_predictions.csv", index=False)
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
Feature Importance

Feature importance seeks to figure out which attributes of the data were most important when it comes to predicting the **target variable** (SalePrice).
# Find feature importance of our best model
ideal_model.feature_importances_

# Helper function for plotting feature importance
def plot_features(columns, importances, n=20):
    df = (pd.DataFrame({"features": columns,
                        "feature_importances": importances})
          .sort_values("feature_importances", ascending=False)
          .reset_index(drop=True))

    # Plot the dataframe (use n consistently; the original sliced [:20] here)
    fig, ax = plt.subplots()
    ax.barh(df["features"][:n], df["feature_importances"][:n])
    ax.set_ylabel("Features")
    ax.set_xlabel("Feature importance")
    ax.invert_yaxis()

plot_features(X_train.columns, ideal_model.feature_importances_)

df["Enclosure"].value_counts()
_____no_output_____
MIT
workbench materials/end-to-end-bluebook-bulldozer-price-regression-video.ipynb
Mikolaj-Myszka/DataScience-bulldozer-price-prediction
BHSA specifics
A = use("bhsa:clone", checkout="clone", hoist=globals()) A.reuse() A.showContext()
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
A slot with standard features
A.displayShow("standardFeatures") w = 2 p = L.u(w, otype="phrase")[0] tree = A.unravel(w, explain=True) tree = A.unravel(p, explain=True) A.pretty(w, standardFeatures=True) A.pretty(p, standardFeatures=True)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Base types
p = 675477
highlights = {p}
A.pretty(p, highlights=highlights)
A.pretty(p, highlights=highlights, baseTypes="phrase")
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
A tricky verse
v = T.nodeFromSection(("Genesis", 7, 14))
v
A.plain(v, explain=False)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Half verses are hidden. This verse is divided (at top level) in 3 spans: one for each clause chunk. The first and last chunks belong to clause 1, and the middle chunk is clause 2. Look what happens if we set `hideTypes=False`:
A.plain(v, hideTypes=False, explain=False)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
When you make the browser window narrower, the line breaks are different. Because now the verse is divided in 2 spans: one for each half verse, and the separation between the half verses is within the third clause chunk. See it in pretty view:
A.pretty(v, explain=False)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
We can selectively unhide the half verse and leave everything else hidden:
A.pretty(
    v,
    hideTypes=True,
    hiddenTypes="subphrase phrase_atom clause_atom sentence_atom",
    explain=False,
)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
This shows the reason for the split. We can also print the full structure (although that's a bit over the top):
A.pretty(v, hideTypes=False)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Alignment in tables
A.table(
    ((2, 213294, 426583), (3, 213295, 426582), (4, 213296, 426581)),
    withPassage={1, 2}
)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Subphrases

Subphrases with equal slots:
w = 3461
sps = L.u(w, otype="subphrase")
for sp in sps[0:-1]:
    print(E.oslots.s(sp))
    A.pretty(sp, standardFeatures=True, extraFeatures="rela", withNodes=True)
array('I', [3461, 3462, 3463])
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Sentence spanning two verses
A.pretty(v, withNodes=True, explain=False)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Base types
cl = 427612
words = L.d(cl, otype="word")
phrases = L.d(cl, otype="phrase")
highlights = {phrases[1]: "lightsalmon", words[2]: "lightblue"}
A.pretty(cl, baseTypes="phrase", withNodes=True, highlights=highlights, explain=True)
<0> TOP <1> clause 427612 {322-326} <2> phrase* 651725 {322-323} <3> word 322 {322} <3> word 323 {323} <2> phrase* 651726 {324-326} <3> word 324 {324} <3> word 325 {325} <3> word 326 {326}
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Gaps
c = 427931
s = L.u(c, otype="sentence")[0]
v = L.u(c, otype="verse")[0]
highlights = {c: "khaki", c + 1: "lightblue"}

A.webLink(s)
A.plain(s, withNodes=True, highlights=highlights)
A.plain(s)
T.formats
A.plain(c, fmt="text-phono-full", withNodes=True, explain=False)
A.plain(c, withNodes=False, explain=False)
A.pretty(c, withNodes=True, explain=False)
A.pretty(c, withNodes=True, hideTypes=False, explain=False)
A.pretty(s, withNodes=True, highlights=highlights, explain=False)
A.plain(s, withNodes=True, hideTypes=False)
A.pretty(s, withNodes=True, highlights=highlights, hideTypes=False, explain=False)
A.plain(427931)
A.pretty(427931, withNodes=True)
A.plain(427932)
A.pretty(427932)

sp = F.otype.s("subphrase")[0]
v = L.u(sp, otype="verse")[0]
A.pretty(v)

p = 653380
s = L.u(p, otype="sentence")[0]
c = L.d(s, otype="clause")[0]
A.plain(p)
A.pretty(p)
A.pretty(p, baseTypes="phrase_atom", hideTypes=True)
A.pretty(p, baseTypes="phrase")
A.plain(s)
A.plain(s, plainGaps=False)
A.plain(c, withNodes=True)
A.pretty(c)
A.pretty(s, baseTypes={"subphrase", "word"})
A.pretty(s, baseTypes="phrase")
A.prettyTuple((p,), 1, baseTypes="phrase")
A.pretty(p, baseTypes="phrase")
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Atom types
p = F.otype.s("phrase")[0] pa = F.otype.s("phrase_atom")[0]
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Plain
A.plain(p, highlights={p, pa})
A.plain(p, highlights={p, pa}, hideTypes=False)
A.plain(p, highlights={p})
A.plain(p, highlights={p}, hideTypes=False)
A.plain(p, highlights={pa})
A.plain(p, highlights={pa}, hideTypes=False)
A.plain(pa, highlights={p, pa})
A.plain(pa, highlights={p, pa}, hideTypes=False)
A.plain(pa, highlights={pa})
A.plain(pa, highlights={pa}, hideTypes=False)
A.plain(pa, highlights={p})
A.plain(pa, highlights={p}, hideTypes=False)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Pretty
A.pretty(p, highlights={p, pa})
A.pretty(p, highlights={p, pa}, hideTypes=False)
A.pretty(p, highlights={p})
A.pretty(p, highlights={p}, hideTypes=False)
A.pretty(p, highlights={pa})
A.pretty(p, highlights={pa}, hideTypes=False)
A.pretty(pa, highlights={p, pa})
A.pretty(pa, highlights={p, pa}, hideTypes=False)
A.pretty(pa, highlights={pa})
A.pretty(pa, highlights={pa}, hideTypes=False)
A.pretty(pa, highlights={p})
A.pretty(pa, highlights={p}, hideTypes=False)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Highlights
cl = 435509
ph = 675481
w = 38625
highlights = {ph, w}
A.pretty(cl, highlights=highlights, withNodes=True)
A.pretty(cl, highlights=highlights, baseTypes={"phrase"}, withNodes=True)
A.plain(ph, highlights=highlights)
A.plain(ph, highlights=highlights, baseTypes={"phrase"}, withNodes=True)
typeShow(A)
_____no_output_____
MIT
zz_test/030-bhsa.ipynb
sethbam9/tutorials
Starting with a 1-indexed array of zeros and a list of operations, for each operation add a value to each array element between two given indices, inclusive. Once all operations have been performed, return the maximum value in the array.
def main():
    n, m = map(int, input().split())
    xs = [0] * (n + 2)
    for _ in range(m):
        a, b, k = map(int, input().split())
        xs[a] += k
        xs[b + 1] -= k
    answer = 0
    current = 0
    for x in xs:
        current += x
        answer = max(answer, current)
    print(answer)

if __name__ == '__main__':
    main()

import math
import os
import random
import re
import sys

# Complete the arrayManipulation function below.
def arrayManipulation(n, queries):
    res = [0] * (n + 1)
    for row in range(len(queries)):
        a = queries[row][0]
        b = queries[row][1]
        k = queries[row][2]
        res[a - 1] += k
        res[b] -= k
    sm = 0
    mx = 0
    for i in range(len(res)):
        sm += res[i]
        if sm > mx:
            mx = sm
    return mx

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')
    nm = input().split()
    n = int(nm[0])
    m = int(nm[1])
    queries = []
    for _ in range(m):
        queries.append(list(map(int, input().rstrip().split())))
    result = arrayManipulation(n, queries)
    fptr.write(str(result) + '\n')
    fptr.close()
_____no_output_____
MIT
Interview Preparation Kit/1. arrays/4. array manipulation.ipynb
faisalsanto007/Hakcerrank-problem-solving
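A tiny worked example of the difference-array trick used above (the numbers are made up for illustration):

# n = 5 with operations: add 100 on [1, 2], add 100 on [2, 5], add 100 on [3, 4].
queries = [(1, 2, 100), (2, 5, 100), (3, 4, 100)]
diff = [0] * 7          # n + 2 slots so index b + 1 is always valid
for a, b, k in queries:
    diff[a] += k        # k starts contributing at index a
    diff[b + 1] -= k    # and stops contributing after index b
running, best = 0, 0
for d in diff:
    running += d        # the prefix sum reconstructs the array values
    best = max(best, running)
print(best)             # -> 200 (indices 2 through 4 reach 100 + 100)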
Forward Kinematics for Wheeled Robots; Dead Reckoning
# Preparation
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import matplotlib.pyplot as plt
import ipywidgets
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
Let's first set up useful functions (see the transforms2d notebook).
def mktr(x, y):
    return np.array([[1, 0, x],
                     [0, 1, y],
                     [0, 0, 1]])

def mkrot(theta):
    return np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 1]])

def drawf(f, ax=None, name=None):
    """ Draw frame defined by f on axis ax (if provided) or on plt.gca() otherwise """
    xhat = f @ np.array([[0, 0, 1], [1, 0, 1]]).T
    yhat = f @ np.array([[0, 0, 1], [0, 1, 1]]).T
    if(not ax):
        ax = plt.gca()
    ax.plot(xhat[0, :], xhat[1, :], 'r-')  # transformed x unit vector
    ax.plot(yhat[0, :], yhat[1, :], 'g-')  # transformed y unit vector
    if(name):
        ax.text(xhat[0, 0], xhat[1, 0], name, va="top", ha="center")
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
A function to draw a robot at a given pose `f`
def drawrobot(f, l, ax=None, alpha=0.5):
    """ Draw robot at f, with wheel distance from center l,
    on axis ax (if provided) or on plt.gca() otherwise.
    If l is None, no wheels are drawn """
    if(not ax):
        ax = plt.gca()
    robot = ([[-1,  2, -1, -1],   # x
              [-1,  0,  1, -1]])  # y
    robot = np.array(robot)
    robot = np.vstack((
        robot * 0.1,  # scale by 0.1 units
        np.ones((1, robot.shape[1]))))
    robott = f @ robot
    wheell = np.array([
        [-0.05, 0.05],
        [l, l],
        [1, 1]
    ])
    wheelr = wheell * np.array([[1, -1, 1]]).T
    wheellt = f @ wheell
    wheelrt = f @ wheelr
    ax.plot(robott[0, :], robott[1, :], 'k-', alpha=alpha)
    ax.plot(wheellt[0, :], wheellt[1, :], 'k-', alpha=alpha)
    ax.plot(wheelrt[0, :], wheelrt[1, :], 'k-', alpha=alpha)
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
Note how the frame is centered in the middle point between the two wheels, and the robot points towards the $x$ axis.
drawf(np.eye(3))
drawrobot(np.eye(3), 0.1)
drawrobot(mktr(0.5, 0.3) @ mkrot(np.pi/4), 0.1)
plt.gca().axis("equal")
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
The kinematic model of a differential-drive robot

![image.png](attachment:image.png)

For a given differential drive robot, we have the following (fixed) parameter:
* $l$: distance from the robot frame to each wheel. The distance between the wheels is therefore $2 \cdot l$.

We control the angular speed of each wheel $\phi_L, \phi_R$. Given the wheel radius $r$, the tangential speed for each wheel is $v_L = r \phi_L$ and $v_R = r \phi_R$, respectively. From now on, we assume we directly control (or measure) $v_L$ and $v_R$.

Assuming that $v_L$ and $v_R$ are constant during a short time interval $[t, t+\delta t]$, we have three ways to update the pose of the robot:
* Euler method
* Runge-Kutta method
* Exact method

The function below implements the exact method, and returns a transform from the pose at $t$ to the pose at $t+\delta t$.
def ddtr(vl, vr, l, dt):
    """ Returns the pose transform for a motion with duration dt of a
    differential drive robot with wheel speeds vl and vr and wheelbase l """
    if(np.isclose(vl, vr)):
        # we are moving straight, R is at the infinity and we handle this case separately
        return mktr((vr + vl)/2*dt, 0)  # note we translate along x
    omega = (vr - vl) / (2 * l)   # angular speed of the robot frame
    R = l * (vr + vl) / (vr - vl) # signed distance from the robot to the ICR. Make sure you understand this!
    return mktr(0, R) @ mkrot(omega * dt) @ mktr(0, -R)
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
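A quick sanity check of `ddtr` (an addition, not in the original notebook): equal wheel speeds should give a pure translation along $x$, and opposite speeds a pure rotation in place.

# Equal speeds: straight motion, expect a translation of v*dt = 0.1 along x.
print(ddtr(0.1, 0.1, 0.05, 1.0))   # ~ mktr(0.1, 0)
# Opposite speeds: spin in place, expect a pure rotation with
# omega = (0.1 - (-0.1)) / (2 * 0.05) = 2 rad/s and no translation.
print(ddtr(-0.1, 0.1, 0.05, 1.0))  # ~ mkrot(2.0)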
Let's test our function. Try special cases and try for each to predict where the ICR will be.* $v_R = -v_L$* $v_R = 0$, $v_L > 0$* $v_R = 0.5 \cdot v_L$
l = 0.1
initial_frame = np.eye(3)  # try changing this

@ipywidgets.interact(vl=ipywidgets.FloatSlider(min=-2, max=+2),
                     vr=ipywidgets.FloatSlider(min=-2, max=+2))
def f(vl, vr):
    drawf(initial_frame)  # Initial frame
    f = ddtr(vl, vr, l, 1)
    drawf(f)
    drawrobot(f, l)
    plt.axis("equal")
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
This approach tells you how to move from the pose at $t$ to the pose at $t+\delta t$. Then you can concatenate multiple transformations.
dt = 1.0
Ts = [mktr(1, 1) @ mkrot(np.pi/4)]
vl, vr = 0.10, 0.05
l = 0.1
for i in range(10):
    Ts.append(Ts[-1] @ ddtr(vl, vr, l, dt))

drawf(np.eye(3))
for T in Ts:
    drawrobot(T, l)
plt.axis("equal")
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
Question: in the example above, would you get the same result if you estimated the final position in a single step? Try that, and make sure you understand the result.
@ipywidgets.interact(
    vl=ipywidgets.FloatSlider(min=-0.5, max=0.5, value=0, step=0.02),
    vr=ipywidgets.FloatSlider(min=-0.5, max=0.5, value=0, step=0.02),
    l=ipywidgets.FloatSlider(min=0.05, max=0.15, value=0.10, step=0.01))
def f(vl, vr, l):
    Ts = [np.eye(3)]
    for i in range(10):
        Ts.append(Ts[-1] @ ddtr(vl, vr, l, 1))
    drawf(np.eye(3))
    for T in Ts:
        drawrobot(T, l)
    plt.axis("equal")
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
Exercise

Implement the same approach, using the Euler method for integration. Compare the results with the method above using different values for $\delta t$ (a minimal sketch of the Euler update appears right after the dead-reckoning demo below).

Dead Reckoning

We now have all the necessary info in order to predict the trajectory when the wheel speeds change over time. We define the initial and final speeds for the left and right wheels, and let them vary linearly from $t=0$ to $t=10$. Can you set the parameters to get an s-shaped path? Try changing the value of `dt` and make sure you understand under what conditions the final pose of the robot changes when you change `dt`.
l = 0.05

@ipywidgets.interact(
    vl0=ipywidgets.FloatSlider(min=-0.5, max=0.5, value=0, step=0.02),
    vr0=ipywidgets.FloatSlider(min=-0.5, max=0.5, value=0, step=0.02),
    vl1=ipywidgets.FloatSlider(min=-0.5, max=0.5, value=0, step=0.02),
    vr1=ipywidgets.FloatSlider(min=-0.5, max=0.5, value=0, step=0.02),
    dt=ipywidgets.Select(options=[1, 5]))
def f(vl0, vr0, vl1, vr1, dt):
    t0 = 0
    t1 = 10

    def wheelspeeds(t):
        return (vl0 + (vl1-vl0)*(t-t0)/(t1-t0),
                vr0 + (vr1-vr0)*(t-t0)/(t1-t0))

    ts = np.arange(t0, t1+dt, dt)
    vls, vrs = [], []
    for t in ts:
        vl, vr = wheelspeeds(t)
        vls.append(vl)
        vrs.append(vr)

    cT = np.eye(3)
    Ts = []
    for i, t in enumerate(ts):
        if(i == 0):
            Ts.append(cT)
        else:
            vl, vr = vls[i-1], vrs[i-1]
            cT = cT @ ddtr(vl, vr, l, dt)
            Ts.append(cT)

    fig, ax = plt.subplots()
    ax.plot(ts, vls, label=("left"))
    ax.plot(ts, vrs, label=("right"))
    ax.set(xlabel="time", ylabel="wheel tangential speed")
    ax.legend()

    fig, ax = plt.subplots()
    drawf(np.eye(3), ax=ax)
    for T in Ts:
        drawrobot(T, l, ax=ax)
    drawf(Ts[-1], name="time = {}".format(ts[-1]), ax=ax)
    plt.axis("equal")
_____no_output_____
MIT
Robotics Concepts/03 wheeled.ipynb
gyani91/Robotics
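For the exercise above, here is a minimal sketch (an addition, not from the original notebook) of the Euler update; it reuses the `mktr` and `mkrot` helpers defined earlier. To first order, the robot translates along its current heading and then rotates:

def ddtr_euler(vl, vr, l, dt):
    """ Euler-integrated pose transform over dt for a differential-drive
    robot; compare with the exact ddtr above for small and large dt. """
    v = (vr + vl) / 2            # forward speed of the robot frame
    omega = (vr - vl) / (2 * l)  # angular speed of the robot frame
    # First-order update: translate v*dt along the current heading, then
    # rotate; the mismatch with the exact method vanishes as dt -> 0.
    return mktr(v * dt, 0) @ mkrot(omega * dt)

# Drop-in replacement for ddtr in the loops above, e.g.:
# Ts.append(Ts[-1] @ ddtr_euler(vl, vr, l, dt))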
Sorting Lists of Lists

Sort the following list of lists by the grades in descending order. The desired output should be:

[['Kaylee', 99], ['Simon', 99], ['Zoe', 85], ['Malcolm', 80], ['Wash', 79]]

Hints: https://wiki.python.org/moin/HowTo/Sorting
grades = [["Malcolm", 80], ["Zoe", 85], ["Kaylee", 99], ["Simon", 99], ["Wash", 79]]
_____no_output_____
ADSL
Python-Drills/01-Sort_List_of_Lists/Sort_List_of_Lists.ipynb
SVEENASHARMA/web-design-challenge
YOUR CODE HERE
sorted_list = sorted(grades, key=lambda x: (-x[1], x[0]))
print(sorted_list)
[['Kaylee', 99], ['Simon', 99], ['Zoe', 85], ['Malcolm', 80], ['Wash', 79]]
ADSL
Python-Drills/01-Sort_List_of_Lists/Sort_List_of_Lists.ipynb
SVEENASHARMA/web-design-challenge
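The linked Sorting HowTo also describes an equivalent two-pass approach that exploits the stability of Python's sort (a sketch producing the same output):

from operator import itemgetter

# Sort by the secondary key first (name, ascending), then by the primary key
# (grade, descending); stability keeps names in order within equal grades.
by_name = sorted(grades, key=itemgetter(0))
by_grade = sorted(by_name, key=itemgetter(1), reverse=True)
print(by_grade)  # [['Kaylee', 99], ['Simon', 99], ['Zoe', 85], ['Malcolm', 80], ['Wash', 79]]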
In this challenge we jump directly into building neural networks. We won't get into much theory or generality, but by the end of this exercise you'll build a very simple example of one, and in the meantime gain some intuition for how they work. First, we import numpy as usual.
# LAMBDA SCHOOL
#
# MACHINE LEARNING
#
# MIT LICENSE

import numpy as np
_____no_output_____
MIT
Week 09 Neural Networks/Code Challenges/Day 1 XOR.ipynb
jraval/LambdaSchoolDataScience
Next, look at the Wikipedia article for the XOR function (https://en.wikipedia.org/wiki/XOR_gate). Basically, it's a function that takes in two truth values (aka booleans) x and y and spits out a third truth value f(x,y) according to the following rule: f(x,y) is true when x is true OR y is true, but not both. If we use the common representation wherein "1" means "True" and "0" means "False", this means that f(0,0) = f(1,1) = 0 and f(0,1) = f(1,0) = 1. Check that this makes sense!

Your first task for today is to implement the XOR function. There are slick ways to do this using modular arithmetic (if you're into that sort of thing), but implement it however you like. Check that it gives the right values for each of the inputs (0,0), (0,1), (1,0), and (1,1).
def xorFunction(x, y):
    if x == y:
        return False
    else:
        return True
_____no_output_____
MIT
Week 09 Neural Networks/Code Challenges/Day 1 XOR.ipynb
jraval/LambdaSchoolDataScience
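A quick check (an addition) that the function reproduces the truth table described above:

# Expect: False, True, True, False
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, xorFunction(x, y))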
Great. Now, define a function sigma(x) that acts the way that the sigmoid function does. If you don't remember exactly how this works, check Wikipedia or ask one of your classmates.
def sigma(x):
    return 1 / (1 + np.exp(-x))
_____no_output_____
MIT
Week 09 Neural Networks/Code Challenges/Day 1 XOR.ipynb
jraval/LambdaSchoolDataScience
Most machine learning algorithms have free parameters that we tweak to get the behavior we want, and this is no exception. Introduce two variables a and b and assign them both to the value 10 (for now).
a = 10
b = 10
_____no_output_____
MIT
Week 09 Neural Networks/Code Challenges/Day 1 XOR.ipynb
jraval/LambdaSchoolDataScience
Finally, here's our first neural network. Just like linear and logistic regression, it's nothing more than a function that takes in our inputs (x and y) and returns an output according to some prescribed rule. Today our rule consists of the following steps:

Step 1: Take x and y and calculate ax + by.

Step 2: Plug the result of step 1 into the sigma function we introduced earlier.

Step 3: Take the result of step 2 and round it to the nearest whole number.

Define a function NN(x,y) that takes in x and y and returns the result of performing these steps.
def NN(x, y):
    linear = a*x + b*y
    out = sigma(linear)
    return np.round(out)
_____no_output_____
MIT
Week 09 Neural Networks/Code Challenges/Day 1 XOR.ipynb
jraval/LambdaSchoolDataScience
See what happens when you plug the values (0,0), (0,1), (1,0), and (1,1) into NN. The last (and possibly trickiest) part of this assignment is to try and find values of a and b such that NN and XOR give the same outputs on each of those inputs. If you find a solution, share it. If you can't, talk with your classmates and see how they do. Feel free to collaborate!
print([NN(*args) for args in [(0, 0), (0, 1), (1, 0), (1, 1)]])
[0.0, 1.0, 1.0, 1.0]
MIT
Week 09 Neural Networks/Code Challenges/Day 1 XOR.ipynb
jraval/LambdaSchoolDataScience
The XOR function cannot be learned by a single unit, which has a linear decision boundary.See: https://www.youtube.com/watch?v=kNPGXgzxoHw
def NN2(x, y):
    # Hidden layer: h1 acts as an OR gate, h2 as a NAND gate.
    h1 = np.round(sigma(w11*x + w11*y + b11))
    h2 = np.round(sigma(w12*x + w12*y + b12))
    # Output layer: AND of h1 and h2, which yields XOR.
    out = np.round(sigma(w21*h1 + w22*h2 + b2))
    return out

w11 = 20
w12 = -20
b11 = -10
b12 = 30
w21 = 20
w22 = 20
b2 = -30

print([NN2(*args) for args in [(0, 0), (0, 1), (1, 0), (1, 1)]])
[0.0, 1.0, 1.0, 0.0]
MIT
Week 09 Neural Networks/Code Challenges/Day 1 XOR.ipynb
jraval/LambdaSchoolDataScience