Output Table:
orbit_df.head()
_____no_output_____
Apache-2.0
notebooks/Orbit Computation/Orbit Computation.ipynb
open-space-collective/open-space-toolk
2D plot, over **World Map**:
figure = go.Figure( data = go.Scattergeo( lon = orbit_df['$Longitude$'], lat = orbit_df['$Latitude$'], mode = 'lines', line = go.scattergeo.Line( width = 1, color = 'red' ) ), layout = go.Layout( title = None, showlegend = False, height=1000, geo = go.layout.Geo( showland = True, landcolor = 'rgb(243, 243, 243)', countrycolor = 'rgb(204, 204, 204)' ) ) ) figure.show()
_____no_output_____
Apache-2.0
notebooks/Orbit Computation/Orbit Computation.ipynb
open-space-collective/open-space-toolk
3D plot, in **Earth Fixed** frame:
figure = go.Figure( data = [ go.Scattergeo( lon = orbit_df['$Longitude$'], lat = orbit_df['$Latitude$'], mode = 'lines', line = go.scattergeo.Line( width = 2, color = 'rgb(255, 62, 79)' ) ) ], layout = go.Layout( title = None, showlegend = False, width = 800, height = 800, geo = go.layout.Geo( showland = True, showlakes = True, showcountries = False, showocean = True, countrywidth = 0.0, landcolor = 'rgb(100, 100, 100)', lakecolor = 'rgb(240, 240, 240)', oceancolor = 'rgb(240, 240, 240)', projection = dict( type = 'orthographic', rotation = dict( lon = -100, lat = 40, roll = 0 ) ), lonaxis = dict( showgrid = True, gridcolor = 'rgb(102, 102, 102)', gridwidth = 0.5 ), lataxis = dict( showgrid = True, gridcolor = 'rgb(102, 102, 102)', gridwidth = 0.5 ) ) ) ) figure.show()
_____no_output_____
Apache-2.0
notebooks/Orbit Computation/Orbit Computation.ipynb
open-space-collective/open-space-toolk
3D plot, in **Earth Inertial** frame:
theta = np.linspace(0, 2 * np.pi, 30) phi = np.linspace(0, np.pi, 30) theta_grid, phi_grid = np.meshgrid(theta, phi) r = float(Earth.equatorial_radius.in_meters()) x = r * np.cos(theta_grid) * np.sin(phi_grid) y = r * np.sin(theta_grid) * np.sin(phi_grid) z = r * np.cos(phi_grid) earth = go.Surface( x=x, y=y, z=z, colorscale='Viridis', showscale=False ) trace = go.Scatter3d( x=orbit_df['$x_{x}^{ECI}$'], y=orbit_df['$x_{y}^{ECI}$'], z=orbit_df['$x_{z}^{ECI}$'], mode='lines', marker=dict( size=0, color=orbit_df['$x_{z}^{ECI}$'], colorscale='Viridis', showscale=False ), line=dict( color=orbit_df['$x_{z}^{ECI}$'], width=1 ) ) figure = go.Figure( data = [earth, trace], layout = go.Layout( title = None, width = 800, height = 1000, showlegend = False, scene = go.layout.Scene( xaxis = dict( gridcolor = 'rgb(255, 255, 255)', zerolinecolor = 'rgb(255, 255, 255)', showbackground = True, backgroundcolor = 'rgb(230, 230,230)' ), yaxis = dict( gridcolor = 'rgb(255, 255, 255)', zerolinecolor = 'rgb(255, 255, 255)', showbackground = True, backgroundcolor = 'rgb(230, 230,230)' ), zaxis = dict( gridcolor = 'rgb(255, 255, 255)', zerolinecolor = 'rgb(255, 255, 255)', showbackground = True, backgroundcolor = 'rgb(230, 230,230)' ), camera = dict( up = dict( x = 0, y = 0, z = 1 ), eye = dict( x = -1.7428, y = 1.0707, z = 0.7100, ) ), aspectratio = dict(x = 1, y = 1, z = 1), aspectmode = 'manual' ) ) ) figure.show()
_____no_output_____
Apache-2.0
notebooks/Orbit Computation/Orbit Computation.ipynb
open-space-collective/open-space-toolk
Prepared by Maksim Dimitrijev (QLatvia). This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\Y}{ \mymatrix{rr}{0 & -i \\ i & 0} } $ $ \newcommand{\S}{ \mymatrix{rr}{1 & 0 \\ 0 & i} } $ $ \newcommand{\T}{ \mymatrix{rr}{1 & 0 \\ 0 & e^{i \frac{\pi}{4}}} } $ $ \newcommand{\Sdg}{ \mymatrix{rr}{1 & 0 \\ 0 & -i} } $ $ \newcommand{\Tdg}{ \mymatrix{rr}{1 & 0 \\ 0 & e^{-i \frac{\pi}{4}}} } $$ \newcommand{\qgate}[1]{ \mathop{\textit{#1} } }$ Global and local phase A generic qubit can be in state $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$, where $\alpha, \beta \in \mathbb{C}$ and $\mathopen|\alpha\mathclose|^2 + \mathopen|\beta\mathclose|^2 = 1$. Both amplitudes are complex values, and each complex number has a real and an imaginary part. Therefore, we need 4 real numbers to describe the state of a qubit. Can we find a more concise way to represent the state of a qubit? An understanding of complex numbers and of global and local phase helps us improve the situation. Another representation of a state At the moment we use 4 numbers to represent the state of a qubit. We can reduce this to three numbers, since $\mathopen|\alpha\mathclose|^2 + \mathopen|\beta\mathclose|^2 = 1$. What will the result look like? Since the two amplitudes are complex numbers that can be written in polar form, the community has adopted the following representation of the state of a qubit with three angles:$$\ket{\psi} = e^{i\delta} ( \cos{\frac{\theta}{2}} \ket{0} + e^{i\phi} \sin{\frac{\theta}{2}} \ket{1} ),$$where $0 \leq \theta \leq \pi$ and $0 \leq \phi, \delta < 2\pi$.
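To make the step from four real numbers to three angles explicit, here is a short derivation consistent with the formulas above (the intermediate symbols $r_0, r_1$ are introduced only for this derivation). Write each amplitude in polar form, $\alpha = r_0 e^{i\delta}$ and $\beta = r_1 e^{i(\delta + \phi)}$ with $r_0, r_1 \geq 0$:

$$\ket{\psi} = r_0 e^{i\delta} \ket{0} + r_1 e^{i(\delta + \phi)} \ket{1} = e^{i\delta} \left( r_0 \ket{0} + e^{i\phi} r_1 \ket{1} \right), \qquad r_0^2 + r_1^2 = 1.$$

Since $r_0^2 + r_1^2 = 1$ and both numbers are non-negative, we can write $r_0 = \cos{\frac{\theta}{2}}$ and $r_1 = \sin{\frac{\theta}{2}}$ for a unique $0 \leq \theta \leq \pi$, which is exactly the three-angle form above.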
Global phase Suppose that we have a qubit and its state is either $ \ket{\psi} $ or $ e^{i\delta} \ket{\psi} $, where $0 \leq \delta < 2\pi$. Is there any sequence of one-qubit gates after which we can measure different results in the two cases? All one-qubit gates are $ 2 \times 2 $ matrices, and applying a sequence of them is represented by a single matrix: $ A_n \cdot \cdots \cdot A_2 \cdot A_1 = A $. By linearity, if $ A \ket{\psi} = \ket{u} $, then $ A e^{i\delta} \ket{\psi} = e^{i\delta} \ket{u} $. Thus, after measurement, the probabilities of observing state $ \ket{0} $ and state $ \ket{1} $ are the same in both cases, so we cannot distinguish them. Even though the states $ \ket{0} $ and $ e^{i\delta} \ket{0} $ are different mathematically, they are considered the same from the physical point of view. The complex number $ e^{i\delta} $ in front of $ e^{i\delta} \ket{\psi} $ is called the _global phase_. Therefore, in quantum mechanics two vectors that differ only by a global phase factor are considered equivalent, and so we have $ \ket{\psi} \equiv e^{i\delta} \ket{\psi} $ for any quantum state $\ket{\psi}$ and any $0 \leq \delta < 2\pi$. Now we can describe the state of a qubit with two numbers, the angles $\theta$ and $\phi$:$$\ket{\psi} = \cos{\frac{\theta}{2}} \ket{0} + e^{i\phi} \sin{\frac{\theta}{2}} \ket{1},$$where $0 \leq \theta \leq \pi$ and $0 \leq \phi < 2\pi$. Local phase In this last form of the state representation, the amplitude of $\ket{1}$ carries a multiplier $e^{i\phi}$. What if we had a similar multiplier for the amplitude of state $\ket{0}$ as well? Then we would have the following state: $$e^{i\gamma} \cos{\frac{\theta}{2}} \ket{0} + e^{i\phi} \sin{\frac{\theta}{2}} \ket{1} = e^{i\gamma}(\cos{\frac{\theta}{2}} \ket{0} + e^{i(\phi-\gamma)} \sin{\frac{\theta}{2}} \ket{1}) \equiv \cos{\frac{\theta}{2}} \ket{0} + e^{i(\phi-\gamma)} \sin{\frac{\theta}{2}} \ket{1}.$$ Therefore, there is no need for a second such multiplier, and attaching the single multiplier to the state $\ket{1}$ is a convention. One more useful fact is that $\mathopen|e^{i\phi}\mathclose| = 1$ for any $\phi$, so this multiplier does not affect the probabilities of observing state $\ket{0}$ or $\ket{1}$ after the measurement; only the parameter $\theta$ influences those probabilities. Although $e^{i\phi}$ does not influence the measurement outcome, it does influence the overall computation, as you could notice in the previous notebook, where we changed the phase of the state $\ket{1}$. The number $e^{i\phi}$ encodes an additional relation between the states $\ket{0}$ and $\ket{1}$, like one more dimension added to the computation, and is called the local phase. As you have noticed, the local phase is indeed important for the computation. Task 1 Find the probabilities of observing states $\ket{0}$ and $\ket{1}$ for the qubits in the following states: $\cos{\frac{\pi}{3}} \ket{0} + e^{i\pi} \sin{\frac{\pi}{3}} \ket{1}$ $\cos{\frac{\pi}{4}} \ket{0} + e^{i\frac{\pi}{4}} \sin{\frac{\pi}{4}} \ket{1}$ $\cos{\frac{\pi}{6}} \ket{0} + i \sin{\frac{\pi}{6}} \ket{1}$ click for our solution Task 2 Implement a function that calculates the probabilities of observing states $\ket{0}$ and $\ket{1}$ given the angles of a quantum state. Assuming that the quantum state is of the form $\cos{\frac{\theta}{2}} \ket{0} + e^{i\phi} \sin{\frac{\theta}{2}} \ket{1}$, $\theta$ and $\phi$ should be provided to the function as parameters. Check the quantum states given in Task 1 and compare the results.
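For Task 2, the probabilities depend only on $\theta$: $P(0) = \cos^2{\frac{\theta}{2}}$ and $P(1) = \sin^2{\frac{\theta}{2}}$. A minimal NumPy sketch illustrating this (an illustration only, not the notebook's official solution; the function name is ours) also checks numerically that a global phase changes nothing:

```python
import numpy as np

def observation_probabilities(theta, phi):
    """Return (P(0), P(1)) for the state cos(theta/2)|0> + e^(i*phi) sin(theta/2)|1>."""
    amplitude_zero = np.cos(theta / 2)
    amplitude_one = np.exp(1j * phi) * np.sin(theta / 2)
    return abs(amplitude_zero) ** 2, abs(amplitude_one) ** 2

# The local phase phi has no effect on the probabilities:
print(observation_probabilities(np.pi / 2, 0.0))        # (0.5, 0.5)
print(observation_probabilities(np.pi / 2, np.pi / 4))  # (0.5, 0.5)

# Neither does a global phase e^(i*delta); take the first state of Task 1 as an example:
state = np.array([np.cos(np.pi / 3), np.exp(1j * np.pi) * np.sin(np.pi / 3)])
delta = 0.7
print(np.abs(state) ** 2)                       # probabilities of the state
print(np.abs(np.exp(1j * delta) * state) ** 2)  # identical probabilities
```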
# # your solution is here #
_____no_output_____
Apache-2.0
silver/C05_Global_And_Local_Phase.ipynb
VGGatGitHub/QWorld-silver
Germany: LK Kitzingen (Bayern)
* Homepage of project: https://oscovida.github.io
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Kitzingen.ipynb)
import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview(country="Germany", subregion="LK Kitzingen"); # load the data cases, deaths, region_label = germany_get_region(landkreis="LK Kitzingen") # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 500 rows pd.set_option("max_rows", 500) # display the table table
_____no_output_____
CC-BY-4.0
ipynb/Germany-Bayern-LK-Kitzingen.ipynb
RobertRosca/oscovida.github.io
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Kitzingen.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance from code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook

Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Open source and scientific computing community for the data tools
- Github for hosting the repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))

--------------------
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Germany-Bayern-LK-Kitzingen.ipynb
RobertRosca/oscovida.github.io
test_str="xxup copy of course appear of course or half - dressed on his gf ill cross ! i was unemployed vans - land through the smallest shoes leave" for x in test_str.split(): print (f'{x}')
xxup copy of course appear of course or half - dressed on his gf ill cross ! i was unemployed vans - land through the smallest shoes leave
MIT
untoken.ipynb
gdoteof/DiscordChatExporter
xxup is a token which means the next word should be uppercased.
pieces = test_str.split()
for idx, val in enumerate(pieces):
    print(f'{idx}:{val}')

def un_xxup(text):
    # Use the argument instead of the global test_str, and avoid shadowing the builtin `str`.
    pieces = text.split()
    result = []
    capitalize_next = False
    for word in pieces:
        if word == 'xxup':
            # Drop the token and remember to capitalize the following word
            # (the original deleted items while iterating, which skips elements).
            capitalize_next = True
            continue
        result.append(word.capitalize() if capitalize_next else word)
        capitalize_next = False
    return " ".join(result)

un_xxup(test_str)
_____no_output_____
MIT
untoken.ipynb
gdoteof/DiscordChatExporter
TD1: Timeseries analysis using autoregressive methods and general Box-Jenkins methodsSome useful translations, just in case:- **a timeseries**: une série temporelle (always plural in English)- **a trend**: une tendance- **a lag**: un retard, un décalage dans le temps- **stationary**: stationnaireSome interesting content to dive deeper and/or go further about timeseries analysis, or that might help you during the TD:- [The engineering statistics handbook on timeseries analysis](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm)- [A Stanford class on autoregressive models seen as generative models](https://deepgenerativemodels.github.io/notes/) (and more on deep generative models)
!pip install statsmodels==0.12.1 !pip install sktime import requests import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels import sktime import scipy
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
1. Analysis For this exercise, we will use a timeseries representing the daily minimum temperature in Melbourne, Australia, between 1980 and 1990. This timeseries will be stored in a [Pandas DataFrame object](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html), a standard way to handle tabular data in Python. The analysis will follow the steps proposed by George Box and Gwilym Jenkins in 1970, called the [Box-Jenkins method](https://en.wikipedia.org/wiki/Box%E2%80%93Jenkins_method), which emphasizes the issues encountered when applying autoregressive methods.
# Read data from remote repository df = pd.read_csv("https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv", index_col=0) # Display the 5 first data points df.head()
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
1.1 Run-plots analysis "Run-plots" are the simplest representation of a timeseries, where the x-axis represents time and the y-axis represents the observed variable, here the temperature in Celsius degrees. **Question: Given the figures and the statistical test below, what hypothesis can you draw regarding the behaviour of this timeseries? Is it stationary? Does it display seasonality? A trend? Explain. You can create additional figures if you need.** The timeseries seems to be stationary: the test returns a p-value << 0.05, so the non-stationarity hypothesis is rejected (the series is probably stationary). Moreover, the series follows a sine-like seasonal pattern with a period of one year.
# Plot the full timeseries df.plot(figsize=(20, 4), title="Temperature in Melbourne - 1980 to 1990") # Plot the first year of data df.iloc[:365].plot(figsize=(20, 4), title="Temperature in Melbourne - one year")
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
The Augmented Dickey-Fuller test is a statistical test used to check the stationarity of a timeseries. It is implemented in the `adfuller()` function in `statsmodels`.
from statsmodels.tsa.stattools import adfuller adf, p, *other_stuff = adfuller(df) print(f"p-value (95% confidence interval): {p:g}, statistics: {adf:g}")
p-value (95% confidence interval): 0.000247083, statistics: -4.4448
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
1.2 Autocorrelation and partial autocorrelation Autocorrelation (and partial autocorrelation) are metrics that can be computed to evaluate **how dependent the variable is on its $n$ previous values**, which is called a **lag (of length n)**. **Question: Plot $X[t-1]$ versus $X[t]$, for all $t$. What can you conclude on the autocorrelation of the series with a lag of 1? You can also compute the Pearson correlation coefficient between $X[t-1]$ and $X[t]$.** *Some help:*
- You can create a new DataFrame with all values shifted by $n$ timesteps using the `.shift()` method of the DataFrame. See the Pandas documentation for more.
- You can plot some data versus some other data using the `plt.scatter(X, Y)` method of Matplotlib. This plots a dot on the figure for every couple `(x, y)` in `X` and `Y`. See the Matplotlib documentation for more.
- The Pearson coefficient can be computed using the DataFrame `.corr()` method. This method computes the correlation coefficient between all variables (*columns*) in the DataFrame. Try applying this method to a DataFrame where one column is $X[t]$ and another column is the shifted timeseries $X[t-1]$. Note that you can merge two DataFrames into one using the Pandas function `pd.concat(dataframes)`. See the Pandas documentation for more.
# Create a shifted version of the timeseries: df_shifted = df.shift(periods=1) # Plot df vs. df_shifted plt.figure(figsize=(5, 5)) plt.scatter(df, df_shifted) plt.xlabel("X[t]") plt.ylabel("X[t-1]") plt.show()
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
The scatter plot above is roughly linear, so $X[t]$ largely explains $X[t-1]$: the series is autocorrelated at lag 1. However, the cloud of points is quite thick in the middle of the plot, so the relationship, while clear, is noisy: the correlation is good but not perfect. **Pearson correlation coefficient** To compute this coefficient, we first need to check whether our variable follows a normal distribution. Let's plot the distribution, using the `.hist()` method of DataFrame objects: *(Optional)* Perform a normality test, using `scipy.stats.normaltest`.
# Plot of the distribution of the variable
# (in our case, the temperature histogram)
df.hist()
# The histogram looks roughly bell-shaped, centred around 12

from scipy import stats

# Normality test
k2, p = scipy.stats.normaltest(df)
print(f"p-value (95% confidence interval): {p[0]:g}, statistics: {k2[0]:g}")
# The p-value (about 0.00009) is far below 0.05, so the null hypothesis of normality is rejected:
# strictly speaking the distribution is not normal, although the histogram is roughly bell-shaped

# (Optional) Compute Pearson coefficients
# First, concatenate df and df_shifted in df_all, following axis 1
# (concatenate columns, not rows !)
df_all = pd.concat([df, df_shifted[1:]], axis=1)

# Rename the columns
df_all.columns = ["X[t]", "X[t-1]"]
df_all.head()

# Compute correlation and print it
df_all.corr()
# The correlation between X[t] and X[t-1] is about 0.77, much closer to 1 than to 0,
# which suggests a strong link between the two variables
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
The distribution is roughly bell-shaped, so the Pearson correlation coefficient is a meaningful summary (even though the normality test formally rejects strict normality). The correlation between $X[t-1]$ and $X[t]$ is 0.77, much closer to 1 than to 0, so the two variables are well correlated: $X[t-1]$ explains $X[t]$ reasonably well. --- We will now compute the autocorrelation function (ACF) and the partial autocorrelation function (PACF) of the timeseries. These functions compute the correlation (or partial correlation) between $X[t]$ and $X[t-n]$, for an interval of different lags $n$. For now, we have only evaluated the correlation for lag $n=1$. **Question: Plot the ACF and the PACF of the timeseries, with $n={1, \dots, 31}$ (one month lag) and $n={1, \dots, 730}$ (2 years lag). What is your hypothesis on the lag to use to create the model?** *Some help:*
- See the documentation of `statsmodels.graphics.tsaplots.plot_acf` to understand how to change the number of lags to plot.
- **Autocorrelation** is the result of the multiplication (or convolution) of all points of the signal with themselves, shifted in time by a lag of $n$. The **autocorrelation function** (ACF) is the function giving the autocorrelation for any lag $n$.
- **Partial autocorrelation** is similar to autocorrelation, but the correlation between two points of the signal is computed assuming that these two points are independent from all points between them in time. The **partial autocorrelation function** (PACF) is the function giving the partial autocorrelation for any lag $n$.
- Autocorrelation is helpful to check if a process is autoregressive. **Autoregressive processes are auto-correlated**.
- Partial autocorrelation is helpful to find the order of an autoregressive process, i.e. **how many past steps are needed to predict the future one**.

A minimal manual check of the definition is sketched just below, before we switch to the statsmodels plotting helpers.
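The lag-$n$ autocorrelation is simply the correlation between the series and a copy of itself shifted by $n$ steps. A minimal sketch (assuming the `df` DataFrame loaded above, with its `Temp` column; note that statsmodels uses a slightly different normalisation, so the numbers will not match `plot_acf` exactly):

```python
import numpy as np

def autocorr(series, lag):
    """Simple lag-n autocorrelation estimate: correlation of X[t] with X[t-lag]."""
    x = np.asarray(series, dtype=float)
    return np.corrcoef(x[lag:], x[:-lag])[0, 1]

for lag in (1, 7, 182, 365):
    print(f"lag {lag:>3}: {autocorr(df['Temp'], lag):.3f}")
```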
from statsmodels.graphics.tsaplots import plot_pacf, plot_acf
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
1.2.1 Autocorrelation
# Plot autocorrelation for lags between 1 and 730 days plot_acf(df.values.squeeze(), lags=730) plt.show() # Plot autocorrelation for lags between 1 and 31 days plot_acf(df.values.squeeze(), lags=31) plt.show()
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
1.2.2 Partial autocorrelation
# Plot partial autocorrelation for lags between 1 and 730 days plot_pacf(df.values.squeeze(), lags=730) plt.show() # Plot partial autocorrelation for lags between 1 and 31 days plot_pacf(df.values, lags=31) plt.show()
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
The correlation seems to be strongest for lags that are multiples of roughly 180 days (half a year), and much weaker for monthly or other lags. 2. Modeling 2.0 Modeling: AR from scratch (just as an example, nothing to do here) AR stands for AutoRegressive. Autoregressive models describe the value of any point in a timeseries given the values of its $p$ previous points, establishing a linear relationship between them such that:$$X_t = \alpha + \beta_1 X_{t-1} + \beta_2 X_{t-2} + ... + \beta_{p} X_{t-p} + \epsilon_t$$where $X$ is a timeseries, $p$ is the lag used in the AR model, also called the **order** of the model, and $\beta=\{\beta_1, \dots, \beta_p\}$ and $\alpha$ are the parameters we want to estimate. $\epsilon_t$ is a white noise random process that we will consider to be 0 for all time steps in our model.$X_t$ is therefore linearly dependent on its $p$ previous values $X_{t-1}, \dots, X_{t-p}$. We can learn $\beta_{[1, p]}$ and $\alpha$ using a linear regression defined by:$$[\alpha, \beta_{[1, p]}] = X \cdot X_{lags}^\intercal \cdot (X_{lags} \cdot X_{lags}^\intercal)^{-1}$$where $X$ is the whole timeseries for which a lag is available (timesteps after the first $p$ have $p$ past values; the first $p$ timesteps do not have past values), and $X_{lags}$ contains the values $X_{t-1}, \dots, X_{t-p}$ for all time steps with an available lag.
# We store all values of the series in a numpy array called series series = df["Temp"].values def auto_regression(series, order): n_points = len(series) # All lagged values will be stored in y_lag. # If order is 7, for each timestep we will store 7 values. X_lag = np.zeros((order, n_points-order)) # All current values will be stores in X. X = np.zeros((1, n_points-order)) for i in range(0, n_points-order-1): X_lag[:, i] = series[i:i+order] # get the lagged values X[:, i] = series[i+order+1] # get the current value # Add a constant term (c=1) to X_lag to compute alpha in the linear # regression X_lag = np.vstack((np.ones((1, n_points-order)), X_lag)) # Linear regression coef = np.dot(np.dot(X, X_lag.T), scipy.linalg.pinv(np.dot(X_lag, X_lag.T))) alpha = coef[:, 0] beta = coef[:, 1:] return alpha, beta alpha, beta = auto_regression(series, order=3)
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
Now that we have our coefficients learned, we can make predictions.
lag = beta.shape[1] Y_truth = [] # real timeseries Y_pred = [] # predictions for i in range(0, len(series)-lag-1): # apply the equation of AR using the coefficients at each time steps y = alpha + np.dot(beta, series[i:i+lag]) # y[t] = alpha + y[t-1]*beta1 + y[t-2]*beta2 + ... Y_pred.append(y) Y_truth.append(series[i+lag+1]) Y_pred = np.array(Y_pred).flatten() Y_truth = np.array(Y_truth).flatten() # Plot the results for one year plt.plot(series[lag+1:lag+366], label="True series") plt.plot(Y_pred[:365], label="Predicted values") plt.title(f'Lag of {lag}') plt.legend() plt.show()
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
And here are our coefficients:
coefs = np.c_[alpha, beta] plt.bar(np.arange(coefs.shape[1]), coefs.flatten()) labels = ['$\\alpha$'] for i in range(beta.shape[1]): labels.append(f"$\\beta_{i+1}$") plt.xticks(np.arange(coefs.shape[1]), labels) plt.show()
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
2.1 Modeling : ARIMA
from statsmodels.tsa.arima.model import ARIMA
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
ARIMA is an acronym that stands for AutoRegressive Integrated Moving Average, capturing the key aspects of the model:
- **AR** : *AutoRegressive* A model that uses the dependent relationship between an observation and some number of lagged observations. A pure AR model is such that: $$Y_t = \alpha + \beta_1 Y_{t-1} + \beta_2 Y_{t-2} + ... + \beta_{p} Y_{t-p} + \epsilon_t$$
- **I** : *Integrated* The use of differencing of raw observations in order to make the time series stationary.
- **MA** : *Moving Average* A model that uses the dependency between an observation and the residual errors from a moving average model applied to lagged observations. A pure moving average model is such that: $$Y_t = \alpha + \epsilon_t + \phi_1 \epsilon_{t-1} + \phi_2 \epsilon_{t-2} + ... + \phi_q \epsilon_{t-q}$$

Thus, finally, the equation for ARIMA becomes:$$Y_t = \alpha + \beta_1 Y_{t-1} + ... + \beta_p Y_{t-p} + \epsilon_t + \phi_1 \epsilon_{t-1} + ... + \phi_q \epsilon_{t-q}$$

Each of these components is specified in the model as a parameter:
- **p** : number of lag observations
- **d** : number of times that raw observations are differenced. It is the minimum number of differencing operations needed to make the series stationary. If the time series is already stationary, then d = 0.
- **q** : size of the moving average window

Now, we will fit an ARIMA forecast model to the daily minimum temperature data. The data contains a one-year seasonal component:
# seasonal difference differenced = df.diff(365) # trim off the first year of empty data differenced = differenced[365:] # Create an ARIMA model (check the statsmodels docs) (p,d,q) # d = 0 because the serie is stationary (see before) model = ARIMA(series, order=(3, 0, 15)) # fit model model_fit = model.fit() print(model_fit.summary()) # reviewing the residual errors # line plot residuals = pd.DataFrame(model_fit.resid) residuals.plot() plt.show() # density plot residuals.plot(kind='kde') plt.show() # summary stats print("Residuals stats:", residuals.describe())
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
To evaluate the ARIMA model, we will use walk-forward validation. First we split the data into a training and a testing set (initially, a year is a good test interval for this dataset, given its seasonal nature). A model is constructed on the historical data and predicts the next time step. The real observation for that time step is then added to the history, a new model is constructed, and the next time step is predicted. The forecasts are finally compared with the actual observations to give an error score (for example, RMSE: root mean square error).
from math import sqrt from sklearn.metrics import mean_squared_error # rolling forecast with ARIMA train, test = differenced.iloc[:-365], differenced.iloc[-365:] # walk-forward validation values = train.values history = [v for v in values] predictions = list() test_values = test.values for t in range(len(test_values)): # fit model model = ARIMA(history, order=(7,0,0)) model_fit = model.fit() # make prediction yhat = model_fit.forecast()[0] predictions.append(yhat) history.append(test_values[t]) # evaluate forecast rmse = sqrt(mean_squared_error(test_values, predictions)) print('RMSE : ', rmse) # plot forecasts against actual outcomes plt.plot(test) plt.plot(predictions, color='red') plt.show()
RMSE : 3.0962731960398195
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
We can also use the `predict()` function on the results object to make predictions. It accepts the indices of the time steps to predict as arguments. These indices are relative to the start of the training dataset.
forecast = model_fit.predict(start=len(train.values), end=len(differenced.values), typ='levels') plt.plot(test) plt.plot(forecast, color='red') plt.show()
_____no_output_____
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
Exercise: Mauna Loa CO2 concentration levels (1975 - 2021) Carbon dioxide (CO2) is a gas naturally present in our environment. However, its concentration is increasing every year, mainly because of human activities. It is one of the major causes of global warming, and it has been carefully measured since 1973 at the Mauna Loa observatory, in Hawaii. We will focus on the measurements performed between 1975 and 2021. The dataset is composed of monthly averaged values. Values are expressed in *ppm* (parts-per-million). **Question: Applying the method described above, model the behaviour of this timeseries.** **Question: Using your model, make predictions from 2001 to 2021, and evaluate the performance of your model. Make some projections about the evolution of the concentration after 2021.** **Do not forget to explain your hypotheses, choices and results.** *Some help* (a modeling sketch following these hints is given after the analysis code below):
- Be careful! This timeseries is more difficult to model (do not forget the stationarity property...)
- If a timeseries is not stationary, one can **difference** its values over time to create a stationary approximation of the timeseries (like ARIMA does). You can also **remove the linear trend** from the data. Differencing (for an order-1 differencing) means transforming $X[t]$ into $X[t] - X[t-1]$.
- Maybe a seasonal model (SARIMA, ...) could be interesting?
- You can make projections by using the model as a **generative model**: using the predicted value $X[t]$, you can predict $X[t+1]$, then predict $X[t+2]$ using $X[t+1]$, and so on, using only the predictions of your model. For instance, with a dataset stopping in December 2021, you can predict January 2022 using December 2021, which you know from the dataset. Then, you can predict February 2022 from January 2022, March 2022 from February 2022...

*Reference:* K.W. Thoning, A.M. Crotwell, and J.W. Mund (2021), Atmospheric Carbon Dioxide Dry Air Mole Fractions from continuous measurements at Mauna Loa, Hawaii, Barrow, Alaska, American Samoa and South Pole, 1973-2020, Version 2021-08-09, National Oceanic and Atmospheric Administration (NOAA), Global Monitoring Laboratory (GML), Boulder, Colorado, USA
ts = pd.read_csv("https://gml.noaa.gov/aftp/data/trace_gases/co2/in-situ/surface/mlo/co2_mlo_surface-insitu_1_ccgg_MonthlyData.txt", header=150, sep=" ")
ts = ts[ts["year"] > 1975]
time_index = pd.DatetimeIndex(pd.to_datetime(ts[["year", "month", "day"]]))
ts = ts.set_index(time_index)
ts = pd.Series(ts["value"])

ts.plot(figsize=(10, 5))
plt.xlabel("Time")
plt.ylabel("CO2 (ppm)")
plt.show()

# Plot the first 365 values (note: the data is monthly, so this spans far more than one year)
ts.iloc[:365].plot(figsize=(20, 4), title="CO2 (ppm) in one year")

from statsmodels.tsa.stattools import adfuller

# The values increase over the years, so the series is probably not stationary
adf, p, *other_stuff = adfuller(ts)
print(f"p-value (95% confidence interval): {p:g}, statistics: {adf:g}")
# p-value = 1 >> 0.05, so we cannot reject the null hypothesis of non-stationarity:
# the series is indeed not stationary

# Create a shifted version of the timeseries:
ts_shifted = ts.shift(periods=1)

plt.figure(figsize=(5, 5))
plt.scatter(ts, ts_shifted)
plt.xlabel("X[t]")
plt.ylabel("X[t-1]")
plt.show()
# X[t] explains X[t-1] almost perfectly: very strong lag-1 autocorrelation

ts.hist()

from scipy import stats

# The distribution does not look normal
k2, p = scipy.stats.normaltest(ts)
print(f"p-value (95% confidence interval): {p:g}, statistics: {k2:g}")
# p-value is 5e-36 << 0.05, so the null hypothesis of normality is rejected:
# the distribution is indeed not normal
p-value (95% confidence interval): 5.4641e-36, statistics: 162.39
CC0-1.0
TD1_Timeseries_Analysis.ipynb
RomainBnfn/Timeseries-Sequence-Processing-2021
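Following the hints in the exercise above (differencing plus a seasonal component), here is one possible modeling sketch using statsmodels' `SARIMAX`. The orders `(1, 1, 1)(1, 1, 1, 12)` and the 20-year hold-out are illustrative assumptions, not tuned choices; the plot is only meant to show the forecast continuing the seasonal upward trend beyond the last observation.

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX
import numpy as np
import matplotlib.pyplot as plt

values = ts.values.astype(float)

# Hold out the last 20 years (240 monthly points) for evaluation, as asked in the exercise
split = len(values) - 240
train, test = values[:split], values[split:]

# Illustrative orders: d=1 differences away the upward trend,
# s=12 captures the yearly seasonal cycle of the monthly data
model = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
results = model.fit(disp=False)

# Forecast the held-out 20 years plus 10 extra years of pure projection
forecast = results.forecast(steps=len(test) + 120)

plt.figure(figsize=(10, 5))
plt.plot(np.arange(len(values)), values, label="observed")
plt.plot(np.arange(split, split + len(forecast)), forecast, color="red",
         label="SARIMA forecast / projection")
plt.xlabel("Months since the start of the series")
plt.ylabel("CO2 (ppm)")
plt.legend()
plt.show()
```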
🍔 K Means Clustering Step By Step Installation
# remove `!` if running the line in a terminal !pip install -U RelevanceAI[notebook]==2.0.0
_____no_output_____
Apache-2.0
guides/kmeans_clustering_step_by_step_guide.ipynb
RelevanceAI/RelevanceAI
Setup First, you need to set up a client object to interact with RelevanceAI.
from relevanceai import Client client = Client()
_____no_output_____
Apache-2.0
guides/kmeans_clustering_step_by_step_guide.ipynb
RelevanceAI/RelevanceAI
Data You will need to have a dataset under your Relevance AI account. You can either use our e-commerce dataset as shown below or follow the tutorial on how to create your own dataset. Our e-commerce dataset includes fields such as `product_title`, as well as the vectorized version of that field, `product_title_clip_vector_`. Loading these documents can be done via: Load the data
from relevanceai.utils.datasets import get_ecommerce_dataset_encoded documents = get_ecommerce_dataset_encoded() {k: v for k, v in documents[0].items() if "_vector_" not in k}
_____no_output_____
Apache-2.0
guides/kmeans_clustering_step_by_step_guide.ipynb
RelevanceAI/RelevanceAI
Upload the data to Relevance AI Run the following cell to upload these documents into your personal Relevance AI account under the name `quickstart_kmeans_clustering`
ds = client.Dataset("quickstart_kmeans_clustering") ds.insert_documents(documents)
_____no_output_____
Apache-2.0
guides/kmeans_clustering_step_by_step_guide.ipynb
RelevanceAI/RelevanceAI
Check the data
ds.health() ds.schema
_____no_output_____
Apache-2.0
guides/kmeans_clustering_step_by_step_guide.ipynb
RelevanceAI/RelevanceAI
Clustering We apply the KMeans clustering algorithm to the vector field `product_title_clip_vector_`
from sklearn.cluster import KMeans VECTOR_FIELD = "product_title_clip_vector_" KMEAN_NUMBER_OF_CLUSTERS = 5 ALIAS = "kmeans_" + str(KMEAN_NUMBER_OF_CLUSTERS) model = KMeans(n_clusters=KMEAN_NUMBER_OF_CLUSTERS) clusterer = client.ClusterOps(alias=ALIAS, model=model) clusterer.run( dataset_id="quickstart_kmeans_clustering", vector_fields=["product_title_clip_vector_"], ) # List closest to center of the cluster clusterer.list_closest( dataset_id="quickstart_kmeans_clustering", vector_field="product_title_clip_vector_" ) # List furthest from the center of the cluster clusterer.list_furthest( dataset_id="quickstart_kmeans_clustering", vector_field="product_title_clip_vector_" )
_____no_output_____
Apache-2.0
guides/kmeans_clustering_step_by_step_guide.ipynb
RelevanceAI/RelevanceAI
We download a small sample and show the clustering results using our `show_json` helper.
from relevanceai import show_json sample_documents = ds.sample(n=5) samples = [ { "product_title": d["product_title"], "cluster": d["_cluster_"][VECTOR_FIELD][ALIAS], } for d in sample_documents ] show_json(samples, text_fields=["product_title", "cluster"])
_____no_output_____
Apache-2.0
guides/kmeans_clustering_step_by_step_guide.ipynb
RelevanceAI/RelevanceAI
Image Manipulation with skimage This example builds a simple UI for performing basic image manipulation with [scikit-image](http://scikit-image.org/).
# Stdlib imports from io import BytesIO # Third-party libraries from IPython.display import Image from ipywidgets import interact, interactive, fixed import matplotlib as mpl from skimage import data, filters, io, img_as_float import numpy as np
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
Let's load an image from scikit-image's collection, stored in the `data` module. These come back as regular numpy arrays:
i = img_as_float(data.coffee()) i.shape
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
Let's make a little utility function for displaying Numpy arrays with the IPython display protocol:
def arr2img(arr): """Display a 2- or 3-d numpy array as an image.""" if arr.ndim == 2: format, cmap = 'png', mpl.cm.gray elif arr.ndim == 3: format, cmap = 'jpg', None else: raise ValueError("Only 2- or 3-d arrays can be displayed as images.") # Don't let matplotlib autoscale the color range so we can control overall luminosity vmax = 255 if arr.dtype == 'uint8' else 1.0 with BytesIO() as buffer: mpl.image.imsave(buffer, arr, format=format, cmap=cmap, vmin=0, vmax=vmax) out = buffer.getvalue() return Image(out) arr2img(i)
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
Now, let's create a simple "image editor" function, that allows us to blur the image or change its color balance:
def edit_image(image, sigma=0.1, R=1.0, G=1.0, B=1.0): new_image = filters.gaussian(image, sigma=sigma, multichannel=True) new_image[:,:,0] = R*new_image[:,:,0] new_image[:,:,1] = G*new_image[:,:,1] new_image[:,:,2] = B*new_image[:,:,2] return arr2img(new_image)
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
We can call this function manually and get a new image. For example, let's do a little blurring and remove all the red from the image:
edit_image(i, sigma=5, R=0.1)
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
But it's a lot easier to explore what this function does by controlling each parameter interactively and getting immediate visual feedback. IPython's `ipywidgets` package lets us do that with a minimal amount of code:
lims = (0.0,1.0,0.01) interact(edit_image, image=fixed(i), sigma=(0.0,10.0,0.1), R=lims, G=lims, B=lims);
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
Browsing the scikit-image gallery, and editing grayscale and jpg imagesThe coffee cup isn't the only image that ships with scikit-image, the `data` module has others. Let's make a quick interactive explorer for this:
def choose_img(name): # Let's store the result in the global `img` that we can then use in our image editor below global img img = getattr(data, name)() return arr2img(img) # Skip 'load' and 'lena', two functions that don't actually return images interact(choose_img, name=sorted(set(data.__all__)-{'lena', 'load'}));
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
And now, let's update our editor to cope correctly with grayscale and color images, since some images in the scikit-image collection are grayscale. For these, we ignore the red (R) and blue (B) channels, and treat 'G' as 'Grayscale':
lims = (0.0, 1.0, 0.01) def edit_image(image, sigma, R, G, B): new_image = filters.gaussian(image, sigma=sigma, multichannel=True) if new_image.ndim == 3: new_image[:,:,0] = R*new_image[:,:,0] new_image[:,:,1] = G*new_image[:,:,1] new_image[:,:,2] = B*new_image[:,:,2] else: new_image = G*new_image return arr2img(new_image) interact(edit_image, image=fixed(img), sigma=(0.0, 10.0, 0.1), R=lims, G=lims, B=lims);
_____no_output_____
BSD-3-Clause
docs/source/examples/Image Processing.ipynb
akhand1111/ipywidgets
Compare phase estimation methods on hippocampal theta oscillations
import numpy as np import scipy as sp %matplotlib notebook %config InlineBackend.figure_format = 'retina' %matplotlib inline import matplotlib.pyplot as plt
_____no_output_____
MIT
misc/.ipynb_checkpoints/Theta phase estimate-checkpoint.ipynb
srcole/qwm
Load data
x = np.load('/gh/data2/hc3/npy/gor01/2006-6-7_16-40-19/10/lfp_0.npy') x = x[10000:30000] Fs = 1252
_____no_output_____
MIT
misc/.ipynb_checkpoints/Theta phase estimate-checkpoint.ipynb
srcole/qwm
Preprocess data
cflow = 50 Ntapslow = 501 cfhigh = 1 Ntapshigh = 1001 from misshapen import nonshape x = nonshape.highpass_default(x, Fs, cfhigh, Ntapshigh) x = nonshape.lowpass_default(x, Fs, cflow, Ntapslow)
_____no_output_____
MIT
misc/.ipynb_checkpoints/Theta phase estimate-checkpoint.ipynb
srcole/qwm
Compute phase time series Bandpass + hilbert
f_range = (4,10) x_filt, _ = nonshape.bandpass_default(x, f_range, Fs, rmv_edge=False) pha_h = np.angle(sp.signal.hilbert(x_filt))
_____no_output_____
MIT
misc/.ipynb_checkpoints/Theta phase estimate-checkpoint.ipynb
srcole/qwm
Peak and trough interpolation
Ps, Ts = nonshape.findpt(x, f_range, Fs) pha_pt = nonshape.wfpha(x, Ps, Ts)
_____no_output_____
MIT
misc/.ipynb_checkpoints/Theta phase estimate-checkpoint.ipynb
srcole/qwm
Compare phase time series
t = np.arange(0,len(x)/Fs, 1/Fs) tlims = [1,3] samps = np.logical_and(t>=tlims[0],t<tlims[1]) plt.figure(figsize=(16,6)) plt.subplot(2,1,1) plt.plot(t[samps],x[samps],'k') plt.xlim(tlims) plt.ylabel('Voltage (uV)',size=20) plt.subplot(2,1,2) plt.plot(t[samps],pha_h[samps],'r',label='Hilbert') plt.plot(t[samps],pha_pt[samps],'b',label='waveform') plt.xlim(tlims) plt.xlabel('Time (s)',size=20) plt.ylabel('Phase (rad)',size=20) plt.legend(loc='best',title='Phase estimate method')
_____no_output_____
MIT
misc/.ipynb_checkpoints/Theta phase estimate-checkpoint.ipynb
srcole/qwm
Downloading data dynamically
# Required libraries import os import tarfile import urllib # DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/" DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/" # HOUSING_PATH = os.path.join('datasets', 'housing') HOUSING_PATH = '../Datasets' HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz" print(HOUSING_URL) print('https://raw.githubusercontent.com/ageron/handson-ml/master/datasets/housing/housing.tgz') def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH): os.makedirs(housing_path, exist_ok=True) tgz_path = os.path.join(housing_path, 'housing.tgz') urllib.request.urlretrieve(housing_url, tgz_path) # print('1') housing_tgz = tarfile.open(tgz_path) # print('2') housing_tgz.extractall(path=housing_path) # print('3') housing_tgz.close() print("*File downloaded and extracted successfully*") fetch_housing_data()
1 2 3 *File downloaded and extracted successfully*
MIT
Projects/Housing/Code/Housing data download.ipynb
drnesr/Machine_Learning
Load dataset: MNIST and CIFAR10
mnist = fetch_openml("mnist_784") mnist.data.shape print('Data: {}, target: {}'.format(mnist.data.shape, mnist.target.shape)) X_train, X_test, y_train, y_test = train_test_split( mnist.data, mnist.target, test_size=1/7, random_state=0, ) X_train = X_train.values.reshape((len(X_train), 784)) X_test = X_test.values.reshape((len(X_test), 784)) #Limit the size of the dataset X_train = X_train[:1000] y_train = y_train[:1000] X_test = X_test[:500] y_test = y_test[:500] print('X_train:', X_train.shape, X_train.dtype) print('y_train:', y_train.shape, y_train.dtype) print('X_test:', X_test.shape) print('y_test:', y_test.shape) # X_train %matplotlib inline def load_CIFAR_batch(filename): with open(filename, 'rb') as f: datadict = pickle.load(f,encoding='latin1') X = datadict['data'] Y = datadict['labels'] X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("float") Y = np.array(Y) return X, Y def load_CIFAR10(): xs = [] ys = [] for b in range(1,6): f = os.path.join('datasets', 'cifar-10-batches-py', 'data_batch_%d' % (b, )) X, Y = load_CIFAR_batch(f) xs.append(X) ys.append(Y) Xtr = np.concatenate(xs) Ytr = np.concatenate(ys) del X, Y Xte, Yte = load_CIFAR_batch(os.path.join('datasets', 'cifar-10-batches-py', 'test_batch')) return Xtr, Ytr, Xte, Yte X_train, y_train, X_test, y_test = load_CIFAR10() classes = ['plane', 'car', 'bird', 'cat', 'dear', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) num_each_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, num_each_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + (y + 1) plt.subplot(num_each_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() X_train.shape X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) # Divide the sub-data set y_train = y_train[:1000] y_test = y_test[:1000] X_train = X_train[:1000] X_test = X_test[:1000] X_train.shape
_____no_output_____
MIT
PSForest_example.ipynb
nishiwen1214/PSForest
Using the PSForest
start =time.clock() before_mem = memory_profiler.memory_usage() # Create PSForest model ps_forest = PSForest( estimators_config={ 'mgs': [{ 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'max_features': 1, 'min_samples_split': 10, 'n_jobs': -1, } }, { 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'max_features': 1, 'min_samples_split': 10, 'n_jobs': -1, } },{ 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 'sqrt', 'n_jobs': -1, } },{ 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 'sqrt', 'n_jobs': -1, } }], 'cascade': [{ 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 1, 'oob_score':True, 'n_jobs': -1, } }, { 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 'sqrt', 'oob_score':True, 'n_jobs': -1, } }, { 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 1, 'oob_score':True, 'n_jobs': -1, } }, { 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 'sqrt', 'oob_score':True, 'n_jobs': -1, } },{ 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 1, 'oob_score':True, 'n_jobs': -1, } }, { 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 'sqrt', 'oob_score':True, 'n_jobs': -1, } }, { 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 1, 'oob_score':True, 'n_jobs': -1, } }, { 'estimator_class': RandomForestClassifier, 'estimator_params': { 'n_estimators': 500, 'min_samples_split': 10, 'max_features': 'sqrt', 'oob_score':True, 'n_jobs': -1, } }] }, stride_ratios=[1/256,1/128,1/64,1/32,1/16,1/8,1/4], ) # ps_forest.fit(X_train, y_train) # with Multi-Grained Pooling ps_forest.fit_c(X_train, y_train) # without Multi-Grained Pooling after_mem = memory_profiler.memory_usage() end = time.clock() print("Memory (Before): {}Mb".format(before_mem)) print("Memory (After): {}Mb".format(after_mem)) print("Memory consumption: {}Mb".format(after_mem[0] - before_mem[0])) # y_pred = ps_forest.predict(X_test) # with Multi-Grained Pooling y_pred = ps_forest.predict_c(X_test) # without Multi-Grained Pooling print('Prediction shape:', y_pred.shape) print( 'Accuracy:', accuracy_score(y_test, y_pred), 'F1 score:', f1_score(y_test, y_pred, average='weighted') ) print('Running time: %s Seconds'%(end-start)) # RandomForest rf = RandomForestClassifier() rf.fit(X_train, y_train) rf_y_pred = rf.predict(X_test) acc = accuracy_score(y_test, rf_y_pred) print('accuracy:', acc)
accuracy: 0.896
MIT
PSForest_example.ipynb
nishiwen1214/PSForest
Optional side-effect
val nameOpt = Option("Amir") def printName(name: String) = println(name) nameOpt.foreach(printName) val anotherOpt = None anotherOpt.foreach(printName)
_____no_output_____
MIT
scala/Optional side-effect.ipynb
ashishpatel26/learning
Agenda
- Why Logging
- How does Logging work for you?
- Optional Content

The Presentation
- The slides, support code and Jupyter notebook are on Github
- [https://github.com/stbaercom/europython2015_logging](https://github.com/stbaercom/europython2015_logging)

A Simple Program, Without any Logging
from datetime import datetime def my_division_p(dividend, divisor): try: print("Debug, Division : {}/{}".format(dividend,divisor)) result = dividend / divisor return result except (ZeroDivisionError, TypeError): print("Error, Division Failed") return None def division_task_handler_p(task): print("Handling division task,{} items".format(len(task))) result = [] for i, task in enumerate(task): print("Doing devision iteration {} on {:%Y}".format(i,datetime.now())) dividend, divisor = task result.append(my_division_p(dividend,divisor)) return result
_____no_output_____
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Let us Have a Look at the Output
task = [(3,4),(5,1.4),(2,0),(3,5),("10",1)] division_task_handler_p(task)
Handling division task,5 items Doing devision iteration 0 on 2015 Debug, Division : 3/4 Doing devision iteration 1 on 2015 Debug, Division : 5/1.4 Doing devision iteration 2 on 2015 Debug, Division : 2/0 Error, Division Failed Doing devision iteration 3 on 2015 Debug, Division : 3/5 Doing devision iteration 4 on 2015 Debug, Division : 10/1 Error, Division Failed
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
The Problems with ``print()``
- We don't have a way to select the types of messages we are interested in
- We have to add all information (timestamps, etc...) by ourselves
- All our messages will look slightly different
- We have only limited control over where our messages end up

What is Different with Logging?
- We have more structure, and easier parsing
- The logging module provides some extra information (Logger, Level, and Formatting)
- We get handling of exceptions essentially for free

Aspects of a Logging Message

Using the Logging Module for Comparison
import log1; logging = log1.get_clean_logging() logging.basicConfig(level=logging.DEBUG) log = logging.getLogger() def my_division(dividend, divisor): try: log.debug("Division : %s/%s", dividend, divisor) result = dividend / divisor return result except (ZeroDivisionError, TypeError): log.exception("Error, Division Failed") return None def division_task_handler(task): log.info("Handling division task,%s items",len(task)) result = [] for i, task in enumerate(task): log.info("Doing devision iteration %s",i) dividend, divisor = task result.append(my_division(dividend,divisor)) return result
_____no_output_____
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
The Call and the Log Messages
task = [(3,4),(2,0),(3,5),("10",1)] division_task_handler(task)
INFO:root:Handling division task,4 items INFO:root:Doing devision iteration 0 DEBUG:root:Division : 3/4 INFO:root:Doing devision iteration 1 DEBUG:root:Division : 2/0 ERROR:root:Error, Division Failed Traceback (most recent call last): File "<ipython-input-10-a904db1e3e23>", line 8, in my_division result = dividend / divisor ZeroDivisionError: division by zero INFO:root:Doing devision iteration 2 DEBUG:root:Division : 3/5 INFO:root:Doing devision iteration 3 DEBUG:root:Division : 10/1 ERROR:root:Error, Division Failed Traceback (most recent call last): File "<ipython-input-10-a904db1e3e23>", line 8, in my_division result = dividend / divisor TypeError: unsupported operand type(s) for /: 'str' and 'int'
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
How does the Logging Module represent these Aspects? Back to Code. How does Logging Work?
import log1;logging = log1.get_clean_logging() # this would be import logging outside this notebook logging.debug("Find me in the log") logging.info("I am hidden") logging.warn("I am here") logging.error("As am I") try: 1/0; except: logging.exception(" And I") logging.critical("Me, of course")
WARNING:root:I am here ERROR:root:As am I ERROR:root: And I Traceback (most recent call last): File "<ipython-input-12-75f8227eec02>", line 8, in <module> 1/0; ZeroDivisionError: division by zero CRITICAL:root:Me, of course
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
More Complex Logging Setup with ``basicConfig()``
import log1;logging = log1.get_clean_logging() datefmt = "%Y-%m-%d %H:%M:%S" msgfmt = "%(asctime)s,%(msecs)03d %(levelname)-10s %(name)-15s : %(message)s" logging.basicConfig(level=logging.DEBUG, format=msgfmt, datefmt=datefmt) logging.debug("Now I show up ") logging.info("Now this is %s logging!","good") logging.warn("I am here. %-4i + %-4i = %i",1,3,1+3) logging.error("As am I") try: 1/0; except: logging.exception(" And I")
2015-07-19 20:19:55,551 DEBUG root : Now I show up 2015-07-19 20:19:55,552 INFO root : Now this is good logging! 2015-07-19 20:19:55,552 WARNING root : I am here. 1 + 3 = 4 2015-07-19 20:19:55,552 ERROR root : As am I 2015-07-19 20:19:55,553 ERROR root : And I Traceback (most recent call last): File "<ipython-input-13-63765f2f7e9f>", line 12, in <module> 1/0; ZeroDivisionError: division by zero
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Some (personal) Remarks about ``basicConfig()``
- `basicConfig()` does save you some typing, but I would go for the 'normal' setup.
- Using `basicConfig()` is a matter of personal taste.
- The normal setup makes the structure clearer.
- Keep in mind that basicConfig() is meant to be called once...

Using the Standard Configuration
import log1, json, logging.config;logging = log1.get_clean_logging() datefmt = "%Y-%m-%d %H:%M:%S" msgfmt = "%(asctime)s,%(msecs)03d %(levelname)-6s %(name)-10s : %(message)s" log = logging.getLogger() log.setLevel(logging.DEBUG) lh = logging.StreamHandler() lf = logging.Formatter(fmt=msgfmt, datefmt=datefmt) lh.setFormatter(lf) log.addHandler(lh) log.info("Now this is %s logging!","good") log.debug("A slightly more complex message %s + %s = %s",1,2,1+2)
2015-07-19 20:19:55,571 INFO root : Now this is good logging! 2015-07-19 20:19:55,572 DEBUG root : A slightly more complex message 1 + 2 = 3
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Now, back to the Theory. What have we Built? How do we get from the Configuration to the Log Message?

Formatting : Attributes Available for the Logging Call

| Attribute | Description |
|---|---|
| args | Tuple of arguments passed to the logging call |
| asctime | Log record creation time, formatted |
| created | Log record creation time, seconds since the Epoch |
| exc_info | Exception information / stack trace, if any |
| filename | Filename portion of pathname for the logging module |
| funcName | Name of function containing the logging call |
| levelname | Name of Logging Level |
| levelno | Number of Logging Level |
| lineno | Line number in source code for the logging call |
| module | Module (name portion of filename) |
| message | Logged message |
| name | Name of the logger used to log the call |
| pathname | Pathname of source file |
| process | Process ID |
| processName | Process name |
| ... | ... |

Using ``dictConfig()``
import log1, json, logging.config;logging = log1.get_clean_logging() conf_dict = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'longformat': { 'format': "%(asctime)s,%(msecs)03d %(levelname)-10s %(name)-15s : %(message)s", 'datefmt': "%Y-%m-%d %H:%M:%S"}}, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'formatter': "longformat"}}, 'loggers':{ '': { 'level': 'DEBUG', 'handlers': ['console']}}} logging.config.dictConfig(conf_dict) log = logging.getLogger() log.info("Now this is %s logging!","good")
2015-07-19 20:19:55,602 INFO root : Now this is good logging!
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Adding a ``Filehandler`` to the Logger
import log1, json, logging.config;logging = log1.get_clean_logging() base_config = json.load(open("conf_dict.json")) base_config['handlers']['logfile'] = { 'class' : 'logging.FileHandler', 'mode' : 'w', 'filename' : 'logfile.txt', 'formatter': "longformat"} base_config['loggers']['']['handlers'].append('logfile') logging.config.dictConfig(base_config) log = logging.getLogger() log.info("Now this is %s logging!","good") !cat logfile.txt
2015-07-19 20:19:55,618 INFO root : Now this is good logging!
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Another look at the logging object tree

Set the Level on the ``FileHandler``
import log1, json, logging.config; logging = log1.get_clean_logging()

file_config = json.load(open("conf_dict_with_file.json"))
file_config['handlers']['logfile']['level'] = "WARN"

logging.config.dictConfig(file_config)
log = logging.getLogger()
log.info("Now this is %s logging!", "good")
log.warning("Now this is %s logging!", "worrisome")

!cat logfile.txt
2015-07-20 19:04:03,132 INFO root : Now this is good logging! 2015-07-20 19:04:03,133 WARNING root : Now this is worrisome logging!
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Adding Child Loggers under the Root
import log1, json, logging.config; logging = log1.get_clean_logging()

logging.config.dictConfig(json.load(open("conf_dict.json")))

log = logging.getLogger("")
child_A = logging.getLogger("A")
child_B = logging.getLogger("B")
child_B_A = logging.getLogger("B.A")

log.info("Now this is %s logging!", "good")
child_A.info("Now this is more logging!")
log.warning("Now this is %s logging!", "worrisome")
2015-07-19 20:19:55,865 INFO root : Now this is good logging! 2015-07-19 20:19:55,866 INFO A : Now this is more logging! 2015-07-19 20:19:55,867 WARNING root : Now this is worrisome logging!
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Looking at the tree of Logging Objects Best Practices for the Logging Tree- Use ``.getLogger(__name__)`` per module to define loggers under the root logger- Set propagate to True on each Logger- Attach Handlers and Filters as needed to control output from the Logging hierarchy Filter - Now that things are Getting Complicated- With more loggers and handlers in the tree of logging objects, things are getting complicated- We may not want every logger to send log records to every filter- The logging level gives us some control, there are limits- Filters are one solution to this problem- Filter can also **add** information to records, thus helping with structured logging Using Filters An Example for using Filter Objects
import log1, json, logging.config; logging = log1.get_clean_logging()

logging.config.dictConfig(json.load(open("conf_dict.json")))

def log_filter(rec):  # Callables work with 3.2 and later
    if 'please' in rec.msg.lower():
        return True
    return False

log = logging.getLogger("")
log.addFilter(log_filter)
child_A = logging.getLogger("A")

log.info("Just log me")
child_A.info("Just log me")
log.info("Hallo, Please log me")
2015-07-20 08:01:55,108 INFO A : Just log me 2015-07-20 08:01:55,108 INFO root : Hallo, Please log me
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
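A minimal sketch of the per-module pattern from the best-practice list above (the module and function names are made up for illustration):

```python
# somewhere in mypackage/database.py
import logging

log = logging.getLogger(__name__)   # e.g. "mypackage.database", a child of the root logger
log.propagate = True                # the default: records bubble up to the root's handlers

def connect(dsn):
    log.debug("connecting to %s", dsn)

# in the application entry point, configure handlers once, on the root logger:
# import json, logging.config
# logging.config.dictConfig(json.load(open("conf_dict.json")))
```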
The Way of a Logging Record

A second Example for Filters, in the LogHandler
import log1, json, logging.config; logging = log1.get_clean_logging()

datefmt = "%Y-%m-%d %H:%M:%S"
msgfmt = "%(asctime)s,%(msecs)03d %(levelname)-6s %(name)-10s : %(message)s"

log_reg = None

def handler_filter(rec):  # Callables work with 3.2 and later
    global log_reg
    if 'please' in rec.msg.lower():
        rec.msg = rec.msg + " (I am nice)"  # Changing the record
        rec.args = (rec.args[0].upper(), rec.args[1] + 10)
        rec.__dict__['custom_name'] = "Important context information"
        log_reg = rec
        return True
    return False

log = logging.getLogger()
lh = logging.StreamHandler()
lf = logging.Formatter(fmt=msgfmt, datefmt=datefmt)
lh.setFormatter(lf)
log.addHandler(lh)
lh.addFilter(handler_filter)

log.warn("I am a bold Logger", "good")
log.warn("Hi, I am %s. I am %i seconds old. Please log me", "Loggy", 1)
2015-07-19 20:19:55,905 WARNING root : Hi, I am LOGGY. I am 11 seconds old. Please log me (I am nice)
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Things you might want to know (if we still have some time)

A short look at our LogRecord
print(log_reg)
log_reg.__dict__
<LogRecord: root, 30, <ipython-input-20-d1d101ab918f>, 25, "Hi, I am %s. I am %i seconds old. Please log me (I am nice)">
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Logging Performance - Slow, but Fast Enough

| Scenario (10000 Calls, 3 Logs per call) | Runtime |
| --- | --- |
| Full Logging with buffered writes | 3.096s |
| Disable Caller information | 2.868s |
| Check Logging Lvl before Call, Logging disabled | 0.186s |
| Logging module level disabled | 0.181s |
| No Logging calls at all | 0.157s |

(A small sketch of the level-check pattern follows after the next example.)

Getting the current Logging Tree
import json, logging.config
config = json.load(open("conf_dict_with_file.json"))
logging.config.dictConfig(config)

import requests
import logging_tree
logging_tree.printout()
<--"" Level DEBUG Handler Stream <IPython.kernel.zmq.iostream.OutStream object at 0x105d043c8> Formatter fmt='%(asctime)s,%(msecs)03d %(levelname)-10s %(name)-15s : %(message)s' datefmt='%Y-%m-%d %H:%M:%S' Handler File '/Users/imhiro/AllFiles/0021_travel_events_conferences_workshops/2015-07-19_europython/github/logfile.txt' Formatter fmt='%(asctime)s,%(msecs)03d %(levelname)-10s %(name)-15s : %(message)s' datefmt='%Y-%m-%d %H:%M:%S' | o "IPKernelApp" | Level WARNING | Propagate OFF | Disabled | Handler Stream <_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'> | Formatter <IPython.config.application.LevelFormatter object at 0x104b362e8> | o<--[concurrent] | | | o<--"concurrent.futures" | Level NOTSET so inherits level DEBUG | Disabled | o<--"requests" | Level NOTSET so inherits level DEBUG | Handler <logging.NullHandler object at 0x106f75a20> | | | o<--[requests.packages] | | | o<--"requests.packages.urllib3" | Level NOTSET so inherits level DEBUG | Handler <logging.NullHandler object at 0x106f759b0> | | | o<--"requests.packages.urllib3.connectionpool" | | Level NOTSET so inherits level DEBUG | | | o<--"requests.packages.urllib3.poolmanager" | | Level NOTSET so inherits level DEBUG | | | o<--[requests.packages.urllib3.util] | | | o<--"requests.packages.urllib3.util.retry" | Level NOTSET so inherits level DEBUG | o<--"tornado" Level NOTSET so inherits level DEBUG Disabled | o<--"tornado.access" | Level NOTSET so inherits level DEBUG | Disabled | o<--"tornado.application" | Level NOTSET so inherits level DEBUG | Disabled | o<--"tornado.general" Level NOTSET so inherits level DEBUG Disabled
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
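The 'Check Logging Lvl before Call' row in the performance table above refers to guarding expensive message building; a small sketch of that pattern (the data and helper function are invented):

```python
import logging

log = logging.getLogger("perf_demo")

def expensive_summary(data):
    # stands in for something costly, e.g. walking a large data structure
    return ", ".join(sorted(data))

data = {"c", "a", "b"}

# only build the expensive message if a DEBUG record would actually be handled
if log.isEnabledFor(logging.DEBUG):
    log.debug("current state: %s", expensive_summary(data))
```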
Reconfiguration

- It is possible to change the logging configuration at runtime
- It is even part of the standard library
- Still, some caution is in order

Reloading the configuration _can_ disable the existing loggers
import log1, json, logging, logging.config; logging = log1.get_clean_logging()

# Load Config, define a child logger (could also be a module)
logging.config.dictConfig(json.load(open("conf_dict_with_file.json")))
child_log = logging.getLogger("somewhere")

# Reload Config
logging.config.dictConfig(json.load(open("conf_dict_with_file.json")))

# Our child logger was disabled by the reload, so this call produces no output
child_log.info("Now this is %s logging!", "good")
_____no_output_____
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Reloading can happen in place
import log1, json, logging, logging.config; logging = log1.get_clean_logging()

config = json.load(open("conf_dict_with_file.json"))

# Load Config, define a child logger (could also be a module)
logging.config.dictConfig(config)
child_log = logging.getLogger("somewhere")

config['disable_existing_loggers'] = False

# Reload Config
logging.config.dictConfig(config)

# Our child logger keeps working, because existing loggers were not disabled this time
child_log.info("Now this is %s logging!", "good")
2015-07-19 20:20:42,290 INFO somewhere : Now this is good logging!
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
Successful Logging to all of You
from presentation_helper import customize_settings
customize_settings()
_____no_output_____
MIT
europython_2015_logging_talk.ipynb
stbaercom/europython2015_logging
%%capture
!pip install arviz

import numpy as np
import pymc3 as pm
import theano.tensor as tt
import arviz as az
import matplotlib.pyplot as plt
import seaborn as sns
_____no_output_____
MIT
ThinkBayes_Chapter_9.ipynb
ricardoV94/ThinkBayesPymc3
9.1 Paintball
def StrafingSpeed(alpha, beta, x):
    theta = tt.arctan2(x - alpha, beta)
    speed = beta / tt.cos(theta)**2
    return speed

with pm.Model() as m_9_3:
    obs_beta = pm.Data('obs_beta', [10])

    alpha = pm.Uniform('alpha', lower=0, upper=31, observed=10)
    beta = pm.Uniform('beta', lower=1, upper=51, observed=obs_beta)
    location = pm.Uniform('location', lower=0, upper=31)

    speed = pm.Deterministic('speed', StrafingSpeed(alpha, beta, location))
    like = pm.Potential('like', -np.log(speed))  # Equivalent to 1/speed

traces_m_9_3 = []
for beta in (10, 20, 40):
    with m_9_3:
        pm.set_data({'obs_beta': [beta]})
        traces_m_9_3.append(pm.sample(5000, progressbar=False))

sns.kdeplot(traces_m_9_3[0]['location'], label='beta = 10', color='darkblue')
sns.kdeplot(traces_m_9_3[1]['location'], label='beta = 20')
sns.kdeplot(traces_m_9_3[2]['location'], label='beta = 40', color='lightblue')
plt.xlim([0, 30])
plt.xlabel('Distance')
plt.ylabel('Prob');
_____no_output_____
MIT
ThinkBayes_Chapter_9.ipynb
ricardoV94/ThinkBayesPymc3
9.5 Joint distributions

Results are very different from those of the book. The posterior is much narrower.
with pm.Model() as m_9_5:
    alpha = pm.Uniform('alpha', lower=0, upper=31)
    beta = pm.Uniform('beta', lower=1, upper=51)
    location = pm.Uniform('location', lower=0, upper=31, observed=[15, 16, 18, 21])

    speed = pm.Deterministic('speed', StrafingSpeed(alpha, beta, location))
    like = pm.Potential('like', -tt.log(speed))

    trace_m_9_5 = pm.sample(5000)

bins = np.linspace(0, 51, 50)
plt.hist(trace_m_9_5['alpha'], cumulative=True, bins=bins, density=True,
         histtype='step', lw=2, label='alpha')
plt.hist(trace_m_9_5['beta'], cumulative=True, bins=bins, density=True,
         histtype='step', lw=2, color='lightblue', label='beta')
plt.ylabel('Prob')
plt.xlabel('Distance')
plt.legend(loc=4);

fig, ax = plt.subplots(1, 2, figsize=(12, 4))
pm.plot_posterior(trace_m_9_5['alpha'], credible_interval=.5, ax=ax[0])
pm.plot_posterior(trace_m_9_5['beta'], credible_interval=.5, ax=ax[1])

sns.kdeplot(trace_m_9_5['alpha'], trace_m_9_5['beta'], shade=True, n_levels=3, shade_lowest=False);
plt.ylabel('beta')
plt.xlabel('alpha');
_____no_output_____
MIT
ThinkBayes_Chapter_9.ipynb
ricardoV94/ThinkBayesPymc3
9.6 Conditional Distributions
with pm.Model() as m_9_6:
    obs_beta = pm.Data('obs_beta', 10)

    alpha = pm.Uniform('alpha', lower=0, upper=31)
    beta = pm.Uniform('beta', lower=1, upper=51, observed=obs_beta)
    location = pm.Uniform('location', lower=0, upper=31, observed=[15, 16, 18, 21])

    speed = pm.Deterministic('speed', StrafingSpeed(alpha, beta, location))
    like = pm.Potential('like', -np.log(speed))

traces_m_9_6 = []
for beta in (10, 20, 40):
    with m_9_6:
        pm.set_data({'obs_beta': beta})
        traces_m_9_6.append(pm.sample(5000, progressbar=False))

sns.kdeplot(traces_m_9_6[0]['alpha'], label='beta = 10', color='darkblue')
sns.kdeplot(traces_m_9_6[1]['alpha'], label='beta = 20')
sns.kdeplot(traces_m_9_6[2]['alpha'], label='beta = 40', color='lightblue')
plt.xlim([0, 30])
plt.xlabel('Distance')
plt.ylabel('Prob');
_____no_output_____
MIT
ThinkBayes_Chapter_9.ipynb
ricardoV94/ThinkBayesPymc3
Multiple Linear Regression
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

diabetes = datasets.load_diabetes()  # load dataset
diabetes_X = diabetes.data

# split 20% into test set
X_train, X_test, y_train, y_test = train_test_split(diabetes_X, diabetes.target,
                                                    test_size=0.20, random_state=42, shuffle=True)
y_test = y_test.reshape(-1, 1)
y_train = y_train.reshape(-1, 1)

print('Size of the training set is {}'.format(X_train.shape))
print('Size of the Label training set is {}'.format(y_train.shape))
print('Size of the Label test set is {}'.format(y_test.shape))
print('Size of the test set is {}'.format(X_test.shape))

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the training sets
regr.fit(X_train, y_train)

# Make predictions using the testing set
y_pred = regr.predict(X_test)

# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(y_test, y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y_test, y_pred))
Coefficients: [[ 37.90031426 -241.96624835 542.42575342 347.70830529 -931.46126093 518.04405547 163.40353476 275.31003837 736.18909839 48.67112488]] Mean squared error: 2900.17 Variance score: 0.45
MIT
Module 4/multilinear_regression.ipynb
axel-sirota/interpreting-data-with-advanced-models
Handling missing values
# get the number of missing data points per column
missing_values_count = nfl_data.isnull().sum()

# how many total missing values do we have?
total_cells = np.product(nfl_data.shape)
total_missing = missing_values_count.sum()

# percent of data that is missing
percent_missing = (total_missing/total_cells) * 100
print(percent_missing)

# replace all NA's with the value that comes directly after it in the same column,
# then replace all the remaining na's with 0
subset_nfl_data.fillna(method='bfill', axis=0).fillna(0)
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
Scaling and normalization
# for Box-Cox Transformation
from scipy import stats

# for min_max scaling
from mlxtend.preprocessing import minmax_scaling

# set seed for reproducibility
np.random.seed(0)
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
In scaling, you're changing the range of your data
# generate 1000 data points randomly drawn from an exponential distribution
original_data = np.random.exponential(size=1000)

# min-max scale the data between 0 and 1
scaled_data = minmax_scaling(original_data, columns=[0])

# plot both together to compare
fig, ax = plt.subplots(1, 2)
sns.distplot(original_data, ax=ax[0])
ax[0].set_title("Original Data")
sns.distplot(scaled_data, ax=ax[1])
ax[1].set_title("Scaled data")
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
In normalization, you're changing the shape of the distribution of your data.
# normalize the exponential data with boxcox
normalized_data = stats.boxcox(original_data)
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
Parsing Dates - the column's dtype will be "object" if you don't parse it.
import datetime
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
If a date is MM/DD/YY (02/25/17), the format is "%m/%d/%y". "%Y" (upper-case Y) is used if the year has four digits (2017), so for 02/25/2017 the format is "%m/%d/%Y". Other formats: DD/MM/YY (25/02/17 - "%d/%m/%y"); DD-MM-YY (25-02-17 - "%d-%m-%y"). In the end, the date will be shown in the default YYYY-MM-DD form (dtype datetime64).
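A tiny sketch of those format codes in action (the example strings are made up, not taken from the landslides data):

```python
import pandas as pd

print(pd.to_datetime("02/25/17", format="%m/%d/%y"))    # 2017-02-25
print(pd.to_datetime("02/25/2017", format="%m/%d/%Y"))  # 2017-02-25
print(pd.to_datetime("25-02-17", format="%d-%m-%y"))    # 2017-02-25
```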
# create a new column, date_parsed, with the parsed dates
landslides['date_parsed'] = pd.to_datetime(landslides['date'], format="%m/%d/%y")
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
If your dates are in multiple formats, you can use "infer_datetime_format".
landslides['date_parsed'] = pd.to_datetime(landslides['Date'], infer_datetime_format=True)

day_of_month_landslides = landslides['date_parsed'].dt.day
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
To check if everything looks right
date_lengths = earthquakes.Date.str.len()
date_lengths.value_counts()

# or show it through a histogram of the day (values must be in [0,31])
# In that example, 3 dates have a length of 24, so we run this code
indices = np.where([date_lengths == 24])[1]
print('Indices with corrupted data:', indices)
earthquakes.loc[indices]

# Fixing the incorrect dates manually
earthquakes.loc[3378, "Date"] = "02/23/1975"
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
Character Encodings
# helpful character encoding module
import chardet

# look at the first ten thousand bytes to guess the character encoding
with open("../input/kickstarter-projects/ks-projects-201801.csv", 'rb') as rawdata:
    result = chardet.detect(rawdata.read(10000))

# check what the character encoding might be
print(result)

# read in the file with the encoding detected by chardet
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv", encoding='Windows-1252')

# look at the first few lines
kickstarter_2016.head()
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
Exercises
# 1 - class bytes. We need to create a new_entry in bytes (UTF-8 is the default)
sample_entry = b'\xa7A\xa6n'
print(sample_entry)
print('data type:', type(sample_entry))

# solution - Try using .decode() to get the string, then .encode()
# to get the bytes representation, encoded in UTF-8.
before = sample_entry.decode("big5-tw")
new_entry = before.encode()
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
Inconsistent Data
# helpful modules
import fuzzywuzzy
from fuzzywuzzy import process
import chardet

# get the top 10 closest matches to "south korea"
# The closest string has a ratio of 100
matches = fuzzywuzzy.process.extract("south korea", countries, limit=10,
                                     scorer=fuzzywuzzy.fuzz.token_sort_ratio)

# function to replace rows in the provided column of the provided dataframe
# that match the provided string above the provided ratio with the provided string
def replace_matches_in_column(df, column, string_to_match, min_ratio=47):
    # get a list of unique strings
    strings = df[column].unique()

    # get the top 10 closest matches to our input string
    matches = fuzzywuzzy.process.extract(string_to_match, strings, limit=10,
                                         scorer=fuzzywuzzy.fuzz.token_sort_ratio)

    # only get matches with a ratio >= min_ratio
    close_matches = [matches[0] for matches in matches if matches[1] >= min_ratio]

    # get the rows of all the close matches in our dataframe
    rows_with_matches = df[column].isin(close_matches)

    # replace all rows with close matches with the input matches
    df.loc[rows_with_matches, column] = string_to_match
_____no_output_____
MIT
Data_Cleaning.ipynb
duartele/exerc-jupyternotebook
Scraping Fantasy Football Data (Week 3 Projections/Week 2 Actuals)

Need to scrape the following data:

- Weekly Player PPR Projections: ESPN, CBS, Fantasy Sharks, Scout Fantasy Sports (also tried Fantasy Football Today, but it doesn't have defense projections currently, so exclude it)
- Previous Week Player Actual PPR Results
- Weekly FanDuel Player Salary (can manually download a csv from a Thurs-Sun contest and then import it)
import pandas as pd import numpy as np import requests # import json # from bs4 import BeautifulSoup import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # from selenium.common.exceptions import NoSuchElementException #function to initiliaze selenium web scraper def instantiate_selenium_driver(): chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--window-size=1420,1080') #chrome_options.add_argument('--headless') chrome_options.add_argument('--disable-gpu') driver = webdriver.Chrome('..\plugins\chromedriver.exe', chrome_options=chrome_options) return driver #function to save dataframes to pickle archive #file name: don't include csv in file name, function will also add a timestamp to the archive #directory name don't include final backslash def save_to_pickle(df, directory_name, file_name): lt = time.localtime() full_file_name = f"{file_name}_{lt.tm_year}-{lt.tm_mon}-{lt.tm_mday}-{lt.tm_hour}-{lt.tm_min}.pkl" path = f"{directory_name}/{full_file_name}" df.to_pickle(path) print(f"Pickle saved to: {path}") #remove name suffixes of II III IV or Jr. or Sr. or random * from names to easier match other databases #also remove periods from first name T.J. make TJ (just remove periods from whole name in function) def remove_suffixes_periods(name): #remove periods and any asterisks name = name.replace(".", "") name = name.replace("*", "") #remove any suffixes by splitting the name on spaces and then rebuilding the name with only the first two of the list (being first/last name) name_split = name.split(" ") name_final = " ".join(name_split[0:2]) #rebuild # #old suffix removal process (created some errors for someone with Last Name starting with V) # for suffix in [" III", " II", " IV", " V", " Jr.", " Sr."]: # name = name.replace(suffix, "") return name_final #function to rename defense position labels so all matach #this will be used since a few players have same name as another player, but currently none that #are at same position need to create a function that gets all the defense labels the same, so that #when merge, can merge by both player name and position to prevent bad merges #input of pos will be the value of the column that getting mapped def convert_defense_label(pos): defense_labels_scraped = ['DST', 'D', 'Def', 'DEF'] if pos in defense_labels_scraped: #conver defense position labels to espn format pos = 'D/ST' return pos
_____no_output_____
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Get Weekly Player Actual Fantasy PPR Points

Get from ESPN's Scoring Leaders table:
http://games.espn.com/ffl/leaders?&scoringPeriodId=1&seasonId=2018&slotCategoryId=0&leagueID=0

- scoringPeriodId = week of the season
- seasonId = year
- slotCategoryId = position, where 'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16
- leagueID = scoring type, PPR Standard is 0
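To make the URL scheme concrete before the full scraping function below, a short sketch that only builds the request URL from those parameters (the helper name is just for illustration):

```python
position_ids = {'QB': 0, 'RB': 2, 'WR': 4, 'TE': 6, 'K': 17, 'D/ST': 16}

def espn_leaders_url(week, year, pos, league_id=0):
    # league_id=0 gives PPR standard scoring
    pos_id = position_ids[pos]
    return (f"http://games.espn.com/ffl/leaders?&scoringPeriodId={week}"
            f"&seasonId={year}&slotCategoryId={pos_id}&leagueID={league_id}")

print(espn_leaders_url(week=2, year=2018, pos='RB'))
```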
##SCRAPE ESPN SCORING LEADERS TABLE FOR ACTUAL FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_actual_PPR_player_points_ESPN(week, year): #instantiate the driver driver = instantiate_selenium_driver() #initialize dataframe for all data player_actual_ppr = pd.DataFrame() #url that returns info has different code for each position position_ids = {'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16} #cycle through each position webpage to create comprehensive dataframe for pos, pos_id in position_ids.items(): #note leagueID=0 is for PPR standard scoring url_start_pos = f"http://games.espn.com/ffl/leaders?&scoringPeriodId={week}&seasonId={year}&slotCategoryId={pos_id}&leagueID=0" driver.get(url_start_pos) #each page only gets 50 results, so cycle through next button until next button no longer exists while True: #read in the table from ESPN, by using the class, and use the 1st row index for column header player_actual_ppr_table_page = pd.read_html(driver.page_source, attrs={'class': 'playerTableTable'}, #return only the table of this class, which has the player data header=[1])[0] #returns table in a list, so get zeroth table #easier to just assign the player position rather than try to scrape it out player_actual_ppr_table_page['POS'] = pos #replace any placeholder string -- or --/-- with None type to not confuse calculations later player_actual_ppr_table_page.replace({'--': None, '--/--': None}, inplace=True) #if want to extract more detailed data from this, can do added reformatting, etc., but not doing that for our purposes # #rename D/ST columns so don't get misassigned to wrong columns # if pos == 'D/ST': # player_actual_ppr_table_page.rename(columns={'SCK':'D/ST_Sack', # 'FR':'D/ST_FR', 'INT':'D/ST_INT', # 'TD':'D/ST_TD', 'BLK':'D/ST_BLK', 'PA':'D/ST_PA'}, # inplace=True) # #rename/recalculate Kicker columns so don't get misassigned to wrong columns # elif pos == 'K': # player_actual_ppr_table_page.rename(columns={'1-39':'KICK_FG_1-39', '40-49':'KICK_FG_40-49', # '50+':'KICK_FG_50+', 'TOT':'KICK_FG', # 'XP':'KICK_XP'}, # inplace=True) # #if wanted to use all the kicker data could fix this code snipit - erroring out because can't split None types # #just want made FG's for each bucket and overall FGAtt and XPAtt # player_actual_ppr_table_page['KICK_FGAtt'] = player_actual_ppr_table_page['KICK_FG'].map( # lambda x: x.split("/")[-1]).astype('float64') # player_actual_ppr_table_page['KICK_XPAtt'] = player_actual_ppr_table_page['KICK_XP'].map( # lambda x: x.split("/")[-1]).astype('float64') # player_actual_ppr_table_page['KICK_FG_1-39'] = player_actual_ppr_table_page['KICK_FG_1-39'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG_40-49'] = player_actual_ppr_table_page['KICK_FG_40-49'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG_50+'] = player_actual_ppr_table_page['KICK_FG_50+'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG'] = player_actual_ppr_table_page['KICK_FG'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_XP'] = player_actual_ppr_table_page['KICK_XP'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG%'] = player_actual_ppr_table_page['KICK_FG'] / espn_proj_table_page['KICK_FGAtt'] #add page data to overall dataframe player_actual_ppr = pd.concat([player_actual_ppr, 
player_actual_ppr_table_page], ignore_index=True, sort=False) #click to next page to get next 40 results, but check that it exists try: next_button = driver.find_element_by_partial_link_text('NEXT') next_button.click() except EC.NoSuchElementException: break driver.quit() #drop any completely blank columns player_actual_ppr.dropna(axis='columns', how='all', inplace=True) #add columns that give week/season player_actual_ppr['WEEK'] = week player_actual_ppr['SEASON'] = year return player_actual_ppr ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata) def format_extract_PPR_player_points_ESPN(df_scraped_ppr_espn): #split out player, team, position based on ESPN's formatting def split_player_team_pos_espn(play_team_pos): #incoming string for players: 'Todd Gurley II, LAR RB' or 'Drew Brees, NO\xa0QB' #incoming string for players with special designations: 'Aaron Rodgers, GB\xa0QB Q' #incoming string for D/ST: 'Jaguars D/ST\xa0D/ST' #operations if D/ST if "D/ST" in play_team_pos: player = play_team_pos.split(' D/ST\xa0')[0] team = player.split()[0] #operations for regular players else: player = play_team_pos.split(',')[0] team_pos = play_team_pos.split(',')[1] team = team_pos.split()[0] return player, team df_scraped_ppr_espn[['PLAYER', 'TEAM']] = df_scraped_ppr_espn.apply( lambda x: split_player_team_pos_espn(x['PLAYER, TEAM POS']), axis='columns', result_type='expand') #need to remove name suffixes so can match players easier to other data - see function defined above df_scraped_ppr_espn['PLAYER'] = df_scraped_ppr_espn['PLAYER'].map(remove_suffixes_periods) #convert PTS to float type (sometimes zeros have been stored as strings) df_scraped_ppr_espn['PTS'] = df_scraped_ppr_espn['PTS'].astype('float64') #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS' df_scraped_ppr_espn = df_scraped_ppr_espn[['PLAYER', 'POS', 'TEAM', 'PTS', 'WEEK']].sort_values('PTS', ascending=False) return df_scraped_ppr_espn #CALL SCRAPE AND FORMATTING OF ACTUAL PPR WEEK 2- AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk2_player_actual_ppr_scrape = scrape_actual_PPR_player_points_ESPN(2, 2018) save_to_pickle(df_wk2_player_actual_ppr_scrape, 'pickle_archive', 'Week2_Player_Actual_PPR_messy_scrape') #format data to extract just player pts/playr/pos/team/weel and save the data df_wk2_player_actual_ppr = format_extract_PPR_player_points_ESPN(df_wk2_player_actual_ppr_scrape) #rename PTS column to something more descriptive df_wk2_player_actual_ppr.rename(columns={'PTS':'FPTS_PPR_ACTUAL'}, inplace=True) save_to_pickle(df_wk2_player_actual_ppr, 'pickle_archive', 'Week2_Player_Actual_PPR') print(df_wk2_player_actual_ppr.shape) df_wk2_player_actual_ppr.head()
Pickle saved to: pickle_archive/Week2_Player_Actual_PPR_messy_scrape_2018-9-18-17-59.pkl Pickle saved to: pickle_archive/Week2_Player_Actual_PPR_2018-9-18-17-59.pkl (1009, 5)
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Get ESPN Player Fantasy Points Projections for Week

Get from ESPN's Projections Table:
http://games.espn.com/ffl/tools/projections?&scoringPeriodId=1&seasonId=2018&slotCategoryId=0&leagueID=0

- scoringPeriodId = week of the season
- seasonId = year
- slotCategoryId = position, where 'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16
- leagueID = scoring type, PPR Standard is 0
##SCRAPE ESPN PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_weekly_player_projections_ESPN(week, year): #instantiate the driver on the ESPN projections page driver = instantiate_selenium_driver() #initialize dataframe for all data proj_ppr_espn = pd.DataFrame() #url that returns info has different code for each position position_ids = {'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16} #cycle through each position webpage to create comprehensive dataframe for pos, pos_id in position_ids.items(): #note leagueID=0 is for PPR standard scoring url_start_pos = f"http://games.espn.com/ffl/tools/projections?&scoringPeriodId={week}&seasonId={year}&slotCategoryId={pos_id}&leagueID=0" driver.get(url_start_pos) #each page only gets 50 results, so cycle through next button until next button no longer exists while True: #read in the table from ESPN, by using the class, and use the 1st row index for column header proj_ppr_espn_table_page = pd.read_html(driver.page_source, attrs={'class': 'playerTableTable'}, #return only the table of this class, which has the player data header=[1])[0] #returns table in a list, so get zeroth table #easier to just assign the player position rather than try to scrape it out proj_ppr_espn_table_page['POS'] = pos #replace any placeholder string -- or --/-- with None type to not confuse calculations later proj_ppr_espn_table_page.replace({'--': None, '--/--': None}, inplace=True) #if want to extract more detailed data from this, can do added reformatting, etc., but not doing that for our purposes # #rename D/ST columns so don't get misassigned to wrong columns # if pos == 'D/ST': # proj_ppr_espn_table_page.rename(columns={'SCK':'D/ST_Sack', # 'FR':'D/ST_FR', 'INT':'D/ST_INT', # 'TD':'D/ST_TD', 'BLK':'D/ST_BLK', 'PA':'D/ST_PA'}, # inplace=True) # #rename/recalculate Kicker columns so don't get misassigned to wrong columns # elif pos == 'K': # proj_ppr_espn_table_page.rename(columns={'1-39':'KICK_FG_1-39', '40-49':'KICK_FG_40-49', # '50+':'KICK_FG_50+', 'TOT':'KICK_FG', # 'XP':'KICK_XP'}, # inplace=True) # #if wanted to use all the kicker data could fix this code snipit - erroring out because can't split None types # #just want made FG's for each bucket and overall FGAtt and XPAtt # proj_ppr_espn_table_page['KICK_FGAtt'] = proj_ppr_espn_table_page['KICK_FG'].map( # lambda x: x.split("/")[-1]).astype('float64') # proj_ppr_espn_table_page['KICK_XPAtt'] = proj_ppr_espn_table_page['KICK_XP'].map( # lambda x: x.split("/")[-1]).astype('float64') # proj_ppr_espn_table_page['KICK_FG_1-39'] = proj_ppr_espn_table_page['KICK_FG_1-39'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG_40-49'] = proj_ppr_espn_table_page['KICK_FG_40-49'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG_50+'] = proj_ppr_espn_table_page['KICK_FG_50+'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG'] = proj_ppr_espn_table_page['KICK_FG'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_XP'] = proj_ppr_espn_table_page['KICK_XP'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG%'] = proj_ppr_espn_table_page['KICK_FG'] / espn_proj_table_page['KICK_FGAtt'] #add page data to overall dataframe proj_ppr_espn = pd.concat([proj_ppr_espn, proj_ppr_espn_table_page], ignore_index=True, sort=False) #click to next page 
to get next 40 results, but check that it exists try: next_button = driver.find_element_by_partial_link_text('NEXT') next_button.click() except EC.NoSuchElementException: break driver.quit() #drop any completely blank columns proj_ppr_espn.dropna(axis='columns', how='all', inplace=True) #add columns that give week/season proj_ppr_espn['WEEK'] = week proj_ppr_espn['SEASON'] = year return proj_ppr_espn #formatting/extracting function is same for ESPN Actual/PPR Projections, so don't need new function #WEEK 3 PROJECTIONS #CALL SCRAPE AND FORMATTING OF ESPN WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk3_ppr_proj_espn_scrape = scrape_weekly_player_projections_ESPN(3, 2018) save_to_pickle(df_wk3_ppr_proj_espn_scrape, 'pickle_archive', 'Week2_PPR_Projections_ESPN_messy_scrape') #format data to extract just player pts/playr/pos/team/week and save the data df_wk3_ppr_proj_espn = format_extract_PPR_player_points_ESPN(df_wk3_ppr_proj_espn_scrape) #rename PTS column to something more descriptive df_wk3_ppr_proj_espn.rename(columns={'PTS':'FPTS_PPR_ESPN'}, inplace=True) save_to_pickle(df_wk3_ppr_proj_espn, 'pickle_archive', 'Week3_PPR_Projections_ESPN') print(df_wk3_ppr_proj_espn.shape) df_wk3_ppr_proj_espn.head()
Pickle saved to: pickle_archive/Week2_PPR_Projections_ESPN_messy_scrape_2018-9-18-18-3.pkl Pickle saved to: pickle_archive/Week3_PPR_Projections_ESPN_2018-9-18-18-3.pkl (1009, 5)
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Get CBS Player Fantasy Points Projections for Week

Get from CBS's Projections Table:
https://www.cbssports.com/fantasy/football/stats/sortable/points/QB/ppr/projections/2018/2?&print_rows=9999

- QB is where the position goes
- 2018 is where the season goes
- 2 is where the week goes
- print_rows = 9999 gives all results in one table
##SCRAPE CBS PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_weekly_player_projections_CBS(week, year): ###GET PROJECTIONS FROM CBS### #CBS has separate tables for each position, so need to cycle through them #but url can return all list so don't need to go page by page proj_ppr_cbs = pd.DataFrame() positions = ['QB', 'RB', 'WR', 'TE', 'K', 'DST'] header_row_index = {'QB':2, 'RB':2, 'WR':2, 'TE':2, 'K':1, 'DST':1} for position in positions: #url just needs to change position url = f"https://www.cbssports.com/fantasy/football/stats/sortable/points/{position}/ppr/projections/{year}/{week}?&print_rows=9999" #read in the table from CBS by class, and use the 2nd row index for column header proj_ppr_cbs_pos = pd.read_html(url, attrs={'class': 'data'}, #return only the table of this class, which has the player data header=[header_row_index[position]])[0] #returns table in a list, so get table proj_ppr_cbs_pos['POS'] = position #add the table to the overall df proj_ppr_cbs = pd.concat([proj_ppr_cbs, proj_ppr_cbs_pos], ignore_index=True, sort=False) #some tables include the page selector as the bottom row of the table, #so need to find the index values of those rows and then drop them from the table index_pages_rows = list(proj_ppr_cbs[proj_ppr_cbs['Player'].str.contains('Pages')].index) proj_ppr_cbs.drop(index_pages_rows, axis='index', inplace=True) #add columns that give week/season proj_ppr_cbs['WEEK'] = week proj_ppr_cbs['SEASON'] = year return proj_ppr_cbs ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata) def format_extract_PPR_player_points_CBS(df_scraped_ppr_cbs): # #could include this extra data if you want to extract it # #calculate completion percentage # df_cbs_proj['COMPLETION_PERCENTAGE'] = df_cbs_proj.CMP/df_cbs_proj.ATT # #rename some of columns so don't lose meaning # df_cbs_proj.rename(columns={'ATT':'PASS_ATT', 'CMP':'PASS_COMP', 'COMPLETION_PERCENTAGE': 'PASS_COMP_PCT', # 'YD': 'PASS_YD', 'TD':'PASS_TD', 'INT':'PASS_INT', 'RATE':'PASS_RATE', # 'ATT.1': 'RUSH_ATT', 'YD.1': 'RUSH_YD', 'AVG': 'RUSH_AVG', 'TD.1':'RUSH_TD', # 'TARGT': 'RECV_TARGT', 'RECPT': 'RECV_RECPT', 'YD.2':'RECV_YD', 'AVG.1':'RECV_AVG', 'TD.2':'RECV_TD', # 'FPTS':'PTS', # 'FG':'KICK_FG', 'FGA': 'KICK_FGAtt', 'XP':'KICK_XP', 'XPAtt':'KICK_XPAtt', # 'Int':'D/ST_INT', 'Sty':'D/ST_Sty', 'Sack':'D/ST_Sack', 'TK':'D/ST_TK', # 'DFR':'D/ST_FR', 'FF':'D/ST_FF', 'DTD':'D/ST_TD', # 'Pa':'D/ST_PtsAll', 'PaNetA':'D/ST_PaYdA', 'RuYdA':'D/ST_RuYdA', 'TyDa':'D/ST_ToYdA'}, # inplace=True) # #calculate passing, rushing, total yards/game # df_cbs_proj['D/ST_PaYd/G'] = df_cbs_proj['D/ST_PaYdA']/16 # df_cbs_proj['D/ST_RuYd/G'] = df_cbs_proj['D/ST_RuYdA']/16 # df_cbs_proj['D/ST_ToYd/G'] = df_cbs_proj['D/ST_ToYdA']/16 #rename FPTS to PTS df_scraped_ppr_cbs.rename(columns={'FPTS':'FPTS_PPR_CBS'}, inplace=True) #split out player, team def split_player_team(play_team): #incoming string for players: 'Todd Gurley, LAR' #incoming string for DST: 'Jaguars, JAC' #operations if D/ST (can tell if there is only two items in a list separated by a space, instead of three) if len(play_team.split()) == 2: player = play_team.split(',')[0] #+ ' D/ST' team = play_team.split(',')[1] #operations for regular players else: player = play_team.split(',')[0] team = play_team.split(',')[1] #remove any possible name suffixes to merge with other data better player = 
remove_suffixes_periods(player) return player, team df_scraped_ppr_cbs[['PLAYER', 'TEAM']] = df_scraped_ppr_cbs.apply( lambda x: split_player_team(x['Player']), axis='columns', result_type='expand') #convert defense position label to espn standard df_scraped_ppr_cbs['POS'] = df_scraped_ppr_cbs['POS'].map(convert_defense_label) #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS' df_scraped_ppr_cbs = df_scraped_ppr_cbs[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_CBS', 'WEEK']].sort_values('FPTS_PPR_CBS', ascending=False) return df_scraped_ppr_cbs #WEEK 3 PROJECTIONS #CALL SCRAPE AND FORMATTING OF CBS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk3_ppr_proj_cbs_scrape = scrape_weekly_player_projections_CBS(3, 2018) save_to_pickle(df_wk3_ppr_proj_cbs_scrape, 'pickle_archive', 'Week3_PPR_Projections_CBS_messy_scrape') #format data to extract just player pts/playr/pos/team/week and save the data df_wk3_ppr_proj_cbs = format_extract_PPR_player_points_CBS(df_wk3_ppr_proj_cbs_scrape) save_to_pickle(df_wk3_ppr_proj_cbs, 'pickle_archive', 'Week3_PPR_Projections_CBS') print(df_wk3_ppr_proj_cbs.shape) df_wk3_ppr_proj_cbs.head()
Pickle saved to: pickle_archive/Week3_PPR_Projections_CBS_messy_scrape_2018-9-18-18-4.pkl Pickle saved to: pickle_archive/Week3_PPR_Projections_CBS_2018-9-18-18-4.pkl (830, 5)
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Get Fantasy Sharks Player Points Projection for Week

They have a json option that gets updated weekly (they don't appear to store previous week projections). The json defaults to PPR (which is lucky for us) and has an all-players option:
https://www.fantasysharks.com/apps/Projections/WeeklyProjections.php?pos=ALL&format=json

It returns a list of players, each saved as a dictionary:

    [ { "Rank": 1, "ID": "4925", "Name": "Brees, Drew", "Pos": "QB", "Team": "NOS", "Opp": "CLE", "Comp": "27.49", "PassYards": "337", "PassTD": 2.15, "Int": "0.61", "Att": "1.5", "RushYards": "0", "RushTD": 0.12, "Rec": "0", "RecYards": "0", "RecTD": 0, "FantasyPoints": 26 },

But the json is only for the current week; you can't get other week data - so instead use this url example:
https://www.fantasysharks.com/apps/bert/forecasts/projections.php?Position=99&scoring=2&Segment=628&uid=4

- Segment is the week/season id - for 2018, week 1 starts at 628 and each additional week adds 1
- Position=99 is all positions
- scoring=2 is PPR default
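A small sketch of the Segment arithmetic described above, just building the projections URL (the helper name is made up; the scraping function below does the actual request):

```python
def sharks_projections_url(week):
    # 2018 week 1 corresponds to Segment=628, so Segment = 627 + week
    segment = 627 + week
    return (f"https://www.fantasysharks.com/apps/bert/forecasts/projections.php"
            f"?Position=99&scoring=2&Segment={segment}&uid=4")

print(sharks_projections_url(3))  # Segment=630 for week 3 of the 2018 season
```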
##SCRAPE FANTASY SHARKS PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be week as number (year isn't used, but keep same format as others) #returns dataframe of scraped data def scrape_weekly_player_projections_Sharks(week, year): #fantasy sharks url - segment for 2018 week 1 starts at 628 and adds 1 for each additional week segment = 627 + week #Position=99 is all positions, and scoring=2 is PPR default sharks_weekly_url = f"https://www.fantasysharks.com/apps/bert/forecasts/projections.php?Position=99&scoring=2&Segment={segment}&uid=4" #since don't need to iterate over pages, can just use reqeuests instead of selenium scraper #however with requests, need to include headers because this website was rejecting the request since it knew python was running it - need to spoof a browser header #other possible headers: 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36' headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1)'} #response returns html response = requests.get(sharks_weekly_url, headers=headers) #extract the table data from the html response (call response.text) and get table with player data proj_ppr_sharks = pd.read_html(response.text, #response.text gives the html of the page request attrs={'id': 'toolData'}, #return only the table of this id, which has the player data header = 0 #header is the 0th row )[0] #pd.read_html returns a list of tables even though only one in it, select the table #the webpage uses different tiers, which add extra rows to the table - get rid of those #also sometimes repeats the column headers for readability as scrolling - get rid of those #so need to find the index values of those bad rows and then drop them from the table index_pages_rows = list(proj_ppr_sharks[proj_ppr_sharks['#'].str.contains('Tier|#')].index) proj_ppr_sharks.drop(index_pages_rows, axis='index', inplace=True) #add columns that give week/season proj_ppr_sharks['WEEK'] = week proj_ppr_sharks['SEASON'] = year return proj_ppr_sharks ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata like opposing team (OPP) def format_extract_PPR_player_points_Sharks(df_scraped_ppr_sharks): #rename PTS to FPTS_PPR_SHARKS and a few others df_scraped_ppr_sharks.rename(columns={'Pts':'FPTS_PPR_SHARKS', 'Player': 'PLAYER', 'Tm': 'TEAM', 'Position': 'POS'}, inplace=True) #they have player name as Last Name, First Name - reorder to First Last def modify_player_name(player, pos): #incoming string for players: 'Johnson, David' Change to: 'David Johnson' #incoming string for defense: 'Lions, Detroit' Change to: 'Lions' if pos == 'D': player_formatted = player.split(', ')[0] else: player_formatted = ' '.join(player.split(', ')[::-1]) player_formatted = remove_suffixes_periods(player_formatted) #name overrides - some spelling differences from ESPN/CBS if player_formatted == 'Steven Hauschka': player_formatted = 'Stephen Hauschka' elif player_formatted == 'Josh Bellamy': player_formatted = 'Joshua Bellamy' elif player_formatted == 'Joshua Perkins': player_formatted = 'Josh Perkins' return player_formatted df_scraped_ppr_sharks['PLAYER'] = df_scraped_ppr_sharks.apply( lambda row: modify_player_name(row['PLAYER'], row['POS']), axis='columns') #convert FPTS to float type (currently stored as string) df_scraped_ppr_sharks['FPTS_PPR_SHARKS'] = df_scraped_ppr_sharks['FPTS_PPR_SHARKS'].astype('float64') #convert defense position label 
to espn standard df_scraped_ppr_sharks['POS'] = df_scraped_ppr_sharks['POS'].map(convert_defense_label) #for this function only extract 'PLAYER', 'POS', 'TEAM', 'FPTS' df_scraped_ppr_sharks = df_scraped_ppr_sharks[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_SHARKS', 'WEEK']].sort_values('FPTS_PPR_SHARKS', ascending=False) return df_scraped_ppr_sharks #WEEK 3 PROJECTIONS #CALL SCRAPE AND FORMATTING OF FANTASY SHARKS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk3_ppr_proj_sharks_scrape = scrape_weekly_player_projections_Sharks(3, 2018) save_to_pickle(df_wk3_ppr_proj_sharks_scrape, 'pickle_archive', 'Week3_PPR_Projections_Sharks_messy_scrape') #format data to extract just player pts/playr/pos/team and save the data df_wk3_ppr_proj_sharks = format_extract_PPR_player_points_Sharks(df_wk3_ppr_proj_sharks_scrape) save_to_pickle(df_wk3_ppr_proj_sharks, 'pickle_archive', 'Week3_PPR_Projections_Sharks') print(df_wk3_ppr_proj_sharks.shape) df_wk3_ppr_proj_sharks.head()
Pickle saved to: pickle_archive/Week3_PPR_Projections_Sharks_messy_scrape_2018-9-18-18-4.pkl Pickle saved to: pickle_archive/Week3_PPR_Projections_Sharks_2018-9-18-18-4.pkl (971, 5)
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Get Scout Fantasy Sports Player Fantasy Points Projections for Week

Get from Scout Fantasy Sports Projections Table:
https://fftoolbox.scoutfantasysports.com/football/rankings/?pos=rb&week=2&noppr=false

- pos is position, with options of 'QB', 'RB', 'WR', 'TE', 'K', 'DEF'
- week is week of year
- noppr is set to false when you want the ppr projections
- it also returns one long table (no pagination required)
##SCRAPE Scout PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_weekly_player_projections_SCOUT(week, year): ###GET PROJECTIONS FROM SCOUT### #SCOUT has separate tables for each position, so need to cycle through them #but url can return whole list so don't need to go page by page proj_ppr_scout = pd.DataFrame() positions = ['QB', 'RB', 'WR', 'TE', 'K', 'DEF'] for position in positions: #url just needs to change position and week url = f"https://fftoolbox.scoutfantasysports.com/football/rankings/?pos={position}&week={week}&noppr=false" #response returns html response = requests.get(url, verify=False) #need verify false otherwise requests won't work on this site #extract the table data from the html response (call response.text) and get table with player data proj_ppr_scout_pos = pd.read_html(response.text, #response.text gives the html of the page request attrs={'class': 'responsive-table'}, #return only the table of this class, which has the player data header=0 #header is the 0th row )[0] #returns list of tables so get the table #add the table to the overall df proj_ppr_scout = pd.concat([proj_ppr_scout, proj_ppr_scout_pos], ignore_index=True, sort=False) #ads are included in table rows (eg 'googletag.defineSlot("/7103/SMG_FFToolBox/728x...') #so need to find the index values of those rows and then drop them from the table index_ads_rows = list(proj_ppr_scout[proj_ppr_scout['#'].str.contains('google')].index) proj_ppr_scout.drop(index_ads_rows, axis='index', inplace=True) #add columns that give week/season proj_ppr_scout['WEEK'] = week proj_ppr_scout['SEASON'] = year return proj_ppr_scout ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata) def format_extract_PPR_player_points_SCOUT(df_scraped_ppr_scout): #rename columns df_scraped_ppr_scout.rename(columns={'Projected Pts.':'FPTS_PPR_SCOUT', 'Player':'PLAYER', 'Pos':'POS', 'Team':'TEAM'}, inplace=True) #some players (very few - mostly kickers) seem to have name as last, first instead of written out #also rename defenses from City/State to Mascot #create dictionary for geographical location to mascot (use this for some Defense renaming) based on this website's naming NFL_team_mascot = {'Arizona': 'Cardinals', 'Atlanta': 'Falcons', 'Baltimore': 'Ravens', 'Buffalo': 'Bills', 'Carolina': 'Panthers', 'Chicago': 'Bears', 'Cincinnati': 'Bengals', 'Cleveland': 'Browns', 'Dallas': 'Cowboys', 'Denver': 'Broncos', 'Detroit': 'Lions', 'Green Bay': 'Packers', 'Houston': 'Texans', 'Indianapolis': 'Colts', 'Jacksonville': 'Jaguars', 'Kansas City': 'Chiefs', #'Los Angeles': 'Rams', 'Miami': 'Dolphins', 'Minnesota': 'Vikings', 'New England': 'Patriots', 'New Orleans': 'Saints', 'New York Giants': 'Giants', 'New York Jets': 'Jets', 'Oakland': 'Raiders', 'Philadelphia': 'Eagles', 'Pittsburgh': 'Steelers', #'Los Angeles': 'Chargers', 'Seattle': 'Seahawks', 'San Francisco': '49ers', 'Tampa Bay': 'Buccaneers', 'Tennessee': 'Titans', 'Washington': 'Redskins'} #get Los Angelse defense data for assigning D's LosAngeles_defense_ranks = [int(x) for x in df_scraped_ppr_scout['#'][df_scraped_ppr_scout.PLAYER == 'Los Angeles'].tolist()] print(LosAngeles_defense_ranks) #in this function the defense rename here is SUPER GLITCHY since there are two Defenses' names 'Los Angeles', for now this code assumes the higher pts Defense is LA Rams def modify_player_name_scout(player, pos, rank): 
#defense need to change from city to mascot if pos == 'Def': #if Los Angeles is geographic location, then use minimum rank to Rams (assuming they are better defense) if player == 'Los Angeles' and int(rank) == min(LosAngeles_defense_ranks): player_formatted = 'Rams' elif player == 'Los Angeles' and int(rank) == max(LosAngeles_defense_ranks): player_formatted = 'Chargers' else: player_formatted = NFL_team_mascot.get(player) else: #if incoming string for players: 'Johnson, David' Change to: 'David Johnson' (this is rare - mostly for kickers on this site for som reason) if ',' in player: player = ' '.join(player.split(', ')[::-1]) #remove suffixes/periods for all players player_formatted = remove_suffixes_periods(player) #hard override of some player names that don't match to ESPN naming if player_formatted == 'Juju Smith-Schuster': player_formatted = 'JuJu Smith-Schuster' elif player_formatted == 'Steven Hauschka': player_formatted = 'Stephen Hauschka' return player_formatted df_scraped_ppr_scout['PLAYER'] = df_scraped_ppr_scout.apply( lambda row: modify_player_name_scout(row['PLAYER'], row['POS'], row['#']), axis='columns') #convert defense position label to espn standard df_scraped_ppr_scout['POS'] = df_scraped_ppr_scout['POS'].map(convert_defense_label) #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS', 'WEEK' (note Team is blank because webpage uses images for teams) df_scraped_ppr_scout = df_scraped_ppr_scout[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_SCOUT', 'WEEK']].sort_values('FPTS_PPR_SCOUT', ascending=False) return df_scraped_ppr_scout #WEEK 3 PROJECTIONS #CALL SCRAPE AND FORMATTING OF SCOUT WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk3_ppr_proj_scout_scrape = scrape_weekly_player_projections_SCOUT(3, 2018) save_to_pickle(df_wk3_ppr_proj_scout_scrape, 'pickle_archive', 'Week3_PPR_Projections_SCOUT_messy_scrape') #format data to extract just player pts/playr/pos/team and save the data df_wk3_ppr_proj_scout = format_extract_PPR_player_points_SCOUT(df_wk3_ppr_proj_scout_scrape) save_to_pickle(df_wk3_ppr_proj_scout, 'pickle_archive', 'Week3_PPR_Projections_SCOUT') print(df_wk3_ppr_proj_scout.shape) df_wk3_ppr_proj_scout.head()
C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning)
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Get FanDuel Player Salaries for Week

Just import the Thurs-Mon game salaries (they differ for each game type, and note they don't include Kickers in the Thurs-Mon format). Go to a FanDuel Thurs-Mon competition and download a csv of players, which we then upload and format in Python.
###FORMAT/EXTRACT FANDUEL SALARY INFO### def format_extract_FanDuel(df_fanduel_csv, week, year): #rename columns df_fanduel_csv.rename(columns={'Position':'POS', 'Nickname':'PLAYER', 'Team':'TEAM', 'Salary':'SALARY_FANDUEL'}, inplace=True) #add week/season columns df_fanduel_csv['WEEK'] = week df_fanduel_csv['SEASON'] = year #fix names def modify_player_name_fanduel(player, pos): #defense comes in as 'Dallas Cowboys' or 'Tampa Bay Buccaneers' need to split and take last word, which is the team mascot, just 'Cowboys' or 'Buccaneers' if pos == 'D': player_formatted = player.split()[-1] else: #need to remove suffixes, etc. player_formatted = remove_suffixes_periods(player) #hard override of some player names that don't match to ESPN naming if player_formatted == 'Josh Bellamy': player_formatted = 'Joshua Bellamy' return player_formatted df_fanduel_csv['PLAYER'] = df_fanduel_csv.apply( lambda row: modify_player_name_fanduel(row['PLAYER'], row['POS']), axis='columns') #convert defense position label to espn standard df_fanduel_csv['POS'] = df_fanduel_csv['POS'].map(convert_defense_label) #for this function only extract 'PLAYER', 'POS', 'TEAM', 'SALARY', 'WEEK' (note Team is blank because webpage uses images for teams) df_fanduel_csv = df_fanduel_csv[['PLAYER', 'POS', 'TEAM', 'SALARY_FANDUEL', 'WEEK']].sort_values('SALARY_FANDUEL', ascending=False) return df_fanduel_csv #WEEK 3 FANDUEL SALARIES #import csv from FanDuel df_wk3_fanduel_csv = pd.read_csv('fanduel_salaries/Week3-FanDuel-NFL-2018-09-20-28399-players-list.csv') #format data to extract just player salary/player/pos/team and save the data df_wk3_fanduel = format_extract_FanDuel(df_wk3_fanduel_csv, 3, 2018) save_to_pickle(df_wk3_fanduel, 'pickle_archive', 'Week3_Salary_FanDuel') print(df_wk3_fanduel.shape) df_wk3_fanduel.head()
Pickle saved to: pickle_archive/Week3_Salary_FanDuel_2018-9-18-18-5.pkl (665, 5)
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
!!!FFtoday apparently doesn't do weekly projections for Defenses, so don't use it for now (can check back in the future and see if it's updated)!!!

Get FFtoday Player Fantasy Points Projections for Week

Get from FFtoday's Projections Table:
http://www.fftoday.com/rankings/playerwkproj.php?Season=2018&GameWeek=2&PosID=10&LeagueID=107644

- Season = year
- GameWeek = week
- PosID = the id for each position: 'QB':10, 'RB':20, 'WR':30, 'TE':40, 'K':80, 'DEF':99
- LeagueID = the scoring type, 107644 gives FFToday PPR scoring
# ##SCRAPE FFtoday PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS##
# #input needs to be year as four digit number and week as number
# #returns dataframe of scraped data
# def scrape_weekly_player_projections_FFtoday(week, year):
#     #instantiate selenium driver
#     driver = instantiate_selenium_driver()
#     #initialize dataframe for all data
#     proj_ppr_fft = pd.DataFrame()
#     #url that returns info has different code for each position and also takes year variable
#     position_ids = {'QB':10, 'RB':20, 'WR':30, 'TE':40, 'K':80, 'DEF':99}
#     #cycle through each position webpage to create comprehensive dataframe
#     for pos, pos_id in position_ids.items():
#         url_start_pos = f"http://www.fftoday.com/rankings/playerwkproj.php?Season={year}&GameWeek={week}&PosID={pos_id}&LeagueID=107644"
#         driver.get(url_start_pos)
#         #each page only gets 50 results, so cycle through next button until next button no longer exists
#         while True:
#             #read in table - no classes for tables so just need to find the right table in the list of tables from the page - 5th index
#             proj_ppr_fft_table_page = pd.read_html(driver.page_source, header=[1])[5]
#             proj_ppr_fft_table_page['POS'] = pos
#             #need to rename columns for different positions before concat because of differing column conventions
#             if pos == 'QB':
#                 proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',
#                                                         'Comp':'PASS_COMP', 'Att': 'PASS_ATT', 'Yard':'PASS_YD',
#                                                         'TD':'PASS_TD', 'INT':'PASS_INT',
#                                                         'Att.1':'RUSH_ATT', 'Yard.1':'RUSH_YD', 'TD.1':'RUSH_TD'},
#                                                inplace=True)
#             elif pos == 'RB':
#                 proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',
#                                                         'Att': 'RUSH_ATT', 'Yard':'RUSH_YD', 'TD':'RUSH_TD',
#                                                         'Rec':'RECV_RECPT', 'Yard.1':'RECV_YD', 'TD.1':'RECV_TD'},
#                                                inplace=True)
#             elif pos == 'WR':
#                 proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',
#                                                         'Rec':'RECV_RECPT', 'Yard':'RECV_YD', 'TD':'RECV_TD',
#                                                         'Att':'RUSH_ATT', 'Yard.1':'RUSH_YD', 'TD.1':'RUSH_TD'},
#                                                inplace=True)
#             elif pos == 'TE':
#                 proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',
#                                                         'Rec':'RECV_RECPT', 'Yard':'RECV_YD', 'TD':'RECV_TD'},
#                                                inplace=True)
#             elif pos == 'K':
#                 proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',
#                                                         'FGM':'KICK_FG', 'FGA':'KICK_FGAtt', 'FG%':'KICK_FG%',
#                                                         'EPM':'KICK_XP', 'EPA':'KICK_XPAtt'},
#                                                inplace=True)
#             elif pos == 'DEF':
#                 proj_ppr_fft_table_page['PLAYER'] = proj_ppr_fft_table_page['Team'] #+ ' D/ST' #add player name with team name plus D/ST tag
#                 proj_ppr_fft_table_page.rename(columns={'Sack':'D/ST_Sack', 'FR':'D/ST_FR', 'DefTD':'D/ST_TD', 'INT':'D/ST_INT',
#                                                         'PA':'D/ST_PtsAll', 'PaYd/G':'D/ST_PaYd/G', 'RuYd/G':'D/ST_RuYd/G',
#                                                         'Safety':'D/ST_Sty', 'KickTD':'D/ST_RET_TD'},
#                                                inplace=True)
#             #add the position/page data to overall df
#             proj_ppr_fft = pd.concat([proj_ppr_fft, proj_ppr_fft_table_page],
#                                      ignore_index=True,
#                                      sort=False)
#             #click to next page to get next 50 results, but check that next button exists
#             try:
#                 next_button = driver.find_element_by_link_text("Next Page")
#                 next_button.click()
#             except EC.NoSuchElementException:
#                 break
#     driver.quit()
#     #add columns that give week/season
#     proj_ppr_fft['WEEK'] = week
#     proj_ppr_fft['SEASON'] = year
#     return proj_ppr_fft


# ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA###
# #(you could make this more complex if want to extract some of the subdata)
# def format_extract_PPR_player_points_FFtoday(df_scraped_ppr_fft):
#     # #optional data formatting for additional info
#     # #calculate completion percentage
#     # df_scraped_ppr_fft['PASS_COMP_PCT'] = df_scraped_ppr_fft.PASS_COMP/df_scraped_ppr_fft.PASS_ATT
#     # #calculate total PaYd and RuYd for season
#     # df_scraped_ppr_fft['D/ST_PaYdA'] = df_scraped_ppr_fft['D/ST_PaYd/G'] * 16
#     # df_scraped_ppr_fft['D/ST_RuYdA'] = df_scraped_ppr_fft['D/ST_RuYd/G'] * 16
#     # df_scraped_ppr_fft['D/ST_ToYd/G'] = df_scraped_ppr_fft['D/ST_PaYd/G'] + df_scraped_ppr_fft['D/ST_RuYd/G']
#     # df_scraped_ppr_fft['D/ST_ToYdA'] = df_scraped_ppr_fft['D/ST_ToYd/G'] * 16
#     #rename some of outstanding columns to match other dfs
#     df_scraped_ppr_fft.rename(columns={'Team':'TEAM', 'FPts':'FPTS_PPR_FFTODAY'},
#                               inplace=True)
#     #remove any possible name suffixes to merge with other data better
#     df_scraped_ppr_fft['PLAYER'] = df_scraped_ppr_fft['PLAYER'].map(remove_suffixes_periods)
#     #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS'
#     df_scraped_ppr_fft = df_scraped_ppr_fft[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_FFTODAY', 'WEEK']].sort_values('FPTS_PPR_FFTODAY', ascending=False)
#     return df_scraped_ppr_fft
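The scraper above (currently commented out, per the warning) builds one projections URL per position and page. As a rough standalone sketch of just that URL pattern, using the PosID and LeagueID values from the notes above (the helper function name is only for illustration):

```python
# Sketch: assemble the FFtoday weekly projections URL for one position.
# PosID and LeagueID values are taken from the notes above; adjust if the site changes them.
position_ids = {'QB': 10, 'RB': 20, 'WR': 30, 'TE': 40, 'K': 80, 'DEF': 99}

def build_fftoday_url(week, year, pos, league_id=107644):
    pos_id = position_ids[pos]
    return (f"http://www.fftoday.com/rankings/playerwkproj.php"
            f"?Season={year}&GameWeek={week}&PosID={pos_id}&LeagueID={league_id}")

print(build_fftoday_url(week=3, year=2018, pos='QB'))
```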
_____no_output_____
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Initial Database Stuff
# actual_ppr_df = pd.read_pickle('pickle_archive/Week1_Player_Actual_PPR_2018-9-13-6-41.pkl')
# espn_final_df = pd.read_pickle('pickle_archive/Week1_PPR_Projections_ESPN_2018-9-13-6-46.pkl')
# cbs_final_df = pd.read_pickle('pickle_archive/Week1_PPR_Projections_CBS_2018-9-13-17-45.pkl')
# cbs_final_df.head()

# from sqlalchemy import create_engine
# disk_engine = create_engine('sqlite:///my_lite_store.db')

# actual_ppr_df.to_sql('actual_ppr', disk_engine, if_exists='append')
# espn_final_df.to_sql('espn_final_df', disk_engine, if_exists='append')
# cbs_final_df.to_sql('cbs_final_df', disk_engine, if_exists='append')
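If those tables ever need to be read back out of the SQLite store, something along these lines should work — a sketch only, assuming the same `my_lite_store.db` file and table names used in the commented code above:

```python
# Sketch: read the stored tables back out of the SQLite database created above.
import pandas as pd
from sqlalchemy import create_engine

disk_engine = create_engine('sqlite:///my_lite_store.db')
actual_ppr_df = pd.read_sql('actual_ppr', disk_engine)
espn_final_df = pd.read_sql('espn_final_df', disk_engine)
cbs_final_df = pd.read_sql('cbs_final_df', disk_engine)
```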
_____no_output_____
MIT
data/Scraping Fantasy Football Data - FINAL-Week3.ipynb
zgscherrer/Project-Fantasy-Football
Basics of Coding

In this chapter, you'll learn about the basics of objects, types, operations, conditions, loops, functions, and imports. These are the basic building blocks of almost all programming languages.

This chapter has benefited from the excellent [*Python Programming for Data Science*](https://www.tomasbeuzen.com/python-programming-for-data-science/README.html) book by Tomas Beuzen.

```{tip}
Remember, you can launch this page interactively by using the 'Colab' or 'Binder' buttons under the rocket symbol at the top of the page. You can also download this page as a Jupyter Notebook to run on your own computer: use the 'download .ipynb' button under the download symbol at the top of the page.
```

If you get stuck

It's worth saying at the outset that *no-one*, and I mean no-one, memorises half of the stuff you'll see in this book. 80% or more of time spent programming is actually time spent looking up how to do this or that online, 'debugging' code for errors, or testing code. This applies to all programmers, regardless of level. You are here to learn the skills and concepts of programming, not the precise syntax (which is easy to look up later).

![xkcd-what-did-you-see](https://imgs.xkcd.com/comics/wisdom_of_the_ancients.png)

Knowing how to Google is one of the most important skills of any coder. No-one remembers every function from every library. Here are some useful coding resources:

- when you have an error, look on Stack Overflow to see if anyone else had the same error (they probably did) and how they overcame it.
- if you're having trouble navigating a new package or library, look up the documentation online. The best libraries put as much effort into documentation as they do the code base.
- use cheat sheets to get on top of a range of functionality quickly. For instance, this excellent (mostly) base Python [Cheat Sheet](https://gto76.github.io/python-cheatsheet/).
- if you're having a coding issue, take a walk to think about the problem, or explain your problem to an animal toy on your desk ([traditionally](https://en.wikipedia.org/wiki/Rubber_duck_debugging) a rubber duck, but other animals are available).

Values, variables, and types

A value is a datum such as a number or text. There are different types of values: 352.3 is known as a float or double, 22 is an integer, and "Hello World!" is a string. A variable is a name that refers to a value: you can think of a variable as a box that has a value, or multiple values, packed inside it. Almost any word can be a variable name as long as it starts with a letter or an underscore, although there are some special keywords that can't be used because they already have a role in the Python language: these include `if`, `while`, `class`, and `lambda`.

Creating a variable in Python is achieved via an assignment (putting a value in the box), and this assignment is done via the `=` operator. The box, or variable, goes on the left while the value we wish to store appears on the right. It's simpler than it sounds:
a = 10
print(a)
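As a quick sketch of the naming rule mentioned above — names can begin with a letter or an underscore, but not with a reserved keyword (the names below are purely for illustration):

```python
interest_rate = 0.05      # fine: starts with a letter
_scratch = "temporary"    # fine: starts with an underscore
print(interest_rate, _scratch)
# lambda = 10             # would raise a SyntaxError: 'lambda' is a reserved keyword
```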
_____no_output_____
MIT
code-basics.ipynb
lukestein/coding-for-economists
This creates a variable `a`, assigns the value 10 to it, and prints it. Sometimes you will hear variables referred to as *objects*. Everything that is not a literal value, such as `10`, is an object. In the above example, `a` is an object that has been assigned the value `10`.

How about this:
b = "This is a string" print(b)
_____no_output_____
MIT
code-basics.ipynb
lukestein/coding-for-economists
It's the same thing but with a different **type** of data, a string instead of an integer. Python is *dynamically typed*, which means it will guess what type of variable you're creating as you create it. This has pros and cons, with the main pro being that it makes for more concise code.

```{admonition} Important
Everything is an object, and every object has a type.
```

The most basic built-in data types that you'll need to know about are: integers `10`, floats `1.23`, strings `like this`, booleans `True`, and nothing `None`. Python also has a built-in type called a list `[10, 15, 20]` that can contain anything, even *different* types. So
list_example = [10, 1.23, "like this", True, None]
print(list_example)
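As a further quick sketch of dynamic typing in action, the same name can be re-assigned to a value of a completely different type and Python simply goes along with it (the values here are just for illustration):

```python
thing = 10                  # starts out as an integer
print(thing)
thing = "now I'm a string"  # re-assigning to a different type is allowed
print(thing)
thing = [10, 15, 20]        # ...and now it's a list
print(thing)
```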
_____no_output_____
MIT
code-basics.ipynb
lukestein/coding-for-economists
is completely valid code. `None` is a special type of nothingness, and represents an object with no value. It has type `NoneType` and is more useful than you might think! As well as the built-in types, packages can define their own custom types. If you ever want to check the type of a Python variable, you can call the `type` function on it like so:
type(list_example)
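A few more quick examples of calling `type`, this time on literal values (the comments show what Python reports back for these built-in types):

```python
print(type(22))              # <class 'int'>
print(type(3.14159))         # <class 'float'>
print(type("Hello World!"))  # <class 'str'>
print(type(None))            # <class 'NoneType'>
```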
_____no_output_____
MIT
code-basics.ipynb
lukestein/coding-for-economists
This is especially useful for debugging `ValueError` messages.

Below is a table of common data types in Python:

| Name | Type name | Type Category | Description | Example |
| :--- | :--- | :--- | :--- | :--- |
| integer | `int` | Numeric Type | positive/negative whole numbers | `22` |
| floating point number | `float` | Numeric Type | real number in decimal form | `3.14159` |
| boolean | `bool` | Boolean Values | true or false | `True` |
| string | `str` | Sequence Type | text | `"Hello World!"` |
| list | `list` | Sequence Type | a collection of objects - mutable & ordered | `['text entry', True, 16]` |
| tuple | `tuple` | Sequence Type | a collection of objects - immutable & ordered | `(51.02, -0.98)` |
| dictionary | `dict` | Mapping Type | mapping of key-value pairs | `{'name':'Ada', 'subject':'computer science'}` |
| none | `NoneType` | Null Object | represents no value | `None` |
| function | `function` | Function | Represents a function | `def add_one(x): return x+1` |

Brackets

You may notice that there are several kinds of brackets that appear in the code we've seen so far, including `[]`, `{}`, and `()`. These can play different roles depending on the context, but the most common uses are:

- `[]` is used to denote a list, eg `['a', 'b']`, or to signify accessing a position using an index, eg `vector[0]` to get the first entry of a variable called vector.
- `{}` is used to denote a set, eg `{'a', 'b'}`, or a dictionary (with pairs of terms), eg `{'first_letter': 'a', 'second_letter': 'b'}`.
- `()` is used to denote a tuple, eg `('a', 'b')`, or the arguments to a function, eg `function(x)` where `x` is the input passed to the function, *or* to indicate the order operations are carried out.

A short sketch showing each of these bracket types in action follows the next code example.

Lists and slicing

Lists are a really useful way to work with lots of data at once. They're defined with square brackets, with entries separated by commas. You can also construct them by appending entries:
list_example.append("one more entry") print(list_example)
_____no_output_____
MIT
code-basics.ipynb
lukestein/coding-for-economists
And you can access earlier entries using an index, which begins at 0 and ends at one less than the length of the list (this is the convention in many programming languages). For instance, to print specific entries at the start, using `0`, and end, using `-1`:
print(list_example[0])
print(list_example[-1])
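Slicing, from the section title, works in a similar way but picks out a range of entries rather than a single one: `some_list[start:stop]` returns everything from position `start` up to, but not including, position `stop`. A quick sketch:

```python
print(list_example[1:3])   # entries at positions 1 and 2
print(list_example[:2])    # the first two entries
print(list_example[2:])    # everything from position 2 onwards
```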
_____no_output_____
MIT
code-basics.ipynb
lukestein/coding-for-economists