Data Preprocessing

Since most machine learning models in the Sklearn library cannot handle string (categorical) data or null values, we have to explicitly remove or replace them. The snippet below defines functions that replace null values, if any exist, and encode string categorical columns as numeric indicator (dummy) columns.
def NullClearner(df):
    # Numeric columns: replace missing values with the column mean.
    if isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"]):
        df.fillna(df.mean(), inplace=True)
        return df
    # Categorical columns: replace missing values with the most frequent value.
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    # One-hot encode categorical columns.
    return pd.get_dummies(df)
_____no_output_____
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Calling preprocessing functions on the feature and target set.
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
_____no_output_____
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Correlation Map

To check the correlation between the features, we plot a correlation matrix. It is an effective way to summarize a large amount of data when the goal is to spot patterns.
f, ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt='.1f', ax=ax, mask=matrix)
plt.show()
_____no_output_____
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Data Splitting

The train-test split is a procedure for evaluating the performance of an algorithm. It divides a dataset into two subsets: the first is used to fit/train the model, and the second is used for prediction. The goal is to estimate how the model performs on new data.
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
_____no_output_____
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Model

Elastic Net first emerged as a result of critique of the Lasso, whose variable selection can be too dependent on the data and thus unstable. The solution is to combine the penalties of Ridge regression and Lasso to get the best of both worlds.

Features of ElasticNet Regression:
* It combines the L1 and L2 approaches.
* It performs a more efficient regularization process.
* It has two parameters to be set, λ and α.

Model Tuning Parameters:
* **alpha: float, default=1.0** -> Constant that multiplies the penalty terms. alpha = 0 is equivalent to an ordinary least squares, solved by the LinearRegression object.
* **l1_ratio: float, default=0.5** -> The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
* **fit_intercept: bool, default=True** -> Whether the intercept should be estimated or not. If False, the data is assumed to be already centered.
* **normalize: bool, default=False** -> Ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm.
* **precompute: bool or array-like of shape (n_features, n_features), default=False** -> Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as an argument. For sparse input this option is always False to preserve sparsity.
* **max_iter: int, default=1000** -> The maximum number of iterations.
* **copy_X: bool, default=True** -> If True, X will be copied; else, it may be overwritten.
* **tol: float, default=1e-4** -> The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
* **warm_start: bool, default=False** -> When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution.
* **positive: bool, default=False** -> When set to True, forces the coefficients to be positive.
* **random_state: int, RandomState instance, default=None** -> The seed of the pseudo random number generator that selects a random feature to update. Used when selection == 'random'. Pass an int for reproducible output across multiple function calls.
* **selection: {'cyclic', 'random'}, default='cyclic'** -> If set to 'random', a random coefficient is updated every iteration rather than looping over features sequentially by default. This often leads to significantly faster convergence, especially when tol is higher than 1e-4.

Robust Scaler

Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean/variance in a negative way. In such cases, the median and the interquartile range often give better results. This scaler removes the median and scales the data according to the quantile range (defaults to IQR: interquartile range). The IQR is the range between the 1st quartile (25th percentile) and the 3rd quartile (75th percentile).

Power Transformer

Apply a power transform featurewise to make the data more Gaussian-like. Power transforms are a family of parametric, monotonic transformations applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
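As a hedged illustration of the tuning parameters listed above (not part of the original notebook), the same pipeline could also be built with a few of them set explicitly; the values below are illustrative placeholders, not tuned ones.

# A minimal sketch: the pipeline used below, with illustrative (untuned)
# ElasticNet hyperparameters set explicitly.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler, PowerTransformer
from sklearn.linear_model import ElasticNet

tuned_model = make_pipeline(
    RobustScaler(),
    PowerTransformer(),
    ElasticNet(alpha=0.5, l1_ratio=0.7, max_iter=5000, random_state=42),
)
# tuned_model.fit(x_train, y_train)  # same training data as the cell below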
model = make_pipeline(RobustScaler(), PowerTransformer(), ElasticNet())
model.fit(x_train, y_train)
_____no_output_____
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Model Accuracy

We will use the trained model to make predictions on the test set, and then use the predicted values to measure the accuracy of our model.

score: The score function returns the coefficient of determination R² of the prediction.
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) #prediction on testing set prediction=model.predict(x_test)
_____no_output_____
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Model Evaluation

r2_score: The r2_score function computes the proportion of the variance in the target that is explained by our model.

MAE: The mean absolute error function calculates the average absolute distance between the real data and the predicted data.

MSE: The mean squared error function squares the errors before averaging, penalizing the model for large errors.
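For reference, the standard formulations of these metrics (added here for clarity, not from the original notebook) are:

$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert, \qquad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2, \qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \qquad R^2 = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2}$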
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ", r2_score(y_test, prediction))
R-squared score : 0.7453424175665325
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Prediction Plot

Finally, we plot the first few actual observations from the test set and, on the same axes, the corresponding model predictions, to visually compare true and predicted values.
plt.figure(figsize=(14, 10))
plt.plot(range(20), y_test[0:20], color="green")
plt.plot(range(20), model.predict(x_test[0:20]), color="red")
plt.legend(["Actual", "Prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
_____no_output_____
Apache-2.0
Regression/Linear Models/ElasticNet_RobustScaler_PowerTransformer.ipynb
shreepad-nade/ds-seed
Pokemon

Context

Pokémon (ポケモン, Pokemon) is one of the video game series that Satoshi Tajiri created for several platforms, especially the Game Boy, and which, thanks to its popularity, expanded into other entertainment media such as television series, card games and clothing, becoming a trademark recognized in the world market. By 1 December 2006 it had reached 175 million copies sold (including the Pikachu edition of the Nintendo 64 console), ranking second among Nintendo's best-selling video game franchises. The Pokémon saga was created on 27 February 1996 in Japan. It is developed by the Japanese software company Game Freak, with characters created by Satoshi Tajiri for the toy company Creatures Inc., and distributed and/or published by Nintendo. The mission of the protagonists of these games is to capture and train Pokémon, which currently number 806 different kinds. The possibility of trading them with other players increased the popularity of the Pokémon games and drove the success of Pokémon game, television, film and merchandising sales.

![title](img/pokemon.jpg)

[1] Introduction

In this assignment we want to analyse the Pokemon data in order to extract characteristic information that broadens our knowledge and helps us better understand the relationships between them. For this, two datasets (obtained from the [Kaggle](https://www.kaggle.com/) platform) will be used; they complement each other and contain the data needed for the analysis we want to carry out.

Data

The datasets used are:
* [Pokemon information](https://www.kaggle.com/rounakbanik/pokemon)
  * ***pokemon.csv:*** File containing the Pokemon data with the fields:
    * ***abilities***: List of some of the abilities it can acquire. (Categorical)
    * ***against_?***: Weakness against a specific type (against_fire, against_electric, etc.). (Numerical)
    * ***attack***: Attack points. (Numerical)
    * ***base_egg_steps***: Number of steps required for the Pokemon's egg to hatch. (Numerical)
    * ***base_happiness***: Base happiness. (Numerical)
    * ***capture_rate***: Capture probability. (Numerical)
    * ***classification***: Classification of the Pokemon according to the Sun/Moon Pokedex description. (Categorical)
    * ***defense***: Defense points. (Numerical)
    * ***experience_growth***: Experience growth. (Numerical)
    * ***height_m***: Height in metres. (Numerical)
    * ***hp***: Hit points. (Numerical)
    * ***japanese_name***: Original Japanese name. (Categorical)
    * ***name***: Name of the Pokemon. (Categorical)
    * ***percentage_male***: Percentage of males. (Numerical)
    * ***pokedex_number***: Entry number in the Pokedex. (Numerical)
    * ***sp_attack***: Special attack. (Numerical)
    * ***sp_defense***: Special defense. (Numerical)
    * ***speed***: Speed. (Numerical)
    * ***type1***: Primary type. (Categorical)
    * ***type2***: Secondary type. (Categorical)
    * ***weight_kg***: Weight in kilograms. (Numerical)
    * ***generation***: First generation in which the Pokemon appeared. (Categorical)
    * ***is_legendary***: Whether or not it is legendary. (Categorical)
* [Combat information](https://www.kaggle.com/terminus7/pokemon-challenge)
  * ***combats.csv:*** File containing information about hypothetical combats
    * ***First_pokemon***: Pokedex identifier of the first Pokemon in the combat.
    * ***Second_pokemon***: Pokedex identifier of the second Pokemon in the combat.
    * ***Winner***: Pokedex identifier of the winner.

What do we want to achieve?

With this data we want to answer the following questions:
* How many Pokemon are there in each generation?
* How many are legendary and how are they spread across the generations?
* Which legendary is the strongest and which the weakest?
* How are the types distributed?
* Which combinations of types (*type1* and *type2*) exist?
* How is weight distributed, and which Pokemon have the lowest and highest weight (in kg)?
* How is height distributed, and which Pokemon have the lowest and highest height (in m)?
* How is speed distributed, and which Pokemon are the slowest and fastest?
* How are attack and defense distributed, and which Pokemon have the lowest and highest attack and defense?
* What is the result of comparing base attack, special attack, defense and special defense?
* Can rock- and fire-type Pokemon be considered to have the same weight?

These questions can be answered by analysing the data in the *Pokemon information* dataset (*pokemon.csv*), but we want to go one step further and build a predictive model capable of predicting which Pokemon would win a combat. For this, the *Combat information* dataset (*combats.csv*) is added. With the model built, a tournament with 16 Pokemon will be simulated and we will try to guess which of them would be the winner.

---

[2] Integration and selection

Imports

In this assignment the following libraries are used:
* *pandas*: To work with *DataFrames* (internally it uses *numpy*).
* *matplotlib* and *seaborn*: For the plots.
* *missingno*: For missing-value plots.
* *scipy*: For the statistical tests.
* *scikit-learn*: To build the predictive models.
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
# conda install -c conda-forge missingno
import missingno as msno
import scipy as sp

path_folder = './datasets'
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Loading the data

Pokemon_info dataset
pokemon_info_df = pd.read_csv(path_folder+'/pokemon.csv')
# DataFrame dimensions (rows, columns)
print(pokemon_info_df.shape)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
There are **42 variables** and **801 records**. What are the different variable types?
print(pokemon_info_df.dtypes.unique())
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
There are variables of type:
* ***O***: Categorical.
* ***float64***: Real.
* ***int64***: Integer.

What type is each variable?
# Variable types
print(pokemon_info_df.dtypes)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Distribution of the variable types.
pd.value_counts(pokemon_info_df.dtypes).plot.bar()
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Note:** As can be seen, there are many variables of type ***float64*** and ***int64***; given the domain of these variables, their type could probably be changed to **float32** and **int32** to reduce the amount of memory used.

Variable selection

Based on the questions posed in the first section, the following variables are selected from this dataset:
* name
* pokedex_number
* generation
* type1
* type2
* is_legendary
* attack
* sp_attack
* defense
* sp_defense
* speed
* hp
* height_m
* weight_kg
* against_?
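Regarding the memory note above, a hedged sketch of how the 64-bit numeric columns could be downcast; this step is not actually performed in the analysis.

# A minimal sketch, not executed in this analysis: downcast the 64-bit numeric
# columns of pokemon_info_df to their 32-bit equivalents to save memory.
float_cols = pokemon_info_df.select_dtypes(include=["float64"]).columns
int_cols = pokemon_info_df.select_dtypes(include=["int64"]).columns
pokemon_info_df[float_cols] = pokemon_info_df[float_cols].astype("float32")
pokemon_info_df[int_cols] = pokemon_info_df[int_cols].astype("int32")
print(pokemon_info_df.memory_usage(deep=True).sum())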
pokemon_info_df = pokemon_info_df[[
    "name", "pokedex_number", "generation", "type1", "type2",
    "is_legendary", "attack", "sp_attack", "defense", "sp_defense",
    "speed", "hp", "height_m", "weight_kg",
    "against_bug", "against_dark", "against_dragon", "against_electric",
    "against_fairy", "against_fight", "against_fire", "against_flying",
    "against_ghost", "against_grass", "against_ground", "against_ice",
    "against_normal", "against_poison", "against_psychic", "against_rock",
    "against_steel", "against_water"
]]
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
*pokemon_battles dataset*
pokemon_battles_df = pd.read_csv(path_folder+'/combats.csv')
print(pokemon_battles_df.shape)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
There are **38,743 records** and **3 variables**. What type are they?
print(pokemon_battles_df.dtypes.unique())
print(pokemon_battles_df.dtypes)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Note:** All the variables are integers (*int64*).

Variable selection

In this dataset all the variables are needed, so no selection is made.

---

[3] Data cleaning

Once we know which variables are available for the analysis and their types, it is important to explore which of them have missing values and whether that makes them unusable.
# Is there any field in the whole DataFrame with a missing value?
print(pokemon_info_df.isnull().values.any())
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Missing values

Which fields have missing values?
pokemon_info_mv_list = pokemon_info_df.columns[pokemon_info_df.isnull().any()].tolist()
print(pokemon_info_mv_list)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The variables **height_m**, **percentage_male**, **type2** and **weight_kg** have missing values, but how many records are affected?
def missing_values(df, fields):
    n_rows = df.shape[0]
    for field in fields:
        n_missing_values = df[field].isnull().sum()
        print("%s: %d (%.3f)" % (field, n_missing_values, n_missing_values/n_rows))

msno.bar(pokemon_info_df[pokemon_info_mv_list], color="#b2ff54", labels=True)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
How are the missing values distributed as a function of the Pokemon's Pokedex order?
msno.matrix(pokemon_info_df[pokemon_info_mv_list])
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The variable **height_m** has 20 records with no value (2.5%), **type2** 384 (48%) and **weight_kg** 20 (2.5%).

Imputing the missing values

To impute the missing values correctly, we first need to look at the other values of each of these variables. So let's see which distinct values each variable takes.

**type2**
print(pokemon_info_df[pokemon_info_df['type2'].notnull()]['type2'].unique())
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
As can be seen, there are 18 different Pokemon types in the **type2** variable. **Since this is an arbitrary property defined by the Pokemon's designer, it makes no sense to impute a value based on similarity with other Pokemon, so we decided to assign the arbitrary label (*unknown*) to mark missing values.**
pokemon_info_df['type2'].fillna('unknown', inplace=True)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**height_m**

Since there are only 20 records with no value for this variable and the number of records is well above 50, they can be discarded. To do so we assign the value 0, which marks that the data does not exist, since a Pokemon with no height makes no sense.

**Note:** If the number of records were below 50, a solution could be implemented based on a **simple linear regression model** where the **variable to predict** is the **height** and the **predictor variable** is the **weight**. This prediction could be done by grouping the Pokemon by type, and only for groups where the Pearson correlation factor is above 0.7 or below -0.7.
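For reference, the alternative described in the note could look roughly like the following sketch; the helper name and the exact handling of the 0.7 threshold are illustrative assumptions, and this code is not used in the analysis.

# A minimal sketch, not used here: impute height_m from weight_kg with a
# per-type linear regression, only when |Pearson r| > 0.7 within that type.
from sklearn.linear_model import LinearRegression

def impute_height_by_regression(df, r_threshold=0.7):
    df = df.copy()
    for ptype, group in df.groupby("type1"):
        known = group.dropna(subset=["height_m", "weight_kg"])
        missing = group[group["height_m"].isnull() & group["weight_kg"].notnull()]
        if len(known) < 2 or missing.empty:
            continue
        r = known["height_m"].corr(known["weight_kg"])  # Pearson by default
        if abs(r) > r_threshold:
            reg = LinearRegression().fit(known[["weight_kg"]], known["height_m"])
            df.loc[missing.index, "height_m"] = reg.predict(missing[["weight_kg"]])
    return df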
pokemon_info_df['height_m'].fillna(0, inplace=True)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**weight_kg**

Same as with the **height_m** variable.
pokemon_info_df['weight_kg'].fillna(0, inplace=True)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Now we can check that there is no *na* value left in the whole *dataset*.
print(pokemon_info_df.columns[pokemon_info_df.isnull().any()].tolist() == [])
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Outliers

Extreme values or *outliers* are those that fall outside the range that can be considered normal for a numerical variable. There are different ways of detecting them; one of the most common is to consider as outliers all values below *Q1* - 1.5 * *IQR* or above *Q3* + 1.5 * *IQR*.

The outlier analysis will be done on the variables *attack*, *sp_attack*, *defense*, *sp_defense*, *speed*, *hp*, *height_m* and *weight_kg*.
def print_min_max(var):
    data = pokemon_info_df[var]
    data = sorted(data)
    q1, q2, q3 = np.percentile(data, [25, 50, 75])
    iqr = q3 - q1
    lower_bound = q1 - (1.5 * iqr)
    upper_bound = q3 + (1.5 * iqr)
    data_pd = pokemon_info_df[var]
    outliers = data_pd[(data_pd < lower_bound) | (data_pd > upper_bound)]
    print("{} - minimum: {}, median: {}, maximum: {}, number of outliers: {}".format(
        var, min(pokemon_info_df[var]), q2, max(pokemon_info_df[var]), len(outliers)))

print_min_max("attack")
print_min_max("sp_attack")
print_min_max("defense")
print_min_max("sp_defense")
print_min_max("speed")
print_min_max("hp")
print_min_max("weight_kg")
print_min_max("height_m")
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
One way to represent this information is with box plots (*boxplots*).
plt.subplots(figsize=(15, 10))
sns.boxplot(data=pokemon_info_df[['attack', 'sp_attack', 'defense', 'sp_defense', 'speed',
                                  'hp', 'weight_kg', 'height_m']], orient='v')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
All of the variables analysed have relatively few outliers, and those they have are not very pronounced, with the exception of *weight_kg*. Since that variable will not be used to build the predictive model, we accept the risk of working with the extreme values and do not remove them from the set.

Saving the preprocessed data

Once the integration, filtering and cleaning stage is finished, the data is saved to an intermediate file named *pokemon_clean_data.csv*.
pokemon_info_df.to_csv(path_folder+'/pokemon_clean_data.csv')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
[4, 5]. Descriptive analysis

Generations

How many generations are there?
print("Hi ha %d generacions de Pokemons" %(pokemon_info_df["generation"].nunique()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Distribution of the Pokemon by generation

How are the Pokemon distributed according to the first generation in which they appeared?
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))

# Bar chart
sns.countplot(x="generation", data=pokemon_info_df, ax=ax1)

# Pie chart
sector_diagram = pd.value_counts(pokemon_info_df.generation)
sector_diagram.plot.pie(startangle=90, autopct='%1.1f%%', shadow=False,
                        explode=(0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05), ax=ax2)
plt.axis("equal")
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Which are the three generations in which the most Pokemon appeared?
print("5na generació -> %d Pokemons"%(len(pokemon_info_df[pokemon_info_df["generation"] == 5]))) print("1ra generació -> %d Pokemons"%(len(pokemon_info_df[pokemon_info_df["generation"] == 1]))) print("3era generació -> %d Pokemons"%(len(pokemon_info_df[pokemon_info_df["generation"] == 3])))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The generation with the most Pokemon is the **5th** with **156 (19.5%)**, followed by the **1st** generation with **151 (18.9%)** and finally the **3rd** generation with **135 Pokemon (16.9%)**. These three generations together contain **55.3%** of all Pokemon.

Legendary Pokemon

There are Pokemon that stand out above the rest because of their special characteristics. They are often linked to legends of the past, which is why they are known as legendaries. What can we say about these Pokemon? How many legendary Pokemon are there?
print("Nombre total de Pokemons llegendaris: {}".format(len(pokemon_info_df[pokemon_info_df["is_legendary"] == True])))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
In total there are **70 legendary Pokemon.**

Distribution of the legendary Pokemon

In which editions do these Pokemon appear?
pokemon_legendary_df = pokemon_info_df[pokemon_info_df["is_legendary"] == True]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))

# Bar chart
sns.countplot(x="generation", data=pokemon_legendary_df, ax=ax1)

# Pie chart
sector_diagram = pd.value_counts(pokemon_legendary_df.generation)
sector_diagram.plot.pie(startangle=90, autopct='%1.1f%%', shadow=False,
                        explode=(0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05), ax=ax2)
plt.axis("equal")

print("7th generation -> %d Pokemon" % (len(pokemon_legendary_df[pokemon_legendary_df["generation"] == 7])))
print("4th generation -> %d Pokemon" % (len(pokemon_legendary_df[pokemon_legendary_df["generation"] == 4])))
print("5th generation -> %d Pokemon" % (len(pokemon_legendary_df[pokemon_legendary_df["generation"] == 5])))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The **7th generation** has **17 legendary Pokemon (24.3%)**, the **4th** has **13 (18.6%)** and the **5th 13**. These **three generations** together account for **61.5% of the legendary Pokemon**.

Types of the legendary Pokemon

Which types (*type1* and *type2*) predominate among the legendary Pokemon?
def plot_by_type(dataFrame, title):
    plt.subplots(figsize=(15, 13))
    sns.heatmap(
        dataFrame[dataFrame["type2"] != "unknown"].groupby(["type1", "type2"]).size().unstack(),
        cmap="Blues",
        linewidths=1,
        annot=True
    )
    plt.xticks(rotation=35)
    plt.title(title)
    plt.show()

plot_by_type(pokemon_legendary_df, "Legendary Pokemon by type")
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The types **psychic/ghost**, **fire/flying**, **electric/flying**, **bug/fighting** and **dragon/psychic** are the types with the most legendary Pokemon, all of them with 2 specimens.

Strongest legendary Pokemon

Which is the legendary Pokemon with the highest mean attack (attack), defense (defense), hit points (hp) and speed?
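In formula form, the score computed in the following cell is the sum of the min-max normalised attack, defense, hp and speed (a restatement of the code, added for clarity):

$strong_i = \sum_{v \in \{attack,\ defense,\ hp,\ speed\}} \frac{v_i - \min(v)}{\max(v) - \min(v)}$

so each legendary Pokemon scores between 0 and 4.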
legendary_with_more_attack = max(pokemon_legendary_df['attack'])
legendary_with_less_attack = min(pokemon_legendary_df['attack'])
legendary_with_more_defense = max(pokemon_legendary_df['defense'])
legendary_with_less_defense = min(pokemon_legendary_df['defense'])
legendary_with_more_hp = max(pokemon_legendary_df['hp'])
legendary_with_less_hp = min(pokemon_legendary_df['hp'])
legendary_with_more_speed = max(pokemon_legendary_df['speed'])
legendary_with_less_speed = min(pokemon_legendary_df['speed'])

# Add the "strong" field, computed from the normalised attack, defense, hp and speed.
pokemon_legendary_df["strong"] = (
    (pokemon_legendary_df['attack'] - legendary_with_less_attack) / (legendary_with_more_attack - legendary_with_less_attack)
    + (pokemon_legendary_df['defense'] - legendary_with_less_defense) / (legendary_with_more_defense - legendary_with_less_defense)
    + (pokemon_legendary_df['hp'] - legendary_with_less_hp) / (legendary_with_more_hp - legendary_with_less_hp)
    + (pokemon_legendary_df['speed'] - legendary_with_less_speed) / (legendary_with_more_speed - legendary_with_less_speed)
)

print(pokemon_legendary_df["strong"])
pokemon_legendary_df[pokemon_legendary_df["strong"] == max(pokemon_legendary_df["strong"])][["name", "strong"]]
pokemon_legendary_df[pokemon_legendary_df["strong"] == min(pokemon_legendary_df["strong"])][["name", "strong"]]
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Based on this calculation, we can consider the strongest legendary Pokemon to be **Groudon**, with a weighting of 2.44 points, and the weakest **Cosmog**, with a weighting of 0 points.

*Type1* and *type2*

Each Pokemon is of a specific type, **type1**, or a combination of **type1** and **type2**; for this reason, some of them have no **type2** (as seen in the previous section).

Pokemon with a single type and with a dual type.
single_type_pokemons = []
dual_type_pokemons = []

for i in pokemon_info_df.index:
    # A Pokemon whose type2 is "unknown" has no second type, i.e. a single type.
    if pokemon_info_df.type2[i] == "unknown":
        single_type_pokemons.append(pokemon_info_df.name[i])
    else:
        dual_type_pokemons.append(pokemon_info_df.name[i])

print("Number of Pokemon with a single type: %d" % len(single_type_pokemons))
print("Number of Pokemon with two types: %d" % len(dual_type_pokemons))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
There are **384** Pokemon with a single type (**47.9%**) and **417** with a dual type (**52.1%**); this is represented in the following pie chart.
data = [len(single_type_pokemons), len(dual_type_pokemons)]
colors = ["#ced1ff", "#76bfd4"]

plt.pie(data, labels=["Single type", "Dual type"], startangle=90, explode=(0, 0.15),
        shadow=True, colors=colors, autopct='%1.1f%%')
plt.axis("equal")
plt.title("Single type vs dual type")
plt.tight_layout()
plt.show()
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Distribution by type

The following bar charts show the distribution by **type1** and by **type2**.
def plot_distribution(data, col, xlabel, ylabel, title):
    types = pd.value_counts(data[col])
    fig, ax = plt.subplots()
    fig.set_size_inches(15, 7)
    sns.set_style("whitegrid")
    ax = sns.barplot(x=types.index, y=types, data=data)
    ax.set_xticklabels(ax.get_xticklabels(), rotation=75, fontsize=12)
    ax.set(xlabel=xlabel, ylabel=ylabel)
    ax.set_title(title)

plot_distribution(pokemon_info_df, "type1", "Primary type", "Count", "Distribution of the Pokemon by primary type (type1)")
plot_distribution(pokemon_info_df, "type2", "Secondary type", "Count", "Distribution of the Pokemon by secondary type (type2)")
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Type combinations

Now we want to know which combinations of **type1** and **type2** exist among all the Pokemon.
plt.subplots(figsize=(15, 13))
sns.heatmap(
    pokemon_info_df[pokemon_info_df["type2"] != "unknown"].groupby(["type1", "type2"]).size().unstack(),
    cmap="Blues",
    linewidths=1,
    annot=True
)
plt.xticks(rotation=35)
plt.show()
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
As can be seen, the most common type combination is **normal/flying** with **26 Pokemon**, followed by **grass/poison** and **bug/flying** with **14** and **13 Pokemon** respectively.

**Note:** In this heat map, all Pokemon without a second type have been filtered out.

Weight and height

The variable **height_m** contains the height in metres, while **weight_kg** contains the weight in kilograms. So, which Pokemon are the tallest and the shortest? And the heaviest and the lightest?
tallest_m = max(pokemon_info_df['height_m'])
shortest_m = tallest_m
for i in pokemon_info_df.index:
    # Ignore the 0 values used to mark missing heights.
    if pokemon_info_df.height_m[i] > 0 and pokemon_info_df.height_m[i] < shortest_m:
        shortest_m = pokemon_info_df.height_m[i]

tallest_pokemon = pokemon_info_df[pokemon_info_df['height_m'] == tallest_m]
shortest_pokemon = pokemon_info_df[pokemon_info_df['height_m'] == shortest_m]

print("The tallest Pokemon are:")
for i in tallest_pokemon.index:
    print("\t%s with %.2f metres" % (tallest_pokemon.name[i], tallest_pokemon.height_m[i]))

print("\nThe shortest Pokemon are:")
for i in shortest_pokemon.index:
    print("\t%s with %.2f metres" % (shortest_pokemon.name[i], shortest_pokemon.height_m[i]))

max_weight = max(pokemon_info_df['weight_kg'])
light_kg = max_weight
for i in pokemon_info_df.index:
    # Ignore the 0 values used to mark missing weights.
    if pokemon_info_df.weight_kg[i] > 0 and pokemon_info_df.weight_kg[i] < light_kg:
        light_kg = pokemon_info_df.weight_kg[i]

heviest_pokemon = pokemon_info_df[pokemon_info_df['weight_kg'] == max_weight]
lightest_pokemon = pokemon_info_df[pokemon_info_df['weight_kg'] == light_kg]

print("The heaviest Pokemon are:")
for i in heviest_pokemon.index:
    print("\t%s with %.2f kilograms" % (heviest_pokemon.name[i], heviest_pokemon.weight_kg[i]))

print("\nThe lightest Pokemon are:")
for i in lightest_pokemon.index:
    print("\t%s with %.2f kilograms" % (lightest_pokemon.name[i], lightest_pokemon.weight_kg[i]))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Distribution of height and weight

Now we want to see the distribution of the Pokemon's height and weight; for this, histograms and box plots can be used.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
sns.distplot(pokemon_info_df['height_m'], color='g', axlabel="Height (m)", ax=ax1)
sns.distplot(pokemon_info_df['weight_kg'], color='y', axlabel="Weight (kg)", ax=ax2)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
sns.boxplot(x=pokemon_info_df["height_m"], color="g", orient="v", ax=ax1)
sns.boxplot(x=pokemon_info_df["weight_kg"], color="y", orient="v", ax=ax2)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
As can be seen, there are some Pokemon whose height and weight are very dispersed with respect to the rest.

Speed

Which are the fastest Pokemon and which the slowest?
fast_value = max(pokemon_info_df['speed'])
slow_value = min(pokemon_info_df[pokemon_info_df['speed'] != 0]['speed'])

fastest_pokemon = pokemon_info_df[pokemon_info_df['speed'] == max(pokemon_info_df['speed'])]
slowest_pokemon = pokemon_info_df[pokemon_info_df['speed'] == slow_value]

print("The fastest Pokemon are:")
for i in fastest_pokemon.index:
    print("\t%s with a speed of %.f points" % (fastest_pokemon.name[i], fastest_pokemon.speed[i]))

print("The slowest Pokemon are:")
for i in slowest_pokemon.index:
    print("\t%s with a speed of %.f points" % (slowest_pokemon.name[i], slowest_pokemon.speed[i]))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Distribution of speed
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
sns.distplot(pokemon_info_df['speed'], color="orange", ax=ax1)
sns.boxplot(pokemon_info_df['speed'], color="orange", orient="v", ax=ax2)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Attack and defense

The following plots compare the base attack against the base special attack, and the base defense against the base special defense.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))

ax1.title.set_text("Attack vs Special Attack")
sns.distplot(pokemon_info_df['attack'], color="#B8F0FC", hist=False, ax=ax1, label="Attack")
sns.distplot(pokemon_info_df["sp_attack"], color="#52BAD0", hist=False, ax=ax1, label="S. Attack")

ax2.title.set_text("Defense vs Special Defense")
sns.distplot(pokemon_info_df['defense'], color="#C6FFBF", hist=False, ax=ax2, label="Defense")
sns.distplot(pokemon_info_df["sp_defense"], color="#61D052", hist=False, ax=ax2, label="S. Defense")
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The following plots compare the base attack against the base defense, and the base special attack against the base special defense.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))

ax1.title.set_text("Attack vs Defense")
sns.distplot(pokemon_info_df['attack'], color="#B8F0FC", hist=False, ax=ax1, label="Attack")
sns.distplot(pokemon_info_df["defense"], color="#52BAD0", hist=False, ax=ax1, label="Defense")

ax2.title.set_text("Special Attack vs Special Defense")
sns.distplot(pokemon_info_df['sp_attack'], color="#C6FFBF", hist=False, ax=ax2, label="Special Attack")
sns.distplot(pokemon_info_df["sp_defense"], color="#61D052", hist=False, ax=ax2, label="Special Defense")
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
[4]. Distribution of the variables

In this section we study the distribution followed by some of the variables and apply hypothesis tests with the aim of drawing conclusions based on the Pokemon types. We decided to study the variables *attack*, *hp*, *defense* and *speed*.

Normality of the distribution

A *Shapiro-Wilk* normality test is applied to see whether they follow a [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution). This test poses the following hypothesis contrast:

$H_{0}: X$ is normal

$H_{1}: X$ is not normal
print('attack:', sp.stats.shapiro(pokemon_info_df['attack'].to_numpy()))
print('hp:', sp.stats.shapiro(pokemon_info_df['hp'].to_numpy()))
print('defense:', sp.stats.shapiro(pokemon_info_df['defense'].to_numpy()))
print('speed:', sp.stats.shapiro(pokemon_info_df['speed'].to_numpy()))
print('height_m:', sp.stats.shapiro(pokemon_info_df['height_m'].to_numpy()))
print('weight_kg:', sp.stats.shapiro(pokemon_info_df['weight_kg'].to_numpy()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The tests for the variables *attack*, *hp*, *defense*, *speed*, *height_m* and *weight_kg* returned a *p-value* below the significance level ($\alpha$ = 0.05); therefore there is sufficient statistical evidence to reject the null hypothesis and accept that they do not follow a normal distribution.

Homoscedasticity

**Weight** of **rock**- and **fire**-type Pokemon

Now we want to know whether there is a difference in variance (heteroscedasticity) or not (homoscedasticity) for the variable *weight_kg* depending on whether the primary type is rock (*rock*) or fire (*fire*). For this, a **Fligner-Killeen** test is applied (this test is used because it is non-parametric and, as seen above, the data did not pass the normality test), with the following contrast:

$H_{0}$: The variance of $X_{0}$ and $X_{1}$ is homogeneous.

$H_{1}$: The variance of $X_{0}$ and $X_{1}$ is heterogeneous.

The contrast is made with a significance level of $\alpha = 0.05$.
rock_pokemons_array = pokemon_info_df[(pokemon_info_df['type1'] == 'rock')
                                      & (pokemon_info_df['weight_kg'] != 0)]['weight_kg'].to_numpy()
fire_pokemons_array = pokemon_info_df[(pokemon_info_df['type1'] == 'fire')
                                      & (pokemon_info_df['weight_kg'] != 0)]['weight_kg'].to_numpy()

sp.stats.fligner(rock_pokemons_array, fire_pokemons_array)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Since a ***p-value*** of **0.044** was obtained (0.044 < $\alpha$), **there is sufficient statistical evidence to reject the null hypothesis**; therefore, with a **95% confidence level**, we accept that **there is a difference between the variances of the rock-type and fire-type Pokemon.**

Contrast: weight of rock- and fire-type Pokemon

Now we want to answer the question: can rock- and fire-type Pokemon be considered to have the same mean weight? For this, a **t-test** can be applied with the following hypothesis contrast:

$H_0:$ $\mu_{1}-\mu_{2} = 0$ (the mean of **weight_kg** is the same for rock- and fire-type Pokemon)

$H_1:$ $\mu_{1}-\mu_{2} \ne 0$ (the mean of **weight_kg** is not the same for rock- and fire-type Pokemon)

where $\alpha = 0.05$.

**Note:** Although the variable **weight_kg** does not follow a normal distribution, since the sample size is considerably larger than 30, normality can be assumed by the central limit theorem.
sp.stats.ttest_ind(a = rock_pokemons_array, b = fire_pokemons_array)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Since a ***p-value*** of **0.137** was obtained, **there is not enough statistical evidence to reject the null hypothesis**; therefore **the mean weight of rock-type and fire-type Pokemon can be considered the same.**

[4, 5] Predictive analysis

At this point **the descriptive analysis is considered finished and we move on to the predictive analysis**, with the aim of creating a model that can **guess which of two Pokemon would win a combat**. The problem to solve **is a classification problem with labelled data** (supervised model); therefore, several simple models will be created, taking accuracy as the only goodness measure of the model.

To measure the *accuracy*, the *k-fold cross-validation* technique will be applied with a value of 10 for *k*.

Pokemon_battles_df

The *dataset* analysed so far does not contain the combat information, so the data is complemented with the *pokemon_battles_df* dataset, which has the following fields:
* ***First_pokemon***: Pokedex index of the first contender.
* ***Second_pokemon***: Pokedex index of the second contender.
* ***Winner***: Pokedex index of the winner.
pokemon_battles_df
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The first thing to do is relate the *dataset* containing the Pokemon information (*pokemon_info_df*) to the combats *dataset* (*pokemon_battles_df*). For this we apply two *joins*: the first relates these two datasets to obtain the data of the first Pokemon, and the second *join* relates them again, this time to obtain the information of the second Pokemon involved.
pokemon_battles_info_df = pokemon_battles_df.merge(
    pokemon_info_df, left_on='First_pokemon', right_on='pokedex_number'
).merge(
    pokemon_info_df, left_on='Second_pokemon', right_on='pokedex_number'
)[['First_pokemon', 'Second_pokemon', 'Winner',
   'name_x', 'attack_x', 'sp_attack_x', 'defense_x', 'sp_defense_x',
   'hp_x', 'speed_x', 'type1_x', 'is_legendary_x',
   'name_y', 'attack_y', 'sp_attack_y', 'defense_y', 'sp_defense_y',
   'hp_y', 'speed_y', 'type1_y', 'is_legendary_y']]
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The resulting *dataset* names the columns from the first join *field_x* and those from the second join *field_y*. We apply a *rename* so that the *field_x* columns start with *First_pokemon* and the *field_y* columns with *Second_pokemon*.
pokemon_battles_info_df.rename(columns={
    'name_x': 'First_pokemon_name', 'attack_x': 'First_pokemon_attack',
    'sp_attack_x': 'First_pokemon_sp_attack', 'defense_x': 'First_pokemon_defense',
    'sp_defense_x': 'First_pokemon_sp_defense', 'hp_x': 'First_pokemon_hp',
    'speed_x': 'First_pokemon_speed', 'type1_x': 'First_pokemon_type1',
    'is_legendary_x': 'First_pokemon_is_legendary', 'name_y': 'Second_pokemon_name',
    'attack_y': 'Second_pokemon_attack', 'sp_attack_y': 'Second_pokemon_sp_attack',
    'defense_y': 'Second_pokemon_defense', 'sp_defense_y': 'Second_pokemon_sp_defense',
    'hp_y': 'Second_pokemon_hp', 'speed_y': 'Second_pokemon_speed',
    'type1_y': 'Second_pokemon_type1', 'is_legendary_y': 'Second_pokemon_is_legendary'
}, inplace=True)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
*Diff_?* fields

To build the predictive model, we compute fields with the differences between the corresponding properties of the two contenders. These are called *Diff_?*. For example, the attack difference is: *Diff_attack* = *First_pokemon_attack* - *Second_pokemon_attack*.
pokemon_battles_info_df['Diff_attack'] = pokemon_battles_info_df['First_pokemon_attack'] - pokemon_battles_info_df['Second_pokemon_attack']
pokemon_battles_info_df['Diff_sp_attack'] = pokemon_battles_info_df['First_pokemon_sp_attack'] - pokemon_battles_info_df['Second_pokemon_sp_attack']
pokemon_battles_info_df['Diff_defense'] = pokemon_battles_info_df['First_pokemon_defense'] - pokemon_battles_info_df['Second_pokemon_defense']
pokemon_battles_info_df['Diff_sp_defense'] = pokemon_battles_info_df['First_pokemon_sp_defense'] - pokemon_battles_info_df['Second_pokemon_sp_defense']
pokemon_battles_info_df['Diff_hp'] = pokemon_battles_info_df['First_pokemon_hp'] - pokemon_battles_info_df['Second_pokemon_hp']
pokemon_battles_info_df['Diff_speed'] = pokemon_battles_info_df['First_pokemon_speed'] - pokemon_battles_info_df['Second_pokemon_speed']
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
*Winner_result* field

Since the goal of this predictive model is a classification whose result is 0 if the first Pokemon wins and 1 otherwise, we add the ***Winner_result*** field with that computation.
pokemon_battles_info_df['Winner_result'] = np.where(
    pokemon_battles_info_df['First_pokemon'] == pokemon_battles_info_df['Winner'], 0, 1)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Selecting the model fields

Now we create the ***pokemon_battles_pred*** array with the fields that will be used as predictors:
* ***Diff_attack***
* ***Diff_sp_attack***
* ***Diff_defense***
* ***Diff_sp_defense***
* ***Diff_hp***
* ***Diff_speed***
* ***First_pokemon_is_legendary***
* ***Second_pokemon_is_legendary***

and the ***pokemon_battles_res*** array with the result field, which is *Winner_result*.
pokemon_battles_pred = pokemon_battles_info_df[['Diff_attack', 'Diff_sp_attack',
                                                'Diff_defense', 'Diff_sp_defense',
                                                'Diff_hp', 'Diff_speed',
                                                'First_pokemon_is_legendary',
                                                'Second_pokemon_is_legendary']].values
pokemon_battles_res = pokemon_battles_info_df['Winner_result'].values
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Scaling the data

If the value ranges of the variables used in the model are considerably different, they can distort the results obtained. To show their distribution, a *boxplot* can be used.
plt.subplots(figsize=(15, 10))
sns.boxplot(data=pokemon_battles_pred[:, 0:6], orient='v')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
As can be seen, there is a difference in the range of the data, so a robust scaling can be applied.
from sklearn.preprocessing import RobustScaler

rs = RobustScaler()
rs.fit(pokemon_battles_pred)
pokemon_battles_pred = rs.transform(pokemon_battles_pred)

plt.subplots(figsize=(15, 10))
sns.boxplot(data=pokemon_battles_pred[:, 0:6], orient='v')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Splitting the data into *training data* and *test data*

Since this is a **supervised model**, the data must be split into training and test sets. The model uses the training data to learn (training phase) and the test data to check whether what it has learned is correct (test phase).

Since **there is a relatively large number of records** (38,743), we decided to use **80% of the data for training** (30,994 records) and **20% for testing** (7,749 records).
from sklearn.model_selection import train_test_split

# The seed is set to 23 so that the same split is always obtained.
pokemon_battle_pred_train, pokemon_battle_pred_test, \
    pokemon_battle_res_train, pokemon_battle_res_test = train_test_split(
        pokemon_battles_pred, pokemon_battles_res, test_size=0.2, random_state=23)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Creating the logistic regression model
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score

classifier = LogisticRegression(random_state=0)
classifier.fit(pokemon_battle_pred_train, pokemon_battle_res_train)
pokemon_battle_results = classifier.predict(pokemon_battle_pred_test)

cm = confusion_matrix(pokemon_battle_res_test, pokemon_battle_results)
print(cm)

accuracies = cross_val_score(estimator=classifier, X=pokemon_battles_pred,
                             y=pokemon_battles_res, cv=10, scoring='accuracy')
print('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Accuracy:** 87.97%

K Nearest Neighbours (*KNN*)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

knn_classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
knn_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)
knn_pokemon_battle_results = knn_classifier.predict(pokemon_battle_pred_test)

knn_cm = confusion_matrix(pokemon_battle_res_test, knn_pokemon_battle_results)
print(knn_cm)

accuracies = cross_val_score(estimator=knn_classifier, X=pokemon_battles_pred,
                             y=pokemon_battles_res, cv=10, scoring='accuracy')
print('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Accuracy:** 87.58%

Support Vector Machine (SVM)
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

svm_classifier = SVC(kernel='rbf', random_state=0)
svm_classifier = svm_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)
svm_pokemon_battle_results = svm_classifier.predict(X=pokemon_battle_pred_test)

svm_cm = confusion_matrix(pokemon_battle_res_test, svm_pokemon_battle_results)
print(svm_cm)

accuracies = cross_val_score(estimator=svm_classifier, X=pokemon_battles_pred,
                             y=pokemon_battles_res, cv=10, scoring='accuracy')
print('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Accuracy:** 90.92%

Naive Bayes classification
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

nb_classifier = GaussianNB()
nb_classifier = nb_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)
nb_pokemon_battle_results = nb_classifier.predict(X=pokemon_battle_pred_test)

nb_cm = confusion_matrix(pokemon_battle_res_test, nb_pokemon_battle_results)
print(nb_cm)

accuracies = cross_val_score(estimator=nb_classifier, X=pokemon_battles_pred,
                             y=pokemon_battles_res, cv=10, scoring='accuracy')
print('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Accuracy:** 79.95%

Random Forest Classifier (RFC)
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rfc_classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)
rfc_classifier = rfc_classifier.fit(X=pokemon_battle_pred_train, y=pokemon_battle_res_train)
rfc_pokemon_battle_results = rfc_classifier.predict(X=pokemon_battle_pred_test)

rfc_cm = confusion_matrix(pokemon_battle_res_test, rfc_pokemon_battle_results)
print(rfc_cm)

accuracies = cross_val_score(estimator=rfc_classifier, X=pokemon_battles_pred,
                             y=pokemon_battles_res, cv=10, scoring='accuracy')
print('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Accuracy:** 92.25%

Best model

The model that obtained the best *accuracy* was the *Random Forest Classifier*, with 92.51% of correct predictions.

Improving the model (adding the Pokemon types)

As shown in previous sections, **every Pokemon has a base type** and may have a second type. These properties obviously **influence the winner of a combat**: for example, a weaker water-type Pokemon (less attack, defense, hp, etc.) can beat a fire-type Pokemon with higher stats more easily than it could beat a grass-type Pokemon.

So we compute a new property that captures the effectiveness based on the Pokemon's type. This property is defined in terms of the Pokemon's first and second type (*type1* and *type2*) and its weakness against the other types (*against_?*).

In this way, if we compare the Pokemon *Pikachu* (electric/electric) and *Onix* (rock/ground), Onix has the advantage because it has no weakness against electricity (*against_electric = 0*), while Pikachu is weak against rock (*against_rock = 1*) and against ground (*against_ground = 2*).

To obtain a numeric value, the following formula is applied:

$f(p1, p2) = g(p1, p2) - g(p2, p1)$

where:
* $g(p1, p2) = dbt1(p1, p2) \cdot ft1 + dbt2(p1, p2) \cdot ft2$
* $dbt1(p1, p2)$ = weakness of Pokemon p2 against the first type of Pokemon p1.
* $dbt2(p1, p2)$ = weakness of Pokemon p2 against the second type of Pokemon p1.
* $ft1$ = arbitrary factor to weight type 1.
* $ft2$ = arbitrary factor to weight type 2.

Thus, for the *Onix* vs *Pikachu* example, given:
* *Onix*: *type1* = rock, *type2* = ground, *against_electric* = 0
* *Pikachu*: *type1* = electric, *type2* = electric, *against_rock* = 1, *against_ground* = 2
* $ft1$ = 1
* $ft2$ = 0.3

we have:

$f(Onix, Pikachu) = (1 \cdot 1 + 2 \cdot 0.3) - (0 \cdot 1 + 0 \cdot 0.3) = 1.6$

As expected, since rock- and ground-type Pokemon have an advantage over electric-type Pokemon, a positive value is obtained.
def effectivity_against(pokemon1, pokemon2, effectivity_type1, effectivity_type2):
    type1 = pokemon1['type1'].iloc[0]
    type2 = pokemon1['type2'].iloc[0]
    against_type1 = pokemon2['against_'+type1].iloc[0]
    if type2 == 'unknown':
        return against_type1 * effectivity_type1
    else:
        against_type2 = pokemon2['against_'+type2].iloc[0]
        return (against_type1 * effectivity_type1) + (against_type2 * effectivity_type2)

def balance_effectivity_against(pokemon1, pokemon2, effectivity_type1=1, effectivity_type2=0.3):
    return effectivity_against(pokemon1, pokemon2, effectivity_type1, effectivity_type2) \
        - effectivity_against(pokemon2, pokemon1, effectivity_type1, effectivity_type2)

def balance_effectivity_against_by_pokedex_number(pokemon_number1, pokemon_number2,
                                                  effectivity_type1=1, effectivity_type2=0.3):
    pokemon1 = pokemon_info_df[pokemon_info_df['pokedex_number'] == pokemon_number1]
    pokemon2 = pokemon_info_df[pokemon_info_df['pokedex_number'] == pokemon_number2]
    return balance_effectivity_against(pokemon1, pokemon2, effectivity_type1, effectivity_type2)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
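As a quick sanity check of the worked example above, the helper can be called directly. This is a hedged sketch not present in the original notebook; it assumes Onix has Pokedex number 95 and Pikachu number 25 in pokemon.csv.

# Expected value ≈ 1.6 if the against_* fields match the worked example above.
print(balance_effectivity_against_by_pokedex_number(95, 25))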
Now the *balance_effectivity* property needs to be added to the *pokemon_battles_info_df* dataframe.
pokemon_battles_info_df['balance_effectivity'] = [
    balance_effectivity_against_by_pokedex_number(row['First_pokemon'], row['Second_pokemon'])
    for index, row in pokemon_battles_df.iterrows()
]
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The *balance_effectivity* column is included in the predictor set, creating *pokemon_battles_improved_pred*.
pokemon_battles_improved_pred = pokemon_battles_info_df[['Diff_attack', 'Diff_sp_attack',
                                                         'Diff_defense', 'Diff_sp_defense',
                                                         'Diff_hp', 'Diff_speed',
                                                         'First_pokemon_is_legendary',
                                                         'Second_pokemon_is_legendary',
                                                         'balance_effectivity']].values
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Distribution of the variables
plt.subplots(figsize=(15, 10))
sns.boxplot(data=pokemon_battles_improved_pred[:, [0, 1, 2, 3, 4, 5, 8]], orient='v')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
The numerical variables are scaled again.
rs = RobustScaler()
rs.fit(pokemon_battles_improved_pred)
pokemon_battles_improved_pred = rs.transform(pokemon_battles_improved_pred)

plt.subplots(figsize=(15, 10))
sns.boxplot(data=pokemon_battles_improved_pred[:, [0, 1, 2, 3, 4, 5, 8]], orient='v')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Once scaled, we split them again into a training set and a test set.
pokemon_battles_improved_pred_train, pokemon_battles_improved_pred_test, \
    pokemon_battles_improved_res_train, pokemon_battles_improved_res_test = train_test_split(
        pokemon_battles_improved_pred, pokemon_battles_res, test_size=0.2, random_state=23)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Improved *random forest*

With the *balance_effectivity* attribute computed, which takes into account the types of the Pokemon involved in the combat, we build the *random forest* model again (since it is the one that achieved the highest *accuracy*) to see whether the results improve.
improved_rfc_classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0) improved_rfc_classifier = improved_rfc_classifier.fit(\ X=pokemon_battles_improved_pred_train, \ y=pokemon_battles_improved_res_train) improved_rfc_pokemon_battle_results = improved_rfc_classifier.predict(X=pokemon_battles_improved_pred_test) improved_rfc_cm = confusion_matrix(pokemon_battles_improved_res_test, improved_rfc_pokemon_battle_results) print(improved_rfc_cm) from sklearn.model_selection import cross_val_score accuracies = cross_val_score(estimator = improved_rfc_classifier, X = pokemon_battles_improved_pred, y = pokemon_battles_res, cv = 10, scoring='accuracy') print('Mean: {}, standard deviation: {}'.format(accuracies.mean(), accuracies.std()))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
**Accuracy:** 92.56% **Note:** adding the *balance_effectivity* variable increases the complexity of the model and improves the accuracy by only 0.31%. [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) The receiver operating characteristic curve for the resulting model is:
from sklearn.metrics import roc_curve, auc

fpr, tpr, _ = roc_curve(y_true=pokemon_battles_improved_res_test, y_score=improved_rfc_pokemon_battle_results)
roc_auc = auc(fpr, tpr)  # avoid shadowing the imported auc function

plt.subplots(figsize=(15, 8))
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate (FPR)')
plt.ylabel('True Positive Rate (TPR)')
plt.title('ROC - Battle classification')
plt.legend(loc="lower right")
plt.show()
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
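One caveat about the curve above: roc_curve receives the hard class predictions, so the curve effectively has a single operating point. A smoother, more informative curve can be drawn from the class probabilities instead; a hedged sketch, assuming the positive class corresponds to the second column of predict_proba:

from sklearn.metrics import roc_curve, auc

proba = improved_rfc_classifier.predict_proba(pokemon_battles_improved_pred_test)[:, 1]  # assumed positive-class column
fpr_p, tpr_p, _ = roc_curve(y_true=pokemon_battles_improved_res_test, y_score=proba)
print('AUC from probabilities: {:.3f}'.format(auc(fpr_p, tpr_p)))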
*Pokemon* Tournament: To test the effectiveness of the prediction model we built, we run a *Pokemon* Tournament with **16 *Pokemons***, **8** of which **are legendary**. The tournament is played over **4 rounds**, starting with **8 battles** in the first round (15 battles in total).
# Builds the data for the battle between pokemon1 and pokemon2;
# the returned data are already scaled.
def build_fight(name_pokemon1, name_pokemon2):
    pokemon1 = pokemon_info_df[pokemon_info_df['name'] == name_pokemon1].iloc[0]
    pokemon2 = pokemon_info_df[pokemon_info_df['name'] == name_pokemon2].iloc[0]
    return rs.transform(pd.DataFrame.from_dict({'Diff_attack': [pokemon1['attack']-pokemon2['attack']],
                                                'Diff_sp_attack': [pokemon1['sp_attack']-pokemon2['sp_attack']],
                                                'Diff_defense': [pokemon1['defense']-pokemon2['defense']],
                                                'Diff_sp_defense': [pokemon1['sp_defense']-pokemon2['sp_defense']],
                                                'Diff_hp': [pokemon1['hp']-pokemon2['hp']],
                                                'Diff_speed': [pokemon1['speed']-pokemon2['speed']],
                                                'First_pokemon_is_legendary': [pokemon1['is_legendary']],
                                                'Second_pokemon_is_legendary': [pokemon2['is_legendary']],
                                                'balance_effectivity': [balance_effectivity_against_by_pokedex_number(
                                                    pokemon1['pokedex_number'], pokemon2['pokedex_number'])]}))

# Runs a fight between Pokemon1 and Pokemon2 and
# predicts the winner with the given classifier
def fight(classifier, name_pokemon1, name_pokemon2):
    pokemon_fight = build_fight(name_pokemon1, name_pokemon2)
    # Make the prediction
    result = classifier.predict_proba(X=pokemon_fight)
    if result[0][0] > 0.5:
        print('The winner is: {} with a probability of: {}%'.format(name_pokemon1, (result[0][0]*100)))
    else:
        print('The winner is: {} with a probability of: {}%'.format(name_pokemon2, (result[0][1]*100)))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Round 1![title](img/torneig/round__1.jpg)
fight1 = fight(classifier=improved_rfc_classifier, name_pokemon1='Snorlax', name_pokemon2='Ninetales') fight2 = fight(classifier=improved_rfc_classifier, name_pokemon1='Gengar', name_pokemon2='Altaria') fight3 = fight(classifier=improved_rfc_classifier, name_pokemon1='Raikou', name_pokemon2='Mew') fight4 = fight(classifier=improved_rfc_classifier, name_pokemon1='Articuno', name_pokemon2='Kommo-o') fight5 = fight(classifier=improved_rfc_classifier, name_pokemon1='Swampert', name_pokemon2='Solgaleo') fight6 = fight(classifier=improved_rfc_classifier, name_pokemon1='Nidoking', name_pokemon2='Rayquaza') fight7 = fight(classifier=improved_rfc_classifier, name_pokemon1='Mewtwo', name_pokemon2='Celebi') fight8 = fight(classifier=improved_rfc_classifier, name_pokemon1='Arceus', name_pokemon2='Milotic')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Round 2![title](img/torneig/round_2.jpg)
fight9 = fight(classifier=improved_rfc_classifier, name_pokemon1='Snorlax', name_pokemon2='Raikou') fight10 = fight(classifier=improved_rfc_classifier, name_pokemon1='Altaria', name_pokemon2='Kommo-o') fight11 = fight(classifier=improved_rfc_classifier, name_pokemon1='Swampert', name_pokemon2='Mewtwo') fight12 = fight(classifier=improved_rfc_classifier, name_pokemon1='Rayquaza', name_pokemon2='Arceus')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Round 3![title](img/torneig/round_3.jpg)
fight13 = fight(classifier=improved_rfc_classifier, name_pokemon1='Snorlax', name_pokemon2='Mewtwo')
fight14 = fight(classifier=improved_rfc_classifier, name_pokemon1='Kommo-o', name_pokemon2='Arceus')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
Round 4![title](img/torneig/round_4.jpg)
fight15 = fight(classifier=improved_rfc_classifier, name_pokemon1='Mewtwo', name_pokemon2='Arceus')
_____no_output_____
Apache-2.0
.ipynb_checkpoints/data_analysis-checkpoint.ipynb
ogalera-dev/data-analysis
import matplotlib.pyplot as plt
import numpy as np

plt.plot([2,4,6,8,10])
plt.show()

#plotting with lists
%matplotlib inline
plt.plot([2,5,9,7],[2,1,3,5], color='green', marker='o') #number of x points should be equal to number of y points
plt.xlabel('Bombs')
plt.ylabel('People')
plt.xlim(-5,20)
plt.ylim(0,20)
plt.show()

#plotting with numpy arrays
%matplotlib inline
arr=np.arange(5,10,0.5) #arr must be defined before it is used in the log-scale plots below
#plt.plot(arr, arr**2, '1:c', label="y=$x^2$") #using LaTeX for better formatting
#plt.plot(arr, arr, 'or', label="y=x")
#plt.legend()

#plotting on a log scale
plt.loglog(arr, arr, 'or', label="y=x")
plt.loglog(arr, arr**2, '1:c', label="y=$x^2$")
plt.loglog(arr, arr**3, 'b--', label="y=$x^3$")
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()

#using only one axis as log scale
t=np.arange(-5,20,0.5)
plt.plot(t, np.exp(t))
plt.yscale('log')
plt.show()

#bar graphs
names=['a','b','c', 'd']
values=np.random.randint(0,25,4)
plt.bar(names,values,color='r')
plt.xlabel("Names")
plt.ylabel("Values")
plt.title("Bar Graph")
plt.show()

#histograms
value=np.random.randn(5000)
plt.hist(value)
plt.show()

#scatterplots
x_values = np.random.randn(1000)
y_values_1 = np.sin(np.pi*x_values) + 0.25*np.random.randn(1000)
y_values_2 = np.cos(np.pi*x_values) + 0.25*np.random.randn(1000)
plt.figure(figsize=(8,6))
plt.scatter(x_values,y_values_1,s=10,color='darkorange',label='Sine')
plt.scatter(x_values,y_values_2,s=1,color='indigo',label='Cosine')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Scatter Plot')
plt.legend()
plt.show()
_____no_output_____
MIT
Tutorial 4/Tutorial_4_Plotting.ipynb
drkndl/IITB-Astro-Tutorials
Aliasing and the sampling theorem: In this notebook we explore questions about the sampling rate.
# import the necessary libraries
import numpy as np               # arrays
import matplotlib.pyplot as plt  # plots
plt.rcParams.update({'font.size': 14})
_____no_output_____
CC0-1.0
Aula 39 - Aliasing e solucoes/Amostragem 2.ipynb
RicardoGMSilveira/codes_proc_de_sinais
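Before the examples, it may help to keep two facts in mind: a sampled sine of frequency f is represented unambiguously only if $F_s > 2f$, and otherwise it is folded back into the band $[0, F_s/2]$. The small helper below is a sketch (not part of the original notebook) that computes where a tone lands after sampling:

def alias_frequency(f, Fs):
    """Frequency at which a tone of frequency f appears when sampled at Fs (folded into [0, Fs/2])."""
    f_mod = f % Fs                   # the spectrum repeats every Fs
    return min(f_mod, Fs - f_mod)    # fold into the first Nyquist zone

print(alias_frequency(10, 15.7))     # ~5.7 Hz: the 10 Hz sine of Example 1 sampled at 15.7 Hz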
Example 1: Let's create a sine between 0 [s] and 1 [s] with frequency 10 [Hz]. We will vary the sampling rate and check what happens. (By the sampling theorem, the rate must exceed 2 x 10 = 20 [Hz] to avoid aliasing, so the Fs = 15.7 [Hz] used below is expected to alias.)
Fs = 15.7
time = np.arange(0, 1, 1/Fs)
xt = np.sin(2*np.pi*10*time)
N = len(xt)  # number of samples in the signal
_____no_output_____
CC0-1.0
Aula 39 - Aliasing e solucoes/Amostragem 2.ipynb
RicardoGMSilveira/codes_proc_de_sinais
One period of the spectrum: Let's compute the spectrum with the FFT and plot one period of it. Note that the frequency vector goes from 0 up to just below $F_s$. In principle, the spectrum has the same number of samples as the signal.
Xw = np.fft.fft(xt)  # the spectrum has the same number of samples as the signal
freq = np.linspace(0, (N-1)*Fs/N, N)  # one period of the frequency vector, from 0 up to just below Fs (resolution Fs/N)
print("xt has {} samples and Xw has {} frequency components".format(N, len(Xw)))

plt.figure()
plt.plot(freq, np.abs(Xw)/N, 'b', linewidth = 2)
plt.axvline(Fs/2, color='k', linestyle = '--', linewidth = 4, alpha = 0.8)
plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude [-]')
plt.grid(which='both', axis='both')
plt.xlim((0,Fs))
plt.ylim((0,0.6));
xt has 16 samples and Xw has 16 frequency components
CC0-1.0
Aula 39 - Aliasing e solucoes/Amostragem 2.ipynb
RicardoGMSilveira/codes_proc_de_sinais
Several periods of the spectrum
# new frequency vector - 3 periods
plt.figure()
plt.plot(freq-Fs, np.abs(Xw)/N, '--b', linewidth = 2)
plt.plot(freq, np.abs(Xw)/N, 'b', linewidth = 2)
plt.plot(freq+Fs, np.abs(Xw)/N, '--b', linewidth = 2)
plt.axvline(Fs/2, color='k', linestyle = '--', linewidth = 4, alpha = 0.8)
plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude [-]')
plt.grid(which='both', axis='both')
plt.xlim((-Fs,2*Fs))
plt.ylim((0,0.6));

Fs_2 = 1000
time2 = np.arange(0, 1, 1/Fs_2)
xt_2 = np.sin(2*np.pi*10*time2)

plt.figure()
plt.plot(time2, xt_2, 'r', linewidth = 2)
#plt.plot(time, xt, '--b', linewidth = 2)
plt.stem(time, xt, '-b', label = r"$F_s = {}$ [Hz]".format(Fs), basefmt=" ", use_line_collection= True)
plt.legend(loc = 'upper right')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.grid(which='both', axis='both')
plt.xlim((0,1))
plt.ylim((-1.6,1.6));
_____no_output_____
CC0-1.0
Aula 39 - Aliasing e solucoes/Amostragem 2.ipynb
RicardoGMSilveira/codes_proc_de_sinais
Example 2 - Let's listen to a sine at several sampling rates. (With fs = 1800 [Hz], a 1000 Hz tone lies above fs/2 = 900 [Hz], so what is played back is its alias at 1800 - 1000 = 800 Hz.)
import IPython.display as ipd
from scipy import signal

# Generate a signal at a given sampling rate
fs = 1800
t = np.arange(0, 1, 1/fs)  # time vector
freq = 1000
w = 2*np.pi*freq
xt = np.sin(w*t)

# Resample the signal so the sound card can play it
fs_audio = 44100
xt_play = signal.resample(xt, fs_audio)
ipd.Audio(xt_play, rate=fs_audio)  # load a NumPy array

xt_play.shape
_____no_output_____
CC0-1.0
Aula 39 - Aliasing e solucoes/Amostragem 2.ipynb
RicardoGMSilveira/codes_proc_de_sinais
Example 3. A signal with 3 sines
Fs=100
T=2.0
t=np.arange(0,T,1/Fs)

# 3 signals with different frequencies
f1=10
f2=40
f3=80 # 80 and 120
x1=np.sin(2*np.pi*f1*t)
x2=np.sin(2*np.pi*f2*t)
x3=0.2*np.sin(2*np.pi*f3*t)

# FFT
N=len(t)
X1=np.fft.fft(x1)
X2=np.fft.fft(x2)
X3=np.fft.fft(x3)
freq = np.linspace(0, (N-1)*Fs/N, N)

plt.figure(figsize=(8,20))
plt.subplot(3,1,1)
plt.plot(freq, np.abs(X1)/N, 'b', linewidth = 2, label = r"$f = {:.1f}$ [Hz]".format(f1))
plt.axvline(Fs/2, color='k', linestyle = '--', linewidth = 4, alpha = 0.4)
plt.legend(loc = 'upper right')
plt.ylim((0, 0.6))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude [-]')
plt.grid(which='both', axis='both')

plt.subplot(3,1,2)
plt.plot(freq, np.abs(X2)/N, 'b', linewidth = 2, label = r"$f = {:.1f}$ [Hz]".format(f2))
plt.axvline(Fs/2, color='k', linestyle = '--', linewidth = 4, alpha = 0.4)
plt.legend(loc = 'upper right')
plt.ylim((0, 0.6))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude [-]')
plt.grid(which='both', axis='both')

plt.subplot(3,1,3)
plt.plot(freq, np.abs(X3)/N, 'b', linewidth = 2, label = r"$f = {:.1f}$ [Hz]".format(f3))
plt.axvline(Fs/2, color='k', linestyle = '--', linewidth = 4, alpha = 0.4)
plt.legend(loc = 'upper right')
plt.ylim((0, 0.6))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude [-]')
plt.grid(which='both', axis='both')
_____no_output_____
CC0-1.0
Aula 39 - Aliasing e solucoes/Amostragem 2.ipynb
RicardoGMSilveira/codes_proc_de_sinais
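With $F_s = 100$ [Hz], only components below $F_s/2 = 50$ [Hz] appear at their true frequency; a quick sketch of where each of the three sines above is expected to show up in the spectra:

Fs = 100
for f in (10, 40, 80):
    f_alias = min(f % Fs, Fs - f % Fs)  # fold into [0, Fs/2]
    print('{} Hz tone appears at {} Hz'.format(f, f_alias))
# 10 -> 10 Hz, 40 -> 40 Hz, 80 -> 20 Hz (aliased)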
Programming LSTM with Keras and TensorFlowSo far, the neural networks that we’ve examined have always had forward connections. Neural networks of this type always begin with an input layer connected to the first hidden layer. Each hidden layer always connects to the next hidden layer. The final hidden layer always connects to the output layer. This manner to connect layers is the reason that these networks are called “feedforward.” Recurrent neural networks are not so rigid, as backward connections are also allowed. A recurrent connection links a neuron in a layer to either a previous layer or the neuron itself. Most recurrent neural network architectures maintain state in the recurrent connections. Feedforward neural networks don’t maintain any state. A recurrent neural network’s state acts as a sort of short-term memory for the neural network. Consequently, a recurrent neural network will not always produce the same output for a given input.Recurrent neural networks do not force the connections to flow only from one layer to the next, from the input layer to the output layer. A recurrent connection occurs when a connection is formed between a neuron and one of the following other types of neurons:* The neuron itself* A neuron on the same level* A neuron on a previous levelRecurrent connections can never target the input neurons or bias neurons. The processing of recurrent connections can be challenging. Because the recurrent links create endless loops, the neural network must have some way to know when to stop. A neural network that entered an endless loop would not be useful. To prevent endless loops, we can calculate the recurrent connections with the following three approaches:* Context neurons* Calculating output over a fixed number of iterations* Calculating output until neuron output stabilizesThe context neuron is a special neuron type that remembers its input and provides that input as its output the next time that we calculate the network. For example, if we gave a context neuron 0.5 as input, it would output 0. Context neurons always output 0 on their first call. However, if we gave the context neuron a 0.6 as input, the output would be 0.5. We never weigh the input connections to a context neuron, but we can weigh the output from a context neuron just like any other network connection. Context neurons allow us to calculate a neural network in a single feedforward pass. Context neurons usually occur in layers. A layer of context neurons will always have the same number of context neurons as neurons in its source layer, as demonstrated by Figure 10.CTX.**Figure 10.CTX: Context Layers**![Context Layers](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_10_context_layer.png "Context Layers")As you can see from the above layer, two hidden neurons that are labeled hidden one and hidden two directly connect to the two context neurons. The dashed lines on these connections indicate that these are not weighted connections. These weightless connections are never dense. If these connections were dense, hidden one would be connected to both hidden one and hidden 2. However, the direct connection joins each hidden neuron to its corresponding context neuron. The two context neurons form dense, weighted connections to the two hidden neurons. Finally, the two hidden neurons also form dense connections to the neurons in the next layer. 
The two context neurons form two connections to a single neuron in the next layer, four connections to two neurons, six connections to three neurons, and so on. You can combine context neurons with the input, hidden, and output layers of a neural network in many different ways. Understanding LSTM: Long Short-Term Memory (LSTM) layers are a type of recurrent unit that you often use with deep neural networks.[[Cite:hochreiter1997long]](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.676.4320) For TensorFlow, you can think of LSTM as a layer type that you can combine with other layer types, such as dense. LSTM makes use of two transfer function types internally. The first type of transfer function is the sigmoid. This transfer function type is used to form gates inside of the unit. The sigmoid transfer function is given by the following equation:$ \mbox{S}(t) = \frac{1}{1 + e^{-t}} $The second type of transfer function is the hyperbolic tangent (tanh) function, which you use to scale the output of the LSTM. This functionality is similar to how we have used other transfer functions in this course. We provide the graphs for these functions here:
%matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt import math def sigmoid(x): a = [] for item in x: a.append(1/(1+math.exp(-item))) return a def f2(x): a = [] for item in x: a.append(math.tanh(item)) return a x = np.arange(-10., 10., 0.2) y1 = sigmoid(x) y2 = f2(x) print("Sigmoid") plt.plot(x,y1) plt.show() print("Hyperbolic Tangent(tanh)") plt.plot(x,y2) plt.show()
Sigmoid
Apache-2.0
Clase8-RNN/extras/2lstm.ipynb
diegostaPy/cursoIA
Both of these two functions compress their output to a specific range. For the sigmoid function, this range is 0 to 1. For the hyperbolic tangent function, this range is -1 to 1.LSTM maintains an internal state and produces an output. The following diagram shows an LSTM unit over three time slices: the current time slice (t), as well as the previous (t-1) and next (t+1) slice, as demonstrated by Figure 10.LSTM.**Figure 10.LSTM: LSTM Layers**![LSTM Layers](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_10_lstm1.png "LSTM Layers")The values $\hat{y}$ are the output from the unit; the values ($x$) are the input to the unit, and the values $c$ are the context values. The output and context values always feed their output to the next time slice. The context values allow the network to maintain state between calls. Figure 10.ILSTM shows the internals of a LSTM layer.**Figure 10.ILSTM: Inside a LSTM Layer**![LSTM Layers](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_10_lstm2.png "Inside the LSTM")A LSTM unit consists of three gates:* Forget Gate ($f_t$) - Controls if/when the context is forgotten. (MC)* Input Gate ($i_t$) - Controls if/when the context should remember a value. (M+/MS)* Output Gate ($o_t$) - Controls if/when the remembered value is allowed to pass from the unit. (RM)Mathematically, you can think of the above diagram as the following:**These are vector values.**First, calculate the forget gate value. This gate determines if the LSTM unit should forget its short term memory. The value $b$ is a bias, just like the bias neurons we saw before. Except LSTM has a bias for every gate: $b_t$, $b_i$, and $b_o$.$ f_t = S(W_f \cdot [\hat{y}_{t-1}, x_t] + b_f) $$ i_t = S(W_i \cdot [\hat{y}_{t-1},x_t] + b_i) $$ \tilde{C}_t = \tanh(W_C \cdot [\hat{y}_{t-1},x_t]+b_C) $$ C_t = f_t \cdot C_{t-1}+i_t \cdot \tilde{C}_t $$ o_t = S(W_o \cdot [\hat{y}_{t-1},x_t] + b_o ) $$ \hat{y}_t = o_t \cdot \tanh(C_t) $ Simple TensorFlow LSTM ExampleThe following code creates the LSTM network, which is an example of a RNN for classification. The following code trains on a data set (x) with a max sequence size of 6 (columns) and six training elements (rows)
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import LSTM
import numpy as np

max_features = 4 # 0,1,2,3 (total of 4)

x = [
    [[0],[1],[1],[0],[0],[0]],
    [[0],[0],[0],[2],[2],[0]],
    [[0],[0],[0],[0],[3],[3]],
    [[0],[2],[2],[0],[0],[0]],
    [[0],[0],[3],[3],[0],[0]],
    [[0],[0],[0],[0],[1],[1]]
]

x = np.array(x,dtype=np.float32)
y = np.array([1,2,3,2,3,1],dtype=np.int32)

# Convert y2 to dummy variables
y2 = np.zeros((y.shape[0], max_features),dtype=np.float32)
y2[np.arange(y.shape[0]), y] = 1.0
print(y2)

print('Build model...')
model = Sequential()
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2, input_shape=(None, 1)))
model.add(Dense(4, activation='sigmoid'))

# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

print('Train...')
model.fit(x,y2,epochs=200)
pred = model.predict(x)
predict_classes = np.argmax(pred,axis=1)
print("Predicted classes: {}".format(predict_classes))
print("Expected classes: {}".format(y))  # compare against the true labels, not the predictions

def runit(model, inp):
    inp = np.array(inp,dtype=np.float32)
    pred = model.predict(inp)
    return np.argmax(pred[0])

print( runit( model, [[[0],[0],[0],[0],[0],[1]]] ))
1
Apache-2.0
Clase8-RNN/extras/2lstm.ipynb
diegostaPy/cursoIA
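To connect the gate equations above with concrete computation, here is a minimal NumPy sketch of a single LSTM step. The weights are random placeholders chosen only for illustration, not values Keras would learn, and the shapes are arbitrary:

import numpy as np

def lstm_step(x_t, y_prev, c_prev, W_f, W_i, W_C, W_o, b_f, b_i, b_C, b_o):
    """One LSTM time step following the gate equations above."""
    s = lambda z: 1.0 / (1.0 + np.exp(-z))    # sigmoid
    concat = np.concatenate([y_prev, x_t])    # [y_{t-1}, x_t]
    f_t = s(W_f @ concat + b_f)               # forget gate
    i_t = s(W_i @ concat + b_i)               # input gate
    C_tilde = np.tanh(W_C @ concat + b_C)     # candidate memory
    c_t = f_t * c_prev + i_t * C_tilde        # new cell state
    o_t = s(W_o @ concat + b_o)               # output gate
    y_t = o_t * np.tanh(c_t)                  # unit output
    return y_t, c_t

# Tiny illustration: 1 input feature, 2 hidden units, random placeholder parameters
rng = np.random.default_rng(0)
H, D = 2, 1
Ws = [rng.normal(size=(H, H + D)) for _ in range(4)]
bs = [np.zeros(H) for _ in range(4)]
y, c = np.zeros(H), np.zeros(H)
for x in [0.0, 1.0, 1.0, 0.0]:                # feed a short sequence one value at a time
    y, c = lstm_step(np.array([x]), y, c, *Ws, *bs)
print(y, c)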
Sun Spots ExampleIn this section, we see an example of RNN regression to predict sunspots. You can find the data files needed for this example at the following location.* [Sunspot Data Files](http://www.sidc.be/silso/datafilestotal)* [Download Daily Sunspots](http://www.sidc.be/silso/INFO/sndtotcsv.php) - 1/1/1818 to now.The following code loads the sunspot file:
import pandas as pd import os # Replacce the following path with your own file. It can be downloaded from: # http://www.sidc.be/silso/INFO/sndtotcsv.php if COLAB: PATH = "/content/drive/My Drive/Colab Notebooks/data/" else: PATH = "./data/" filename = os.path.join(PATH,"SN_d_tot_V2.0.csv") names = ['year', 'month', 'day', 'dec_year', 'sn_value' , 'sn_error', 'obs_num'] df = pd.read_csv(filename,sep=';',header=None,names=names, na_values=['-1'], index_col=False) print("Starting file:") print(df[0:10]) print("Ending file:") print(df[-10:])
Starting file: year month day dec_year sn_value sn_error obs_num 0 1818 1 1 1818.001 -1 NaN 0 1 1818 1 2 1818.004 -1 NaN 0 2 1818 1 3 1818.007 -1 NaN 0 3 1818 1 4 1818.010 -1 NaN 0 4 1818 1 5 1818.012 -1 NaN 0 5 1818 1 6 1818.015 -1 NaN 0 6 1818 1 7 1818.018 -1 NaN 0 7 1818 1 8 1818.021 65 10.2 1 8 1818 1 9 1818.023 -1 NaN 0 9 1818 1 10 1818.026 -1 NaN 0 Ending file: year month day dec_year sn_value sn_error obs_num 73769 2019 12 22 2019.974 0 0.0 17 73770 2019 12 23 2019.977 0 0.0 24 73771 2019 12 24 2019.979 16 1.0 10 73772 2019 12 25 2019.982 23 2.2 7 73773 2019 12 26 2019.985 10 3.7 16 73774 2019 12 27 2019.988 0 0.0 26 73775 2019 12 28 2019.990 0 0.0 26 73776 2019 12 29 2019.993 0 0.0 27 73777 2019 12 30 2019.996 0 0.0 32 73778 2019 12 31 2019.999 0 0.0 19
Apache-2.0
Clase8-RNN/extras/2lstm.ipynb
diegostaPy/cursoIA
As you can see, there is quite a bit of missing data near the end of the file. We want to find the starting index where the missing data no longer occurs. This technique is somewhat sloppy; it would be better to find a use for the data between missing values. However, the point of this example is to show how to use LSTM with a somewhat simple time-series.
start_id = max(df[df['obs_num'] == 0].index.tolist())+1  # Find the last zero and move one beyond
print(start_id)
df = df[start_id:] # Trim the rows that have missing observations
df['sn_value'] = df['sn_value'].astype(float)
df_train = df[df['year']<2000]
df_test = df[df['year']>=2000]

spots_train = df_train['sn_value'].tolist()
spots_test = df_test['sn_value'].tolist()

print("Training set has {} observations.".format(len(spots_train)))
print("Test set has {} observations.".format(len(spots_test)))

import numpy as np

def to_sequences(seq_size, obs):
    x = []
    y = []
    for i in range(len(obs)-seq_size):  # use the seq_size parameter (previously the global SEQUENCE_SIZE was read here)
        #print(i)
        window = obs[i:(i+seq_size)]
        after_window = obs[i+seq_size]
        window = [[x] for x in window]
        #print("{} - {}".format(window,after_window))
        x.append(window)
        y.append(after_window)
    return np.array(x),np.array(y)

SEQUENCE_SIZE = 10
x_train,y_train = to_sequences(SEQUENCE_SIZE,spots_train)
x_test,y_test = to_sequences(SEQUENCE_SIZE,spots_test)

print("Shape of training set: {}".format(x_train.shape))
print("Shape of test set: {}".format(x_test.shape))

x_train

from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import LSTM
from tensorflow.keras.datasets import imdb
from tensorflow.keras.callbacks import EarlyStopping
import numpy as np

print('Build model...')
model = Sequential()
model.add(LSTM(64, dropout=0.0, recurrent_dropout=0.0,input_shape=(None, 1)))
model.add(Dense(32))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto', restore_best_weights=True)
print('Train...')
model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor],verbose=2,epochs=1000)
Build model... Train... Train on 55150 samples, validate on 7295 samples Epoch 1/1000 55150/55150 - 13s - loss: 1312.6864 - val_loss: 190.2033 Epoch 2/1000 55150/55150 - 8s - loss: 513.1618 - val_loss: 188.5868 Epoch 3/1000 55150/55150 - 8s - loss: 510.8469 - val_loss: 191.0815 Epoch 4/1000 55150/55150 - 8s - loss: 506.8735 - val_loss: 215.0268 Epoch 5/1000 55150/55150 - 8s - loss: 503.7439 - val_loss: 193.7987 Epoch 6/1000 55150/55150 - 8s - loss: 504.5192 - val_loss: 199.2520 Epoch 7/1000 Restoring model weights from the end of the best epoch. 55150/55150 - 8s - loss: 502.6547 - val_loss: 198.9333 Epoch 00007: early stopping
Apache-2.0
Clase8-RNN/extras/2lstm.ipynb
diegostaPy/cursoIA
Finally, we evaluate the model with RMSE.
from sklearn import metrics pred = model.predict(x_test) score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Score (RMSE): {}".format(score))
Score (RMSE): 13.732691339581104
Apache-2.0
Clase8-RNN/extras/2lstm.ipynb
diegostaPy/cursoIA
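Beyond the single RMSE number, it is often informative to overlay the predictions on the actual test series; a short sketch using the pred and y_test arrays defined above:

import matplotlib.pyplot as plt

# Overlay predicted and actual sunspot counts for the test period
plt.figure(figsize=(12, 4))
plt.plot(y_test, label='actual')
plt.plot(pred.flatten(), label='predicted')
plt.xlabel('Test sample index')
plt.ylabel('Sunspot count')
plt.legend()
plt.show()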
Import Module
from random import random,randint,choice import genetic as g
_____no_output_____
MIT
models/genetic_programming/example.ipynb
shawlu95/Data_Science_Toolbox
Build Dataset
def hiddenfunction(x,y): return x**2+2*y + 7 def buildhiddenset(): rows=[] for i in range(200): x=randint(0,40) y=randint(0,40) rows.append([x,y,hiddenfunction(x,y)]) return rows hiddenset=buildhiddenset() help(g.evolve)
Help on function evolve in module genetic: evolve(pc, popsize, rankfunction, maxgen=500, mutationrate=0.1, breedingrate=0.4, pexp=0.7, pnew=0.05) rankfunction The function used on the list of programs to rank them from best to worst. mutationrate The probability of a mutation, passed on to mutate. breedingrate The probability of crossover, passed on to crossover. popsize The size of the initial population. probexp The rate of decline in the probability of selecting lower-ranked programs. A higher value makes the selection process more stringent, choosing only programs with the best ranks to replicate. probnew The probability when building the new population that a completely new, ran- dom program is introduced. probexp and probnew will be discussed further in the upcoming section “The Importance of Diversity.”
MIT
models/genetic_programming/example.ipynb
shawlu95/Data_Science_Toolbox
Evolution
rank_func=g.getrankfunction(buildhiddenset()) best = g.evolve(2,500,rank_func,mutationrate=0.2,breedingrate=0.2,pexp=0.7,pnew=0.3) best.display()
add add if add p1 8 add if 1 subtract 7 subtract 7 add p1 subtract 3 4 p0 8 add if if isgreater 2 subtract add 5 1 p0 isgreater 6 multiply p1 subtract 4 isgreater p0 add p0 0 0 9 p0 2 p1 multiply if 1 p0 p0 p0
MIT
models/genetic_programming/example.ipynb
shawlu95/Data_Science_Toolbox
SUPPORT VECTOR REGRESSION: Support Vector Regression (SVR) is a supervised learning technique that adopts the ideas of the Support Vector Machine (SVM). The distinguishing feature is that SVR uses a hyperplane as the basis for building a margin and boundary lines. What distinguishes SVR from linear regression? In linear regression, we build the model by minimizing the prediction error. In SVR, instead, we bound the error within a chosen threshold. For clarity, consider the following figure. ![image.png](attachment:image.png) blue: hyperplane; red: boundary lines. In the figure above, the data points lie inside the boundary lines. The principle of the SVR algorithm when determining the hyperplane is to consider the number of data points that lie within the boundary lines, so the best prediction line is the hyperplane with the most data points inside its boundary lines. The boundary margin $\epsilon$ can be chosen by us when designing the SVR model. For example, suppose we have a hyperplane with the equation $Wx + b = 0$. Then the equations of the boundary lines based on this hyperplane are: $Wx + b = \epsilon$ and $Wx + b = -\epsilon$. Thus, the data points counted when fitting the SVR model satisfy the inequality below: $-\epsilon \leq y - (Wx + b) \leq \epsilon$ ![image.png](attachment:image.png) CODING SECTION: Suppose we want to assess the honesty of a new job candidate. For the position he is applying to, he claims to have 16 years of experience in that kind of job and a salary of 20 million rupiah at his previous company. We want to check this claim, so we look at data on employees working in the same field, collected from a job-search website. The data below are employee salaries against their years of work experience.
import numpy as np               # linear algebra
import pandas as pd              # data processing
import matplotlib.pyplot as plt  # visualization
import warnings
warnings.filterwarnings("ignore")

df = pd.read_csv('salary.csv')   # read the data
df.head(10)

# convert the data to arrays so they can be used in the machine learning step
X = df.pengalaman.values.reshape(-1,1)
y = df.gaji.values

# split the data into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/4, random_state = 42)

# import the algorithm from scikit-learn
from sklearn.svm import SVR
model = SVR(gamma = 'auto', kernel='rbf')

# fit the model to the data (note: the full dataset is used here, not just the training split)
model.fit(X, y)
_____no_output_____
MIT
2 Regression/2.3 support vector regression/SVR.ipynb
jordihasianta/DS_Kitchen
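Since $\epsilon$ (and the penalty C) are hyper-parameters we choose ourselves, it can be instructive to vary them explicitly. The sketch below uses illustrative values only, not settings tuned for this dataset, and because SVR is sensitive to the scale of the target it also standardizes y before fitting and converts the prediction back afterwards:

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

# Illustrative values only; in practice tune epsilon and C (e.g. with GridSearchCV)
sc_y = StandardScaler()
y_scaled = sc_y.fit_transform(y.reshape(-1, 1)).ravel()   # scale the target so epsilon is meaningful
model_eps = SVR(kernel='rbf', gamma='auto', epsilon=0.1, C=10.0)
model_eps.fit(X, y_scaled)

# Predictions must be transformed back to the original salary units
pred_16 = sc_y.inverse_transform(model_eps.predict(np.array([[16]])).reshape(-1, 1))
print(pred_16)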
Visualization
# training data
X_grid = np.arange(min(X), max(X), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X_train, y_train, color = 'red')
plt.plot(X_grid, model.predict(X_grid), color = 'blue')
plt.title('Salary vs Experience (Training Set)')
plt.xlabel('pengalaman')
plt.ylabel('gaji')
plt.show()

# test data
X_grid = np.arange(min(X), max(X), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X_test, y_test, color = 'red')
plt.plot(X_grid, model.predict(X_grid), color = 'blue')
plt.title('Salary vs Experience (Test Set)')
plt.xlabel('pengalaman')
plt.ylabel('gaji')
plt.show()

model.predict(np.array(16).reshape(1,-1))
_____no_output_____
MIT
2 Regression/2.3 support vector regression/SVR.ipynb
jordihasianta/DS_Kitchen
phageParser - Distribution of Number of Spacers per Locus C.K. Yildirim ([email protected]) The latest version of this [IPython notebook](http://ipython.org/notebook.html) demo is available at [http://github.com/phageParser/phageParser](https://github.com/phageParser/phageParser/tree/django-dev/demos). To run this notebook locally:
* `git clone` or [download](https://github.com/phageParser/phageParser/archive/master.zip) this repository
* Install [Jupyter Notebook](http://jupyter.org/install.html)
* In a command prompt, type `jupyter notebook` - the notebook server will launch in your browser
* Navigate to the phageParser/demos folder and open the notebook
Introduction: This demo uses the REST API of phageParser to plot the distribution of the number of spacers for CRISPR loci. In this case, the API is consumed using the `requests` library and the JSON responses are parsed to gather the number of spacers for each locus.
# import packages
import requests
import json
import numpy as np
import random
import matplotlib.pyplot as plt
from matplotlib import mlab
import seaborn as sns
import pandas as pd
from scipy import stats
sns.set_palette("husl")

# Url of the phageParser API
apiurl = 'https://phageparser.herokuapp.com'

# Get the initial page for listing of accessible objects and get url for spacers
r = requests.get(apiurl)
organisms_url = r.json()['organisms']

# Iterate through each page and merge the json response into a dictionary for organisms
organism_dict = {}
r = requests.get(organisms_url)
last_page = r.json()['meta']['total_pages']
for page in range(1, last_page+1):
    url = organisms_url+'?page={}&include[]=loci.spacers'.format(page)
    payload = requests.get(url).json()
    organism_objs = payload['organisms']
    for organism_obj in organism_objs:
        organism_dict[organism_obj['id']] = organism_obj

# Calculate the number of spacers for each locus
locus_num_spacer = np.array([ len(loc['spacers']) for v in organism_dict.values() for loc in v['loci']])

# Calculate the mean and standard deviation of the number of spacers per locus
mu, sigma = locus_num_spacer.mean(), locus_num_spacer.std()
print("Calculated mean number of spacers per locus is {:.2f}+/-{:.2f}".format(mu, sigma))

g = sns.distplot(locus_num_spacer, bins=range(0,600,1), kde=False)
g.set(yscale="log")
g.set_ylim(8*10**-1, 1.1*10**3)
g.set_title("Histogram of number of spacers per locus")
g.set_xlabel("Number of spacers")
g.set_ylabel("Number of loci")
plt.show()

# Plot cumulative counts of the data
fig, ax = plt.subplots(figsize=(8,4), dpi=100)
sorted_data = np.sort(locus_num_spacer)
ax.step(sorted_data, np.arange(sorted_data.size), label='Empirical')

# Format the figure and label
ax.grid(True)
#ax.set_title('Cumulative distribution of locus sizes')
ax.set_xlabel("Number of spacers")
ax.set_ylabel("Number of loci with x or fewer spacers")
ax.set_xlim(1,500)
ax.set_xscale('log')
plt.show()
_____no_output_____
MIT
demos/Locus Number of Spacers Analysis.ipynb
nataliyah123/phageParser
Your first neural networkIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
%matplotlib inline %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt
_____no_output_____
MIT
project-bikesharing/Predicting_bike_sharing_data.ipynb
pasbury/udacity-deep-learning-v2-pytorch
Load and prepare the dataA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head()
_____no_output_____
MIT
project-bikesharing/Predicting_bike_sharing_data.ipynb
pasbury/udacity-deep-learning-v2-pytorch
Checking out the dataThis dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
rides[:24*10].plot(x='dteday', y='cnt')
_____no_output_____
MIT
project-bikesharing/Predicting_bike_sharing_data.ipynb
pasbury/udacity-deep-learning-v2-pytorch
Dummy variablesHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head()
_____no_output_____
MIT
project-bikesharing/Predicting_bike_sharing_data.ipynb
pasbury/udacity-deep-learning-v2-pytorch
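For readers new to dummy (one-hot) encoding, a tiny standalone illustration of what get_dummies produces for a single categorical column (toy data, not the bike-sharing frame):

import pandas as pd

toy = pd.DataFrame({'season': [1, 2, 3, 1]})
print(pd.get_dummies(toy['season'], prefix='season'))
# one indicator column per category: season_1, season_2, season_3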
Scaling target variablesTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.The scaling factors are saved so we can go backwards when we use the network for predictions.
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std
_____no_output_____
MIT
project-bikesharing/Predicting_bike_sharing_data.ipynb
pasbury/udacity-deep-learning-v2-pytorch
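Because the means and standard deviations were stored in scaled_features, predictions made on the standardized scale can be mapped back to real ride counts with the inverse transformation; a small sketch with a hypothetical predictions array:

import numpy as np

# Hypothetical network outputs on the standardized scale
predictions = np.array([-0.5, 0.0, 1.2])

# Undo the standardization for the 'cnt' target using the stored mean and std
mean, std = scaled_features['cnt']
predicted_counts = predictions * std + mean
print(predicted_counts)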
Splitting the data into training, testing, and validation setsWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
# Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
_____no_output_____
MIT
project-bikesharing/Predicting_bike_sharing_data.ipynb
pasbury/udacity-deep-learning-v2-pytorch
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
# Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:]
_____no_output_____
MIT
project-bikesharing/Predicting_bike_sharing_data.ipynb
pasbury/udacity-deep-learning-v2-pytorch