Uarray reduced

Now let's use uarray's `optimize` decorator to create an updated function that specifies the dimensionality of the arrays to produce an optimized form:
# enable_logging()
optimized_some_fn = optimize(args[0].shape, args[1].shape)(some_fn)
_____no_output_____
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
Now let's try our function out to see if it's faster:
# NBVAL_IGNORE_OUTPUT
%timeit optimized_some_fn(*args)
5.47 µs ± 48.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
Yep, about 10x as fast. Let's look at how this is done! First, we create an abstract representation of the array operations:
optimized_some_fn.__optimize_steps__['resulting_expr']
_____no_output_____
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
Then, we compile that to Python AST:
print(optimized_some_fn.__optimize_steps__['ast_as_source'])
def fn(a, b):
    i_5 = ()
    i_6 = 10
    i_1 = ((i_6,) + i_5)
    i_0 = np.empty(i_1)
    i_2 = 10
    for i_3 in range(i_2):
        i_4 = i_0[i_3]
        i_9 = 5
        i_10 = a
        i_13 = i_10[i_9]
        i_11 = i_3
        i_12 = b
        i_14 = i_12[i_11]
        i_4 = (i_13 * i_14)
        i_0[i_3] = i_4
    return i_0
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
Numba optimized

To give this an extra speed boost, we can compile the returned expression with Numba:
numba_optimized = njit(optimized_some_fn)

# NBVAL_IGNORE_OUTPUT
# run once first to compile
numba_optimized(*args)
%timeit numba_optimized(*args)
876 ns ± 16.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
Great, another speedup!

Unknown dimensionality?

What if we want to produce a version of the function that works on input of any dimensionality? Or if we just want to defer to NumPy's implementation and not replace `outer`? We simply omit the `with_dim` methods, and we get back an abstract representation that is compiled without any knowledge of the dimensionality:
dims_not_known = optimize(some_fn)
dims_not_known.__optimize_steps__['resulting_expr']
print(dims_not_known.__optimize_steps__['ast_as_source'])
def fn(a, b):
    i_18 = 5
    i_16 = a
    i_17 = b
    i_19 = np.multiply.outer(i_16, i_17)
    i_15 = i_19[i_18]
    return i_15
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
Tweepy

To access the Twitter API, you must go to the developer area, create an application, and request the access credentials. The cells below demonstrate how to connect to the Twitter API using the Tweepy Python library and return the list of tweets on the user's timeline. [Tweepy documentation](https://tweepy.readthedocs.io/)
consumer_key = os.environ['CONSUMER_API_KEY']
consumer_secret = os.environ['CONSUMER_API_SECRET']
access_token = os.environ['ACCESS_TOKEN']
access_token_secret = os.environ['ACCESS_TOKEN_SECRET']

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth)
_____no_output_____
Apache-2.0
twitter-crawler.ipynb
phdabel/twitter-crawler
Retrieve tweets from the timeline
public_tweets = api.home_timeline()
_____no_output_____
Apache-2.0
twitter-crawler.ipynb
phdabel/twitter-crawler
Create an empty dictionary to hold the tweets. This step is needed to build the dataframe.
tweets = {'id': [], 'author': [], 'screen_name': [], 'tweet': [],
          'created_at': [], 'language': [], 'n_retweets': [], 'n_likes': []}

ct = 0
num_results = 1000
result_count = 0
last_id = None

# Note: last_id is never updated inside the loop, so repeated searches can
# return the same tweets again (visible in the duplicated entries of the output below).
while result_count < num_results:
    gremio_tweets = api.search(q='gremio', lang='pt', since_id=last_id)
    for tweet in gremio_tweets:
        print("id: " + tweet.id_str)
        print("Screen name: " + tweet.author.screen_name)
        print("Autor: " + tweet.author.name)
        print("Tweet: " + tweet.text)
        print("Data de criação: " + tweet.created_at.strftime("%d/%m/%Y, %H:%M:%S"))
        print("Idioma: " + tweet.lang)
        print("Retweets: " + str(tweet.retweet_count))
        print("Curtidas: " + str(tweet.favorite_count))
        tweets['id'].append(tweet.id)
        tweets['screen_name'].append(tweet.author.screen_name)
        tweets['author'].append(tweet.author.name)
        tweets['tweet'].append(tweet.text)
        tweets['created_at'].append(tweet.created_at)
        tweets['language'].append(tweet.lang)
        tweets['n_retweets'].append(tweet.retweet_count)
        tweets['n_likes'].append(tweet.favorite_count)
        print("==========================")
        result_count += 1
id: 1201655788508471298 Screen name: RauenPaian Autor: IMORTAL 🖤💙🖤💙 Tweet: RT @SoccerGremio: A informação que nos chega, é que são apenas detalhes a serem finalizados entre Grêmio e Palmeiras por Raphael Veiga. Os… Data de criação: 03/12/2019, 00:14:00 Idioma: pt Retweets: 54 Curtidas: 0 ========================== id: 1201655782686760960 Screen name: regisschuch Autor: Régis Tweet: Esperamos pela parte do @Gremio Que não contrate esse Egídio. JOGADOR muito ruim. Data de criação: 03/12/2019, 00:13:58 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655773706833922 Screen name: magrin00 Autor: 🄼🄰🄶🅁🄸🄽ᶜʳᶠ ⚫🔴 Tweet: RT @TozzaFla: Fifa reconhece títulos mundiais de Flamengo, Grêmio, Santos e São Paulo | futebol internacional | Globoesporte https://t.co/A… Data de criação: 03/12/2019, 00:13:56 Idioma: pt Retweets: 232 Curtidas: 0 ========================== id: 1201655770200317958 Screen name: BaseFuracao Autor: Base Furacão Tweet: Em atuação coletiva, 3-0 vs Boca. Em emoção e vibração, Remontada vs Grêmio. https://t.co/ebQwsoBUmo Data de criação: 03/12/2019, 00:13:55 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655768107356164 Screen name: Proerrd Autor: Vivi pereiraⓟ Tweet: @di_dinelli @isastrevisan comissão rainha grêmio nadinha Data de criação: 03/12/2019, 00:13:55 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655765859262465 Screen name: jean_wosch Autor: Jean Wosch Tweet: @omenguista @LibertadoresBR @AthleticoPR @Flamengo @Gremio @Palmeiras @SantosFC @SaoPauloFC Não reclama, se não pio… https://t.co/n60v1B2Mby Data de criação: 03/12/2019, 00:13:54 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655763921453056 Screen name: avalyzinho Autor: nx.avaly 15/11 Tweet: RT @Jhorobert11: Obrigado senhor 🙏🏽🙏🏽 Feliz por mais uma vitória e pelos 2 gols ⚽️⚽️ Isso é Grêmio 🇪🇪🇪🇪 JR11 https://t.co/AEzxnn3GLt Data de criação: 03/12/2019, 00:13:54 Idioma: pt Retweets: 20 Curtidas: 0 ========================== id: 1201655762931666948 Screen name: DanielZ80970238 Autor: Daniel Zurita Tweet: RT @TozzaFla: Fifa reconhece títulos mundiais de Flamengo, Grêmio, Santos e São Paulo | futebol internacional | Globoesporte https://t.co/A… Data de criação: 03/12/2019, 00:13:54 Idioma: pt Retweets: 232 Curtidas: 0 ========================== id: 1201655755826442241 Screen name: BianoRL1 Autor: @BianoRL Tweet: RT @ejramorim: Gostaria de agradecer as atletas, comissão e direção por essa temporada, é um enorme prazer poder conviver e aprender diaria… Data de criação: 03/12/2019, 00:13:52 Idioma: pt Retweets: 3 Curtidas: 0 ========================== id: 1201655731537268736 Screen name: LuizCar74542646 Autor: Luiz Carlos Tweet: Tu tens razão, qual era a meia cancha do grêmio em 2017, campeão da libertadores, quem entrou e nunca mais saiu Art… https://t.co/VNcZjOWna6 Data de criação: 03/12/2019, 00:13:46 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655713065512960 Screen name: ceciliaraisa Autor: O Chelsea kerrLUTE 😒💙😒💦 Tweet: @manamaiara @leilinha_1910 @tathiane_vidal pra não dar briga é melhor ele ir pro Grêmio Data de criação: 03/12/2019, 00:13:42 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655702562967552 Screen name: cxcarloos Autor: carrlos Tweet: RT @SoccerGremio: A informação que nos chega, é que são apenas detalhes a serem finalizados entre Grêmio e Palmeiras por Raphael Veiga. 
Os… Data de criação: 03/12/2019, 00:13:39 Idioma: pt Retweets: 54 Curtidas: 0 ========================== id: 1201655693784338433 Screen name: willianselong_ Autor: ₩illian Tweet: @cesarspo @sandra_kunst @Gremio Vo passar por burro nada o comentário é meu e quem decide sou eu, evito ladainha de pessoas como fosse. Data de criação: 03/12/2019, 00:13:37 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655691825553411 Screen name: pmsilva37 Autor: Coalito 🐨 Tweet: @carlydamasceno2 Com empate também, caso o Grêmio vença o Cruzeiro. Ficaria 3 pontos na frente e pelo número de vit… https://t.co/hA7TYBW0Jk Data de criação: 03/12/2019, 00:13:37 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655682900078592 Screen name: VNJS18 Autor: Vinicin Tweet: RT @RDTRubroNegro: Flamengo pré Jorge Jesus não vencia: - A Liberta há 38 anos. - O Brasileiro há 10 anos. - Na Arena da Baixada há 45 a… Data de criação: 03/12/2019, 00:13:35 Idioma: pt Retweets: 597 Curtidas: 0 ========================== id: 1201655788508471298 Screen name: RauenPaian Autor: IMORTAL 🖤💙🖤💙 Tweet: RT @SoccerGremio: A informação que nos chega, é que são apenas detalhes a serem finalizados entre Grêmio e Palmeiras por Raphael Veiga. Os… Data de criação: 03/12/2019, 00:14:00 Idioma: pt Retweets: 54 Curtidas: 0 ========================== id: 1201655782686760960 Screen name: regisschuch Autor: Régis Tweet: Esperamos pela parte do @Gremio Que não contrate esse Egídio. JOGADOR muito ruim. Data de criação: 03/12/2019, 00:13:58 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655773706833922 Screen name: magrin00 Autor: 🄼🄰🄶🅁🄸🄽ᶜʳᶠ ⚫🔴 Tweet: RT @TozzaFla: Fifa reconhece títulos mundiais de Flamengo, Grêmio, Santos e São Paulo | futebol internacional | Globoesporte https://t.co/A… Data de criação: 03/12/2019, 00:13:56 Idioma: pt Retweets: 232 Curtidas: 0 ========================== id: 1201655770200317958 Screen name: BaseFuracao Autor: Base Furacão Tweet: Em atuação coletiva, 3-0 vs Boca. Em emoção e vibração, Remontada vs Grêmio. 
https://t.co/ebQwsoBUmo Data de criação: 03/12/2019, 00:13:55 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655768107356164 Screen name: Proerrd Autor: Vivi pereiraⓟ Tweet: @di_dinelli @isastrevisan comissão rainha grêmio nadinha Data de criação: 03/12/2019, 00:13:55 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655765859262465 Screen name: jean_wosch Autor: Jean Wosch Tweet: @omenguista @LibertadoresBR @AthleticoPR @Flamengo @Gremio @Palmeiras @SantosFC @SaoPauloFC Não reclama, se não pio… https://t.co/n60v1B2Mby Data de criação: 03/12/2019, 00:13:54 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655763921453056 Screen name: avalyzinho Autor: nx.avaly 15/11 Tweet: RT @Jhorobert11: Obrigado senhor 🙏🏽🙏🏽 Feliz por mais uma vitória e pelos 2 gols ⚽️⚽️ Isso é Grêmio 🇪🇪🇪🇪 JR11 https://t.co/AEzxnn3GLt Data de criação: 03/12/2019, 00:13:54 Idioma: pt Retweets: 20 Curtidas: 0 ========================== id: 1201655762931666948 Screen name: DanielZ80970238 Autor: Daniel Zurita Tweet: RT @TozzaFla: Fifa reconhece títulos mundiais de Flamengo, Grêmio, Santos e São Paulo | futebol internacional | Globoesporte https://t.co/A… Data de criação: 03/12/2019, 00:13:54 Idioma: pt Retweets: 232 Curtidas: 0 ========================== id: 1201655755826442241 Screen name: BianoRL1 Autor: @BianoRL Tweet: RT @ejramorim: Gostaria de agradecer as atletas, comissão e direção por essa temporada, é um enorme prazer poder conviver e aprender diaria… Data de criação: 03/12/2019, 00:13:52 Idioma: pt Retweets: 3 Curtidas: 0 ========================== id: 1201655731537268736 Screen name: LuizCar74542646 Autor: Luiz Carlos Tweet: Tu tens razão, qual era a meia cancha do grêmio em 2017, campeão da libertadores, quem entrou e nunca mais saiu Art… https://t.co/VNcZjOWna6 Data de criação: 03/12/2019, 00:13:46 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655713065512960 Screen name: ceciliaraisa Autor: O Chelsea kerrLUTE 😒💙😒💦 Tweet: @manamaiara @leilinha_1910 @tathiane_vidal pra não dar briga é melhor ele ir pro Grêmio Data de criação: 03/12/2019, 00:13:42 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655702562967552 Screen name: cxcarloos Autor: carrlos Tweet: RT @SoccerGremio: A informação que nos chega, é que são apenas detalhes a serem finalizados entre Grêmio e Palmeiras por Raphael Veiga. Os… Data de criação: 03/12/2019, 00:13:39 Idioma: pt Retweets: 54 Curtidas: 0 ========================== id: 1201655693784338433 Screen name: willianselong_ Autor: ₩illian Tweet: @cesarspo @sandra_kunst @Gremio Vo passar por burro nada o comentário é meu e quem decide sou eu, evito ladainha de pessoas como fosse. Data de criação: 03/12/2019, 00:13:37 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655691825553411 Screen name: pmsilva37 Autor: Coalito 🐨 Tweet: @carlydamasceno2 Com empate também, caso o Grêmio vença o Cruzeiro. Ficaria 3 pontos na frente e pelo número de vit… https://t.co/hA7TYBW0Jk Data de criação: 03/12/2019, 00:13:37 Idioma: pt Retweets: 0 Curtidas: 0 ========================== id: 1201655682900078592 Screen name: VNJS18 Autor: Vinicin Tweet: RT @RDTRubroNegro: Flamengo pré Jorge Jesus não vencia: - A Liberta há 38 anos. - O Brasileiro há 10 anos. - Na Arena da Baixada há 45 a… Data de criação: 03/12/2019, 00:13:35 Idioma: pt Retweets: 597 Curtidas: 0 ==========================
Apache-2.0
twitter-crawler.ipynb
phdabel/twitter-crawler
Create the dataframe
df = pd.DataFrame.from_dict(tweets)
df
_____no_output_____
Apache-2.0
twitter-crawler.ipynb
phdabel/twitter-crawler
Save the dataframe to CSV for later use.
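As a small aside before the save below, a hedged sketch of how the CSV written in the next cell could be loaded back in a later session (`index_col=0` is an assumption that drops the index column `to_csv` writes by default):

```python
import pandas as pd

# Re-load the previously saved tweets for further analysis.
df_reloaded = pd.read_csv('dataframe.csv', index_col=0)
df_reloaded.head()
```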
df.to_csv('dataframe.csv')
_____no_output_____
Apache-2.0
twitter-crawler.ipynb
phdabel/twitter-crawler
Assignment 2

Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to **Preview the Grading** for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.

An NOAA dataset has been stored in the file `data/C2A2_data/BinnedCsvs_d100/4e86d2106d0566c6ad9843d882e72791333b08be3d647dcae4f4b110.csv`. The data for this assignment comes from a subset of The National Centers for Environmental Information (NCEI) [Daily Global Historical Climatology Network](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt) (GHCN-Daily). The GHCN-Daily is comprised of daily climate records from thousands of land surface stations across the globe. Each row in the assignment datafile corresponds to a single observation.

The following variables are provided to you:

* **id** : station identification code
* **date** : date in YYYY-MM-DD format (e.g. 2012-01-24 = January 24, 2012)
* **element** : indicator of element type
  * TMAX : Maximum temperature (tenths of degrees C)
  * TMIN : Minimum temperature (tenths of degrees C)
* **value** : data value for element (tenths of degrees C)

For this assignment, you must:

1. Read the documentation and familiarize yourself with the dataset, then write some python code which returns a line graph of the record high and record low temperatures by day of the year over the period 2005-2014. The area between the record high and record low temperatures for each day should be shaded.
2. Overlay a scatter of the 2015 data for any points (highs and lows) for which the ten year (2005-2014) record high or record low was broken in 2015.
3. Watch out for leap days (i.e. February 29th); it is reasonable to remove these points from the dataset for the purpose of this visualization.
4. Make the visual nice! Leverage principles from the first module in this course when developing your solution. Consider issues such as legends, labels, and chart junk.

The data you have been given is near **Singapore, Central Singapore Community Development Council, Singapore**, and the stations the data comes from are shown on the map below.
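As a side note (not part of the original assignment), a minimal sketch of what the "tenths of degrees C" encoding means in practice; the toy column names mirror the `Date`/`Data_Value` columns used in the solution below, and `Element` is an assumption about the file's element column:

```python
import pandas as pd

# Toy rows mimicking the described GHCN-style format (invented values).
toy = pd.DataFrame({
    'Date': ['2012-01-24', '2012-01-24'],
    'Element': ['TMAX', 'TMIN'],
    'Data_Value': [311, 228],            # stored in tenths of degrees C
})
toy['temp_c'] = toy['Data_Value'] / 10.0  # 31.1 °C and 22.8 °C
print(toy)
```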
import matplotlib.pyplot as plt
import mplleaflet
import pandas as pd

def leaflet_plot_stations(binsize, hashid):
    df = pd.read_csv('data/C2A2_data/BinSize_d{}.csv'.format(binsize))
    station_locations_by_hash = df[df['hash'] == hashid]
    lons = station_locations_by_hash['LONGITUDE'].tolist()
    lats = station_locations_by_hash['LATITUDE'].tolist()
    plt.figure(figsize=(8,8))
    plt.scatter(lons, lats, c='r', alpha=0.7, s=200)
    return mplleaflet.display()

leaflet_plot_stations(100, '4e86d2106d0566c6ad9843d882e72791333b08be3d647dcae4f4b110')

# Import useful libraries
import matplotlib.pyplot as plt
import matplotlib.dates as dates
import matplotlib.ticker as ticker
import pandas as pd
import numpy as np
%matplotlib notebook

# Read the dataframe
df1 = pd.read_csv('data/C2A2_data/BinnedCsvs_d100/4e86d2106d0566c6ad9843d882e72791333b08be3d647dcae4f4b110.csv')
df1.head()

# How many records?
len(df1)

minimum = []
maximum = []
month = []

# remove February 29
df1 = df1[~(df1['Date'].str.endswith(r'02-29'))]
times1 = pd.DatetimeIndex(df1['Date'])

# after removing Feb 29, how many records remaining
len(df1)

# Data for 2005-2014
df = df1[times1.year != 2015]
times = pd.DatetimeIndex(df['Date'])
for j in df.groupby([times.month, times.day]):
    minimum.append(min(j[1]['Data_Value']))
    maximum.append(max(j[1]['Data_Value']))

# Data of 2015
df2015 = df1[times1.year == 2015]
times2015 = pd.DatetimeIndex(df2015['Date'])
minimum2015 = []
maximum2015 = []
for j in df2015.groupby([times2015.month, times2015.day]):
    minimum2015.append(min(j[1]['Data_Value']))
    maximum2015.append(max(j[1]['Data_Value']))

minaxis = []
maxaxis = []
minvals = []
maxvals = []
for i in range(len(minimum)):
    if (minimum[i] - minimum2015[i]) > 0:
        minaxis.append(i)
        minvals.append(minimum2015[i])
    if (maximum[i] - maximum2015[i]) < 0:
        maxaxis.append(i)
        maxvals.append(maximum2015[i])

plt.figure()
colors = ['skyblue', 'lightcoral']
plt.plot(minimum, c='skyblue', alpha=0.5, label='Minimum Temperature (2005-14)')
plt.plot(maximum, c='lightcoral', alpha=0.5, label='Maximum Temperature (2005-14)')
plt.scatter(minaxis, minvals, s=10, c='blue', label='Record Break Minimum (2015)')
plt.scatter(maxaxis, maxvals, s=10, c='red', label='Record Break Maximum (2015)')
plt.gca().fill_between(range(len(minimum)), minimum, maximum, facecolor='lightgray', alpha=0.2)
plt.ylim(0, 450)
plt.legend(loc=8, frameon=False, title='Temperature', fontsize=8)
plt.xticks(np.linspace(15, 15 + 30*11, num=12),
           (r'Jan', r'Feb', r'Mar', r'Apr', r'May', r'Jun',
            r'Jul', r'Aug', r'Sep', r'Oct', r'Nov', r'Dec'))
plt.xlabel('Months')
plt.ylabel('Temperature (tenths of degrees C)')
plt.title(r'Temperature Summary Plot of Singapore (2005-2015)')
plt.show()
plt.savefig('Temperature.png', transparent=True, bbox_inches='tight')
_____no_output_____
MIT
Applied Data Science with Python Specialzation/Applied Plotting Charting and Data Representation in Python/Assignment2/Assignment2.ipynb
lynnxlmiao/Coursera
Classical Machine Learning Approach

In this notebook we will be learning to:

1. Create a naive TF-IDF based Bag of Words representation of text.
2. Use classical ML models to solve text classification.
3. Use a One-vs-Rest strategy to solve multi-label text classification.

**HOT TIP**: *Save fitted objects as pickles for easy re-use across experiments.*

This notebook uses code from https://github.com/susanli2016/Machine-Learning-with-Python/blob/master/Multi%20label%20text%20classification.ipynb
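The "hot tip" above deserves a quick illustration. A minimal, self-contained sketch (the filename is hypothetical, and the toy vectorizer stands in for the real `tfidf` object and the trained per-tag models defined later):

```python
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer

# Fit a toy vectorizer, save it, and load it back without re-fitting.
toy_tfidf = TfidfVectorizer().fit(["save fitted objects", "reload them later"])

with open('toy_tfidf.pkl', 'wb') as f:   # hypothetical filename
    pickle.dump(toy_tfidf, f)

with open('toy_tfidf.pkl', 'rb') as f:
    restored = pickle.load(f)

print(restored.transform(["reload fitted objects"]).shape)
```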
# Installing packages.
!pip install contractions
!pip install textsearch
!pip install tqdm

# Importing packages.
import nltk
nltk.download('punkt')
nltk.download('stopwords')
%matplotlib inline
import re
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
from sklearn.multiclass import OneVsRestClassifier
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report
import pickle
import ast
from sklearn.externals import joblib
from datetime import datetime
from sklearn.preprocessing import MultiLabelBinarizer

# Let's mount our G-Drive.
from google.colab import drive
drive.mount('/content/drive', force_remount=True)

# Data read and preparation.
# Mentioning where our data is located on G-Drive. Make sure to rectify your path.
path = '/content/drive/My Drive/ICDMAI_Tutorial/notebook/'
data = 'filtered_data/question_tag_text_mapping.pkl'
ml_model = path + 'ml_model/'

# Let us quickly load our question tag data
question_tag = pd.read_pickle(path + data)
question_tag.head(3)
_____no_output_____
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Creating one-hot encodings from the multi-label tagged data
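Before applying it to the real dataframe in the next cell, a small self-contained sketch of what `MultiLabelBinarizer` produces on made-up tag lists:

```python
from sklearn.preprocessing import MultiLabelBinarizer

toy_tags = [['python', 'pandas'], ['java'], ['python']]   # hypothetical tag lists
mlb_demo = MultiLabelBinarizer()
encoded = mlb_demo.fit_transform(toy_tags)

print(mlb_demo.classes_)   # ['java' 'pandas' 'python']
print(encoded)             # one 0/1 column per tag, one row per document
# [[0 1 1]
#  [1 0 0]
#  [0 0 1]]
```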
# In order to use the one-vs-rest strategy we will need to one-hot encode each tag across all documents.
mlb = MultiLabelBinarizer()
question_tag['Tag_pop'] = question_tag['Tag']
question_tag = question_tag.join(
    pd.DataFrame(mlb.fit_transform(question_tag.pop('Tag_pop')),
                 columns=mlb.classes_,
                 index=question_tag.index))
question_tag.head(3)

# Creating a list of all existing 'Tags'
dummy = question_tag.drop(['Id', 'OwnerUserId', 'CreationDate', 'ClosedDate',
                           'Score', 'Title', 'Body', 'Tag'], axis=1)
categories = list(dummy.columns.values)
_____no_output_____
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Text preprocessing
# Let us create a very basic text preprocessor which we will use for cleaning text.
def clean_text(text):
    text = text.lower()
    text = re.sub(r"what's", "what is ", text)
    text = re.sub(r"\'s", " ", text)
    text = re.sub(r"\'ve", " have ", text)
    text = re.sub(r"can't", "can not ", text)
    text = re.sub(r"n't", " not ", text)
    text = re.sub(r"i'm", "i am ", text)
    text = re.sub(r"\'re", " are ", text)
    text = re.sub(r"\'d", " would ", text)
    text = re.sub(r"\'ll", " will ", text)
    text = re.sub(r"\'scuse", " excuse ", text)
    text = re.sub(r'\W', ' ', text)
    text = re.sub(r'\s+', ' ', text)
    text = text.strip(' ')
    return text

question_tag['Body'] = question_tag['Body'].map(lambda com: clean_text(com))
_____no_output_____
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Creating a 70/30 Train-Test Split
train, test = train_test_split(question_tag, random_state=42, test_size=0.30, shuffle=True)
X_train = train.Body
X_test = test.Body
print("Train data shape : {}".format(X_train.shape))
print("Test data shape : {}".format(X_test.shape))
Train data shape : (736394,) Test data shape : (315598,)
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Creating a Bag of Words representation using TF-IDF

1. Initialize the Vectorizer object.
2. Create a corpus from the training data.
3. Create a document term matrix.
# Initializing the Vectorizer object
tfidf = TfidfVectorizer(stop_words=stop_words)

# Create a corpus from training data
# Create a document term matrix of training data based on the corpus.
X_train_dtm = tfidf.fit_transform(X_train)

# Create a document term matrix of test data based on the corpus.
# Note that the dimensions/columns of the DTM of the test data will be based on the training data corpus only.
X_test_dtm = tfidf.transform(X_test)
_____no_output_____
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Pipeline

scikit-learn provides a Pipeline utility to help automate machine learning workflows. Pipelines are very common in Machine Learning systems, since there is a lot of data to manipulate and many data transformations to apply. So we will utilize a Pipeline to train every classifier.

One-vs-Rest Multilabel strategy

The multi-label algorithm accepts a binary mask over multiple labels. The result for each prediction will be an array of 0s and 1s marking which class labels apply to each input row. The One-vs-Rest strategy can be used for multi-label learning: one binary classifier is fit per label, and each classifier predicts whether its label applies to an instance. **Naive Bayes**, **SVM**, and **Logistic Regression** support multi-class classification, but we are in a multi-label scenario; therefore, we wrap them in the OneVsRestClassifier. We create a Training Pipeline and a Scoring Pipeline; a toy illustration of the One-vs-Rest wrapping is sketched first.
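A hedged toy illustration of the One-vs-Rest binary mask described above (the documents, labels, and label order here are invented, not the notebook's data):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

docs = ["java memory error", "python pandas dataframe", "java python interop"]
y = np.array([[1, 0],   # columns: [java, python]
              [0, 1],
              [1, 1]])

X = TfidfVectorizer().fit_transform(docs)
clf = OneVsRestClassifier(LinearSVC()).fit(X, y)

# Each prediction is a 0/1 mask over the two labels.
print(clf.predict(X))
```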
def tag_level_training_pipeline(X_train, train, X_test, test, classifier_pipeline, output_directory):
    # 1. Create a classifier for each Tag
    for category in categories:
        print('... Processing {}'.format(category))
        # 1. train the model using X_dtm & y
        classifier_pipeline.fit(X_train, train[category])
        # 2. save the model to disk
        filename = ml_model + output_directory + str(category) + '_model.pkl'
        joblib.dump(classifier_pipeline, filename, compress=1)
        # 3. compute the testing accuracy
        prediction = classifier_pipeline.predict(X_test)
        print('Test accuracy is {}'.format(accuracy_score(test[category], prediction)))
        print(classification_report(test[category], prediction))


def tag_level_predict(X_train, train, X_test, test, model_directory):
    prediction_df = pd.DataFrame(columns=['dummy1'])
    # Score the document across the classifier for each Tag
    for category in categories:
        # 1. load the model
        filename = ml_model + model_directory + str(category) + '_model.pkl'
        classifier_pipeline = joblib.load(filename)
        # 2. predict on the test data.
        prediction = classifier_pipeline.predict(X_test)
        prediction_df[str(category)] = prediction

    # Remember we had encoded the labels. It is time to bring them back to their original form.
    for category in categories:
        prediction_df.loc[prediction_df[str(category)] == 1, str(category)] = category
    prediction_df['predicted_labels'] = prediction_df[[str(i) for i in categories]].values.tolist()
    prediction_df['predicted_labels'] = prediction_df['predicted_labels'].apply(lambda x: list(set(x)))
    # prediction_df['predicted_labels'] = prediction_df['predicted_labels'].apply(lambda x: x.remove(0) if (0 in x) else x)

    # We create a result having original labels and predicted labels for metrics evaluation
    final_pred_df = pd.concat([test[['Id', 'Tag']].reset_index(),
                               prediction_df[['predicted_labels']].reset_index()], axis=1)
    final_pred_df['original_labels'] = final_pred_df['Tag']
    # prediction_df[['Id']] = test[['Id']]
    final_pred_df_result = final_pred_df[['Id', 'original_labels', 'predicted_labels']]
    return final_pred_df_result


# importing os module
import os
try:
    os.rename('/content/drive/My Drive/ICDMAI_Tutorial/notebook/ml_model/SVM/_net_model.pkl',
              '/content/drive/My Drive/ICDMAI_Tutorial/notebook/ml_model/SVM/.net_model.pkl')
except:
    print("Already in proper filename!")

## A dummy example.
X_test = ["How to handle memory locking ?", "How to handle memory locking in java ?",
          "How to handle memory locking in java python ?", "This post is not about java"]
X_test_dtm = tfidf.transform(X_test)
result = tag_level_predict(X_train_dtm, train, X_test_dtm, test.head(1), 'SVM/')
for i in range(result.shape[0]):
    print("Input [", X_test[i], "] || Predicted classes: ", result.predicted_labels[i])
/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.svm.classes module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.svm. Anything that cannot be imported from sklearn.svm is now part of the private API. warnings.warn(message, FutureWarning) /usr/local/lib/python3.6/dist-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator LinearSVC from version 0.21.3 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) /usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.preprocessing.label module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.preprocessing. Anything that cannot be imported from sklearn.preprocessing is now part of the private API. warnings.warn(message, FutureWarning) /usr/local/lib/python3.6/dist-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator LabelBinarizer from version 0.21.3 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) /usr/local/lib/python3.6/dist-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator OneVsRestClassifier from version 0.21.3 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) /usr/local/lib/python3.6/dist-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator Pipeline from version 0.21.3 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning)
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Evaluating our results
# Here we define precision, recall and f1 measure at a single document level.
def document_evaluation_metrics(prd_grp, grp, metric="precision"):
    pred_group = prd_grp
    if 0 in pred_group:
        pred_group.remove(0)
    group = grp
    set_pred_group = set(pred_group)
    set_group = set(group)
    intrsct = set_group.intersection(set_pred_group)
    # 'accuracy' here is the document-level precision: correct labels / predicted labels.
    accuracy = len(intrsct) / float(len(set_pred_group) if len(set_pred_group) > 1 else 1)
    recall = len(intrsct) / float(len(set_group) if len(set_group) > 1 else 1)
    if metric == "precision":
        return accuracy
    elif metric == "recall":
        return recall
    elif metric == "f1_measure":
        if accuracy == 0 or recall == 0:
            return 0
        elif accuracy > 0 and recall > 0:
            f1_measure = 2 * accuracy * recall / (float(accuracy + recall))
            return f1_measure
    return -1


# Provide overall average stats and populate document level metrics.
def model_evaluation_stats(final_pred_df, model_name="default"):
    final_pred_df['doc_precision'] = final_pred_df.apply(
        lambda x: document_evaluation_metrics(x.predicted_labels, x.original_labels, "precision"), axis=1)
    final_pred_df['doc_recall'] = final_pred_df.apply(
        lambda x: document_evaluation_metrics(x.predicted_labels, x.original_labels, "recall"), axis=1)
    final_pred_df['doc_f1_measure'] = final_pred_df.apply(
        lambda x: document_evaluation_metrics(x.predicted_labels, x.original_labels, "f1_measure"), axis=1)

    print('Average precision across documents is {}'.format(final_pred_df['doc_precision'].mean()))
    print('Average recall across documents is {}'.format(final_pred_df['doc_recall'].mean()))
    print('Average f1 measure across documents is {}'.format(final_pred_df['doc_f1_measure'].mean()))

    pickle.dump(final_pred_df, open(ml_model + model_name + ".pkl", 'wb'))
    # final_pred_df.to_csv(ml_model + 'SVM_Tag_predictions.txt', sep='\t', index=False)
_____no_output_____
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Let us train, score and evaluate Naive Bayes
# Naive Bayes Classifier
NB_pipeline = Pipeline([
    ('clf', OneVsRestClassifier(MultinomialNB(fit_prior=True, class_prior=None))),
])

tag_level_training_pipeline(X_train_dtm, train, X_test_dtm, test, NB_pipeline, 'NaiveBayes/')
result = tag_level_predict(X_train_dtm, train, X_test_dtm, test, 'NaiveBayes/')
model_evaluation_stats(result, "NaiveBayes")
_____no_output_____
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Let us train, score and evaluate Support Vector Machines
# SVM Classifier
SVC_pipeline = Pipeline([
    ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)),
])

tag_level_training_pipeline(X_train_dtm, train, X_test_dtm, test, SVC_pipeline, 'SVM/')
result = tag_level_predict(X_train_dtm, train, X_test_dtm, test, 'SVM/')
model_evaluation_stats(result, "SVM")
... Processing .net Test accuracy is 0.9771893358006071 precision recall f1-score support 0 0.98 1.00 0.99 308362 1 0.51 0.09 0.15 7236 accuracy 0.98 315598 macro avg 0.75 0.54 0.57 315598 weighted avg 0.97 0.98 0.97 315598 ... Processing agile Test accuracy is 0.9999429654180318 precision recall f1-score support 0 1.00 1.00 1.00 315573 1 0.89 0.32 0.47 25 accuracy 1.00 315598 macro avg 0.94 0.66 0.74 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing ajax Test accuracy is 0.9887356700612805 precision recall f1-score support 0 0.99 1.00 0.99 310952 1 0.70 0.41 0.52 4646 accuracy 0.99 315598 macro avg 0.84 0.71 0.76 315598 weighted avg 0.99 0.99 0.99 315598 ... Processing amazon-web-services Test accuracy is 0.9981780619649048 precision recall f1-score support 0 1.00 1.00 1.00 314643 1 0.79 0.55 0.65 955 accuracy 1.00 315598 macro avg 0.89 0.77 0.82 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing android Test accuracy is 0.9829815144582665 precision recall f1-score support 0 0.99 1.00 0.99 288322 1 0.96 0.84 0.90 27276 accuracy 0.98 315598 macro avg 0.97 0.92 0.94 315598 weighted avg 0.98 0.98 0.98 315598 ... Processing android-studio Test accuracy is 0.9972401599503166 precision recall f1-score support 0 1.00 1.00 1.00 314589 1 0.67 0.27 0.38 1009 accuracy 1.00 315598 macro avg 0.84 0.63 0.69 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing angular2 Test accuracy is 0.9991064582158315 precision recall f1-score support 0 1.00 1.00 1.00 314898 1 0.94 0.64 0.76 700 accuracy 1.00 315598 macro avg 0.97 0.82 0.88 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing angularjs Test accuracy is 0.9949904625504598 precision recall f1-score support 0 1.00 1.00 1.00 309420 1 0.93 0.80 0.86 6178 accuracy 0.99 315598 macro avg 0.96 0.90 0.93 315598 weighted avg 0.99 0.99 0.99 315598 ... Processing apache Test accuracy is 0.9952788040481879 precision recall f1-score support 0 1.00 1.00 1.00 313590 1 0.71 0.43 0.54 2008 accuracy 1.00 315598 macro avg 0.85 0.72 0.77 315598 weighted avg 0.99 1.00 0.99 315598 ... Processing apache-spark Test accuracy is 0.999404305477221 precision recall f1-score support 0 1.00 1.00 1.00 314985 1 0.93 0.75 0.83 613 accuracy 1.00 315598 macro avg 0.97 0.87 0.91 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing api Test accuracy is 0.9951900835873485 precision recall f1-score support 0 1.00 1.00 1.00 314094 1 0.47 0.07 0.12 1504 accuracy 1.00 315598 macro avg 0.73 0.53 0.56 315598 weighted avg 0.99 1.00 0.99 315598 ... Processing asp.net Test accuracy is 0.9815905043758199 precision recall f1-score support 0 0.99 1.00 0.99 306664 1 0.79 0.48 0.60 8934 accuracy 0.98 315598 macro avg 0.89 0.74 0.79 315598 weighted avg 0.98 0.98 0.98 315598 ... Processing asp.net-web-api Test accuracy is 0.9985804726265692 precision recall f1-score support 0 1.00 1.00 1.00 314999 1 0.72 0.41 0.52 599 accuracy 1.00 315598 macro avg 0.86 0.71 0.76 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing azure Test accuracy is 0.9987769250755708 precision recall f1-score support 0 1.00 1.00 1.00 314517 1 0.91 0.71 0.80 1081 accuracy 1.00 315598 macro avg 0.95 0.86 0.90 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing bash Test accuracy is 0.995047497132428 precision recall f1-score support 0 1.00 1.00 1.00 313311 1 0.76 0.46 0.58 2287 accuracy 1.00 315598 macro avg 0.88 0.73 0.79 315598 weighted avg 0.99 1.00 0.99 315598 ... 
Processing c Test accuracy is 0.9872432651664459 precision recall f1-score support 0 0.99 1.00 0.99 308691 1 0.81 0.55 0.65 6907 accuracy 0.99 315598 macro avg 0.90 0.77 0.82 315598 weighted avg 0.99 0.99 0.99 315598 ... Processing c# Test accuracy is 0.9414159785549971 precision recall f1-score support 0 0.95 0.98 0.97 285147 1 0.77 0.56 0.65 30451 accuracy 0.94 315598 macro avg 0.86 0.77 0.81 315598 weighted avg 0.94 0.94 0.94 315598 ... Processing c++ Test accuracy is 0.9785581657678439 precision recall f1-score support 0 0.98 0.99 0.99 301367 1 0.85 0.63 0.73 14231 accuracy 0.98 315598 macro avg 0.92 0.81 0.86 315598 weighted avg 0.98 0.98 0.98 315598 ... Processing cloud Test accuracy is 0.9995722406352385 precision recall f1-score support 0 1.00 1.00 1.00 315459 1 0.75 0.04 0.08 139 accuracy 1.00 315598 macro avg 0.87 0.52 0.54 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing codeigniter Test accuracy is 0.9979055634066122 precision recall f1-score support 0 1.00 1.00 1.00 314150 1 0.90 0.61 0.73 1448 accuracy 1.00 315598 macro avg 0.95 0.80 0.86 315598 weighted avg 1.00 1.00 1.00 315598 ... Processing css Test accuracy is 0.9785454914162954 precision recall f1-score support 0 0.99 0.99 0.99 302936 1 0.78 0.64 0.71 12662 accuracy 0.98 315598 macro avg 0.88 0.82 0.85 315598 weighted avg 0.98 0.98 0.98 315598 ... Processing devops Test accuracy is 0.9999524711816932
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Let us train, score and evaluate Logistic Regression
# Logistic Regression Classifier
LogReg_pipeline = Pipeline([
    ('clf', OneVsRestClassifier(LogisticRegression(solver='sag'), n_jobs=1)),
])

tag_level_training_pipeline(X_train_dtm, train, X_test_dtm, test, LogReg_pipeline, 'LogisticRegression/')
result = tag_level_predict(X_train_dtm, train, X_test_dtm, test, 'LogisticRegression/')
model_evaluation_stats(result, "LogisticRegression")
_____no_output_____
MIT
2_classical_ml_approach.ipynb
funmilola09/Recurrent-Neural-Pipeline
Wikipedia Thanks-Receiver Study Randomization

[J. Nathan Matias](https://twitter.com/natematias)

October 29, 2019

This code takes as input data described in the [randomization data format](https://docs.google.com/document/d/1plhoDbQryYQ32vZMXu8YmlLSp30QTdup43k6uTePOT4/edit?usp=drive_web&ouid=117701977297551627494) and produces randomizations for the Thanks Recipient study.

Notes:

* We use the 99% confidence interval cutoffs from our first sample rather than relative to each subsequent sample:
  * Polish Experienced: 235.380736142341
  * Polish Newcomer: 72.2118047599678
  * Arabic Newcomer: 54.7365066602131
  * German Newcomer: 63.3678642498622
* We will be drawing only 300 Polish accounts
options("scipen"=9, "digits"=4) library(ggplot2) library(rlang) library(tidyverse) library(viridis) library(blockTools) library(blockrand) library(gmodels) # contains CrossTable library(DeclareDesign) library(DescTools) # contains Freq library(uuid) options(repr.plot.width=7, repr.plot.height=3.5) sessionInfo()
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Load Input Dataframe
filename <- "all-thankees-historical-20191029.csv" data.path <- "/home/civilservant/Tresors/CivilServant/projects/wikipedia-integration/gratitude-study/Data Drills/thankee" recipient.df <- read.csv(file.path(data.path, "historical_output", filename))
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Load Participants in the Thanker Study
thanker.df <- read.csv(file.path(data.path, "..", "thanker_hardlaunch", "randomization_output",
                                 "all-thanker-randomization-final-20190729.csv"))
usernames.to.exclude <- thanker.df$user_name
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Load Liaison Usernames
liaison.df <- read.csv(file.path(data.path, "..", "thanker_hardlaunch", "randomization_output",
                                 "liason-thanker-randomization-datadrill-20190718.csv"))
usernames.to.exclude <- append(as.character(usernames.to.exclude), as.character(liaison.df$user_name))
print(paste(length(usernames.to.exclude), "usernames to exclude"))
[1] "462 usernames to exclude"
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Adjust Column Names to Match Thankee Randomization Specification
recipient.df$prev_experience <- factor(as.integer(gsub("bin_", "", recipient.df$prev_experience)))
recipient.df$anonymized_id <- sapply(seq_along(1:nrow(recipient.df)), UUIDgenerate)
recipient.df$newcomer <- recipient.df$prev_experience == 0
recipient.df <- subset(recipient.df, lang!="en")
#recipient.df <- subset(recipient.df, user_editcount_quality >=4 )

hist(recipient.df$user_editcount_quality)
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Confirm the number of participants
print("Newcomer Participants to Randomize") summary(subset(recipient.df, newcomer == 1)$lang) ## Polish Experienced Accounts print("Experienced Participants to Randomize") summary(subset(recipient.df, newcomer == 0)$lang)
[1] "Experienced Participants to Randomize"
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Omit Participants

Omit Participants in the Thanker Study
print(paste(nrow(recipient.df), "participants before removing thankers"))
recipient.df <- subset(recipient.df, (user_name %in% usernames.to.exclude)!=TRUE)
print(paste(nrow(recipient.df), "participants after removing thankers"))
[1] "3262 participants before removing thankers" [1] "3262 participants after removing thankers"
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Subset values outside the 99% confidence intervals

We are using upper confidence intervals from the first randomization, found at [generate-wikipedia-thanks-recipient-randomizations-final-07.28.3019](generate-wikipedia-thanks-recipient-randomizations-final-07.28.3019.R.ipynb):

* Polish Experienced: 235.380736142341
* Polish Newcomer: 72.2118047599678
* Arabic Newcomer: 54.7365066602131
* German Newcomer: 63.3678642498622
upper.conf.ints <- data.frame(lang=c("pl", "pl", "de", "ar"),
                              newcomer=c(0,1,1,1),
                              conf.int = c(
                                  235.380736142341,
                                  72.2118047599678,
                                  54.7365066602131,
                                  63.3678642498622
                              ))
upper.conf.ints
#subset(upper.conf.ints, lang=="pl" & newcomer ==1)$conf.int

## CREATE A PLACEHOLDER WITH ZERO ROWS
## BEFORE ITERATING
recipient.trimmed.df <- recipient.df[0,]

for(l in c("ar", "de", "fa", "pl")){
    print(paste("Language: ", l))
    for(n in c(0,1)){
        print(paste(" newcomer:", n == 1))
        lang.df <- subset(recipient.df, lang==l & newcomer == n)
        print(paste( " ", nrow(lang.df), "rows from original dataset"))

        prev.conf.int <- subset(upper.conf.ints, lang==l & newcomer ==n)$conf.int
        print( " 99% confidence intervals:")
        print(paste(" upper: ", prev.conf.int, sep=""))

        print(paste(" Removing", nrow(subset(lang.df, labor_hours_84_days_pre_sample > prev.conf.int)),
                    "outliers", "observations because labor_hours_84_days_pre_sample is an outlier."))
        lang.subset.df <- subset(lang.df, labor_hours_84_days_pre_sample <= prev.conf.int)
        print(paste( " ", nrow(lang.subset.df), "rows in trimmed dataset"))

        recipient.trimmed.df <- rbind(recipient.trimmed.df, lang.subset.df)
    }
}

recipient.df.penultimate <- recipient.trimmed.df
[1] "Language: ar" [1] " newcomer: FALSE" [1] " 0 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: " [1] " Removing 0 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 0 rows in trimmed dataset" [1] " newcomer: TRUE" [1] " 743 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: 63.3678642498622" [1] " Removing 8 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 735 rows in trimmed dataset" [1] "Language: de" [1] " newcomer: FALSE" [1] " 0 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: " [1] " Removing 0 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 0 rows in trimmed dataset" [1] " newcomer: TRUE" [1] " 1565 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: 54.7365066602131" [1] " Removing 37 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 1528 rows in trimmed dataset" [1] "Language: fa" [1] " newcomer: FALSE" [1] " 0 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: " [1] " Removing 0 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 0 rows in trimmed dataset" [1] " newcomer: TRUE" [1] " 0 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: " [1] " Removing 0 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 0 rows in trimmed dataset" [1] "Language: pl" [1] " newcomer: FALSE" [1] " 512 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: 235.380736142341" [1] " Removing 1 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 511 rows in trimmed dataset" [1] " newcomer: TRUE" [1] " 442 rows from original dataset" [1] " 99% confidence intervals:" [1] " upper: 72.2118047599678" [1] " Removing 7 outliers observations because labor_hours_84_days_pre_sample is an outlier." [1] " 435 rows in trimmed dataset"
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Review and Generate Variables
print(aggregate(recipient.df.penultimate[c("labor_hours_84_days_pre_sample")], FUN=mean,
                by = list(recipient.df.penultimate$prev_experience)))

print(CrossTable(recipient.df.penultimate$has_email, recipient.df.penultimate$newcomer,
                 prop.r = FALSE, prop.c=TRUE, prop.t = FALSE, prop.chisq = FALSE))

## Update the has_email field ##
recipient.df.penultimate$has_email <- recipient.df.penultimate$has_email == "True"

## PREVIOUS EXPERIENCE
print("prev_experience")
print(summary(factor(recipient.df.penultimate$prev_experience)))
cat("\n")

## SHOW LABOR HOURS BY EXPERIENCE GROUP:
print("Aggregate labor_hours_84_days_pre_sample")
print(aggregate(recipient.df.penultimate[c("labor_hours_84_days_pre_sample")], FUN=mean,
                by = list(recipient.df.penultimate$prev_experience)))
cat("\n")

print("NEWCOMERS AND EMAILS")
print("--------------------")
print(CrossTable(recipient.df.penultimate$has_email, recipient.df.penultimate$newcomer,
                 prop.r = FALSE, prop.c=TRUE, prop.t = FALSE, prop.chisq = FALSE))

# VARIABLE: num_prev_thanks_pre_treatment
print("num_prev_thanks_pre_sample")
print(summary(recipient.df.penultimate$num_prev_thanks_pre_sample))
cat("\n")

## SHOW PREVIOUS THANKS BY EXPERIENCE GROUP:
print("num_prev_thanks_pre_sample by prev_experience")
print(aggregate(recipient.df.penultimate[c("num_prev_thanks_pre_sample")], FUN=mean,
                by = list(recipient.df.penultimate$prev_experience)))
cat("\n")
[1] "prev_experience" 0 90 180 365 730 1460 2920 2698 63 52 69 81 102 144 [1] "Aggregate labor_hours_84_days_pre_sample" Group.1 labor_hours_84_days_pre_sample 1 0 5.878 2 90 4.519 3 180 8.063 4 365 5.797 5 730 5.354 6 1460 5.866 7 2920 8.791 [1] "NEWCOMERS AND EMAILS" [1] "--------------------" Cell Contents |-------------------------| | N | | N / Col Total | |-------------------------| Total Observations in Table: 3209 | recipient.df.penultimate$newcomer recipient.df.penultimate$has_email | FALSE | TRUE | Row Total | -----------------------------------|-----------|-----------|-----------| False | 20 | 18 | 38 | | 0.039 | 0.007 | | -----------------------------------|-----------|-----------|-----------| True | 491 | 2680 | 3171 | | 0.961 | 0.993 | | -----------------------------------|-----------|-----------|-----------| Column Total | 511 | 2698 | 3209 | | 0.159 | 0.841 | | -----------------------------------|-----------|-----------|-----------| $t y x FALSE TRUE False 20 18 True 491 2680 $prop.row y x FALSE TRUE False 0.5263 0.4737 True 0.1548 0.8452 $prop.col y x FALSE TRUE False 0.039139 0.006672 True 0.960861 0.993328 $prop.tbl y x FALSE TRUE False 0.006232 0.005609 True 0.153007 0.835151 [1] "num_prev_thanks_pre_sample" Min. 1st Qu. Median Mean 3rd Qu. Max. 0.00 0.00 0.00 0.34 0.00 112.00 [1] "num_prev_thanks_pre_sample by prev_experience" Group.1 num_prev_thanks_pre_sample 1 0 0.1542 2 90 0.1111 3 180 0.1731 4 365 0.2899 5 730 0.9630 6 1460 0.6176 7 2920 3.4097
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Subset Sample to Planned sample sizes

Sample sizes are reported in the experiment [Decisions Document](https://docs.google.com/document/d/1HryhsmWI6WthXQC7zv9Hz1a9DhpZ3FxVRLjTONuMg4I/edit):

* Arabic newcomers (1750 goal) (hoping for as many as possible in the first sample)
  * hoping for 1350 in the first sample and 400 later
* German newcomers (3000 goal) (hoping for as many as possible in the first sample)
  * hoping for 1600 in the first sample and 1400 later
* Persian Experienced (2400 goal)
* Polish:
  * Newcomers: (800 goal)
  * Experienced: (2400 goal)
## Seed generated by Brooklyn Integers
# https://www.brooklynintegers.com/int/1495265601/
set.seed(1495265601)

print("Newcomers")
summary(subset(recipient.df.penultimate, newcomer==1)$lang)
print("Experienced")
summary(subset(recipient.df.penultimate, newcomer==0)$lang)

## CREATE THE FINAL PARTICIPANT SAMPLE BEFORE RANDOMIZATION
recipient.df.final <- recipient.df.penultimate
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Generate Randomization Blocks
recipient.df.final$lang_prev_experience <- factor(paste(recipient.df.final$lang, recipient.df.final$prev_experience))
colnames(recipient.df.final)

## BLOCKING VARIABLES
bv = c("labor_hours_84_days_pre_sample", "num_prev_thanks_pre_sample")
block.size = 2

## TODO: CHECK TO SEE IF I CAN DO BALANCED RANDOMIZATION
## WITHIN BLOCKS LARGER THAN 2
blockobj = block(data=recipient.df.final,
                 n.tr = block.size,
                 groups = "lang_prev_experience",
                 id.vars="anonymized_id",
                 block.vars = bv,
                 distance ="mahalanobis")

## CHECK DISTANCES
#print(blockobj)

recipient.df.final$randomization_block_id <- createBlockIDs(blockobj,
                                                            data=recipient.df.final,
                                                            id.var = "anonymized_id")
recipient.df.final$randomization_block_size = block.size
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Identify Incomplete Blocks and Remove Participants in Incomplete Blocks From the Experiment
block.sizes <- aggregate(recipient.df.final$randomization_block_id, FUN=length,
                         by=list(recipient.df.final$randomization_block_id))
incomplete.blocks <- subset(block.sizes, x == 1)$Group.1
incomplete.blocks

nrow(subset(recipient.df.final, randomization_block_id %in% incomplete.blocks))

removed.observations <- subset(recipient.df.final, (randomization_block_id %in% incomplete.blocks)==TRUE)
recipient.df.final <- subset(recipient.df.final, (randomization_block_id %in% incomplete.blocks)!=TRUE)
print(paste("Removed", nrow(removed.observations), "units placed in incomplete blocks."))
[1] "Removed 5 units placed in incomplete blocks."
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Generate Randomizations
assignments <- block_ra(blocks=recipient.df.final$randomization_block_id,
                        num_arms = 2,
                        conditions = c(0,1))
recipient.df.final$randomization_arm <- assignments
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Check Balance
print("Aggregating labor hours by treatment") print(aggregate(recipient.df.final[c("labor_hours_84_days_pre_sample")], FUN=mean, by = list(recipient.df.final$randomization_arm))) print("CrossTable of lang by treatment") CrossTable(recipient.df.final$lang, recipient.df.final$randomization_arm, prop.r = TRUE, prop.c=FALSE, prop.t = FALSE, prop.chisq = FALSE) print("CrossTable of lang_prev_experience by treatment") CrossTable(recipient.df.final$lang_prev_experience, recipient.df.final$randomization_arm, prop.r = TRUE, prop.c=FALSE, prop.t = FALSE, prop.chisq = FALSE)
[1] "Aggregating labor hours by treatment" Group.1 labor_hours_84_days_pre_sample 1 0 6.008 2 1 5.944 [1] "CrossTable of lang by treatment" Cell Contents |-------------------------| | N | | N / Row Total | |-------------------------| Total Observations in Table: 3204 | recipient.df.final$randomization_arm recipient.df.final$lang | 0 | 1 | Row Total | ------------------------|-----------|-----------|-----------| ar | 367 | 367 | 734 | | 0.500 | 0.500 | 0.229 | ------------------------|-----------|-----------|-----------| de | 764 | 764 | 1528 | | 0.500 | 0.500 | 0.477 | ------------------------|-----------|-----------|-----------| pl | 471 | 471 | 942 | | 0.500 | 0.500 | 0.294 | ------------------------|-----------|-----------|-----------| Column Total | 1602 | 1602 | 3204 | ------------------------|-----------|-----------|-----------| [1] "CrossTable of lang_prev_experience by treatment" Cell Contents |-------------------------| | N | | N / Row Total | |-------------------------| Total Observations in Table: 3204 | recipient.df.final$randomization_arm recipient.df.final$lang_prev_experience | 0 | 1 | Row Total | ----------------------------------------|-----------|-----------|-----------| ar 0 | 367 | 367 | 734 | | 0.500 | 0.500 | 0.229 | ----------------------------------------|-----------|-----------|-----------| de 0 | 764 | 764 | 1528 | | 0.500 | 0.500 | 0.477 | ----------------------------------------|-----------|-----------|-----------| pl 0 | 217 | 217 | 434 | | 0.500 | 0.500 | 0.135 | ----------------------------------------|-----------|-----------|-----------| pl 1460 | 51 | 51 | 102 | | 0.500 | 0.500 | 0.032 | ----------------------------------------|-----------|-----------|-----------| pl 180 | 26 | 26 | 52 | | 0.500 | 0.500 | 0.016 | ----------------------------------------|-----------|-----------|-----------| pl 2920 | 72 | 72 | 144 | | 0.500 | 0.500 | 0.045 | ----------------------------------------|-----------|-----------|-----------| pl 365 | 34 | 34 | 68 | | 0.500 | 0.500 | 0.021 | ----------------------------------------|-----------|-----------|-----------| pl 730 | 40 | 40 | 80 | | 0.500 | 0.500 | 0.025 | ----------------------------------------|-----------|-----------|-----------| pl 90 | 31 | 31 | 62 | | 0.500 | 0.500 | 0.019 | ----------------------------------------|-----------|-----------|-----------| Column Total | 1602 | 1602 | 3204 | ----------------------------------------|-----------|-----------|-----------|
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Subset Polish Experienced Accounts

Within Polish, identify 300 accounts (150 blocks) to include and drop all of the others.

Note: since the previous randomization included a larger number of more experienced accounts, we're prioritizing accounts from experience groups 90, 180, 365, 730, and 1460 (all except 2920).
## SHOW PREVIOUS THANKS BY EXPERIENCE GROUP:
recipient.df.final$count.var <- 1
print("Number of Accounts for each experience level among Polish Participants")
print(aggregate(subset(recipient.df.final, lang="pl")[c("count.var")], FUN=sum,
                by = list(subset(recipient.df.final, lang="pl")$prev_experience)))
cat("\n")

print(paste("Total number of rows: ", nrow(recipient.df.final), sep=""))

recipient.df.final.a <- subset(recipient.df.final, !(lang=="pl" & prev_experience==2920))
recipient.df.final.a$initial.block.id <- recipient.df.final.a$randomization_block_id

print(paste("Total number of rows once we subset Polish:", nrow(recipient.df.final.a)))
[1] "Total number of rows: 3204" [1] "Total number of rows once we subset Polish: 3060"
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Offset block IDs to be unique

Observe the block IDs from the previous randomizations and ensure that these ones are unique and larger.
## LOAD PREVIOUS RANDOMIZATIONS
prev_randomization_filename <- "thanks-recipient-randomizations-20190729.csv"
prev.randomization.df <- read.csv(file.path(data.path, "randomization_output", prev_randomization_filename))
print(paste("Max Block ID: ", max(prev.randomization.df$randomization_block_id)))

prev.max.block.id <- max(prev.randomization.df$randomization_block_id)
prev.max.block.id <- 4221

recipient.df.final.a$randomization_block_id <- recipient.df.final.a$initial.block.id + prev.max.block.id
summary(recipient.df.final.a$randomization_block_id)
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
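For readers replicating this step outside R: the uniqueness guarantee is simply an offset by the previous maximum block ID. A minimal pandas sketch of the same idea (illustrative only; the file and column names mirror the R code above and are assumptions here):

```python
import pandas as pd

# Hypothetical file names mirroring the R notebook above.
prev = pd.read_csv("thanks-recipient-randomizations-20190729.csv")
new = pd.read_csv("new-recipient-randomizations.csv")

# Offset the new block IDs by the previous maximum so the two sets never collide.
prev_max = prev["randomization_block_id"].max()
new["randomization_block_id"] = new["initial_block_id"] + prev_max

# Sanity check: every new block ID is strictly larger than any previous one
# (holds as long as the initial block IDs start at 1 or above).
assert new["randomization_block_id"].min() > prev_max
```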
Sort by block ID
recipient.df.final.a <- recipient.df.final.a[order(recipient.df.final.a$randomization_block_id),] print("Newcomers") summary(subset(recipient.df.final.a, newcomer==1)$lang) print("Experienced") summary(subset(recipient.df.final.a, newcomer==0)$lang)
[1] "Newcomers"
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Output and Archive Randomizations
randomization.filename <- paste("thanks-recipient-randomizations-", format(Sys.Date(), format="%Y%m%d"), ".csv", sep="") write.csv(recipient.df.final.a, file = file.path(data.path, "randomization_output", randomization.filename)) colnames(recipient.df.final.a)
_____no_output_____
MIT
randomization/thanks-recipient-study-2019/generate-wikipedia-thanks-recipient-randomizations-final-10.29.2019.R.ipynb
mitmedialab/CivilServant-Wikipedia-Analysis
Get marriages
filepath = '/Volumes/backup_128G/z_repository/TBIO_data/RequestsFromTana/20190515' filename = 'marriages.tsv' read_filename = '{0}/{1}'.format(filepath, filename) marriageDf = pd.read_csv(read_filename, delimiter='\t') marriageDf.fillna('', inplace=True) marriageDf.shape, marriageDf.head()
_____no_output_____
MIT
tbio/marriages.ipynb
VincentCheng34/StudyOnPython
To unique spouses
startId = 10000 personIds = {} marriageDic = {} for idx in range(0, len(marriageDf)): row = marriageDf.loc[idx] man = str(row['?manVal']) wife = str(row['?wifeVal']) woman = str(row['?womanVal']) husband = str(row['?husbandVal']) if man != '' and man not in personIds: personIds[man] = startId startId += 1 if wife != '' and wife not in personIds: personIds[wife] = startId startId += 1 if woman != '' and woman not in personIds: personIds[woman] = startId startId += 1 if husband != '' and husband not in personIds: personIds[husband] = startId startId += 1 if man not in marriageDic: marriageDic[man] = [wife] elif wife not in marriageDic[man]: marriageDic[man].append(wife) # else: # print("man WRONG:", man, wife) if wife not in marriageDic: marriageDic[wife] = [man] elif man not in marriageDic[wife]: marriageDic[wife].append(man) # else: # print("wife WRONG:", wife, man) if woman not in marriageDic: marriageDic[woman] = [husband] elif husband not in marriageDic[woman]: marriageDic[woman].append(husband) # else: # print("woman WRONG:", woman, husband) if husband not in marriageDic: marriageDic[husband] = [woman] elif woman not in marriageDic[husband]: marriageDic[husband].append(woman) # else: # print("husband WRONG:", husband, woman) # marriageDic len(personIds) personDf = pd.DataFrame(personIds, index=['ID']).T personDf.head() write_nodes_to = '{0}/{1}'.format(filepath, 'nodes_person_20190516_v2.xlsx') personDf.to_excel(write_nodes_to)
_____no_output_____
MIT
tbio/marriages.ipynb
VincentCheng34/StudyOnPython
Read person-family map table
familymembers = 'Familymembers.xlsx' read_familymembers = '{0}/{1}'.format(filepath, familymembers) fmDf = pd.read_excel(read_familymembers) fmDf.shape, fmDf.head() startId = 20000 familyIds = {} fmDic = {} for idx in range(0, len(fmDf)): row = fmDf.loc[idx] person = str(row['personStr']) family = str(row['familyStr']) if family not in familyIds: familyIds[family] = startId startId += 1 if person not in fmDic: fmDic[person] = family elif family != fmDic[person]: print("Dup:", person, family, fmDic[person]) # fmDic len(familyIds) familyDf = pd.DataFrame(familyIds, index=['ID']).T familyDf.head() write_nodes_to = '{0}/{1}'.format(filepath, 'nodes_family_20190516_v2.xlsx') familyDf.to_excel(write_nodes_to)
_____no_output_____
MIT
tbio/marriages.ipynb
VincentCheng34/StudyOnPython
Results: `Source (Family)` | `Target (Family)` | `Type (Undirected)` | `Person/Source` | `Person/Target`
def getFamilyName(INperson): if INperson not in fmDic: # print(INperson, " Not found!") return '' return fmDic[INperson] def getPersonId(INperson): if INperson not in personIds: return 0 return personIds[INperson] def getFamilyId(INfamily): if INfamily not in familyIds: return 0 return familyIds[INfamily] resList = [] for sPerson in marriageDic: spouses = marriageDic[sPerson] for tPerson in spouses: sFamily = getFamilyName(sPerson) tFamily = getFamilyName(tPerson) if sFamily == '' or tFamily == '': # print(fPerson, fFamily, sPerson, sFamily) continue sPersonId = getPersonId(sPerson) tPersonId = getPersonId(tPerson) sFamilyId = getFamilyId(sFamily) tFamilyId = getFamilyId(tFamily) resList.append([sFamilyId, tFamilyId, 'Undirected', sPersonId, tPersonId]) print(len(resList)) resDf = pd.DataFrame(resList, columns=['SourceFamily', 'TargetFamily', 'Type', 'SourcePerson', 'TargetPerson']) resDf.drop_duplicates(keep='first', inplace=True) resDf.sort_values(by=['SourceFamily', 'TargetFamily', 'SourcePerson', 'TargetPerson'], inplace=True) resDf.head() print(len(resDf)) write_file_to = '{0}/{1}'.format(filepath, 'marriages_20190516_v2.xlsx') resDf.to_excel(write_file_to, index=False)
_____no_output_____
MIT
tbio/marriages.ipynb
VincentCheng34/StudyOnPython
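One caveat with the edge list above: because `marriageDic` stores both directions of each marriage, the loop emits both (A, B) and (B, A) rows, and `drop_duplicates` does not merge them. If a single row per couple is wanted, the pairs can be canonicalized first; the helper below is a hypothetical sketch, not part of the original notebook:

```python
import pandas as pd

def canonicalize_undirected(df):
    # Order each edge by person ID so (A, B) and (B, A) become the same row;
    # the family columns are swapped together with their owners.
    out = df.copy()
    swap = out["SourcePerson"] > out["TargetPerson"]
    cols_a = ["SourceFamily", "SourcePerson"]
    cols_b = ["TargetFamily", "TargetPerson"]
    out.loc[swap, cols_a + cols_b] = out.loc[swap, cols_b + cols_a].values
    return out.drop_duplicates()

# Usage with the edge list built above:
# resDf = canonicalize_undirected(resDf)
```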
LeNet Lab Solution![LeNet Architecture](lenet.png)Source: Yann LeCun. Load Data: Load the MNIST data, which comes pre-loaded with TensorFlow. You do not need to modify this section.
from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", reshape=False) X_train, y_train = mnist.train.images, mnist.train.labels X_validation, y_validation = mnist.validation.images, mnist.validation.labels X_test, y_test = mnist.test.images, mnist.test.labels assert(len(X_train) == len(y_train)) assert(len(X_validation) == len(y_validation)) assert(len(X_test) == len(y_test)) print() print("Image Shape: {}".format(X_train[0].shape)) print() print("Training Set: {} samples".format(len(X_train))) print("Validation Set: {} samples".format(len(X_validation))) print("Test Set: {} samples".format(len(X_test)))
G:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).You do not need to modify this section.
import numpy as np # Pad images with 0s X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant') X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant') X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant') print("Updated Image Shape: {}".format(X_train[0].shape))
Updated Image Shape: (36, 36, 1)
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Visualize Data: View a sample from the dataset. You do not need to modify this section.
import random import numpy as np import matplotlib.pyplot as plt %matplotlib inline index = random.randint(0, len(X_train)) image = X_train[index].squeeze() plt.figure(figsize=(1,1)) plt.imshow(image, cmap="gray") print(y_train[index])
(32, 32) 4
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Preprocess Data: Shuffle the training data. You do not need to modify this section.
from sklearn.utils import shuffle X_train, y_train = shuffle(X_train, y_train)
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Setup TensorFlow: The `EPOCHS` and `BATCH_SIZE` values affect the training speed and model accuracy. You do not need to modify this section.
import tensorflow as tf EPOCHS = 10 BATCH_SIZE = 128
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
SOLUTION: Implement LeNet-5: Implement the [LeNet-5](http://yann.lecun.com/exdb/lenet/) neural network architecture. This is the only cell you need to edit. Input: The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case. Architecture: **Layer 1: Convolutional.** The output shape should be 28x28x6.**Activation.** Your choice of activation function.**Pooling.** The output shape should be 14x14x6.**Layer 2: Convolutional.** The output shape should be 10x10x16.**Activation.** Your choice of activation function.**Pooling.** The output shape should be 5x5x16.**Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using `tf.contrib.layers.flatten`, which is already imported for you.**Layer 3: Fully Connected.** This should have 120 outputs.**Activation.** Your choice of activation function.**Layer 4: Fully Connected.** This should have 84 outputs.**Activation.** Your choice of activation function.**Layer 5: Fully Connected (Logits).** This should have 10 outputs. Output: Return the result of the 2nd fully connected layer.
from tensorflow.contrib.layers import flatten def LeNet(x): # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer mu = 0 sigma = 0.1 # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6. conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma)) conv1_b = tf.Variable(tf.zeros(6)) conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b # SOLUTION: Activation. conv1 = tf.nn.relu(conv1) # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6. conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Layer 2: Convolutional. Output = 10x10x16. conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma)) conv2_b = tf.Variable(tf.zeros(16)) conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b # SOLUTION: Activation. conv2 = tf.nn.relu(conv2) # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16. conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Flatten. Input = 5x5x16. Output = 400. fc0 = flatten(conv2) # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120. fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma)) fc1_b = tf.Variable(tf.zeros(120)) fc1 = tf.matmul(fc0, fc1_W) + fc1_b # SOLUTION: Activation. fc1 = tf.nn.relu(fc1) # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84. fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma)) fc2_b = tf.Variable(tf.zeros(84)) fc2 = tf.matmul(fc1, fc2_W) + fc2_b # SOLUTION: Activation. fc2 = tf.nn.relu(fc2) # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10. fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma)) fc3_b = tf.Variable(tf.zeros(10)) logits = tf.matmul(fc2, fc3_W) + fc3_b return logits
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
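The layer sizes listed above all follow from the output-size rule for 'VALID' (unpadded) convolutions and pooling, out = floor((in − kernel)/stride) + 1. A small side calculation (not part of the original lab) that reproduces the 32 → 28 → 14 → 10 → 5 chain and the 400-unit flatten:

```python
def valid_out(size, kernel, stride=1):
    # Output size of a 'VALID' (no padding) convolution or pooling layer.
    return (size - kernel) // stride + 1

s = 32
s = valid_out(s, 5)     # conv1: 5x5 kernel, stride 1 -> 28
s = valid_out(s, 2, 2)  # max pool: 2x2, stride 2     -> 14
s = valid_out(s, 5)     # conv2: 5x5 kernel, stride 1 -> 10
s = valid_out(s, 2, 2)  # max pool: 2x2, stride 2     -> 5
print(s, s * s * 16)    # 5 and 400 = 5*5*16 flattened features
```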
Features and Labels: Train LeNet to classify [MNIST](http://yann.lecun.com/exdb/mnist/) data. `x` is a placeholder for a batch of input images. `y` is a placeholder for a batch of output labels. You do not need to modify this section.
x = tf.placeholder(tf.float32, (None, 32, 32, 1)) y = tf.placeholder(tf.int32, (None)) one_hot_y = tf.one_hot(y, 10)
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Training Pipeline: Create a training pipeline that uses the model to classify MNIST data. You do not need to modify this section.
rate = 0.001 logits = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation)
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Model Evaluation: Evaluate the loss and accuracy of the model for a given dataset. You do not need to modify this section.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Train the Model: Run the training data through the training pipeline to train the model. Before each epoch, shuffle the training set. After each epoch, measure the loss and accuracy of the validation set. Save the model after training. You do not need to modify this section.
with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train) print("Training...") print() for i in range(EPOCHS): X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) validation_accuracy = evaluate(X_validation, y_validation) print("EPOCH {} ...".format(i+1)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() saver.save(sess, './lenet') print("Model saved")
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Evaluate the Model: Once you are completely satisfied with your model, evaluate the performance of the model on the test set. Be sure to only do this once! If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data. You do not need to modify this section.
with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(X_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy))
_____no_output_____
MIT
LeNet-Lab-Solution.ipynb
mdeopujari/CarND-Traffic-Sign-Classifier-Project
Get data for only hepatocytes and rerun harmony+umap
# Reprocess t cell data for ds in DS_LIST: print(ds) ind_select = dic_data_raw[ds].obs['cell_ontology_class']=='hepatocyte' adata = dic_data_raw[ds][ind_select,:].copy() sc.pp.filter_cells(adata, min_genes=250) sc.pp.filter_genes(adata, min_cells=50) adata.obs['batch_harmony'] = adata.obs['mouse.id'] adata.obs['batch_harmony'] = adata.obs['batch_harmony'].astype('category') sc.pp.highly_variable_genes(adata, subset = False, min_disp=.5, min_mean=.0125, max_mean=10, n_bins=20, n_top_genes=None) sc.pp.scale(adata, max_value=10, zero_center=False) sc.pp.pca(adata, n_comps=50, use_highly_variable=True, svd_solver='arpack') sc.external.pp.harmony_integrate(adata, key='batch_harmony', max_iter_harmony=20) sc.pp.neighbors(adata, n_neighbors=50, n_pcs=20, use_rep="X_pca_harmony") # sc.pp.neighbors(adata, n_neighbors=50, n_pcs=20, use_rep="X_pca") sc.tl.leiden(adata, resolution=0.7) sc.tl.umap(adata) sc.pl.umap(adata, color='cell_ontology_class') sc.pl.umap(adata, color='leiden') sc.pl.umap(adata, color=['age', 'sex', 'mouse.id', 'n_genes', 'subtissue']) adata.write('/n/holystore01/LABS/price_lab/Users/mjzhang/scTRS_data/single_cell_data/tms_proc/' 'hep.%s.h5ad'%ds) adata = read_h5ad('/n/holystore01/LABS/price_lab/Users/mjzhang/scTRS_data/single_cell_data/tms_proc/hep.droplet.h5ad') # dic_cluster = {'0':'4', '1':'3','2':'1','3':'5','4':'0','5':'2'} # adata.obs['leiden_old'] = adata.obs['leiden'].values # adata.obs['leiden'] = [dic_cluster[x] for x in adata.obs['leiden_old']] adata.obs = adata.obs.join(dic_score['droplet']) sc.pl.umap(adata, color='leiden') sc.pl.umap(adata, color='UKB_460K.biochemistry_LDLdirect.norm_score') # sc.pl.umap(adata, color=['Pecam1', 'Nrp1', 'Kdr', 'Oit3']) # sc.pl.umap(adata, color=['Clec4f', 'Cd68', 'Irf7']) # sc.pl.umap(adata, color=['Alb', 'Ttr', 'Apoa1', 'Serpina1c']) # adata.write('/n/holystore01/LABS/price_lab/Users/mjzhang/scTRS_data/single_cell_data/tms_proc/hep.facs_annot.h5ad') temp_df = dic_data_raw['facs'][dic_data_raw['facs'].obs['cell_ontology_class']=='hepatocyte']\ .obs.groupby(['subtissue', 'mouse.id']).agg({'cell':len}) temp_df.loc[~temp_df['cell'].isna()]
_____no_output_____
MIT
experiments/job.case_hepatocyte/s1_reprocess_tms_droplet_hep.ipynb
martinjzhang/scDRS
Dot Placeholder: `pull()` and the dot placeholder (`.$`) behave in the same way.
library(tidyverse) library(dslabs) data(murders) murders <- murders %>% mutate(murder_rate = (total/population) * 100000) summarize(murders, mean = mean(murder_rate)) us_murder_rate <- murders %>% summarize(rate = sum(total)/ sum(population) *100000) r <- us_murder_rate %>% .$rate class(r) r class(us_murder_rate$rate) us_murder_rate <- murders %>% summarize(rate = sum(total) / sum(population) * 100000) %>% .$rate us_murder_rate heights %>% group_by(sex) %>% summarize(mean = mean(height), sd = sd(height)) murders %>% mutate(murder_rate = total/population * 100000) %>% group_by(region) %>% summarize(mean = mean(murder_rate), sd = sd(murder_rate)) murders %>% mutate(murder_rate = total/population * 100000) %>% arrange(desc(murder_rate)) %>% top_n(10, murder_rate) murders %>% arrange(population) %>% head() murders %>% arrange(desc(population)) %>% head() murders %>% arrange(region, population) %>% head() murders %>% top_n(10, murder_rate)
_____no_output_____
MIT
HAR_DM/Practice Work/Summarizing with dplyr.ipynb
ashudva/HAR
Gaussian Mixture Models (GMM)KDE centers each bin (or kernel rather) at each point. In a [**mixture model**](https://en.wikipedia.org/wiki/Mixture_model) we don't use a kernel for each data point, but rather we fit for the *locations of the kernels*--in addition to the width. So a mixture model is sort of a hybrid between an $N$-D histogram and KDE. Using lots of kernels (maybe even more than the BIC score suggests) may make sense if you just want to provide an accurate description of the data (as in density estimation). Using fewer kernels makes mixture models more like clustering, where the suggestion is still to use many kernels in order to divide the sample into real clusters and "background". Gaussians are the most commonly used components for mixture models. So, the pdf is modeled by a sum of Gaussians:$$p(x) = \sum_{k=1}^N \alpha_k \mathscr{N}(x|\mu_k,\Sigma_k),$$where $\alpha_k$ is the "mixing coefficient" with $0\le \alpha_k \le 1$ and $\sum_{k=1}^N \alpha_k = 1$.We can solve for the parameters using maximum likelihood analysis as we have discussed previously.However, this can be complicated in multiple dimensions, requiring the use of [**Expectation Maximization (EM)**](https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm) methods. Expectation Maximization (ultra simplified version)(Note: all explanations of EM are far more complicated than seems necessary for our purposes, so here is my overly simplified explanation.)This may make more sense in terms of our earlier Bayesian analyses if we write this as $$p(z=k) = \alpha_k,$$and$$p(x|z=k) = \mathscr{N}(x|\mu_k,\Sigma_k),$$where $z$ is a "hidden" variable related to which "component" each point is assigned to.In the Expectation step, we hold $\mu_k, \Sigma_k$, and $\alpha_k$ fixed and compute the probability that each $x_i$ belongs to component $k$. In the Maximization step, we hold the probability of the components fixed and maximize $\mu_k, \Sigma_k,$ and $\alpha_k$. Note that $\alpha$ is the relative weight of each Gaussian component and not the probability of each point belonging to a specific component. We can use the following animation to illustrate the process. We start with a 2-component GMM, where the initial components can be randomly determined.The points that are closest to the centroid of a component will be more probable under that distribution in the "E" step and will pull the centroid towards them in the "M" step. Iteration between the "E" and "M" step eventually leads to convergence.In this particular example, a 3-component GMM better describes the data and similarly converges. Note that the process is not that sensitive to how the components are first initialized. We pretty much get the same result in the end.
from IPython.display import YouTubeVideo YouTubeVideo("B36fzChfyGU")
_____no_output_____
MIT
MixtureModel.ipynb
gtrichards/PHYS_T480
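To make the E and M steps concrete, here is a minimal 1-D Gaussian-mixture EM sketch in plain NumPy. It is an illustration only (not the astroML/scikit-learn implementation), with deliberately naive initialization and no convergence test or safeguards against collapsing components:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_gmm_1d(x, K=3, n_iter=200, seed=0):
    rng = np.random.RandomState(seed)
    mu = rng.choice(x, K)                  # crude initialization: K random data points
    sigma = np.full(K, x.std())
    alpha = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E step: responsibility gamma_ik = alpha_k N(x_i|mu_k,sigma_k) / sum_j alpha_j N(x_i|mu_j,sigma_j)
        r = alpha * normal_pdf(x[:, None], mu, sigma)   # shape (N, K)
        r /= r.sum(axis=1, keepdims=True)
        # M step: re-estimate the mixing coefficients, means, and widths
        Nk = r.sum(axis=0)
        alpha = Nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return alpha, mu, sigma

x = np.concatenate([np.random.normal(-2, 0.5, 300), np.random.normal(3, 1.0, 700)])
print(em_gmm_1d(x, K=2))
```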
A typical call to the [Gaussian Mixture Model](http://scikit-learn.org/stable/modules/mixture.html) algorithm looks like this:
# Execute this cell import numpy as np from sklearn.mixture import GMM X = np.random.normal(size=(1000,2)) #1000 points in 2D gmm = GMM(3) #three components gmm.fit(X) log_dens = gmm.score(X) BIC = gmm.bic(X)
_____no_output_____
MIT
MixtureModel.ipynb
gtrichards/PHYS_T480
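A note for readers running a recent scikit-learn: the `GMM` class used here comes from an older release and has since been replaced by `sklearn.mixture.GaussianMixture`, whose interface differs slightly (for example, `score_samples` returns only the per-sample log-likelihood). A rough modern equivalent of the call above would be:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.normal(size=(1000, 2))    # 1000 points in 2D
gmm = GaussianMixture(n_components=3)   # three components
gmm.fit(X)
log_dens = gmm.score_samples(X)         # per-sample log density
BIC = gmm.bic(X)
```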
Let's start with the 1-D example given in Ivezic, Figure 6.8, which compares a Mixture Model to KDE.[Note that the version at astroML.org has some bugs!]
# Execute this cell # Ivezic, Figure 6.8 # Author: Jake VanderPlas # License: BSD # The figure produced by this code is published in the textbook # "Statistics, Data Mining, and Machine Learning in Astronomy" (2013) # For more information, see http://astroML.github.com # To report a bug or issue, use the following forum: # https://groups.google.com/forum/#!forum/astroml-general %matplotlib inline import numpy as np from matplotlib import pyplot as plt from scipy import stats from astroML.plotting import hist from sklearn.mixture import GMM from sklearn.neighbors import KernelDensity #------------------------------------------------------------ # Generate our data: a mix of several Cauchy distributions # this is the same data used in the Bayesian Blocks figure np.random.seed(0) N = 10000 mu_gamma_f = [(5, 1.0, 0.1), (7, 0.5, 0.5), (9, 0.1, 0.1), (12, 0.5, 0.2), (14, 1.0, 0.1)] true_pdf = lambda x: sum([f * stats.cauchy(mu, gamma).pdf(x) for (mu, gamma, f) in mu_gamma_f]) x = np.concatenate([stats.cauchy(mu, gamma).rvs(int(f * N)) for (mu, gamma, f) in mu_gamma_f]) np.random.shuffle(x) x = x[x > -10] x = x[x < 30] #------------------------------------------------------------ # plot the results fig = plt.figure(figsize=(10, 10)) fig.subplots_adjust(bottom=0.08, top=0.95, right=0.95, hspace=0.1) N_values = (500, 5000) subplots = (211, 212) k_values = (10, 100) for N, k, subplot in zip(N_values, k_values, subplots): ax = fig.add_subplot(subplot) xN = x[:N] t = np.linspace(-10, 30, 1000) kde = KernelDensity(0.1, kernel='gaussian') kde.fit(xN[:, None]) dens_kde = np.exp(kde.score_samples(t[:, None])) # Compute density via Gaussian Mixtures # we'll try several numbers of clusters n_components = np.arange(3, 16) gmms = [GMM(n_components=n).fit(xN[:,None]) for n in n_components] BICs = [gmm.bic(xN[:,None]) for gmm in gmms] i_min = np.argmin(BICs) t = np.linspace(-10, 30, 1000) logprob, responsibilities = gmms[i_min].score_samples(t[:,None]) # plot the results ax.plot(t, true_pdf(t), ':', color='black', zorder=3, label="Generating Distribution") ax.plot(xN, -0.005 * np.ones(len(xN)), '|k', lw=1.5) ax.plot(t, np.exp(logprob), '-', color='gray', label="Mixture Model\n(%i components)" % n_components[i_min]) ax.plot(t, dens_kde, '-', color='black', zorder=3, label="Kernel Density $(h=0.1)$") # label the plot ax.text(0.02, 0.95, "%i points" % N, ha='left', va='top', transform=ax.transAxes) ax.set_ylabel('$p(x)$') ax.legend(loc='upper right') if subplot == 212: ax.set_xlabel('$x$') ax.set_xlim(0, 20) ax.set_ylim(-0.01, 0.4001) plt.show()
_____no_output_____
MIT
MixtureModel.ipynb
gtrichards/PHYS_T480
Hmm, that doesn't look so great for the 5000 point distribution. Plot the BIC values and see if anything looks awry. What do the individual components look like? Make a plot of those. Careful with the shapes of the arrays! Can you figure out something that you can do to improve the results? Ivezic, Figure 6.6 shows a 2-D example. In the first panel, we have the raw data. In the second panel we have a density plot (essentially a 2-D histogram). We then try to represent the data with a series of Gaussians. We allow up to 14 Gaussians and use the AIC/BIC to determine the best choice for this number. This is shown in the third panel. Finally, the fourth panel shows the chosen Gaussians with their centroids and 1-$\sigma$ contours.In this case 7 components are required for the best fit. While it looks like we could do a pretty good job with just 2 components, there does appear to be some "background" that is a high enough level to justify further components.
# Execute this cell # Ivezic, Figure 6.6 # Author: Jake VanderPlas # License: BSD # The figure produced by this code is published in the textbook # "Statistics, Data Mining, and Machine Learning in Astronomy" (2013) # For more information, see http://astroML.github.com # To report a bug or issue, use the following forum: # https://groups.google.com/forum/#!forum/astroml-general %matplotlib inline import numpy as np from matplotlib import pyplot as plt from scipy.stats import norm from sklearn.mixture import GMM from astroML.datasets import fetch_sdss_sspp from astroML.decorators import pickle_results from astroML.plotting.tools import draw_ellipse #------------------------------------------------------------ # Get the Segue Stellar Parameters Pipeline data data = fetch_sdss_sspp(cleaned=True) # Note how X was created from two columns of data X = np.vstack([data['FeH'], data['alphFe']]).T # truncate dataset for speed X = X[::5] #------------------------------------------------------------ # Compute GMM models & AIC/BIC N = np.arange(1, 14) #@pickle_results("GMM_metallicity.pkl") def compute_GMM(N, covariance_type='full', n_iter=1000): models = [None for n in N] for i in range(len(N)): #print N[i] models[i] = GMM(n_components=N[i], n_iter=n_iter, covariance_type=covariance_type) models[i].fit(X) return models models = compute_GMM(N) AIC = [m.aic(X) for m in models] BIC = [m.bic(X) for m in models] i_best = np.argmin(BIC) gmm_best = models[i_best] print "best fit converged:", gmm_best.converged_ print "BIC: n_components = %i" % N[i_best] #------------------------------------------------------------ # compute 2D density FeH_bins = 51 alphFe_bins = 51 H, FeH_bins, alphFe_bins = np.histogram2d(data['FeH'], data['alphFe'], (FeH_bins, alphFe_bins)) Xgrid = np.array(map(np.ravel, np.meshgrid(0.5 * (FeH_bins[:-1] + FeH_bins[1:]), 0.5 * (alphFe_bins[:-1] + alphFe_bins[1:])))).T log_dens = gmm_best.score(Xgrid).reshape((51, 51)) #------------------------------------------------------------ # Plot the results fig = plt.figure(figsize=(12, 5)) fig.subplots_adjust(wspace=0.45, bottom=0.25, top=0.9, left=0.1, right=0.97) # plot data ax = fig.add_subplot(141) ax.scatter(data['FeH'][::10],data['alphFe'][::10],marker=".",color='k',edgecolors='None') ax.set_xlabel(r'$\rm [Fe/H]$') ax.set_ylabel(r'$\rm [\alpha/Fe]$') ax.xaxis.set_major_locator(plt.MultipleLocator(0.3)) ax.set_xlim(-1.101, 0.101) ax.text(0.93, 0.93, "Input", va='top', ha='right', transform=ax.transAxes) # plot density ax = fig.add_subplot(142) ax.imshow(H.T, origin='lower', interpolation='nearest', aspect='auto', extent=[FeH_bins[0], FeH_bins[-1], alphFe_bins[0], alphFe_bins[-1]], cmap=plt.cm.binary) ax.set_xlabel(r'$\rm [Fe/H]$') ax.set_ylabel(r'$\rm [\alpha/Fe]$') ax.xaxis.set_major_locator(plt.MultipleLocator(0.3)) ax.set_xlim(-1.101, 0.101) ax.text(0.93, 0.93, "Density", va='top', ha='right', transform=ax.transAxes) # plot AIC/BIC ax = fig.add_subplot(143) ax.plot(N, AIC, '-k', label='AIC') ax.plot(N, BIC, ':k', label='BIC') ax.legend(loc=1) ax.set_xlabel('N components') plt.setp(ax.get_yticklabels(), fontsize=7) # plot best configurations for AIC and BIC ax = fig.add_subplot(144) ax.imshow(np.exp(log_dens), origin='lower', interpolation='nearest', aspect='auto', extent=[FeH_bins[0], FeH_bins[-1], alphFe_bins[0], alphFe_bins[-1]], cmap=plt.cm.binary) ax.scatter(gmm_best.means_[:, 0], gmm_best.means_[:, 1], c='w') for mu, C, w in zip(gmm_best.means_, gmm_best.covars_, gmm_best.weights_): draw_ellipse(mu, C, scales=[1], ax=ax, fc='none', 
ec='k') ax.text(0.93, 0.93, "Converged", va='top', ha='right', transform=ax.transAxes) ax.set_xlim(-1.101, 0.101) ax.set_ylim(alphFe_bins[0], alphFe_bins[-1]) ax.xaxis.set_major_locator(plt.MultipleLocator(0.3)) ax.set_xlabel(r'$\rm [Fe/H]$') ax.set_ylabel(r'$\rm [\alpha/Fe]$') plt.show()
_____no_output_____
MIT
MixtureModel.ipynb
gtrichards/PHYS_T480
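For the exercise posed a few cells up (plotting the BIC values and the individual mixture components of the 1-D example), a possible sketch is given below. It assumes the variables `n_components`, `BICs`, `i_min`, `gmms`, `t`, and `xN` from the Figure 6.8 cell are still in scope, and uses the old `GMM` attribute names (`weights_`, `means_`, `covars_`) that this notebook relies on:

```python
import numpy as np
from scipy import stats
from matplotlib import pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Left panel: BIC as a function of the number of components
ax1.plot(n_components, BICs, 'o-k')
ax1.axvline(n_components[i_min], ls=':', color='gray')
ax1.set_xlabel('number of components')
ax1.set_ylabel('BIC')

# Right panel: the individual Gaussian components of the best-fit mixture
best = gmms[i_min]
weights = best.weights_
means = np.ravel(best.means_)
sigmas = np.sqrt(np.ravel(best.covars_))   # assumes 1-D (diagonal) covariances
for w, m, s in zip(weights, means, sigmas):
    ax2.plot(t, w * stats.norm(m, s).pdf(t), '-', color='gray')
ax2.plot(xN, -0.005 * np.ones(len(xN)), '|k')
ax2.set_xlim(0, 20)
ax2.set_xlabel('$x$')
plt.show()
```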
That said, I'd say that there are *too* many components here. So, I'd be inclined to explore this a bit further if it were my data.Lastly, let's look at a 2-D case where we are using GMM more to characterize the data than to find clusters.
# Execute this cell # Ivezic, Figure 6.7 # Author: Jake VanderPlas # License: BSD # The figure produced by this code is published in the textbook # "Statistics, Data Mining, and Machine Learning in Astronomy" (2013) # For more information, see http://astroML.github.com # To report a bug or issue, use the following forum: # https://groups.google.com/forum/#!forum/astroml-general import numpy as np from matplotlib import pyplot as plt from sklearn.mixture import GMM from astroML.datasets import fetch_great_wall from astroML.decorators import pickle_results #------------------------------------------------------------ # load great wall data X = fetch_great_wall() #------------------------------------------------------------ # Create a function which will save the results to a pickle file # for large number of clusters, computation will take a long time! #@pickle_results('great_wall_GMM.pkl') def compute_GMM(n_clusters, n_iter=1000, min_covar=3, covariance_type='full'): clf = GMM(n_clusters, covariance_type=covariance_type, n_iter=n_iter, min_covar=min_covar) clf.fit(X) print "converged:", clf.converged_ return clf #------------------------------------------------------------ # Compute a grid on which to evaluate the result Nx = 100 Ny = 250 xmin, xmax = (-375, -175) ymin, ymax = (-300, 200) Xgrid = np.vstack(map(np.ravel, np.meshgrid(np.linspace(xmin, xmax, Nx), np.linspace(ymin, ymax, Ny)))).T #------------------------------------------------------------ # Compute the results # # we'll use 100 clusters. In practice, one should cross-validate # with AIC and BIC to settle on the correct number of clusters. clf = compute_GMM(n_clusters=100) log_dens = clf.score(Xgrid).reshape(Ny, Nx) #------------------------------------------------------------ # Plot the results fig = plt.figure(figsize=(10, 5)) fig.subplots_adjust(hspace=0, left=0.08, right=0.95, bottom=0.13, top=0.9) ax = fig.add_subplot(211, aspect='equal') ax.scatter(X[:, 1], X[:, 0], s=1, lw=0, c='k') ax.set_xlim(ymin, ymax) ax.set_ylim(xmin, xmax) ax.xaxis.set_major_formatter(plt.NullFormatter()) plt.ylabel(r'$x\ {\rm (Mpc)}$') ax = fig.add_subplot(212, aspect='equal') ax.imshow(np.exp(log_dens.T), origin='lower', cmap=plt.cm.binary, extent=[ymin, ymax, xmin, xmax]) ax.set_xlabel(r'$y\ {\rm (Mpc)}$') ax.set_ylabel(r'$x\ {\rm (Mpc)}$') plt.show()
_____no_output_____
MIT
MixtureModel.ipynb
gtrichards/PHYS_T480
Initial changes
ds_train = TiggeMRMSDataset( tigge_dir=f'{DATADRIVE}/tigge/32km/', tigge_vars=['total_precipitation'], mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/', rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc', # const_fn='/datadrive/tigge/32km/constants.nc', # const_vars=['orog', 'lsm'], data_period=('2018-01', '2019-01'), first_days=5, scale=False # split='train' ) mean_precip = [] for idx in range(len(ds_train.idxs)): X, y = ds_train[idx] mean_precip.append(y.max()) mean_precip = np.array(mean_precip) plt.hist(mean_precip, bins=100); # plt.yscale('log') cat_bins = np.arange(0, 102, 2, dtype='float') cat_bins = np.append(np.insert(cat_bins, 1, 0.01), np.inf) len(cat_bins) cat_bins X, y = ds_train[600] X.shape, y.shape plt.imshow(y[0]) plt.colorbar(); def to_categorical(y, num_classes=None, dtype='float32'): """Copied from keras source code """ y = np.array(y, dtype='int') input_shape = y.shape if input_shape and input_shape[-1] == 1 and len(input_shape) > 1: input_shape = tuple(input_shape[:-1]) y = y.ravel() if not num_classes: num_classes = np.max(y) + 1 n = y.shape[0] categorical = np.zeros((n, num_classes), dtype=dtype) categorical[np.arange(n), y] = 1 output_shape = input_shape + (num_classes,) categorical = np.reshape(categorical, output_shape) return categorical a = pd.cut(y.reshape(-1), cat_bins, labels=False, include_lowest=True).reshape(y.shape) a.shape plt.imshow(a[0]) plt.colorbar(); plt.hist(a.flat, bins=cat_bins); a = to_categorical(a.squeeze(), num_classes=len(cat_bins)) a = np.rollaxis(a, 2) a.shape
_____no_output_____
MIT
notebooks/stephan_notebooks/10-Categorical.ipynb
raspstephan/nwp-downscale
Changes implemented
cat_bins = np.arange(0, 55, 5, dtype='float') cat_bins = np.append(np.insert(cat_bins, 1, 0.01), np.inf) len(cat_bins) plt.plot(cat_bins) ds_train = TiggeMRMSDataset( tigge_dir=f'{DATADRIVE}/tigge/32km/', tigge_vars=['total_precipitation'], mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/', rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc', # const_fn='/datadrive/tigge/32km/constants.nc', # const_vars=['orog', 'lsm'], data_period=('2018-01', '2018-12'), first_days=5, scale=True, cat_bins=cat_bins # split='train' ) ds_valid = TiggeMRMSDataset( tigge_dir=f'{DATADRIVE}/tigge/32km/', tigge_vars=['total_precipitation'], mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/', rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc', # const_fn='/datadrive/tigge/32km/constants.nc', # const_vars=['orog', 'lsm'], data_period=('2019-01', '2019-12'), first_days=2, scale=True, cat_bins=cat_bins, mins=ds_train.mins, maxs=ds_train.maxs # split='train' ) X, y = ds_train[600] X.shape, y.shape y sampler_train = torch.utils.data.WeightedRandomSampler(ds_train.compute_weights(), len(ds_train), replacement=True) sampler_valid = torch.utils.data.WeightedRandomSampler(ds_valid.compute_weights(), len(ds_valid), replacement=True) dl_train = torch.utils.data.DataLoader(ds_train, batch_size=32, sampler=sampler_train) dl_valid = torch.utils.data.DataLoader(ds_valid, batch_size=32, sampler=sampler_valid) len(dl_train) len(ds_train) X, y = next(iter(dl_train)) fig, axs = plt.subplots(4, 8, figsize=(24, 12)) for x, ax in zip(X.numpy(), axs.flat): ax.imshow(x[0], cmap='gist_ncar_r', vmin=0, vmax=0.5) X.shape, y.shape y.type() class UpsampleBlock(nn.Module): def __init__(self, nf, spectral_norm=False, method='PixelShuffle'): super().__init__() self.conv = nn.Conv2d(nf, nf * 4 if method=='PixelShuffle' else nf, kernel_size=3, stride=1, padding=1) if method == 'PixelShuffle': self.upsample = nn.PixelShuffle(2) elif method == 'bilinear': self.upsample = nn.Upsample(scale_factor=2, mode='bilinear') else: raise NotImplementedError self.activation = nn.LeakyReLU(0.2) if spectral_norm: self.conv = nn.utils.spectral_norm(self.conv) def forward(self, x): out = self.conv(x) out = self.upsample(out) out = self.activation(out) return out class Generator(nn.Module): """Generator with noise vector and spectral normalization """ def __init__(self, nres, nf_in, nf, relu_out=False, use_noise=True, spectral_norm=True, nout=1, softmax_out=False, upsample_method='PixelShuffle'): """ General Generator with different options to use. 
e.g noise, Spectral normalization (SN) """ super().__init__() self.relu_out = relu_out self.softmax_out = softmax_out self.use_noise = use_noise self.spectral_norm = spectral_norm # First convolution if use_noise: self.conv_in = nn.Conv2d(nf_in, nf-1, kernel_size=9, stride=1, padding=4) else: self.conv_in = nn.Conv2d(nf_in, nf, kernel_size=9, stride=1, padding=4) self.activation_in = nn.LeakyReLU(0.2) # Resblocks keeping shape self.resblocks = nn.Sequential(*[ ResidualBlock(nf, spectral_norm=spectral_norm) for i in range(nres) ]) # Resblocks with upscaling self.upblocks = nn.Sequential(*[ UpsampleBlock(nf, spectral_norm=spectral_norm, method=upsample_method) for i in range(3) ]) self.conv_out = nn.Conv2d(nf, nout, kernel_size=9, stride=1, padding=4) if spectral_norm: self.conv_in = nn.utils.spectral_norm(self.conv_in) self.conv_out = nn.utils.spectral_norm(self.conv_out) def forward(self, x): out = self.conv_in(x) out = self.activation_in(out) if self.use_noise: bs, _, h, w = x.shape z = torch.normal(0, 1, size=(bs, 1, h, w), device=device, requires_grad=True) out = torch.cat([out, z], dim=1) skip = out out = self.resblocks(out) out = out + skip out = self.upblocks(out) out = self.conv_out(out) if self.relu_out: out = nn.functional.relu(out) if self.softmax_out: out = nn.functional.softmax(out, dim=1) return out gen = Generator( nres=3, nf_in=1, nf=64, relu_out=False, use_noise=False, spectral_norm=False, nout=len(cat_bins)-1, softmax_out=False, upsample_method='bilinear' ).to(device) count_parameters(gen) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(gen.parameters(), lr=1e-4) trainer = Trainer(gen, optimizer, criterion, dl_train, dl_valid) trainer.fit(10) trainer.plot_losses() preds = nn.functional.softmax(gen(X.to(device)), dim=1).cpu().detach().numpy() preds.shape target = y.cpu().detach().numpy() target.shape np.argmax(target.mean((1, 2))) i=7 plt.imshow(target[i]) plt.colorbar() target[i, 20, 20] plt.imshow(X[i, 0] * ds_train.maxs.tp.values) plt.colorbar(); plt.imshow(np.argmax(preds[i], axis=0)) plt.colorbar() plt.plot(preds[i, :, 100, 100]) plt.axvline(target[i, 100, 100]) cdf = np.cumsum(preds, axis=1) cdf.shape len(cat_bins) plt.plot(cat_bins[:-1], cdf[i, :, 20, 20]) plt.plot(cat_bins[:-1], cdf[i, :, 100, 100]) from scipy.ndimage import gaussian_filter def corr_random2D(size, sigma=5, inflation=0.5): r = np.random.uniform(size=(size, size)) r = gaussian_filter(r, sigma) r = (r - 0.5) * (inflation / r.std()) + 0.5 r = 1/(1 + np.exp(-r)) return r rand = corr_random2D(128, 3) plt.imshow(rand) plt.colorbar(); cat_bins c = cdf[i, :, 100, 100] c = np.insert(c, 0, 0) c len(cat_bins), len(c) p = rand[100, 100] p b = np.digitize(p, c, right=True) - 1 b c[b], c[b+1] w1 = (p - c[b]) / (c[b+1] - c[b]) w2 = (c[b+1] - p) / (c[b+1] - c[b]) w1, w2 v = cat_bins[b] * w1 + cat_bins[b+1] * w2 v def cat2real1D(pdf, q, cat_bins, interpolate=True): c = np.cumsum(pdf) c = np.insert(c, 0, 0) b = np.digitize(q, c, right=True) - 1 if interpolate: w1 = (q - c[b]) / (c[b+1] - c[b]) w2 = (c[b+1] - q) / (c[b+1] - c[b]) else: w1, w2 = 0.5, 0.5 assert w1 >0, 'Weights must be positive' assert w2 >0, 'Weights must be positive' v = cat_bins[b] * w2 + cat_bins[b+1] * w1 return v import pdb def cat2real2D(pdf, q, cat_bins, interpolate=True): nbins, nx, ny = pdf.shape # pdb.set_trace() r = [cat2real1D(a, b, cat_bins, interpolate) for a, b in zip(pdf.reshape(nbins, -1).T, q.reshape(-1))] r = np.array(r).reshape(nx, ny) return r o = cat2real2D(pdf, rand, cat_bins) plt.imshow(o) plt.colorbar(); weights = 
ds_valid.compute_weights() np.argsort(weights)[::-1][:20] X_sample, y_sample = ds_valid.__getitem__(501, no_cat=True) X_sample.shape, y_sample.shape vmin=0 vmax=10 fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5)) img = ax1.imshow(X_sample[0]*ds_valid.maxs.tp.values, cmap='gist_ncar_r', vmin=vmin, vmax=vmax) plt.colorbar(img, ax=ax1, shrink=0.7) img = ax2.imshow(y_sample[0], cmap='gist_ncar_r', vmin=vmin, vmax=vmax) plt.colorbar(img, ax=ax2, shrink=0.7) pdf = nn.functional.softmax(gen(torch.from_numpy(X_sample[None]).to(device))).cpu().detach().numpy()[0] pdf.shape fig, axs = plt.subplots(1, 5, figsize=(20, 4)) for i in range(5): if i ==0: rand = np.ones((128, 128)) * 0.5 else: rand = corr_random2D(128, 3, inflation=0.3) o = cat2real2D(pdf, rand, cat_bins) axs[i].imshow(o, cmap='gist_ncar_r', vmin=vmin, vmax=vmax)
_____no_output_____
MIT
notebooks/stephan_notebooks/10-Categorical.ipynb
raspstephan/nwp-downscale
Overfitting test MSE
ds_train = TiggeMRMSDataset( tigge_dir=f'{DATADRIVE}/tigge/32km/', tigge_vars=['total_precipitation'], mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/', rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc', # const_fn='/datadrive/tigge/32km/constants.nc', # const_vars=['orog', 'lsm'], data_period=('2018-01', '2018-12'), first_days=5, scale=True, # cat_bins=cat_bins # split='train' ) ds_valid = TiggeMRMSDataset( tigge_dir=f'{DATADRIVE}/tigge/32km/', tigge_vars=['total_precipitation'], mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/', rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc', # const_fn='/datadrive/tigge/32km/constants.nc', # const_vars=['orog', 'lsm'], data_period=('2019-01', '2019-12'), first_days=2, scale=True, # cat_bins=cat_bins, mins=ds_train.mins, maxs=ds_train.maxs # split='train' ) X, y = ds_train[600] X.shape, y.shape sampler_train = torch.utils.data.WeightedRandomSampler(ds_train.compute_weights(), len(ds_train)) sampler_valid = torch.utils.data.WeightedRandomSampler(ds_valid.compute_weights(), len(ds_valid)) dl_train = torch.utils.data.DataLoader(ds_train, batch_size=32, sampler=sampler_train) dl_valid = torch.utils.data.DataLoader(ds_valid, batch_size=32, sampler=sampler_valid) len(dl_train) len(ds_train) X, y = next(iter(dl_train)) fig, axs = plt.subplots(4, 8, figsize=(24, 12)) for x, ax in zip(X.numpy(), axs.flat): ax.imshow(x[0], cmap='gist_ncar_r', vmin=0, vmax=0.5) X.shape, y.shape y.type() class UpsampleBlock(nn.Module): def __init__(self, nf, spectral_norm=False, method='PixelShuffle'): super().__init__() self.conv = nn.Conv2d(nf, nf * 4 if method=='PixelShuffle' else nf, kernel_size=3, stride=1, padding=1) if method == 'PixelShuffle': self.upsample = nn.PixelShuffle(2) elif method == 'bilinear': self.upsample = nn.Upsample(scale_factor=2, mode='bilinear') else: raise NotImplementedError self.activation = nn.LeakyReLU(0.2) if spectral_norm: self.conv = nn.utils.spectral_norm(self.conv) def forward(self, x): out = self.conv(x) out = self.upsample(out) out = self.activation(out) return out class Generator(nn.Module): """Generator with noise vector and spectral normalization """ def __init__(self, nres, nf_in, nf, relu_out=False, use_noise=True, spectral_norm=True, nout=1, softmax_out=False, upsample_method='PixelShuffle'): """ General Generator with different options to use. 
e.g noise, Spectral normalization (SN) """ super().__init__() self.relu_out = relu_out self.softmax_out = softmax_out self.use_noise = use_noise self.spectral_norm = spectral_norm # First convolution if use_noise: self.conv_in = nn.Conv2d(nf_in, nf-1, kernel_size=9, stride=1, padding=4) else: self.conv_in = nn.Conv2d(nf_in, nf, kernel_size=9, stride=1, padding=4) self.activation_in = nn.LeakyReLU(0.2) # Resblocks keeping shape self.resblocks = nn.Sequential(*[ ResidualBlock(nf, spectral_norm=spectral_norm) for i in range(nres) ]) # Resblocks with upscaling self.upblocks = nn.Sequential(*[ UpsampleBlock(nf, spectral_norm=spectral_norm, method=upsample_method) for i in range(3) ]) self.conv_out = nn.Conv2d(nf, nout, kernel_size=9, stride=1, padding=4) if spectral_norm: self.conv_in = nn.utils.spectral_norm(self.conv_in) self.conv_out = nn.utils.spectral_norm(self.conv_out) def forward(self, x): out = self.conv_in(x) out = self.activation_in(out) if self.use_noise: bs, _, h, w = x.shape z = torch.normal(0, 1, size=(bs, 1, h, w), device=device, requires_grad=True) out = torch.cat([out, z], dim=1) skip = out out = self.resblocks(out) out = out + skip out = self.upblocks(out) out = self.conv_out(out) if self.relu_out: out = nn.functional.relu(out) if self.softmax_out: out = nn.functional.softmax(out, dim=1) return out gen = Generator( nres=3, nf_in=1, nf=64, relu_out=False, use_noise=False, spectral_norm=False, nout=1, softmax_out=False, upsample_method='PixelShuffle' ).to(device) count_parameters(gen) criterion = nn.MSELoss() optimizer = torch.optim.Adam(gen.parameters(), lr=1e-5) trainer = Trainer(gen, optimizer, criterion, dl_train, dl_valid) trainer.fit(10) trainer.plot_losses() plot_sample(X, y, gen, 13)
_____no_output_____
MIT
notebooks/stephan_notebooks/10-Categorical.ipynb
raspstephan/nwp-downscale
--- Saving your plot to a file (or to an io buffer):- `mplfinance.plot()` allows you to save your plot to a file, or io-buffer, using the `savefig` keyword.- The value of `savefig` may be a `str`, `dict`, or `io.BytesIO` object. - If the value is a `str` then it is assumed to be the file name to which to save the figure/plot. - If the value is an `io.BytesIO` object, then the figure will be saved to the io buffer object. This avoids interaction with disk, and can also be useful when mplfinance is behind a web server (so that requests for an image file can be serviced without going to disk). If the file extension is one of those recognized by `matplotlib.pyplot.savefig()` then the file type will be inferred from the extension, for example: `.pdf`, `.svg`, `.png`, `.jpg` ...
df = pd.read_csv('data/SP500_NOV2019_Hist.csv',index_col=0,parse_dates=True) %%capture ## cell magic function `%%capture` blocks jupyter notebook output, ## which is not needed here since the plot is saved to a file anyway: mpf.plot(df,type='candle',volume=True,savefig='testsave.png')
_____no_output_____
Apache-2.0
examples/savefig.ipynb
fadeawaylove/stock-trade-system
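To illustrate the web-server use case mentioned above, here is a hypothetical minimal Flask endpoint that renders the chart entirely in memory and returns it as a PNG (matplotlib's default savefig format). Flask itself, the route name, and the CSV path are assumptions made for this sketch; they are not part of the mplfinance examples:

```python
import io
import pandas as pd
import mplfinance as mpf
from flask import Flask, send_file

app = Flask(__name__)

@app.route('/chart.png')
def chart():
    # Load the data and plot it straight into an in-memory buffer.
    df = pd.read_csv('data/SP500_NOV2019_Hist.csv', index_col=0, parse_dates=True)
    buf = io.BytesIO()
    mpf.plot(df, type='candle', volume=True, savefig=buf)  # nothing is written to disk
    buf.seek(0)
    return send_file(buf, mimetype='image/png')
```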
--- We can use IPython.display.Image to display the image file here in the notebook:
import IPython.display as IPydisplay %ls -l testsave.png IPydisplay.Image(filename='testsave.png')
-rw-rw-rw- 1 dino dino 24877 Jun 7 17:52 testsave.png
Apache-2.0
examples/savefig.ipynb
fadeawaylove/stock-trade-system
--- We can use io to save the plot as a byte buffer:
%%capture ## cell magic function `%%capture` blocks jupyter notebook output, ## which is not needed here, since the plot is saved to the io-buffer anyway: buf = io.BytesIO() mpf.plot(df,type='candle',volume=True,savefig=buf) buf.seek(0)
_____no_output_____
Apache-2.0
examples/savefig.ipynb
fadeawaylove/stock-trade-system
We can use IPython.display.Image to display the image in the io.BytesIO buffer:
IPydisplay.Image(buf.read())
_____no_output_____
Apache-2.0
examples/savefig.ipynb
fadeawaylove/stock-trade-system
--- Specifying image attributes with `savefig`. We can control various attributes of the saved figure/plot by passing a `dict`ionary as the value for the `savefig` keyword. The dictionary **must** contain the keyword `fname` for the file name to be saved, **and *may* contain any of the other keywords accepted by [`matplotlib.pyplot.savefig()`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.savefig.html)** (for example: dpi, facecolor, edgecolor, orientation, format, metadata, quality). When creating the `dict`, I recommend using the `dict()` constructor so that the `keyword=` syntax may be used and thereby more closely resemble calling:**[`matplotlib.pyplot.savefig()`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.savefig.html)**
%%capture ## %%capture blocks jupyter notebook output; plots are saved to files anyway: save = dict(fname='tsave30.jpg',dpi=30,pad_inches=0.25) mpf.plot(df,volume=True,savefig=save) mpf.plot(df,volume=True,savefig=dict(fname='tsave100.jpg',dpi=100,pad_inches=0.25)) %ls -l tsave30.jpg %ls -l tsave100.jpg IPydisplay.Image(filename='tsave30.jpg') IPydisplay.Image(filename='tsave100.jpg')
-rw-rw-rw- 1 dino dino 11016 Jun 7 17:52 tsave30.jpg -rw-rw-rw- 1 dino dino 54172 Jun 7 17:52 tsave100.jpg
Apache-2.0
examples/savefig.ipynb
fadeawaylove/stock-trade-system
Specifying image attributes (via `savefig`) dict also works with an io.BytesIO buffer:- Just assign the io-buffer to the `fname` key in the savefig dict
%%capture buf30dpi = io.BytesIO() buf100dpi = io.BytesIO() mpf.plot(df,volume=True,savefig=dict(fname=buf30dpi ,dpi=30 ,pad_inches=0.25)) mpf.plot(df,volume=True,savefig=dict(fname=buf100dpi,dpi=100,pad_inches=0.25))
_____no_output_____
Apache-2.0
examples/savefig.ipynb
fadeawaylove/stock-trade-system
Use IPython.display.Image to display the buffer contents:
_ = buf30dpi.seek(0) IPydisplay.Image(buf30dpi.read()) _ = buf100dpi.seek(0) IPydisplay.Image(buf100dpi.read())
_____no_output_____
Apache-2.0
examples/savefig.ipynb
fadeawaylove/stock-trade-system
Oxford University COVID-19 forecasting project data. The data below was downloaded from the [Oxford University countermeasures database](https://www.notion.so/COVID-19-countermeasures-database-b532a58d6f944ef6982ab565627bdb08). See the website for a description of the data sources.
countermeasures_df = pd.read_csv("data/containment_measures_march23.csv") countermeasures_df.head(1) countermeasures_df[["Country", "Date Start", "Description of measure implemented"]].groupby("Country").head(10) print(countermeasures_df["Country"].unique()) print(countermeasures_df["Implementing City"].unique()) print(countermeasures_df["Implementing State/Province"].unique())
[nan 'Quang Ninh' 'Gyeongbook Province' 'Daegu, Gyeongbook Province' 'Busan' 'Hubei' 'Hunan' 'Tianjin' 'Zheijang' 'Meituan' 'Chongqing' 'Zhejian' 'Shanghai' 'Sichuan' 'Jiangsu' 'Guangdong' 'Hangzhou' 'Shenzhen' 'Leishenshan' 'Guangzhou' 'Kanagawa prefecture' 'Guangxi' 'Guizhou' 'Huanggang' 'Henan, Shandong' 'Liaoning, Shandong' 'Shandong' 'Madrid' 'Tyrol' 'Basque Country' 'Galicia' 'Republika Srpska' 'Bosnia' 'Autonomous Region of Madeira' 'California' 'Leningrad Oblast' 'Moscow Oblast' 'St. Petersburg Oblast' 'Santa Clara County' 'Seattle' 'Berkley, Countra Costa Country, Santa Clara County, los Angles' 'Orange County' 'Placer County, San Mateo County, Sonoma County' 'San Benito County, Santa Clara County' 'Kershan County, Lancaster County' 'Bavaria' 'Colima' 'Mexico City' 'South Fulton' 'Atalanta, Brookhaven, Clarkston, Dunwoody' 'Atalanta' 'Albany, Athens-Clark county, Bosnia, Dougherty County' 'Gyeonggi-Province, Paju' 'Gyeonggi-Province, Seoul' 'Farifax county, Loudoun county, Prince William County, Stafford County']
MIT
notebooks/data_sources.ipynb
braadbaart/covid19
Johns Hopkins containment measures database. The data is made available as part of the Johns Hopkins [Containment Measures Database](http://epidemicforecasting.org/containment). See the website in the link for a description of the data sources.
containment_df = pd.read_csv("data/countermeasures_db_johnshopkins_2020_03_30.csv") containment_df.columns print(containment_df["Country"].unique()) cases_df = containment_df[["Date", "Country", "Confirmed Cases", "Deaths"]]\ .loc[containment_df["Confirmed Cases"] > 3000]\ .pivot(index="Date", columns="Country", values="Confirmed Cases") cases_df.plot(figsize=(16,8), title="Per-country growth in confirmed cases after the first 3000")\ .legend(bbox_to_anchor=(1,1)) deaths_df = containment_df[["Date", "Country", "Confirmed Cases", "Deaths"]]\ .loc[containment_df["Deaths"] > 100]\ .pivot(index="Date", columns="Country", values="Deaths") deaths_df.plot(figsize=(16,8), title="Deaths per country after the first 100 deaths recorded")\ .legend(bbox_to_anchor=(1,1)) other_cm_cols = ['Unnamed: 0', 'Resumption', 'Diagnostic criteria loosened', 'Testing criteria', 'Date', 'Country', 'Confirmed Cases', 'Deaths'] countermeasures = list(filter(lambda m: m not in other_cm_cols, containment_df.columns)) cm_df = containment_df[countermeasures + ['Date', 'Country']].fillna(0) cm_df[countermeasures] = cm_df[countermeasures].mask(cm_df[countermeasures] > 0, 1) cm_df.groupby("Date").sum().plot(figsize=(16,8), title="Number of countries implementing measure by date")\ .legend(bbox_to_anchor=(1,1))
_____no_output_____
MIT
notebooks/data_sources.ipynb
braadbaart/covid19
**Rendering component declaration.**
# imports for setting up display for the colab server. !sudo apt-get update > /dev/null 2>&1 !sudo apt-get install -y xvfb x11-utils > /dev/null 2>&1 !pip install gym==0.17.* pyvirtualdisplay==0.2.* PyOpenGL==3.1.* PyOpenGL-accelerate==3.1.* > /dev/null 2>&1 # gym related import statements. import gym from gym import logger as gymlogger from gym.wrappers import Monitor gymlogger.set_level(40) #error only # RL agent construction related imports. import numpy as np np.random.seed(0) import matplotlib.pyplot as plt from scipy.special import softmax # virtual display related import statements. import math import glob import io import base64 import time from time import sleep from tqdm import tqdm from IPython.display import HTML from IPython import display as ipythondisplay # This creates virtual display to send the frames for being rendered. from pyvirtualdisplay import Display display = Display(visible=0, size=(1366, 768)) display.start() def show_video(): ''' This function loads the data video inline into the colab notebook. By reading the video stored by the Monitor class. ''' mp4list = glob.glob('video/*.mp4') if len(mp4list) > 0: mp4 = mp4list[0] video = io.open(mp4, 'r+b').read() encoded = base64.b64encode(video) ipythondisplay.display(HTML(data='''<video alt="test" autoplay loop controls style="height: 400px;"> <source src="data:video/mp4;base64,{0}" type="video/mp4" /> </video>'''.format(encoded.decode('ascii')))) else: print("Could not find video") def wrap_env(env): ''' This monitoring tool records the outputs from the output and saves it a mp4 file in the stated directory. If we don't change the video directory the videos will get stored in 'content/' directory. ''' env = Monitor(env, './video', force=True) return env
_____no_output_____
Unlicense
milestone-two/sarsa_lambda_agent_mountain_car_v0.ipynb
galleon/prototyping-self-driving-agents
**SARSA(Lambda) Algorithm Implementation for MountainCarV0 Environment**__The implementation consists of the following sections:__ * __Agent class declaration and parsing the environment.__* __SARSA(Lambda) algorithm implementation.__* __Plotting results, outputting results and downloading them.__ **Agent Class Declaration and Parsing the Environment**
# This environment has two degrees of freedom: position and velocity. # Our agent will learn to interact with environment having these two values. class State: def __init__(self): self.pos = None self.vel = None # Agent class defined for storing all the agent related values. # and getting actions from the policy. Here, target policy is same as behavior policy. class Agent: def __init__(self, env): self.velocity_lim = np.array([env.observation_space.low[1], env.observation_space.high[1]]) self.position_lim = np.array([env.observation_space.low[0], env.observation_space.high[0]]) self.velocity_step, self.position_step = 0.005, 0.1 self.velocity_space = np.arange(self.velocity_lim[0], self.velocity_lim[1] + self.velocity_step, self.velocity_step) self.position_space = np.arange(self.position_lim[0], self.position_lim[1] + self.position_step, self.position_step) self.m, self.n, self.n_action = len(self.velocity_space), len(self.position_space), 3 self.Q_sa = np.full(shape = (self.m, self.n, 3), fill_value = 0.0, dtype = np.float32) self.collective_record = [] self.success = [] def get_action_value_index(self, state): pos_offset = state[0] - self.position_lim[0] vel_offset = state[1] - self.velocity_lim[0] pos_ind = pos_offset // self.position_step vel_ind = vel_offset // self.velocity_step return np.array([vel_ind, pos_ind], dtype= np.int) def get_action(self, state): ind = self.get_action_value_index(state, 0) p = self.Policy[ind[0], ind[1], :] action = np.random.choice([0, 1, 2], size = 1, p = p) return action[0] # Wraping the environment in the Monitor class. env = wrap_env(gym.make('MountainCar-v0')) # Fixing the randomness in the environment. env.seed(0) # Parsing the environment onto Agent object. sarsa_agent = Agent(env) # For some information into Q(s,a) table generated. # It's dimension is equal to A(Q(s,a)) = n[D(1)]*n[D(2)]*...*n[D(f)]*n(D(actions)) print("Q Shape = ",sarsa_agent.Q_sa.shape)
Q Shape = (30, 20, 3)
Unlicense
milestone-two/sarsa_lambda_agent_mountain_car_v0.ipynb
galleon/prototyping-self-driving-agents
**SARSA(Lambda) algorithm implementation.**
# SARSA(lambda) algorithm takes part from TD(0) and TD(1) algorithm. eps = 0.8 # greedy epsilon exploration-vs-exploitation variable. changed_eps= [] changes_alpha= [] alpha = 0.2 # learning rate value lambda_val = 0.8 # credit assignment variable to previous states. alpha_decay = 0.999 eps_decay = 0.995 sarsa_agent.e = np.zeros(shape = (sarsa_agent.m, sarsa_agent.n, 3)) # eligibility of all states. finish = False num_iter = 2000 for i_eps in tqdm(range(1, num_iter + 1)): state = env.reset() sarsa_agent.e[:, :, :] = 0 gamma = 1.0 ind = sarsa_agent.get_action_value_index(state) # greedy exploration and exploitation step. if np.random.random() < 1 - eps: action = np.argmax(sarsa_agent.Q_sa[ind[0], ind[1], :]) else: action = np.random.randint(0, 3) # running episodes for 200 times for this environment. for t in range(201): ind = sarsa_agent.get_action_value_index(state) next_state, reward, done, info = env.step(action) next_ind = sarsa_agent.get_action_value_index(next_state) if np.random.random() < 1 - eps: next_action = np.argmax(sarsa_agent.Q_sa[next_ind[0], next_ind[1], :]) else: next_action = np.random.randint(0, 3) # forward view T(lambda) SARSA equation for making updates. delta = reward + gamma * sarsa_agent.Q_sa[next_ind[0],next_ind[1], next_action] - sarsa_agent.Q_sa[ind[0],ind[1],action] sarsa_agent.e[ind[0],ind[1],action] += 1 sarsa_agent.Q_sa = np.add(sarsa_agent.Q_sa, np.multiply(alpha * delta, sarsa_agent.e)) sarsa_agent.e = np.multiply(gamma * lambda_val, sarsa_agent.e) if done: if t < 199: sarsa_agent.success.append((i_eps, t)) sarsa_agent.collective_record.append(-t) eps = max(0.0, eps * eps_decay) alpha = max(0.0, alpha * alpha_decay) break state = next_state action = next_action
100%|██████████| 2000/2000 [00:52<00:00, 37.97it/s]
Unlicense
milestone-two/sarsa_lambda_agent_mountain_car_v0.ipynb
galleon/prototyping-self-driving-agents
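For reference (this summary is added here and is not part of the original notebook), the backward-view SARSA(λ) update with accumulating traces that the loop above implements is

$$
\delta_t = r_{t+1} + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t), \qquad
e(s_t, a_t) \leftarrow e(s_t, a_t) + 1,
$$

$$
Q(s, a) \leftarrow Q(s, a) + \alpha\, \delta_t\, e(s, a), \qquad
e(s, a) \leftarrow \gamma \lambda\, e(s, a) \quad \text{for all } (s, a).
$$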
**Plotting reward function results and displaying output.**
# This graph shows the saturation of rewards to a minimum under 1000 episodes. fig, ax = plt.subplots(figsize = (9, 5)) plt.plot(sarsa_agent.collective_record[:],'.') plt.yticks(range(-110, -200, -10)) plt.ylabel("reward function values") plt.xlabel("episode number count") plt.grid() plt.show() # Calculating the mean performance of the agent. for i_eps in (range(1, 100)): state = env.reset() gamma = 1.0 ind = sarsa_agent.get_action_value_index(state) action = np.argmax(sarsa_agent.Q_sa[ind[0], ind[1], :]) for t in range(201): env.render() ind = sarsa_agent.get_action_value_index(state) next_state, reward, done, info = env.step(action) next_ind = sarsa_agent.get_action_value_index(next_state) next_action = np.argmax(sarsa_agent.Q_sa[next_ind[0], next_ind[1], :]) if done: if t < 199: sarsa_agent.success.append((i_eps, t)) sarsa_agent.collective_record.append(-t) sleep(1) break state = next_state action = next_action # Plotting the mean performance of the agent. fig, ax = plt.subplots(figsize = (9, 5)) plt.plot(sarsa_agent.collective_record[-100:], '-') plt.yticks(range(-110, -200, -10)) plt.title("Test Results") plt.ylabel("Mean reward function value") plt.xlabel("Episode number count") plt.grid() plt.show() # Demonstrating the output of the agent's working. show_video() # zipping the video folder for the given SARSA agent. !zip -r /content/file.zip /content/video # downloading the file resource. from google.colab import files files.download("/content/file.zip")
_____no_output_____
Unlicense
milestone-two/sarsa_lambda_agent_mountain_car_v0.ipynb
galleon/prototyping-self-driving-agents
Data
df = pd.read_csv("HW5_data.csv") x = df[["X", "Y"]].values y = df.Z.values df.head() df.describe()
_____no_output_____
MIT
.ipynb_checkpoints/polynomial_feat-checkpoint.ipynb
borab96/misc-notebooks
The data is split into training, validation and test sets with ratios $0.6:0.2:0.2$
x_train, x_val, x_test, y_train, y_val, y_test = train_test_val_split(x, y) print("Training size") print(x_train.shape) print("Validation size") print(x_val.shape) print("Test size") print(x_test.shape)
Training size (600, 2) Validation size (200, 2) Test size (200, 2)
MIT
.ipynb_checkpoints/polynomial_feat-checkpoint.ipynb
borab96/misc-notebooks
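The helper `train_test_val_split` comes from earlier in the notebook and is not shown here. A minimal sketch of how such a 0.6/0.2/0.2 split can be implemented with scikit-learn is given below; the function body and the `random_state` value are assumptions, not the notebook's actual implementation.

```python
from sklearn.model_selection import train_test_split

def train_test_val_split(x, y, random_state=0):
    # Sketch only: the notebook's real helper may differ.
    # First carve out 20% for the test set, then split the remaining 80% into
    # 60%/20% of the original data (0.25 of the remainder is the validation set).
    x_rest, x_test, y_rest, y_test = train_test_split(x, y, test_size=0.2, random_state=random_state)
    x_train, x_val, y_train, y_val = train_test_split(x_rest, y_rest, test_size=0.25, random_state=random_state)
    return x_train, x_val, x_test, y_train, y_val, y_test
```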
Model. We are told to apply a polynomial transformation to the features and use linear regression to obtain the optimal coefficients of the polynomial fit. We create a simple pipeline to achieve this. The only hyperparameter is the maximal degree of the polynomial feature map. The feature map omits the bias term, but the linear regressor fits an intercept, which correctly accounts for the data not being centered. To tune the degree hyperparameter we sweep the parameter space and compute the MSE on the unseen validation set created above. I am assuming the question does not require anything beyond this simple approach.
def poly_fit_pipeline(degree):
    polynomial_features = PolynomialFeatures(degree=degree, include_bias=False)
    pipeline = Pipeline([("polynomial_features", polynomial_features),
                         ("linear_regression", LinearRegression())])
    return pipeline

degrees = [2, 3, 4, 5, 6, 7]
mse = []
for d in degrees:
    model = poly_fit_pipeline(d)
    model.fit(x_train, y_train)
    y_pred = model.predict(x_val)
    mse.append(mean_squared_error(y_pred, y_val))

print("Lowest validation MSE: "+str(round(np.min(mse), 3)))
print("Optimal degree: "+str(degrees[np.argmin(mse)]))
plt.figure()
plt.plot(degrees, np.log(mse))
plt.ylabel(r"$\log MSE$")
plt.xlabel("D")

model_optimal = poly_fit_pipeline(6)
model_optimal.fit(x_train, y_train)
# Evaluate the selected degree-6 model (not the last model left over from the sweep).
mse_train = mean_squared_error(y_train, model_optimal.predict(x_train))
print("Training MSE: "+str(mse_train))
mse_val = mean_squared_error(y_val, model_optimal.predict(x_val))
print("Validation MSE: "+str(mse_val))
mse_test = mean_squared_error(y_test, model_optimal.predict(x_test))
print("Test MSE: "+str(mse_test))
Training MSE: 0.008621014128936854 Validation MSE: 0.00895723100171667 Test MSE: 0.009421933878894272
MIT
.ipynb_checkpoints/polynomial_feat-checkpoint.ipynb
borab96/misc-notebooks
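As a small illustration of the feature map used in the pipeline (added here for clarity), this shows the terms PolynomialFeatures generates for a single two-feature sample at degree 2; the sample values are made up.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

sample = np.array([[2.0, 3.0]])    # a single (X, Y) pair; the values are illustrative
poly = PolynomialFeatures(degree=2, include_bias=False)
# Columns are x, y, x^2, x*y, y^2 -> [[2. 3. 4. 6. 9.]]
print(poly.fit_transform(sample))
```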
The optimization over the hyperparameter space indicates that a maximal degree of $6$ is optimal. Notice that the difference between a degree-5 and a degree-6 fit is minuscule, so one could also go with the less complex model, which offers very similar accuracy. We'll stick with $D=6$ though. Details of the optimal model follow.
print("Model parameters") print(model_optimal.get_params()) print("regression coefficients") print(model_optimal['linear_regression'].coef_) print("regression intercept") print(model_optimal['linear_regression'].intercept_)
Model parameters {'memory': None, 'steps': [('polynomial_features', PolynomialFeatures(degree=6, include_bias=False, interaction_only=False, order='C')), ('linear_regression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False))], 'verbose': False, 'polynomial_features': PolynomialFeatures(degree=6, include_bias=False, interaction_only=False, order='C'), 'linear_regression': LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False), 'polynomial_features__degree': 6, 'polynomial_features__include_bias': False, 'polynomial_features__interaction_only': False, 'polynomial_features__order': 'C', 'linear_regression__copy_X': True, 'linear_regression__fit_intercept': True, 'linear_regression__n_jobs': None, 'linear_regression__normalize': False} regression coefficients [ 8.50234361e-01 9.62565178e-01 2.44544103e-01 -2.94295566e+00 5.06385541e-03 2.24830034e-03 4.74477640e-01 9.21766117e-04 -1.47698485e-03 2.97942590e-01 2.64905776e-03 2.57614722e-03 -1.42506563e-03 2.57835307e-04 2.44146426e-04 -6.50061514e-07 -4.57242498e-04 2.01383539e-04 4.08693006e-06 1.20000278e+00 -9.28090109e-06 -4.25226714e-06 9.58005035e-06 1.34097259e-05 -2.02810867e-05 7.75568090e-06 -1.56872787e-06] regression intercept 5.092513526440598
MIT
.ipynb_checkpoints/polynomial_feat-checkpoint.ipynb
borab96/misc-notebooks
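A quick sanity check on the 27 reported coefficients (added for clarity): with 2 input features and all monomials up to degree 6, excluding the constant term, the number of terms is $\binom{6+2}{2} - 1 = 28 - 1 = 27$.

```python
from math import comb

n_features, degree = 2, 6
# Number of monomials of total degree <= 6 in 2 variables, minus the constant term.
n_terms = comb(degree + n_features, n_features) - 1
print(n_terms)  # 27, matching len(model_optimal['linear_regression'].coef_)
```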
Object Detection Setup
#@title import os !pip install --quiet tensorflow_text os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "COMPRESSED" #@title import os import math import numpy as np import requests import tensorflow as tf import tensorflow_hub as hub import tensorflow_text as tf_text import tensorflow_datasets as tfds import matplotlib.pyplot as plt import seaborn as sns sns.set_style("whitegrid", {'axes.grid' : False}) %load_ext tensorboard import requests def download_image(url, path): r = requests.get(url, allow_redirects=True) with open(path, 'wb') as f: f.write(r.content) return path def plot(y, titles=None): for i, image in enumerate(y): if image is None: plt.subplot(1, len(y), i+1) plt.axis('off') continue t = titles[i] if titles else None plt.subplot(1, len(y), i+1, title=t) plt.imshow(image) plt.axis('off') plt.tight_layout()
_____no_output_____
Apache-2.0
notebooks/supervised/detection/object-detection-efficientdet-d4.ipynb
lucasdavid/algorithms-in-tensorflow
Model Definition
detector = hub.load("https://tfhub.dev/tensorflow/efficientdet/d4/1")
_____no_output_____
Apache-2.0
notebooks/supervised/detection/object-detection-efficientdet-d4.ipynb
lucasdavid/algorithms-in-tensorflow
Application
INPUT_SHAPE = [299, 299, 3]
DATA_DIR = 'images/'
IMAGES = [
    'https://raw.githubusercontent.com/keisen/tf-keras-vis/master/examples/images/goldfish.jpg',
    'https://raw.githubusercontent.com/keisen/tf-keras-vis/master/examples/images/bear.jpg',
    'https://raw.githubusercontent.com/keisen/tf-keras-vis/master/examples/images/soldiers.jpg',
    'https://3.bp.blogspot.com/-W__wiaHUjwI/Vt3Grd8df0I/AAAAAAAAA78/7xqUNj8ujtY/s400/image02.png'
]
#@title
os.makedirs(os.path.join(DATA_DIR, 'unknown'), exist_ok=True)
for i in IMAGES:
    _, f = os.path.split(i)
    download_image(i, os.path.join(DATA_DIR, 'unknown', f))
images_set = (
    tf.keras.preprocessing.image_dataset_from_directory(
        DATA_DIR,
        image_size=INPUT_SHAPE[:2],
        batch_size=32,
        shuffle=False)
    .cache()
    .prefetch(buffer_size=tf.data.experimental.AUTOTUNE))
#@title
plt.figure(figsize=(12, 4))
for images, _ in images_set.take(1):
    for i, image in enumerate(images):
        plt.subplot(math.ceil(len(images) / 4), 4, i+1)
        plt.imshow(image.numpy().astype('uint8'))
        plt.axis('off')
plt.tight_layout()

inputs = tf.cast(images[3:4], tf.uint8)
y = detector(inputs)
class_ids = y["detection_classes"]

print(*y.keys(), sep='\n')
y['num_detections']
y['detection_classes']
y['detection_boxes']

import matplotlib.patches as patches

fig, ax = plt.subplots(1)
ax.imshow(inputs[0].numpy())
# Draw the detected boxes. Assumption: boxes follow the TF Object Detection API
# convention of normalized [ymin, xmin, ymax, xmax] coordinates with a matching
# 'detection_scores' entry; only boxes above a score threshold are drawn.
h, w = inputs.shape[1], inputs.shape[2]
for box, score in zip(y['detection_boxes'][0].numpy(), y['detection_scores'][0].numpy()):
    if score < 0.3:
        continue
    ymin, xmin, ymax, xmax = box
    ax.add_patch(patches.Rectangle((xmin * w, ymin * h),
                                   (xmax - xmin) * w, (ymax - ymin) * h,
                                   linewidth=1, edgecolor='r', facecolor='none'))
plt.show()
_____no_output_____
Apache-2.0
notebooks/supervised/detection/object-detection-efficientdet-d4.ipynb
lucasdavid/algorithms-in-tensorflow
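The `detection_classes` values are numeric COCO category ids. Below is a small, hand-written excerpt of the COCO label map for illustration; the dictionary here is a partial, assumed mapping, and the full 90-category map ships with the TensorFlow Object Detection API.

```python
# Partial COCO label map (illustrative excerpt, not the full mapping).
coco_labels = {1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle', 5: 'airplane',
               6: 'bus', 7: 'train', 8: 'truck', 9: 'boat', 10: 'traffic light'}

ids = y['detection_classes'][0].numpy().astype(int)
print([coco_labels.get(i, f'id {i}') for i in ids[:10]])
```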
Imports
from IPython.display import clear_output
!pip install path.py
!pip install pytorch3d
clear_output()

import numpy as np
import math
import random
import os
import plotly.graph_objects as go
import plotly.express as px
import torch
from torch.utils.data import Dataset, DataLoader, Subset
from torchvision import transforms, utils
from path import Path

random.seed(42)  # call the seed function; assigning to random.seed would just overwrite it

!wget http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip
!unzip -q ModelNet10.zip

path = Path("ModelNet10")
folders = [dir for dir in sorted(os.listdir(path)) if os.path.isdir(path/dir)]
clear_output()

classes = {folder: i for i, folder in enumerate(folders)}
classes

def default_transforms():
    return transforms.Compose([
        PointSampler(1024),
        Normalize(),
        RandomNoise(),
        ToSorted(),
        ToTensor()
    ])

!gdown https://drive.google.com/uc?id=1CVwVxdfUfP6TRcVUjjJvQeRcgCGcnSO_
from helping import *
clear_output()
_____no_output_____
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Data Preprocessing (optional)
with open(path/"dresser/train/dresser_0001.off", 'r') as f: verts, faces = read_off(f) i, j, k = np.array(faces).T x, y, z = np.array(verts).T # len(x) # visualize_rotate([go.Mesh3d(x=x, y=y, z=z, color='lightpink', opacity=0.50, i=i,j=j,k=k)]).show() # visualize_rotate([go.Scatter3d(x=x, y=y, z=z, mode='markers')]).show() # pcshow(x, y, z) pointcloud = PointSampler(1024)((verts, faces)) # pcshow(*pointcloud.T) norm_pointcloud = Normalize()(pointcloud) # pcshow(*norm_pointcloud.T) noisy_pointcloud = RandomNoise()(norm_pointcloud) # pcshow(*noisy_pointcloud.T) rot_pointcloud = RandomRotation_z()(noisy_pointcloud) # pcshow(*rot_pointcloud.T) sorted_pointcloud = ToSorted()(rot_pointcloud) # pcshow(*sorted_pointcloud.T) tensor_pointcloud = ToTensor()(sorted_pointcloud)
_____no_output_____
MIT
data_processing.ipynb
annwhoorma/pmldl-project
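The transform classes above come from the downloaded helping module, so their exact behaviour is not shown in this notebook. Assuming Normalize centers the sampled cloud and scales it to fit the unit sphere, a quick sanity check could look like this.

```python
# Sanity check under the assumption that Normalize() centers the cloud at the
# origin and scales it so that the farthest point lies on the unit sphere.
print("mean per axis:", norm_pointcloud.mean(axis=0))                    # expected to be close to 0
print("max distance :", np.linalg.norm(norm_pointcloud, axis=1).max())   # expected to be close to 1
```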
Creating Loaders for the Final Progress Report. First, redefine the dataset classes.
class PointCloudData(Dataset): def __init__(self, root_dir, valid=False, folder="train", transform=default_transforms(), folders=None): self.root_dir = root_dir if not folders: folders = [dir for dir in sorted(os.listdir(root_dir)) if os.path.isdir(root_dir/dir)] self.classes = {folder: i for i, folder in enumerate(folders)} self.transforms = transform self.valid = valid self.pcs = [] for category in self.classes.keys(): new_dir = root_dir/Path(category)/folder for file in os.listdir(new_dir): if file.endswith('.off'): sample = {} with open(new_dir/file, 'r') as f: verts, faces = read_off(f) sample['pc'] = (verts, faces) sample['category'] = category self.pcs.append(sample) def __len__(self): return len(self.pcs) def __getitem__(self, idx): pointcloud = self.transforms(self.pcs[idx]['pc']) category = self.pcs[idx]['category'] return pointcloud, self.classes[category] class PointCloudDataPre(Dataset): def __init__(self, root_dir, valid=False, folder="train", transform=default_transforms(), folders=None): self.root_dir = root_dir if not folders: folders = [dir for dir in sorted(os.listdir(root_dir)) if os.path.isdir(root_dir/dir)] self.classes = {folder: i for i, folder in enumerate(folders)} self.transforms = transform self.valid = valid self.pcs = [] for category in self.classes.keys(): new_dir = root_dir/Path(category)/folder for file in os.listdir(new_dir): if file.endswith('.off'): sample = {} with open(new_dir/file, 'r') as f: verts, faces = read_off(f) sample['pc'] = self.transforms((verts, faces)) sample['category'] = category self.pcs.append(sample) def __len__(self): return len(self.pcs) def __getitem__(self, idx): pointcloud = self.pcs[idx]['pc'] category = self.pcs[idx]['category'] return pointcloud, self.classes[category] class PointCloudDataBoth(Dataset): def __init__(self, root_dir, valid=False, folder="train", static_transform=default_transforms(), later_transform=None, folders=None): self.root_dir = root_dir if not folders: folders = [dir for dir in sorted(os.listdir(root_dir)) if os.path.isdir(root_dir/dir)] self.classes = {folder: i for i, folder in enumerate(folders)} self.static_transform = static_transform self.later_transform = later_transform self.valid = valid self.pcs = [] for category in self.classes.keys(): new_dir = root_dir/Path(category)/folder for file in os.listdir(new_dir): if file.endswith('.off'): sample = {} with open(new_dir/file, 'r') as f: verts, faces = read_off(f) sample['pc'] = self.static_transform((verts, faces)) sample['category'] = category self.pcs.append(sample) def __len__(self): return len(self.pcs) def __getitem__(self, idx): pointcloud = self.pcs[idx]['pc'] if self.later_transform is not None: pointcloud = self.later_transform(pointcloud) category = self.pcs[idx]['category'] return pointcloud, self.classes[category] !mkdir drive/MyDrive/Thesis/dataloaders/final
_____no_output_____
MIT
data_processing.ipynb
annwhoorma/pmldl-project
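The three classes differ only in when the transforms run: PointCloudData applies everything in `__getitem__` (every access resamples and re-augments the mesh), PointCloudDataPre applies everything once in `__init__` (every epoch sees the same clouds), and PointCloudDataBoth precomputes the expensive static part and applies only the cheap dynamic part per access. A rough way to see the cost difference is to time a single item access; this snippet is illustrative only and timings will vary.

```python
import time

def time_getitem(dataset, n=5):
    # Average wall-clock time of fetching one sample n times (illustrative; numbers vary).
    start = time.perf_counter()
    for _ in range(n):
        _ = dataset[0]
    return (time.perf_counter() - start) / n

# Once the datasets below exist, the per-access cost of PointCloudData
# (all transforms in __getitem__) should be much higher than the other two, e.g.:
# print(time_getitem(beds_train_dataset))
```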
Overfitting - all augmentations applied before training
BATCH_SIZE = 48 trs = transforms.Compose([ PointSampler(1024), ToSorted(), Normalize(), ToTensor() ]) beds_train_dataset = PointCloudDataPre(path, folders=['bed'], transform=trs) beds_valid_dataset = PointCloudDataPre(path, folder='test', folders=['bed'], transform=trs) beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True) beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True) !mkdir dataloader_beds_pre torch.save(beds_train_loader, 'dataloader_beds_pre/trainloader.pth') torch.save(beds_valid_loader, 'dataloader_beds_pre/validloader.pth') !mkdir drive/MyDrive/Thesis/dataloaders/final !cp -r dataloader_beds_pre drive/MyDrive/Thesis/dataloaders/final
mkdir: cannot create directory ‘dataloader_beds_pre’: File exists mkdir: cannot create directory ‘drive/MyDrive/Thesis/dataloaders/final’: File exists
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Underfitting - all augmentations applied during training
BATCH_SIZE = 48 trs = transforms.Compose([ PointSampler(1024), ToSorted(), Normalize(), RandomNoise(), ToTensor() ]) beds_train_dataset = PointCloudData(path, folders=['bed'], transform=trs) beds_valid_dataset = PointCloudData(path, folder='test', folders=['bed'], transform=trs) beds_train_loader = DataLoader(dataset=beds_train_dataset, num_workers=4, shuffle=True, batch_size=BATCH_SIZE, drop_last=True) beds_valid_loader = DataLoader(dataset=beds_valid_dataset, num_workers=4, batch_size=BATCH_SIZE, drop_last=True) !mkdir dataloader_beds_dur torch.save(beds_train_loader, 'dataloader_beds_dur/trainloader.pth') torch.save(beds_valid_loader, 'dataloader_beds_dur/validloader.pth') !cp -r dataloader_beds_dur drive/MyDrive/Thesis/dataloaders/final
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Both - static and dynamic transformations
BATCH_SIZE = 48 static_trs = transforms.Compose([ PointSampler(1024), ToSorted(), Normalize(), ]) dynamic_trs = transforms.Compose([ RandomNoise(), ToTensor() ]) beds_train_dataset = PointCloudDataBoth(path, folders=['bed'], static_transform=static_trs, later_transform=dynamic_trs) beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed'], static_transform=static_trs) beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True) beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True) !mkdir dataloader_beds_both torch.save(beds_train_loader, 'dataloader_beds_both/trainloader.pth') torch.save(beds_valid_loader, 'dataloader_beds_both/validloader.pth') !cp -r dataloader_beds_both drive/MyDrive/Thesis/dataloaders/final
mkdir: cannot create directory ‘dataloader_beds_both’: File exists
MIT
data_processing.ipynb
annwhoorma/pmldl-project
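To confirm what the loaders yield, one can pull a single batch and inspect its shape. With the settings above each batch should hold 48 clouds of 1024 points with 3 coordinates, though the exact tensor layout depends on the ToTensor implementation in the helping module, so treat the shape comment as an assumption.

```python
# Inspect one batch from the training loader.
points, labels = next(iter(beds_train_loader))
print(points.shape, labels.shape)   # expected roughly: torch.Size([48, 1024, 3]) torch.Size([48])
```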
Two classes: beds and tables
BATCH_SIZE = 48
static_trs = transforms.Compose([
    PointSampler(1024),
    ToSorted(),
    Normalize(),
])
dynamic_trs = transforms.Compose([
    RandomNoise(),
    ToTensor()
])
beds_train_dataset = PointCloudDataBoth(path, folders=['bed', 'table'], static_transform=static_trs, later_transform=dynamic_trs)
# Use the deterministic static transform for validation (trs was a leftover from an earlier cell).
beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed', 'table'], static_transform=static_trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)

!mkdir dataloader_beds_tables
torch.save(beds_train_loader, 'dataloader_beds_tables/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_tables/validloader.pth')
!cp -r dataloader_beds_tables drive/MyDrive/Thesis/dataloaders/final
_____no_output_____
MIT
data_processing.ipynb
annwhoorma/pmldl-project
For 512
!mkdir drive/MyDrive/Thesis/dataloaders/final512
_____no_output_____
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Overfitting - all augmentations applied before training
BATCH_SIZE = 48 trs = transforms.Compose([ PointSampler(512), ToSorted(), Normalize(), ToTensor() ]) beds_train_dataset = PointCloudDataPre(path, folders=['bed'], transform=trs) beds_valid_dataset = PointCloudDataPre(path, folder='test', folders=['bed'], transform=trs) beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True) beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True) !mkdir dataloader_beds_pre torch.save(beds_train_loader, 'dataloader_beds_pre/trainloader.pth') torch.save(beds_valid_loader, 'dataloader_beds_pre/validloader.pth') !mkdir drive/MyDrive/Thesis/dataloaders/final !cp -r dataloader_beds_pre drive/MyDrive/Thesis/dataloaders/final512
mkdir: cannot create directory ‘dataloader_beds_pre’: File exists mkdir: cannot create directory ‘drive/MyDrive/Thesis/dataloaders/final’: File exists
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Underfitting - all augmentations applied during training
BATCH_SIZE = 48 trs = transforms.Compose([ PointSampler(512), ToSorted(), Normalize(), ToTensor() ]) beds_train_dataset = PointCloudData(path, folders=['bed'], transform=trs) beds_valid_dataset = PointCloudData(path, folder='test', folders=['bed'], transform=trs) beds_train_loader = DataLoader(dataset=beds_train_dataset, num_workers=4, shuffle=True, batch_size=BATCH_SIZE, drop_last=True) beds_valid_loader = DataLoader(dataset=beds_valid_dataset, num_workers=4, batch_size=BATCH_SIZE, drop_last=True) !mkdir dataloader_beds_dur torch.save(beds_train_loader, 'dataloader_beds_dur/trainloader.pth') torch.save(beds_valid_loader, 'dataloader_beds_dur/validloader.pth') !cp -r dataloader_beds_dur drive/MyDrive/Thesis/dataloaders/final512
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Both - static and dynamic transformations
BATCH_SIZE = 48 static_trs = transforms.Compose([ PointSampler(512), ToSorted(), Normalize(), ]) dynamic_trs = transforms.Compose([ RandomNoise(), ToTensor() ]) beds_train_dataset = PointCloudDataBoth(path, folders=['bed'], static_transform=static_trs, later_transform=dynamic_trs) beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed'], static_transform=static_trs) beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True) beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True) !mkdir dataloader_beds_both torch.save(beds_train_loader, 'dataloader_beds_both/trainloader.pth') torch.save(beds_valid_loader, 'dataloader_beds_both/validloader.pth') !cp -r dataloader_beds_both drive/MyDrive/Thesis/dataloaders/final512
mkdir: cannot create directory ‘dataloader_beds_both’: File exists
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Two classes: beds and tables
BATCH_SIZE = 48
static_trs = transforms.Compose([
    PointSampler(512),
    ToSorted(),
    Normalize(),
])
dynamic_trs = transforms.Compose([
    RandomNoise(),
    ToTensor()
])
beds_train_dataset = PointCloudDataBoth(path, folders=['bed', 'table'], static_transform=static_trs, later_transform=dynamic_trs)
# Use the deterministic static transform for validation (trs was a leftover from an earlier cell).
beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed', 'table'], static_transform=static_trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)

!mkdir dataloader_beds_tables
torch.save(beds_train_loader, 'dataloader_beds_tables/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_tables/validloader.pth')
# Copy into the 512-point folder, matching the other cells in this section.
!cp -r dataloader_beds_tables drive/MyDrive/Thesis/dataloaders/final512
_____no_output_____
MIT
data_processing.ipynb
annwhoorma/pmldl-project
Fill in full author references for mentions of authors. For example, if we find `Calderon`, we want to produce the string `Pedro Calderón de la Barca`.
testTexts = [ "Calderón de la Barca, Pedro", "CCCCCalderón", "Caldeeeeeerón", "Pedro Barca", "Pedro Barca", "Agustin Moreto", "A. Moreto", "Agustin", "Augustine", ]
_____no_output_____
MIT
pyling/detectAuthors.ipynb
dirkroorda/explore
Triggers. We are going to find trigger strings for authors in the input texts. In order to do that successfully, we normalize the text first: we remove all accents from accented letters, and we make everything lowercase. We need a function that can strip accents from characters; the approach is adapted from [stackoverflow](https://stackoverflow.com/questions/517923/what-is-the-best-way-to-remove-accents-in-a-python-unicode-string).
import re import unicodedata def normalize(text): text = unicodedata.normalize('NFD', text) text = text.encode('ascii', 'ignore') text = text.decode("utf-8") return text.lower().strip() normalize("Calderón de la Barca, Pedro")
_____no_output_____
MIT
pyling/detectAuthors.ipynb
dirkroorda/explore
Authors. We compile a list of authors that we want to detect. For each author we have a full name, a key, and a list of triggers. We format the specification as a *yaml* file (which maps to a Python dictionary).
authorSpec = ''' cald: full: Pedro Calderón de la Barca triggers: - calderon - barca more: full: Agustín Moreto triggers: - moreto - agustin - augustine '''
_____no_output_____
MIT
pyling/detectAuthors.ipynb
dirkroorda/explore
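The yaml spec above is meant to drive the actual detection step. A minimal sketch of how it could be used is shown below: parse the spec into a dictionary, normalize each input text, and report the full names of the authors whose triggers occur. This is an illustration of the idea, not necessarily how the notebook continues, and it assumes the pyyaml package is importable as `yaml`.

```python
import yaml

authors = yaml.safe_load(authorSpec)

def detectAuthors(text):
    # Sketch: simple substring matching on the normalized text; the notebook may refine this.
    normText = normalize(text)
    return [info['full'] for info in authors.values()
            if any(trigger in normText for trigger in info['triggers'])]

for text in testTexts:
    print(f'{text:30} => {detectAuthors(text)}')
```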