Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (categorical, 15 values).
We define the source to be scanned and query its line IDs from the lineAll.db database.
def outputMatch(matches, minmatch=5, mainLines=None):
    for m in matches:
        imax = len(m)
        ifound = 0
        redshift = m[0]
        for i in range(1, len(m)):
            if len(m[i]) > 0:
                ifound += 1
        if mainLines != None:
            ifound = 0
            for i in range(1, len(m)):
                for mainline in mainLines:
                    if len(m[i]) > 0:
                        for line in m[i]:
                            if line[0].find(mainline) != -1:
                                ifound += 1
        if ifound >= minmatch:
            print("########################")
            print("## Redshift: %f" % (redshift))
            print("## Freq. matched: %d" % (ifound))
            print("##")
            print("## Formula Name E_K Frequency")
            print("## (K) (MHz)")
            for i in range(1, len(m)):
                if len(m[i]) > 0:
                    print("## Line:")
                    for line in m[i]:
                        print(line)
            print("## \n###END###\n")

source = "J2148+0657"
redshift = 0.895

al = lt.analysisLines(dbline)
cmdsql = "select lineid FROM lines WHERE source = '%s'" % (source)
resdb = al.query(cmdsql)

lineid = []
for l in resdb:
    lineid.append(l[0])
print(lineid)
notebooks/lines/scanningLineRedshiftwithSplat.ipynb
bosscha/alma-calibrator
gpl-2.0
Scan through the lines (lineid), matching them against a local splatalogue.db. emax is the maximum energy of the upper level, used to restrict the search to low-energy transitions.
m = al.scanningSplatRedshiftSourceLineid(lineid, zmin=redshift, zmax=0.90, dz=1e-4,
                                         nrao=True, emax=40., absorption=True, emission=True)

redshift = []
lineDetected = []
minmatch = 15

for l in m:
    redshift.append(l[0])
    ifound = 0
    for i in range(1, len(l)):
        if len(l[i]) > 0:
            ifound += 1
    if ifound >= minmatch:
        print("###Redshift: %f" % (l[0]))
        print("##")
        for line in l[1:-1]:
            if len(line) > 0:
                print(line)
        print("\n\n")
    lineDetected.append(ifound)
notebooks/lines/scanningLineRedshiftwithSplat.ipynb
bosscha/alma-calibrator
gpl-2.0
Plot the detected lines vs. the redshift.
pl.figure(figsize=(15, 10))
pl.xlabel("z")
pl.ylabel("Lines")
pl.plot(redshift, lineDetected, "k-")
pl.show()

## uncomment to save data in a pickle file
f = open("3c273-redshift-hires-scan.pickle", "w")
pickle.dump(m, f)
f.close()
notebooks/lines/scanningLineRedshiftwithSplat.ipynb
bosscha/alma-calibrator
gpl-2.0
Display the matching transitions
mL = ['CO v=0','HCN','HCO+'] outputMatch(m, minmatch=3, mainLines = None)
notebooks/lines/scanningLineRedshiftwithSplat.ipynb
bosscha/alma-calibrator
gpl-2.0
Hack for Heat #3: Number of complaints over time. This time, we're going to look at raw 311 complaint data. The data I was working with previously was summarized data. This dataset is much bigger, which is nice because it'll give me a chance to maintain my SQL-querying-from-memory skills. First, we're going to have to load all of this data into a postgres database. SQL-ing this: the python library psycopg2 lets us work with postgres databases in python. We first create a connection object that encapsulates the connection to the database, then create a cursor object that lets us make queries against that database.
connection = psycopg2.connect('dbname = threeoneone user=threeoneoneadmin password=threeoneoneadmin') cursor = connection.cursor()
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
For example, we might want to extract the column names from our table:
cursor.execute('''SELECT * FROM threeoneone.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'service'; ''') columns = cursor.fetchall() columns = [x[3] for x in columns] columns[0:5]
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
Complaints over time Let's start with something simple. First, let's extract a list of all complaints, and the plot the number of complaints by month.
cursor.execute('''SELECT createddate FROM service;''') complaintdates = cursor.fetchall() complaintdates = pd.DataFrame(complaintdates) complaintdates.head()
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
Renaming our column:
complaintdates.columns = ['Date']
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
Next we have to unpack these single-element tuples into their values:
complaintdates['Date'] = [x[0] for x in complaintdates['Date']]
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
Normally, if these were strings, we'd use the extract_dates function we wrote in a previous post. However, because these are typed as datetime objects, we can just extract the .year, .month, and .day attributes:
type(complaintdates['Date'][0]) complaintdates['Day'] = [x.day for x in complaintdates['Date']] complaintdates['Month'] = [x.month for x in complaintdates['Date']] complaintdates['Year'] = [x.year for x in complaintdates['Date']]
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
This is how many total complaints we have:
len(complaintdates)
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
We can group them by month:
bymonth = complaintdates.groupby(by='Month').count() bymonth
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
By year:
byyear = complaintdates.groupby(by='Year').count() byyear byday = complaintdates.groupby(by='Day').count() bydate = complaintdates.groupby(by='Date').count()
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
Some matplotlib
plt.figure(figsize = (12,10)) x = range(0,12) y = bymonth['Date'] plt.plot(x,y) plt.figure(figsize = (12,10)) x = range(0,7) y = byyear['Date'] plt.plot(x,y) plt.figure(figsize = (12,10)) x = range(0,len(byday)) y = byday['Date'] plt.plot(x,y)
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
The sharp decline we see at the end is obviously because not all months have the same number of days.
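To check that explanation, here is a small sketch of my own (not in the original post) that normalizes each day-of-month count by how many of the observed year/month combinations actually contain that day; it assumes the complaintdates and byday frames built above:

```python
import calendar
import pandas as pd

# For each day of the month, count how many (year, month) pairs in the data contain it
# (e.g. only roughly half of all months have a 31st), then divide the raw counts by that.
months = complaintdates[['Year', 'Month']].drop_duplicates()
days_available = pd.Series(
    {d: sum(calendar.monthrange(y, m)[1] >= d for y, m in months.values)
     for d in byday.index}
)
normalized = byday['Date'] / days_available  # complaints per available day-of-month
```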
plt.figure(figsize=(12,10))
x = range(0, len(bydate))
y = bydate['Year']  # This is arbitrary - year, month, and day are all series that store the counts
plt.plot(x, y)
src/bryan analyses/Hack for Heat #3.ipynb
heatseeknyc/data-science
mit
Load and Process the data
text = open('data/holmes.txt').read().lower() print('Total characters: {}'.format(len(text))) text[:300]
text_generator.ipynb
angelmtenor/data-science-keras
mit
Preprocess the data
text = text[1302:]  # remove title, author page, and table of contents
text = text.replace('\n', ' ')
text = text.replace('\r', ' ')

unique_characters = set(list(text))
print(unique_characters)

# remove non-english characters
import re
text = re.sub("[$%&'()*@/àâèé0123456789-]", " ", text)
text = text.replace('"', ' ')
text = text.replace('  ', ' ')  # shorten any extra dead space created above

text[:300]

chars = sorted(list(set(text)))
num_chars = len(chars)
print('Total characters: {}'.format(len(text)))
print('Unique characters: {}'.format(num_chars))
print(chars)
text_generator.ipynb
angelmtenor/data-science-keras
mit
Split data into input/output pairs
# Transforms the input text and window-size into a set of input/output pairs
# for use with the RNN
window_size = 100
step_size = 5

input_pairs = []
output_pairs = []
for i in range(0, len(text) - window_size, step_size):
    input_pairs.append(text[i:i + window_size])
    output_pairs.append(text[i + window_size])
text_generator.ipynb
angelmtenor/data-science-keras
mit
One-hot encoding characters
chars_to_indices = dict((c, i) for i, c in enumerate(chars))
indices_to_chars = dict((i, c) for i, c in enumerate(chars))

# create variables for one-hot encoded input/output
X = np.zeros((len(input_pairs), window_size, num_chars), dtype=np.bool)
y = np.zeros((len(input_pairs), num_chars), dtype=np.bool)

# transform character-based input_pairs/output_pairs into equivalent numerical versions
for i, sentence in enumerate(input_pairs):
    for t, char in enumerate(sentence):
        X[i, t, chars_to_indices[char]] = 1
    y[i, chars_to_indices[output_pairs[i]]] = 1
text_generator.ipynb
angelmtenor/data-science-keras
mit
Recurrent Neural Network Model
from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM

model = Sequential()
model.add(LSTM(200, input_shape=(window_size, num_chars)))
model.add(Dense(num_chars, activation=None))
model.add(Dense(num_chars, activation="softmax"))
model.summary()

optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)

# train the model
print("Training ...")
%time history = model.fit(X, y, batch_size=512, epochs=100, verbose=0)

helper.show_training(history)

model_path = os.path.join("models", "text_generator.h5")
model.save(model_path)
print("\nModel saved at", model_path)
text_generator.ipynb
angelmtenor/data-science-keras
mit
Make predictions
model = keras.models.load_model(model_path)
print("Model loaded:", model_path)


def predict_next_chars(model, input_chars, num_to_predict):
    """ predict a number of future characters """
    predicted_chars = ''
    for i in range(num_to_predict):
        x_test = np.zeros((1, window_size, len(chars)))
        for t, char in enumerate(input_chars):
            x_test[0, t, chars_to_indices[char]] = 1.

        test_predict = model.predict(x_test, verbose=0)[0]

        # translate numerical prediction back to characters
        r = np.argmax(test_predict)
        d = indices_to_chars[r]

        # update predicted_chars and input
        predicted_chars += d
        input_chars += d
        input_chars = input_chars[1:]
    return predicted_chars


for s in range(0, 500, 100):
    start_index = s
    input_chars = text[start_index:start_index + window_size]
    predict_input = predict_next_chars(model, input_chars, num_to_predict=100)
    print('------------------')
    input_line = 'input chars = ' + '\n' + input_chars + '"' + '\n'
    print(input_line)
    line = 'predicted chars = ' + '\n' + predict_input + '"' + '\n'
    print(line)
text_generator.ipynb
angelmtenor/data-science-keras
mit
Streaming with tweepy. The Twitter streaming API is used to download Twitter messages in real time. We use the streaming API instead of the REST API because the REST API pulls data from Twitter, whereas the streaming API pushes messages to a persistent session. This allows the streaming API to download more data in real time than could be done with the REST API. In Tweepy, an instance of tweepy.Stream establishes a streaming session and routes messages to a StreamListener instance. The on_data method of a stream listener receives all messages and calls functions according to the message type. Since on_data is only a stub, we implement the functionality by subclassing StreamListener. Using the streaming API has three steps: create a class inheriting from StreamListener; using that class, create a Stream object; connect to the Twitter API using the Stream.
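Before the notebook's full implementation below, here is a minimal skeleton of those three steps, assuming the pre-4.0 tweepy API used in this notebook and an already-authenticated auth handler (the notebook's connectToTwitter() provides one):

```python
import tweepy

# 1. Create a class inheriting from StreamListener
class MinimalListener(tweepy.StreamListener):
    def on_data(self, raw_data):
        print(raw_data[:80])  # do something with each raw message
        return True           # returning False would close the stream

    def on_error(self, status_code):
        return True           # keep the stream alive on errors

# 2. Using that class, create a Stream object (auth is assumed to exist already)
stream = tweepy.Stream(auth, MinimalListener())

# 3. Connect to the Twitter API and filter on some keywords
stream.filter(track=["python"])
```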
# Tweet listener class which subclasses from tweepy.StreamListener
class TweetListner(tweepy.StreamListener):
    """Twitter stream listener"""

    def __init__(self, csocket):
        self.clientSocket = csocket

    def dataProcessing(self, data):
        """Process the data, before sending to spark streaming """
        sendData = {}  # data that is sent to spark streamer
        user = data.get("user", {})
        name = user.get("name", "undefined").encode('utf-8')
        followersCount = user.get("followers_count", 0)
        sendData["name"] = name
        sendData["followersCount"] = followersCount
        #data_string = "{}:{}".format(name, followersCount)
        self.clientSocket.send(json.dumps(sendData) + u"\n")  # append new line character, so that spark recognizes it
        logging.debug(json.dumps(sendData))

    def on_data(self, raw_data):
        """Called when raw data is received from connection.
        return False to stop stream and close connection.
        """
        try:
            data = json.loads(raw_data)
            self.dataProcessing(data)
            #self.clientSocket.send(json.dumps(sendData) + u"\n")  # Because the connection was breaking
            return True
        except Exception as e:
            logging.error("An unhandled exception has occured, check your data processing")
            logging.error(e)
            raise e

    def on_error(self, status_code):
        """Called when a non-200 status code is returned"""
        logging.error("A non-200 status code is returned")
        return True


# Creating a proxy socket
def createProxySocket(host, port):
    """Returns a socket which can be used to connect to spark."""
    try:
        s = socket.socket()   # initialize socket instance
        s.bind((host, port))  # bind to the given host and port
        s.listen(5)           # Enable a server to accept connections.
        logging.info("Listening on the port {}".format(port))
        cSocket, address = s.accept()  # waiting for a connection
        logging.info("Received Request from: {}".format(address))
        return cSocket
    except socket.error as e:
        if e.errno == socket.errno.EADDRINUSE:  # Address in use
            logging.error("The given host:port {}:{} is already in use"\
                          .format(host, port))
            logging.info("Trying on port: {}".format(port + 1))
            return createProxySocket(host, port + 1)
TweetAnalysis/Final/Q1/Dalon_4_RTD_MiniPro_Tweepy_Q1.ipynb
dalonlobo/GL-Mini-Projects
mit
Drawbacks of the Twitter streaming API. The major drawback of the Streaming API is that it provides only a sample of the tweets that are occurring. The actual percentage of total tweets users receive with Twitter's Streaming API varies heavily based on the criteria users request and the current traffic. Studies have estimated that using Twitter's Streaming API users can expect to receive anywhere from 1% to over 40% of tweets in near real time. The reason you do not receive all of the tweets from the Twitter Streaming API is simply that Twitter doesn't have the current infrastructure to support it, and they don't want to; hence, the Twitter Firehose. (Ref) So we will use a hack: get the top trending topics and use those to filter the data.
def getWOEIDForTrendsAvailable(api, place):
    """Returns the WOEID of the country if the trend is available there."""
    # Iterate through trends
    data = api.trends_available()
    for item in data:
        if item["name"] == place:  # Use place = "Worldwide" to get woeid of world
            woeid = item["woeid"]
            break
    return woeid  # name = India, woeid


# Get the list of trending topics from twitter
def getTrendingTopics(api, woeid):
    """Get the top trending topics from twitter"""
    data = api.trends_place(woeid)
    listOfTrendingTopic = [trend["name"] for trend in data[0]["trends"]]
    return listOfTrendingTopic


if __name__ == "__main__":
    try:
        api, auth = connectToTwitter()  # connecting to twitter
        # Global information is available by using 1 as the WOEID
        # woeid = getWOEIDForTrendsAvailable(api, "Worldwide")  # get the woeid of the worldwide
        woeid = 1
        trendingTopics = getTrendingTopics(api, woeid)[:10]  # Pick only top 10 trending topics

        host = "localhost"
        port = 8888
        cSocket = createProxySocket(host, port)  # Creating a socket

        while True:
            try:
                # Connect/reconnect the stream
                tweetStream = tweepy.Stream(auth, TweetListner(cSocket))  # Stream the twitter data
                # DON'T run this approach async or you'll just create a ton of streams!
                tweetStream.filter(track=trendingTopics)  # Filter on trending topics
            except IncompleteRead:
                # Oh well, reconnect and keep trucking
                continue
            except KeyboardInterrupt:
                # Or however you want to exit this loop
                tweetStream.disconnect()
                break
            except Exception as e:
                logging.error("Unhandled exception has occured")
                logging.error(e)
                continue
    except KeyboardInterrupt:  # Keyboard interrupt called
        logging.error("KeyboardInterrupt was hit")
    except Exception as e:
        logging.error("Unhandled exception has occured")
        logging.error(e)
TweetAnalysis/Final/Q1/Dalon_4_RTD_MiniPro_Tweepy_Q1.ipynb
dalonlobo/GL-Mini-Projects
mit
Reading a JSON file
pd.read_json('data.json')
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
Reading an Excel file: rows can be skipped from the top of the file, and the worksheet name can be selected.
df=pd.read_excel('2.17deaths causes.xls',sheet_name='2.17',skiprows=5)
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
numpy is a mathematical extension package
import numpy as np
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
NaN values are defined in numpy.
df=df.set_index('Unnamed: 0').dropna(how='any').replace('-',np.nan) df2=pd.read_excel('2.17deaths causes.xls',sheet_name='2.17',skiprows=4)
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
ffill means forward fill: it fills the NaNs with the value standing to their left or above them. axis=0 refers to the rows, axis=1 to the columns.
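A tiny toy example of my own (not from the notebook) showing the two fill directions:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame([[1, np.nan, np.nan],
                    [np.nan, 2, np.nan]])
toy.ffill(axis=0)  # fill each NaN with the value above it (down the rows)
toy.ffill(axis=1)  # fill each NaN with the value to its left (across the columns)
```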
df2.loc[[0]].ffill(axis=1)
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
Deleting rows/columns.
df=df.drop('Unnamed: 13',axis=1) df.columns [year for year in range(2011,2017)] df.columns=[year for year in range(2011,2017) for k in range(2)]
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
Nested pythonic list - two iterations one after the other
[str(year)+'-'+str(k) for year in range(2011,2017) for k in range(2)] nemek=['Masculin','Feminin'] [str(year)+'-'+nem for year in range(2011,2017) for nem in nemek] df.columns=[str(year)+'-'+nem for year in range(2011,2017) for nem in nemek] df evek=[str(year) for year in range(2011,2017) for nem in nemek] nemlista=[nem for year in range(2011,2017) for nem in nemek] df=df.T
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
New columns for the dimensions.
df['Ev']=evek df['Nem']=nemlista df.head(6) df.set_index(['Ev','Nem'])
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
A MultiIndex (i.e. a multi-level index) can be pivoted with the unstack command.
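A minimal toy illustration of my own (mirroring the Ev/Nem/Total columns used here) of how unstack moves the inner index level into the columns:

```python
import pandas as pd

toy = pd.DataFrame({
    'Ev': ['2011', '2011', '2012', '2012'],
    'Nem': ['Masculin', 'Feminin', 'Masculin', 'Feminin'],
    'Total': [10, 12, 11, 13],
})
# rows indexed by (Ev, Nem); unstack pivots the inner level (Nem) into columns
toy.set_index(['Ev', 'Nem'])['Total'].unstack()
```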
df.set_index(['Ev','Nem'])[['Total']].unstack()
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
Replacing missing values (NaNs).
pd.DataFrame([0,3,4,5,'gfgf',np.nan]).replace(np.nan,'Mas') pd.DataFrame([0,3,4,5,'gfgf',np.nan]).fillna('Mas')
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
join - joining several DataFrames together. The index must be the same. The column names must be different. The name of the index does not matter.
df1=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet1') df2=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet2') df3=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet3') df1=df1.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index(2019) df2=df2.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index(2019) df3=df3.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index('2019/ NR. DE NOPTI') df1.join(df2).join(df3)
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
csaladenes/csaladenes.github.io
mit
SVD is one of the matrix factorization techniques. It factors a matrix into three parts from which we can reconstruct the initial matrix. However, reconstructing the original matrix is usually not the primary aim. Rather, we factorize matrices in order to achieve the following goals: to find principal components, to reduce matrix size by removing redundant dimensions, to find latent dimensions, and for visualization. In simple terms, factorization can be defined as breaking something into its building blocks, in other words, its factors. Using SVD, we can decompose a matrix into three separate matrices as follows: $$ A_{m \times n} = U_{m \times r} \, \Sigma_{r \times r} \, (V_{n \times r})^{T} $$ where U holds the left singular vectors, $\Sigma$ holds the singular values sorted in descending order along its diagonal (and is zero elsewhere), V holds the right singular vectors, m is the number of rows, n is the number of columns (dimensions), and r is the rank. Example
A = np.mat([ [4, 5, 4, 1, 1], [5, 3, 5, 0, 0], [0, 1, 0, 1, 1], [0, 0, 0, 0, 1], [1, 0, 0, 4, 5], [0, 1, 0, 5, 4], ]) U, S, V = np.linalg.svd(A) U.shape, S.shape, V.shape
SingularValueDecomposition.ipynb
muatik/dm
mit
Left singular vectors
U
SingularValueDecomposition.ipynb
muatik/dm
mit
Singular values
S np.diag(S)
SingularValueDecomposition.ipynb
muatik/dm
mit
As you can see, the singular values are sorted in descending order. Right singular vectors
V
SingularValueDecomposition.ipynb
muatik/dm
mit
Reconstructing the original matrix
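Keeping only the first k singular values gives the rank-k approximation (a standard SVD fact) that the reconstruct helper below computes:

$$ A \approx A_k = U_{m \times k} \, \Sigma_{k \times k} \, (V_{n \times k})^{T}, \qquad k \le r $$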
def reconstruct(U, S, V, rank):
    return U[:,0:rank] * np.diag(S[:rank]) * V[:rank]

r = len(S)
reconstruct(U, S, V, r)
SingularValueDecomposition.ipynb
muatik/dm
mit
We use all the dimensions to get back to the original matrix. As a result, we obtain a matrix that is almost identical to the original. Let's calculate the difference between the two matrices.
def calcError(A, B):
    return np.sum(np.power(A - B, 2))

calcError(A, reconstruct(U, S, V, r))
SingularValueDecomposition.ipynb
muatik/dm
mit
As expected, the error is tiny. However, most of the time full reconstruction is not our intention. Instead of using all the dimensions (the full rank), we use only some of them, the ones with more variance, in other words, the ones that provide more information. Let's see what we get when using only the three most significant dimensions.
reconstruct(U, S, V, 3) calcError(A, reconstruct(U, S, V, 3))
SingularValueDecomposition.ipynb
muatik/dm
mit
Again, the reconstructed matrix is very similar to the original one, and the total error is still small. Now we can ask: which rank should we pick? There is a trade-off: when you use a higher rank, you get closer to the original matrix and have less error, but you need to keep more data. On the other hand, if you use a lower rank, you will have more error but save space and remove the redundant dimensions and noise.
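One common rule of thumb for the "which rank?" question (my own sketch, not from the notebook) is to take the smallest rank whose singular values cover a chosen fraction of their total sum, matching the coverage printed by the loop below; the 0.9 threshold is just an example:

```python
import numpy as np

def pick_rank(S, threshold=0.9):
    """Smallest rank whose singular values cover `threshold` of the total sum."""
    coverage = np.cumsum(S) / np.sum(S)          # same coverage measure as below
    return int(np.searchsorted(coverage, threshold) + 1)

# e.g. with the S computed above: pick_rank(S, 0.9)
```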
reconstruct(U, S, V, 2)
calcError(A, reconstruct(U, S, V, 2))

A = np.mat([
    [4, 5, 4, 0, 4, 0, 0, 1, 0, 1, 2, 1],
    [5, 3, 5, 5, 0, 1, 0, 0, 2, 0, 0, 2],
    [0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1, 5, 0, 0, 4, 5, 4, 0],
    [0, 1, 1, 0, 0, 4, 3, 5, 5, 3, 4, 0],
])

def reconstruct(U, S, V, rank):
    return U[:,0:rank] * np.diag(S[:rank]) * V[:rank]

for rank in range(1, len(S)):
    rA = reconstruct(U, S, V, rank)
    error = calcError(A, rA)
    coverage = S[:rank].sum() / S.sum()
    print("with rank {}, coverage: {:.4f}, error: {:.4f}".format(rank, coverage, error))
SingularValueDecomposition.ipynb
muatik/dm
mit
As can be seen above, the more rank is used, the smaller the error. From another perspective, we get closer to the original data by increasing the rank. On the other hand, after a certain rank, adding more rank does not contribute as much. Let's compare a reconstructed column to the original one with the naked eye. Even though it is reconstructed using only 4 dimensions, we almost recover the original data, with some error.
print("Original:\n", A[:,10]) print("Reconstructed:\n", reconstruct(U, S, V, 4)[:,10]) imread("data/pacman.png", flatten=True).shape A = np.mat(imread("data/pacman.png", flatten=True)) U, S, V = np.linalg.svd(A) A.shape, U.shape, S.shape, V.shape for rank in range(1, len(S)): rA = reconstruct(U, S, V, rank) error = calcError(A, rA) coverage = S[:rank].sum() / S.sum() print("with rank {}, coverage: {:.4f}, error: {:.4f}".format(rank, coverage, error)) for i in range(1, 50, 5): rA = reconstruct(U, S, V, i) print(rA.shape) plt.imshow(rA, cmap='gray') plt.show() plt.imshow(data, interpolation='nearest') 128 * 128 - (10*128*2) from PIL import Image A = np.mat(imread("data/noise.png", flatten=True)) img = Image.open('data/noise.png') imggray = img.convert('LA') imgmat = np.array(list(imggray.getdata(band=0)), float) imgmat = np.array(list(imggray.getdata(band=0)), float) imgmat.shape = (imggray.size[1], imggray.size[0]) imgmat = np.matrix(imgmat) plt.figure(figsize=(9,6)) plt.imshow(imgmat, cmap='gray'); plt.show() U, S, V = np.linalg.svd(imgmat) for i in range(1, 10, 1): rA = reconstruct(U, S, V, i) print(rA.shape) plt.imshow(rA, cmap='gray'); plt.show()
SingularValueDecomposition.ipynb
muatik/dm
mit
Double check that we're using raw fluxes (norm = False):
if Starfish.config["grid"]["norm"] == False:
    print("All good.")

h5i.grid_points.shape
demo7/raw/mixture_model_01_exploratory.ipynb
gully/starfish-demo
mit
Let's load the flux of each model grid point and compute the mean flux ratio with every other model grid point. There will be $N_{grid} \times N_{grid}$ pairs of flux ratios, only half of which are unique.
N_grid, D_dim = h5i.grid_points.shape
N_tot = N_grid*N_grid

d_grd = np.empty((N_tot, D_dim*2))
f_rat = np.empty(N_tot)

c = 0
for i in np.arange(N_grid):
    print(i, end=' ')
    for j in np.arange(N_grid):
        d_grd[c] = np.hstack((h5i.grid_points[i], h5i.grid_points[j]))
        f_rat[c] = np.mean(h5i.load_flux(h5i.grid_points[i]))/np.mean(h5i.load_flux(h5i.grid_points[j]))
        c += 1
demo7/raw/mixture_model_01_exploratory.ipynb
gully/starfish-demo
mit
We now have a six dimensional design matrix and a scalar that we can fit to.
d_grd.shape, f_rat.shape from scipy.interpolate import LinearNDInterpolator interp_f_rat = LinearNDInterpolator(d_grd, f_rat) interp_f_rat(6000,4.0, 0, 6200, 5.0, -1.0) np.mean(h5i.load_flux([6000, 4.0, 0]))/np.mean(h5i.load_flux([6200, 5.0, -1.0]))
demo7/raw/mixture_model_01_exploratory.ipynb
gully/starfish-demo
mit
Checks out. So now we can produce $q_m$ on demand! Just a reminder... we have to do this for each order. The next step is to figure out how to implement this efficiently in parallel operations.
spec = h5i.load_flux(h5i.grid_points[i]) h5i.wl.shape, spec.shape import matplotlib.pyplot as plt %matplotlib inline plt.plot(h5i.wl, spec) Starfish.config["data"]["orders"]
demo7/raw/mixture_model_01_exploratory.ipynb
gully/starfish-demo
mit
Load Iris Flower Data
# Load data iris = datasets.load_iris() X = iris.data
machine-learning/dbscan_clustering-Copy1.ipynb
tpin3694/tpin3694.github.io
mit
Conduct Agglomerative Clustering In scikit-learn, AgglomerativeClustering uses the linkage parameter to determine the merging strategy to minimize the 1) variance of merged clusters (ward), 2) average of distance between observations from pairs of clusters (average), or 3) maximum distance between observations from pairs of clusters (complete). Two other parameters are useful to know. First, the affinity parameter determines the distance metric used for linkage (minkowski, euclidean, etc.). Second, n_clusters sets the number of clusters the clustering algorithm will attempt to find. That is, clusters are successively merged until there are only n_clusters remaining.
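The cell below fits the model on X_std, which is not created in the cells shown here; a minimal sketch of that missing standardization step, assuming scikit-learn's StandardScaler (any equivalent scaling would do):

```python
from sklearn.preprocessing import StandardScaler

# Standardize features to zero mean and unit variance before clustering
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
```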
# Create agglomerative clustering object
clt = AgglomerativeClustering(linkage='complete',
                              affinity='euclidean',
                              n_clusters=3)

# Train model
model = clt.fit(X_std)
machine-learning/dbscan_clustering-Copy1.ipynb
tpin3694/tpin3694.github.io
mit
Show Cluster Membership
# Show cluster membership
model.labels_
machine-learning/dbscan_clustering-Copy1.ipynb
tpin3694/tpin3694.github.io
mit
We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
times = numpy.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]) * (numpy.pi / 12.0) frequency = numpy.array([1e8]) channel_bandwidth = numpy.array([1e7]) reffrequency = numpy.max(frequency) phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000') vt = create_visibility(lowcore, times, frequency, channel_bandwidth=channel_bandwidth, weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame("stokesI"))
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
Advise on wide field parameters. This returns a dictionary with all the input and calculated variables.
advice = advise_wide_field(vt, wprojection_planes=1)
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
Plot the synthesized UV coverage.
if doplot: plt.clf() plt.plot(vt.data['uvw'][:, 0], vt.data['uvw'][:, 1], '.', color='b') plt.plot(-vt.data['uvw'][:, 0], -vt.data['uvw'][:, 1], '.', color='r') plt.xlabel('U (wavelengths)') plt.ylabel('V (wavelengths)') plt.show() plt.clf() plt.plot(vt.data['uvw'][:, 0], vt.data['uvw'][:, 2], '.', color='b') plt.xlabel('U (wavelengths)') plt.ylabel('W (wavelengths)') plt.show() plt.clf() plt.plot(vt.data['time'][vt.u>0.0], vt.data['uvw'][:, 2][vt.u>0.0], '.', color='b') plt.plot(vt.data['time'][vt.u<=0.0], vt.data['uvw'][:, 2][vt.u<=0.0], '.', color='r') plt.xlabel('U (wavelengths)') plt.ylabel('W (wavelengths)') plt.show() plt.clf() n, bins, patches = plt.hist(vt.w, 50, normed=1, facecolor='green', alpha=0.75) plt.xlabel('W (wavelengths)') plt.ylabel('Count') plt.show()
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
Show the planar nature of the uvw sampling, rotating with hour angle Create a grid of components and predict each in turn, using the full phase term including w.
npixel = 512 cellsize=0.001 facets = 4 flux = numpy.array([[100.0]]) vt.data['vis'] *= 0.0 model = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1) spacing_pixels = npixel // facets log.info('Spacing in pixels = %s' % spacing_pixels) spacing = 180.0 * cellsize * spacing_pixels / numpy.pi centers = -1.5, -0.5, +0.5, +1.5 comps=list() for iy in centers: for ix in centers: pra = int(round(npixel // 2 + ix * spacing_pixels - 1)) pdec = int(round(npixel // 2 + iy * spacing_pixels - 1)) sc = pixel_to_skycoord(pra, pdec, model.wcs) log.info("Component at (%f, %f) %s" % (pra, pdec, str(sc))) comp = create_skycomponent(flux=flux, frequency=frequency, direction=sc, polarisation_frame=PolarisationFrame("stokesI")) comps.append(comp) predict_skycomponent_visibility(vt, comps)
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
Make the dirty image and point spread function using the two-dimensional approximation: $$V(u,v,w) =\int I(l,m) e^{-2 \pi j (ul+vm)} dl dm$$ Note that the shape of the sources varies with position in the image. This space-variant property of the PSF arises from the w-term neglected in the two-dimensional invert.
arlexecute.set_client(use_dask=True) dirty = create_image_from_visibility(vt, npixel=512, cellsize=0.001, polarisation_frame=PolarisationFrame("stokesI")) vt = weight_visibility(vt, dirty) future = invert_list_arlexecute_workflow([vt], [dirty], context='2d') dirty, sumwt = arlexecute.compute(future, sync=True)[0] if doplot: show_image(dirty) print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirty.data.max(), dirty.data.min(), sumwt)) export_image_to_fits(dirty, '%s/imaging-wterm_dirty.fits' % (results_dir))
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
This occurs because the Fourier transform relationship between sky brightness and visibility is only accurate over small fields of view. Hence we can make an accurate image by partitioning the image plane into small regions, treating each separately and then gluing the resulting partitions into one image. We call this image plane partitioning image plane faceting. $$V(u,v,w) = \sum_{i,j} \frac{1}{\sqrt{1- l_{i,j}^2- m_{i,j}^2}} e^{-2 \pi j (ul_{i,j}+vm_{i,j} + w(\sqrt{1-l_{i,j}^2-m_{i,j}^2}-1))} \int I(\Delta l, \Delta m) e^{-2 \pi j (u\Delta l_{i,j}+v \Delta m_{i,j})} dl dm$$
dirtyFacet = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1) future = invert_list_arlexecute_workflow([vt], [dirtyFacet], facets=4, context='facets') dirtyFacet, sumwt = arlexecute.compute(future, sync=True)[0] if doplot: show_image(dirtyFacet) print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtyFacet.data.max(), dirtyFacet.data.min(), sumwt)) export_image_to_fits(dirtyFacet, '%s/imaging-wterm_dirtyFacet.fits' % (results_dir))
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
That was the best case. This time, we will not arrange for the partitions to be centred on the sources.
dirtyFacet2 = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1) future = invert_list_arlexecute_workflow([vt], [dirtyFacet2], facets=2, context='facets') dirtyFacet2, sumwt = arlexecute.compute(future, sync=True)[0] if doplot: show_image(dirtyFacet2) print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtyFacet2.data.max(), dirtyFacet2.data.min(), sumwt)) export_image_to_fits(dirtyFacet2, '%s/imaging-wterm_dirtyFacet2.fits' % (results_dir))
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
Another approach is to partition the visibility data by slices in w. The measurement equation is approximated as: $$V(u,v,w) =\sum_i \int \frac{ I(l,m)\, e^{-2 \pi j w_i(\sqrt{1-l^2-m^2}-1)}}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm)} dl dm$$ If images constructed from slices in w are added after applying a w-dependent image plane correction, the w term will be corrected. The w-dependent w-beam is:
if doplot: wterm = create_w_term_like(model, phasecentre=vt.phasecentre, w=numpy.max(vt.w)) show_image(wterm) plt.show() dirtywstack = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1) future = invert_list_arlexecute_workflow([vt], [dirtywstack], vis_slices=101, context='wstack') dirtywstack, sumwt = arlexecute.compute(future, sync=True)[0] show_image(dirtywstack) plt.show() print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtywstack.data.max(), dirtywstack.data.min(), sumwt)) export_image_to_fits(dirtywstack, '%s/imaging-wterm_dirty_wstack.fits' % (results_dir))
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
The w-term can also be viewed as a time-variable distortion. Approximating the array as instantaneously co-planar, we have that w can be expressed in terms of $u,v$: $$w = a u + b v$$ Transforming to a new coordinate system: $$ l' = l + a (\sqrt{1-l^2-m^2}-1)$$ $$ m' = m + b (\sqrt{1-l^2-m^2}-1)$$ Ignoring changes in the normalisation term, we have: $$V(u,v,w) =\int \frac{I(l',m')}{\sqrt{1-l'^2-m'^2}} e^{-2 \pi j (ul'+vm')} dl' dm'$$ To illustrate this, we will construct images as a function of time. For comparison, we show the difference of each time slice from the best facet image. Instantaneously the sources are un-distorted but do lie in the wrong location.
for rows in vis_timeslice_iter(vt): visslice = create_visibility_from_rows(vt, rows) dirtySnapshot = create_image_from_visibility(visslice, npixel=512, cellsize=0.001, npol=1, compress_factor=0.0) future = invert_list_arlexecute_workflow([visslice], [dirtySnapshot], context='2d') dirtySnapshot, sumwt = arlexecute.compute(future, sync=True)[0] print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtySnapshot.data.max(), dirtySnapshot.data.min(), sumwt)) if doplot: dirtySnapshot.data -= dirtyFacet.data show_image(dirtySnapshot) plt.title("Hour angle %.2f hours" % (numpy.average(visslice.time) * 12.0 / 43200.0)) plt.show()
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
This timeslice imaging leads to a straightforward algorithm in which we correct each time slice and then sum the resulting timeslices.
dirtyTimeslice = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1) future = invert_list_arlexecute_workflow([vt], [dirtyTimeslice], vis_slices=vis_timeslices(vt, 'auto'), padding=2, context='timeslice') dirtyTimeslice, sumwt = arlexecute.compute(future, sync=True)[0] show_image(dirtyTimeslice) plt.show() print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtyTimeslice.data.max(), dirtyTimeslice.data.min(), sumwt)) export_image_to_fits(dirtyTimeslice, '%s/imaging-wterm_dirty_Timeslice.fits' % (results_dir))
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
Finally we try w-projection. For a fixed w, the measurement equation can be stated as a convolution in Fourier space: $$V(u,v,w) =G_w(u,v) \ast \int \frac{I(l,m)}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm)} dl dm$$ where the convolution function is: $$G_w(u,v) = \int \frac{1}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm + w(\sqrt{1-l^2-m^2}-1))} dl dm$$ Hence we can use the transform of the w-beam to correct this effect while gridding.
dirtyWProjection = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1) gcfcf = create_awterm_convolutionfunction(model, nw=101, wstep=800.0/101, oversampling=8, support=60, use_aaf=True) future = invert_list_arlexecute_workflow([vt], [dirtyWProjection], context='2d', gcfcf=[gcfcf]) dirtyWProjection, sumwt = arlexecute.compute(future, sync=True)[0] if doplot: show_image(dirtyWProjection) print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtyWProjection.data.max(), dirtyWProjection.data.min(), sumwt)) export_image_to_fits(dirtyWProjection, '%s/imaging-wterm_dirty_WProjection.fits' % (results_dir))
workflows/notebooks/imaging-wterm_arlexecute.ipynb
SKA-ScienceDataProcessor/algorithm-reference-library
apache-2.0
Define the features and preprocess the car evaluation data set. We'll preprocess the attributes into redundant features, such as using an integer index (linear) to represent a value for an attribute, as well as using a one-hot encoding for each attribute's possible values as new features. Despite the fact that this is redundant, it will help to make the tree smaller since it has more choice on how to split the data on each branch.
input_labels = [
    ["buying", ["vhigh", "high", "med", "low"]],
    ["maint", ["vhigh", "high", "med", "low"]],
    ["doors", ["2", "3", "4", "5more"]],
    ["persons", ["2", "4", "more"]],
    ["lug_boot", ["small", "med", "big"]],
    ["safety", ["low", "med", "high"]],
]

output_labels = ["unacc", "acc", "good", "vgood"]

# Load data set
data = np.genfromtxt(os.path.join('data', 'data/car.data'), delimiter=',', dtype="U")
data_inputs = data[:, :-1]
data_outputs = data[:, -1]

def str_data_to_one_hot(data, input_labels):
    """Convert each feature's string to a flattened one-hot array. """
    X_int = LabelEncoder().fit_transform(data.ravel()).reshape(*data.shape)
    X_bin = OneHotEncoder().fit_transform(X_int).toarray()

    attrs_names = []
    for a in input_labels:
        key = a[0]
        for b in a[1]:
            value = b
            attrs_names.append("{}_is_{}".format(key, value))

    return X_bin, attrs_names

def str_data_to_linear(data, input_labels):
    """Convert each feature's string to an integer index"""
    X_lin = np.array([[
        input_labels[a][1].index(j) for a, j in enumerate(i)
    ] for i in data])

    # Indexes will range from 0 to n-1
    attrs_names = [i[0] + "_index" for i in input_labels]

    return X_lin, attrs_names

# Take both one-hot and linear versions of input features:
X_one_hot, attrs_names_one_hot = str_data_to_one_hot(data_inputs, input_labels)
X_linear_int, attrs_names_linear_int = str_data_to_linear(data_inputs, input_labels)

# Put that together:
X = np.concatenate([X_one_hot, X_linear_int], axis=-1)
attrs_names = attrs_names_one_hot + attrs_names_linear_int

# Outputs use indexes, this is not one-hot:
integer_y = np.array([output_labels.index(i) for i in data_outputs])

print("Data set's shape,")
print("X.shape, integer_y.shape, len(attrs_names), len(output_labels):")
print(X.shape, integer_y.shape, len(attrs_names), len(output_labels))

# Shaping the data into a single pandas dataframe for naming columns:
pdtrain = pd.DataFrame(X)
pdtrain.columns = attrs_names
dtrain = xgb.DMatrix(pdtrain, integer_y)
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
Vooban/Decision-Trees-For-Knowledge-Discovery
mit
Train simple decision trees (here using XGBoost) to fit the data set: First, let's define some hyperparameters, such as the depth of the tree.
num_rounds = 1  # Do not use boosting for now, we want only 1 decision tree per class.

num_classes = len(output_labels)
num_trees = num_rounds * num_classes

# Let's use a max_depth of 4 for the sole goal of simplifying the visual representation produced
# (ideally, a tree would be deeper to classify perfectly on that dataset)
param = {
    'max_depth': 4,
    'objective': 'multi:softprob',
    'num_class': num_classes
}

bst = xgb.train(param, dtrain, num_boost_round=num_rounds)

print("Decision trees trained!")
print("Mean Error Rate:", bst.eval(dtrain))
print("Accuracy:", (bst.predict(dtrain).argmax(axis=-1) == integer_y).mean()*100, "%")
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
Vooban/Decision-Trees-For-Knowledge-Discovery
mit
Plot and save the trees (one for each class): The 4 trees of the classifier (one tree per class) each output a number that represents how probable it is that the example to classify belongs to that class; by comparing the outputs of all the trees for a given example, we assign the example to the class with the maximal output. A binary situation, with a single tree that outputs a positive or negative number, would be simpler to interpret than classifying 4 classes at once.
def plot_first_trees(bst, output_labels, trees_name):
    """
    Plot and save the first trees for multiclass classification
    before any boosting was performed.
    """
    for tree_idx in range(len(output_labels)):
        class_name = output_labels[tree_idx]
        graph_save_path = os.path.join(
            "exported_xgboost_trees",
            "{}_{}_for_{}".format(trees_name, tree_idx, class_name)
        )

        graph = xgb.to_graphviz(bst, num_trees=tree_idx)
        graph.render(graph_save_path)

        # from IPython.display import display
        # display(graph)
        ### Inline display in the notebook would be too huge and would require much side scrolling.
        ### So we rather plot it anew with matplotlib and a fixed size for inline quick view purposes:
        fig, ax = plt.subplots(figsize=(16, 16))
        plot_tree(bst, num_trees=tree_idx, rankdir='LR', ax=ax)
        plt.title("Saved a high-resolution graph for the class '{}' to: {}.pdf".format(class_name, graph_save_path))
        plt.show()

# Plot our simple trees:
plot_first_trees(bst, output_labels, trees_name="simple_tree")
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
Vooban/Decision-Trees-For-Knowledge-Discovery
mit
Note that the above trees can be viewed here online: https://github.com/Vooban/Decision-Trees-For-Knowledge-Discovery/tree/master/exported_xgboost_trees Plot the importance of each input features for those simple decision trees: Note here that it is the feature importance according to our simple, shallow trees. More complex trees would include more of the features/attributes with different proportions.
fig, ax = plt.subplots(figsize=(12, 7)) xgb.plot_importance(bst, ax=ax) plt.show()
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
Vooban/Decision-Trees-For-Knowledge-Discovery
mit
Let's now generate slightly more complex trees to aid inspection <p align="center"> <a href="http://theinceptionbutton.com/" ><img src="deeper.jpg" /></a> </p> Let's go deeper and build deeper trees. However, those trees are not maximally complex since XGBoost is rather built to boost over forests of small trees than a big one.
num_rounds = 1  # Do not use boosting for now, we want only 1 decision tree per class.

num_classes = len(output_labels)
num_trees = num_rounds * num_classes

# Let's use a max_depth of 9 this time, to build the deeper trees announced above
# (ideally, a tree would be deeper to classify perfectly on that dataset)
param = {
    'max_depth': 9,
    'objective': 'multi:softprob',
    'num_class': num_classes
}

bst = xgb.train(param, dtrain, num_boost_round=num_rounds)

print("Decision trees trained!")
print("Mean Error Rate:", bst.eval(dtrain))
print("Accuracy:", (bst.predict(dtrain).argmax(axis=-1) == integer_y).mean()*100, "%")

# Plot our complex trees:
plot_first_trees(bst, output_labels, trees_name="complex_tree")

# And their feature importance:
print("Now, our feature importance chart considers more features, but it is still not complete.")
fig, ax = plt.subplots(figsize=(12, 7))
xgb.plot_importance(bst, ax=ax)
plt.show()
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
Vooban/Decision-Trees-For-Knowledge-Discovery
mit
Note that the above trees can be viewed here online: https://github.com/Vooban/Decision-Trees-For-Knowledge-Discovery/tree/master/exported_xgboost_trees Finding a perfect classifier rather than an easily explainable one. We'll now use boosting. The resulting trees can't be explained as easily as the previous ones, since one classifier will now have incrementally more trees for each class to reduce the error, each new tree based on the errors of the previous ones. And those trees will each be weighted.
num_rounds = 10  # 10 rounds of boosting, thus 10 trees per class.

num_classes = len(output_labels)
num_trees = num_rounds * num_classes

param = {
    'max_depth': 20,
    'eta': 1.43,
    'objective': 'multi:softprob',
    'num_class': num_classes,
}

bst = xgb.train(param, dtrain, early_stopping_rounds=1, num_boost_round=num_rounds, evals=[(dtrain, "dtrain")])

print("Boosted decision trees trained!")
print("Mean Error Rate:", bst.eval(dtrain))
print("Accuracy:", (bst.predict(dtrain).argmax(axis=-1) == integer_y).mean()*100, "%")
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
Vooban/Decision-Trees-For-Knowledge-Discovery
mit
In our case, note that it is possible to have an error of 0 (thus an accuracy of 100%) since we have a dataset that represents a function, which is mathematically deterministic and could be interpreted as programmatically pure if it were implemented. But wait... with our trees we just re-implemented the function that was used to generate the dataset! We don't need cross-validation nor a test set, because our training data already covers the full feature space (attribute space). Finally, the full attribute/feature importances:
# Some plot options from the doc:
# importance_type : str, default "weight"
#     How the importance is calculated: either "weight", "gain", or "cover"
#     "weight" is the number of times a feature appears in a tree
#     "gain" is the average gain of splits which use the feature
#     "cover" is the average coverage of splits which use the feature
#     where coverage is defined as the number of samples affected by the split
importance_types = ["weight", "gain", "cover"]
for i in importance_types:
    print("Importance type:", i)
    fig, ax = plt.subplots(figsize=(12, 7))
    xgb.plot_importance(bst, importance_type=i, ax=ax)
    plt.show()
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
Vooban/Decision-Trees-For-Knowledge-Discovery
mit
Get Authorization URL. Available per client. For Den it is: https://home.nest.com/login/oauth2?client_id=54033edb-04e0-4fc7-8306-5ed6cb7d7b1d&state=STATE
Where STATE should be a value that is:
- used to protect against cross-site request forgery attacks
- format: any unguessable string
- we strongly recommend that you use a new, unique value for each call
Create STATE helper
import uuid

def _get_state():
    """Get a unique id string."""
    return str(uuid.uuid1())

_get_state()
notebooks/Authorization.ipynb
krismolendyke/den
mit
Create Authorization URL Helper
API_PROTOCOL = "https"
API_LOCATION = "home.nest.com"

from urlparse import SplitResult, urlunsplit
from urllib import urlencode

def _get_url(path, query, netloc=API_LOCATION):
    """Get a URL for the given path and query."""
    split = SplitResult(scheme=API_PROTOCOL, netloc=netloc, path=path, query=query, fragment="")
    return urlunsplit(split)

def get_auth_url(client_id=DEN_CLIENT_ID):
    """Get an authorization URL for the given client id."""
    path = "login/oauth2"
    query = urlencode({"client_id": client_id, "state": _get_state()})
    return _get_url(path, query)

get_auth_url()
notebooks/Authorization.ipynb
krismolendyke/den
mit
Get Authorization Code get_auth_url() returns a URL that should be visited in the browser to get an authorization code. For Den, this authorization code will be a PIN.
!open "{get_auth_url()}"
notebooks/Authorization.ipynb
krismolendyke/den
mit
Cut and paste that PIN here:
pin = ""
notebooks/Authorization.ipynb
krismolendyke/den
mit
Get Access Token Use the pin code to request an access token. https://developer.nest.com/documentation/cloud/authorization-reference/
def get_access_token_url(client_id=DEN_CLIENT_ID, client_secret=DEN_CLIENT_SECRET, code=pin):
    """Get an access token URL for the given client id."""
    path = "oauth2/access_token"
    query = urlencode({"client_id": client_id,
                       "client_secret": client_secret,
                       "code": code,
                       "grant_type": "authorization_code"})
    return _get_url(path, query, "api." + API_LOCATION)

get_access_token_url()
notebooks/Authorization.ipynb
krismolendyke/den
mit
POST to that URL to get a response containing an access token:
import requests

r = requests.post(get_access_token_url())
print(r.status_code)
assert r.status_code == requests.codes.OK
r.json()
notebooks/Authorization.ipynb
krismolendyke/den
mit
It seems like the access token can only be created once and has a 10 year expiration time.
access_token = r.json()["access_token"] access_token
notebooks/Authorization.ipynb
krismolendyke/den
mit
Preprocessing: Principal Component Analysis 1850 dimensions is a lot for SVM. We can use PCA to reduce these 1850 features to a manageable size, while maintaining most of the information in the dataset. Here it is useful to use a variant of PCA called RandomizedPCA, which is an approximation of PCA that can be much faster for large datasets. We saw this method in the previous notebook, and will use it again here:
from sklearn import decomposition pca = decomposition.RandomizedPCA(n_components=150, whiten=True) pca.fit(X_train) X_train_pca = pca.transform(X_train) X_test_pca = pca.transform(X_test) print(X_train_pca.shape) print(X_test_pca.shape)
notebooks/03.3 Case Study - Face Recognition with Eigenfaces.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
The classifier is correct on an impressive number of images given the simplicity of its learning model! Using a linear classifier on 150 features derived from the pixel-level data, the algorithm correctly identifies a large number of the people in the images. Again, we can quantify this effectiveness using one of several measures from the sklearn.metrics module. First we can do the classification report, which shows the precision, recall and other measures of the "goodness" of the classification:
from sklearn import metrics y_pred = clf.predict(X_test_pca) print(metrics.classification_report(y_test, y_pred, target_names=lfw_people.target_names))
notebooks/03.3 Case Study - Face Recognition with Eigenfaces.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
Another interesting metric is the confusion matrix, which indicates how often any two items are mixed-up. The confusion matrix of a perfect classifier would only have nonzero entries on the diagonal, with zeros on the off-diagonal.
print(metrics.confusion_matrix(y_test, y_pred)) print(metrics.f1_score(y_test, y_pred))
notebooks/03.3 Case Study - Face Recognition with Eigenfaces.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
Pipelining Above we used PCA as a pre-processing step before applying our support vector machine classifier. Plugging the output of one estimator directly into the input of a second estimator is a commonly used pattern; for this reason scikit-learn provides a Pipeline object which automates this process. The above problem can be re-expressed as a pipeline as follows:
from sklearn.pipeline import Pipeline clf = Pipeline([('pca', decomposition.RandomizedPCA(n_components=150, whiten=True)), ('svm', svm.LinearSVC(C=1.0))]) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print(metrics.confusion_matrix(y_pred, y_test))
notebooks/03.3 Case Study - Face Recognition with Eigenfaces.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
Aggregation Operators:
- $project - shape documents, e.g. select fields
- $match - filtering
- $skip - skip documents at the start
- $limit - limit the number of documents
- $unwind - for every element of the array field on which it is used, it creates an instance of the document containing that value. This can be used for grouping.
Match operator. Who has the highest followers to friend ratio?
query = [ {"$match": {"user.friends_count": {"$gt": 0}, "user.followers_count": {"$gt": 0}}}, {"$project": {"ratio": {"$divide": ["$user.followers_count", "$user.friends_count"]}, "screen_name": "$user.screen_name"}}, {"$sort": {"ratio": -1}} ] aggregate_and_show(collection, query)
udacity_data_science_notes/Data_Wrangling_with_MongoDB/lesson_05/lesson_05.ipynb
anshbansal/anshbansal.github.io
mit
For $match we use the same syntax that we use for read operations.
Project operator:
- include fields from the original document
- insert computed fields
- rename fields
- create fields that hold sub-documents
Unwind operator: needed when we want to work with array values. Let's try to find who included the most user mentions.
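First, a small illustration of my own (not from the lesson) of those $project capabilities, reusing the aggregate_and_show helper and the tweet fields used elsewhere here; the user-mentions query follows right after it:

```python
query = [
    {"$match": {"user.friends_count": {"$gt": 0}}},      # avoid division by zero below
    {"$project": {
        "screen_name": "$user.screen_name",              # rename / pull up a nested field
        "ratio": {"$divide": ["$user.followers_count",   # computed field
                              "$user.friends_count"]},
        "counts": {                                       # field holding a sub-document
            "followers": "$user.followers_count",
            "friends": "$user.friends_count"}}}
]
aggregate_and_show(collection, query)
```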
query = [ {"$unwind": "$entities.user_mentions"}, {"$group": {"_id": "$user.screen_name", "count": {"$sum": 1}}}, {"$sort": {"count": -1}} ] aggregate_and_show(collection, query)
udacity_data_science_notes/Data_Wrangling_with_MongoDB/lesson_05/lesson_05.ipynb
anshbansal/anshbansal.github.io
mit
group operators: $sum, $first, $last, $max, $min, $avg
array operators: $push, $addToSet
# get unique hashtags by user
query = [
    {"$unwind": "$entities.hashtags"},
    {"$group": {"_id": "$user.screen_name",
                "unique_hashtags": {
                    "$addToSet": "$entities.hashtags.text"
                }}},
    {"$sort": {"_id": -1}}
]
aggregate_and_show(collection, query)

# find number of unique user mentions
query = [
    {"$unwind": "$entities.user_mentions"},
    {"$group": {
        "_id": "$user.screen_name",
        "mset": {
            "$addToSet": "$entities.user_mentions.screen_name"
        }
    }},
    {"$unwind": "$mset"},
    {"$group": {"_id": "$_id", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
aggregate_and_show(collection, query)
udacity_data_science_notes/Data_Wrangling_with_MongoDB/lesson_05/lesson_05.ipynb
anshbansal/anshbansal.github.io
mit
Analysis of a run of the gapick tool's genetic algorithm (GA). Read more about the PSF-star finding tool gapick in the astwro documentation. Results directory: gapick writes several results into the directory specified by the --out_dir parameter. Set the path to the results dir below:
resultpath = '~/tmp/gapick_fine'
examples/gapick_analyse.ipynb
majkelx/astwro
mit
Check out the contents of this directory
import os os.chdir(os.path.expanduser(resultpath)) !ls
examples/gapick_analyse.ipynb
majkelx/astwro
mit
For each generation of the GA there are three files:
* genXXX.gen - dump of all individuals as a boolean matrix
* genXXX.lst - daophot LST file with the PSF stars of the best individual
* genXXX.reg - DS9 region file corresponding to the LST file
Also, there are three links to the most recent versions of those files:
!ls -l gen_last.*
examples/gapick_analyse.ipynb
majkelx/astwro
mit
Such links are maintained during the execution of the gapick script, which allows partial-results analysis while the script is running. The structure of the *.gen files is shown below:
!head gen_last.gen
examples/gapick_analyse.ipynb
majkelx/astwro
mit
Each line is one individual and each column is one candidate star; 1 means that the candidate is a member of the individual. Moreover, there is about.txt with information about the parameters:
!cat about.txt
examples/gapick_analyse.ipynb
majkelx/astwro
mit
Also, several output files of daophot commands are included, as well as opt files with the configuration used. Evolution analysis: the logbook.pkl file contains statistics collected by the deap module during the evolution.
import deap, pickle f = open('logbook.pkl') logbook = pickle.load(f)
examples/gapick_analyse.ipynb
majkelx/astwro
mit
Plot values of chi, and number of selected stars, against generation number
gen = logbook.select('gen')
chi_mins = logbook.chapters["fitness"].select("min")
stars_avgs = logbook.chapters["size"].select("avg")

fig, ax1 = subplots()
fig.set_size_inches(10,6)

line1 = ax1.plot(gen, chi_mins, "b-", label="Minimum chi")
ax1.set_xlabel("Generation")
ax1.set_ylabel("chi", color="b")
for tl in ax1.get_yticklabels():
    tl.set_color("b")

ax2 = ax1.twinx()
line2 = ax2.plot(gen, stars_avgs, "r-", label="Average stars number")
ax2.set_ylabel("stars", color="r")
for tl in ax2.get_yticklabels():
    tl.set_color("r")

lns = line1 + line2
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc="center right")

show()
examples/gapick_analyse.ipynb
majkelx/astwro
mit
From among all the stars, the algorithm selected 100 candidates (the default value of the --stars_to_pick parameter) using the daophot PICK command. Then, with the PSF command, it computed the point spread function based on this full set and obtained the profile fit errors for these stars (the profile errors returned by daophot PSF). From the initial candidate list it rejected stars whose profile fit error exceeds 0.1 (the default value of the --max_psf_err parameter). In this case 99 stars remained. In the subsequent evolution, PSF stars were selected only from among these 99 stars. Instead of relying on PICK, the user can also provide their own list of initial candidates (the --lst_file parameter). For the initial star sets (individuals) of the first generation, candidates were drawn with probability 0.3 (the default value of the --ga_init_prob parameter), which gave on average 30 stars per set. Accordingly, the average stars number is about 30 for generation zero in the plot. Later this value stabilized in the range of 45-50 stars. The plot also shows the minimization of the chi parameter from generation to generation, with the decrease becoming very slight after about 40 generations. The next plot shows which stars were selected most often in successive generations. The color at the intersection of a generation and a candidate star indicates in how many sets of that generation the star appeared.
spectrum = logbook.select('spectrum')
x_starno = range(len(spectrum[0]))

fig, ax1 = plt.subplots()
fig.set_size_inches(30,10)
#cont = ax1.contourf(x_starno, gen, spectrum)
cont = ax1.imshow(spectrum, interpolation='nearest', cmap=plt.cm.YlGnBu)
ax1.set_xlabel("Star")
ax1.set_ylabel("Generation")
plt.colorbar(cont)
plt.show()
#plt.contourf(range(99), gen, spectrum)
examples/gapick_analyse.ipynb
majkelx/astwro
mit
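To read concrete numbers off this heatmap, here is a small sketch reusing the spectrum list from the cell above (assumption: each row holds per-candidate selection counts for one generation). It lists the candidates selected most often in the final generation:

import numpy as np
final = np.asarray(spectrum[-1])          # selection counts in the last generation
order = np.argsort(final)[::-1]           # candidates sorted by how often they were selected
print("Most frequently selected candidates (index: count):")
for idx in order[:10]:
    print("{:3d}: {}".format(idx, final[idx]))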
The plot shows how clear leaders emerge over the course of about 20 generations. Nevertheless, changes still occur in later generations: between generations 70 and 80 the algorithm "swapped" one of the stars for another one that was not preferred in the early generations. Comparison with other evolutions Size of the first-generation sets Following a suggestion from my supervisor, Dr. Gabriela Michalska, I also looked at how the evolution proceeds for different numbers of stars in the sets of the initial generation. The --init_prob $x$ parameter specifies the probability with which a candidate is drawn into a set of the first generation. If, for example, the script selects 100 candidates (the default value) with the daophot PICK command, from among which the PSF stars are searched for, then the initial sets have an average size of $100x$. Provide the set of result directories and their labels below:
resultpaths = [
    '~/tmp/gapick_fine',
    '~/tmp/gapick_simple',
]
labels = [
    'fine',
    'simple',
]
examples/gapick_analyse.ipynb
majkelx/astwro
mit
Load the log data from the result directories:
logbook = []
resultpaths = [os.path.expanduser(p) for p in resultpaths]
for p in resultpaths:
    f = open(os.path.join(p, 'logbook.pkl'), 'rb')   # binary mode for pickle
    logbook.append(pickle.load(f))

gens = []
chi_min = []
stars_av = []
for l in logbook:
    gens.append(l.select('gen'))
    chi_min.append(l.chapters["fitness"].select("min"))
    stars_av.append(l.chapters["size"].select("avg"))

fig, ax = subplots(2, 1, sharex=True)
fig.set_size_inches(10,10)

for c, d, gen in zip(chi_min, labels, gens):
    ax[0].plot(gen, c, label="Min chi ({})".format(d))
ax[0].set_ylabel("chi")
ax[0].legend(loc="upper right")
ax[0].tick_params(axis='y', which='both', labelleft='on', labelright='on')

for s, d, gen in zip(stars_av, labels, gens):
    ax[1].plot(gen, s, label="Av stars no ({})".format(d))
ax[1].set_ylabel("stars")
ax[1].legend(loc="lower right")
ax[1].tick_params(axis='y', which='both', labelleft='on', labelright='on')
ax[1].set_xlabel("Generation")
show()
examples/gapick_analyse.ipynb
majkelx/astwro
mit
Let's check how different or similar the two extreme solutions are by juxtaposing the "histograms" of their runs.
spectrum = [l.select('spectrum') for l in logbook]
x_starno = range(len(spectrum[0][0]))

fig, ax = subplots(2,1)
fig.subplots_adjust(hspace=0)
fig.set_size_inches(10,10)
ax[1].set_ylim(0, len(gen))  # flip
ax[0].imshow(spectrum[0], interpolation='nearest', cmap=plt.cm.YlGnBu)
ax[1].imshow(spectrum[-1], interpolation='nearest', cmap=plt.cm.YlGnBu)
ax[0].set_xlabel("Star")
ax[0].set_ylabel("Generation, ({})".format(labels[0]))
ax[1].set_ylabel("Generation, ({})".format(labels[-1]))
#plt.colorbar(cont)
plt.show()
examples/gapick_analyse.ipynb
majkelx/astwro
mit
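To put a number on how similar the final selections of the two runs are, one possible sketch compares the most frequently selected candidates in the last generation of each run (the choice of top_n = 30 is arbitrary, for illustration only):

import numpy as np
top_n = 30   # arbitrary illustration value
final_sets = []
for s in (spectrum[0], spectrum[-1]):
    counts = np.asarray(s[-1])                         # last-generation selection counts
    final_sets.append(set(np.argsort(counts)[::-1][:top_n]))
common = final_sets[0] & final_sets[1]
print("Candidates among the top {} picks in both runs: {}".format(top_n, len(common)))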
1. Clustering
#Some libraries
from sklearn import preprocessing
from sklearn.cluster import DBSCAN, KMeans

#Read the data, drop NaNs, get a sample
df = pd.read_csv("data/big3_position.csv", sep="\t").dropna()
df["Revenue"] = np.log10(df["Revenue"])
df["Assets"] = np.log10(df["Assets"])
df["Employees"] = np.log10(df["Employees"])
df["MarketCap"] = np.log10(df["MarketCap"])
df = df.replace([np.inf,-np.inf],np.nan).dropna().sample(300)
df.head(2)

#Scale variables to give all of them the same weight
X = df.loc[:,["Revenue","Assets","Employees","MarketCap"]]
X = preprocessing.scale(X)
print(X.sum(0))
print(X.std(0))
X
class8/class8_impute.ipynb
jgarciab/wwd2017
gpl-3.0
1a. Clustering with K-means k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. Other methods: http://scikit-learn.org/stable/modules/clustering.html
#Get labels of each row and add a new column with the labels
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
labels = kmeans.labels_
df["kmeans_labels"] = labels

sns.lmplot(x="MarketCap", y="Assets", hue="kmeans_labels", fit_reg=False, data=df)
class8/class8_impute.ipynb
jgarciab/wwd2017
gpl-3.0
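A practical question with k-means is how many clusters to ask for. A common heuristic (not part of the original notebook) is to plot the inertia for a range of k and look for an "elbow" where adding more clusters stops paying off; a minimal sketch on the scaled matrix X:

import matplotlib.pyplot as plt

inertias = []
ks = range(1, 11)
for k in ks:
    km = KMeans(n_clusters=k, random_state=0).fit(X)
    inertias.append(km.inertia_)   # within-cluster sum of squares

plt.plot(list(ks), inertias, "o-")
plt.xlabel("Number of clusters k")
plt.ylabel("Inertia")
plt.show()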
1b. Clustering with DBSCAN The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means, which assumes that clusters are convex.
#Get labels of each row and add a new column with the labels
db = DBSCAN(eps=1, min_samples=10).fit(X)
labels = db.labels_
df["dbscan_labels"] = labels

sns.lmplot(x="MarketCap", y="Assets", hue="dbscan_labels", fit_reg=False, data=df)

from IPython.display import Image
Image(url="http://scikit-learn.org/stable/_images/sphx_glr_plot_cluster_comparison_0011.png")
class8/class8_impute.ipynb
jgarciab/wwd2017
gpl-3.0
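DBSCAN marks points it considers noise with the label -1, so the number of clusters has to be derived from the labels rather than chosen up front. A short sketch using the labels fitted above:

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # exclude the noise label
n_noise = int(np.sum(labels == -1))
print("Estimated number of clusters:", n_clusters)
print("Points labelled as noise (-1):", n_noise)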
1c. Hierarchical clustering Keeps aggregating the closest points (here, the four variables) into progressively larger clusters, starting from individual points.
import scipy
import pylab
import scipy.cluster.hierarchy as sch

# Generate distance matrix based on the difference between columns (variables)
D = np.zeros([4,4])
for i in range(4):
    for j in range(4):
        D[i,j] = np.sum(np.abs(X[:,i]-X[:,j]))  #Euclidean distance or mutual information are also common
print(D)

#Create the linkage and plot
Y = sch.linkage(D, method='centroid')  #many methods: single, complete...
Z1 = sch.dendrogram(Y, orientation='right', labels=["Revenue","Assets","Employees","MarketCap"])
class8/class8_impute.ipynb
jgarciab/wwd2017
gpl-3.0
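The dendrogram only visualizes the hierarchy; to obtain flat cluster assignments one can cut the tree, for example with scipy's fcluster (cutting into two clusters here is an arbitrary choice for illustration):

flat = sch.fcluster(Y, t=2, criterion='maxclust')   # assign each variable to one of 2 clusters
for name, cl in zip(["Revenue", "Assets", "Employees", "MarketCap"], flat):
    print(name, "-> cluster", cl)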
2. Imputation of missing data (fancy)
#Required libraries
!conda install tensorflow -y
!pip install fancyimpute
!pip install pydot_ng

import sklearn.preprocessing
import sklearn

#Read the data again, but this time do not drop rows with missing values
df = pd.read_csv("data/big3_position.csv", sep="\t")
df["Revenue"] = np.log10(df["Revenue"])
df["Assets"] = np.log10(df["Assets"])
df["Employees"] = np.log10(df["Employees"])
df["MarketCap"] = np.log10(df["MarketCap"])

le = sklearn.preprocessing.LabelEncoder()
labels = le.fit_transform(df["TypeEnt"])
df["TypeEnt_int"] = labels
print(le.classes_)

df = df.replace([np.inf,-np.inf],np.nan).sample(300)
df.head(2)

cols = ["Revenue","Assets","Employees","MarketCap","TypeEnt_int"]
X = df.loc[:,cols].values
X

df.describe()

from fancyimpute import KNN
# X contains NaNs for the missing entries.
# Use the 10 nearest rows which have a feature to fill in each row's missing features.
X_filled_knn = KNN(k=10).complete(X)

df.loc[:,cols] = X_filled_knn
df.describe()
class8/class8_impute.ipynb
jgarciab/wwd2017
gpl-3.0
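As a sanity check for the KNN result, a simple baseline is column-mean imputation on the same matrix X (X still contains the original NaNs, since the KNN output above was written into df, not back into X):

col_means = np.nanmean(X, axis=0)                     # per-column means ignoring NaNs
X_filled_mean = np.where(np.isnan(X), col_means, X)   # replace NaNs with the column mean
print("NaNs before:", int(np.isnan(X).sum()), "after:", int(np.isnan(X_filled_mean).sum()))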
Sample usage Let's test our JaccardScoreCallback class with a Keras model.
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
model.summary()

batch_size = 128
epochs = 15

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
callbacks = [JaccardScoreCallback(model, x_test, np.argmax(y_test, axis=-1), "logs")]
model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs,
    validation_split=0.1,
    callbacks=callbacks,
)
examples/keras_recipes/ipynb/sklearn_metric_callbacks.ipynb
keras-team/keras-io
apache-2.0
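After training, a metric of the same kind as the one the callback tracks can be computed directly with scikit-learn, which is a handy cross-check of the logged values. A short sketch, assuming the trained model, x_test and y_test from the cell above:

from sklearn.metrics import jaccard_score

y_pred = np.argmax(model.predict(x_test), axis=-1)   # predicted class per test image
y_true = np.argmax(y_test, axis=-1)                  # true class per test image
print("Macro-averaged Jaccard score:", jaccard_score(y_true, y_pred, average="macro"))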
Python uses the $\LaTeX$ language to typeset equations. Use a single set of $ to make your $\LaTeX$ inline and a double set $$ to center it. This code will produce the output: $$ \int \cos(x)\ dx = \sin(x) $$ You can use $\LaTeX$ in plots:
plt.style.use('ggplot')

x = np.linspace(0, 2*np.pi, 100)
y = np.sin(5*x) * np.exp(-x)

plt.plot(x, y)
plt.title("The function $y\ =\ \sin(5x)\ e^{-x}$")
plt.xlabel("This is in units of 2$\pi$")
plt.text(2.0, 0.4, '$\Delta t = \gamma\, \Delta t$', color='green', fontsize=36)
08_Python_LaTeX.ipynb
UWashington-Astro300/Astro300-A16
mit
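Because some $\LaTeX$ backslash sequences (e.g. \t, \n) are also Python string escapes, raw strings (r"...") avoid surprises in plot labels. A small variant of the plot above using raw strings (same x and y as in the previous cell):

plt.plot(x, y)
plt.title(r"The function $y = \sin(5x)\, e^{-x}$")   # raw string: backslashes passed through unchanged
plt.xlabel(r"$x$ in units of $2\pi$")
plt.show()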