Columns: markdown, code, output, license, path, repo_name
Tables can be split, rearranged and combined.
df4 = df.copy() df4 pieces = [df4[6:], df4[3:6], df4[:3]] # split rows 2+3+3 pieces df5 = pd.concat(pieces) # concatenate (rearrange/combine) df5 df4+df5 # Operation between tables with original index sequence df0 = df.loc[:,'Kedai A':'Kedai C'] # Slicing and extracting columns pd.concat([df4, df0], axis = 1) # Concatenating columns (axis = 1 -> refers to column)
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
*** **_8.3 Plotting Functions_** --- Let us look at some of the simple plotting functions in $Pandas$ (these require the $Matplotlib$ library).
df_add = df.copy() # Simple auto plotting %matplotlib inline df_add.cumsum().plot() # Reposition the legend import matplotlib.pyplot as plt df_add.cumsum().plot() plt.legend(bbox_to_anchor=[1.3, 1])
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
In the above example, repositioning the legend requires the legend function from the $Matplotlib$ library. Therefore, the $Matplotlib$ library must be explicitly imported.
df_add.cumsum().plot(kind='bar') plt.legend(bbox_to_anchor=[1.3, 1]) df_add.cumsum().plot(kind='barh', stacked=True) df_add.cumsum().plot(kind='hist', alpha=0.5) df_add.cumsum().plot(kind='area', alpha=0.4, stacked=False) plt.legend(bbox_to_anchor=[1.3, 1])
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
A 3-dimensional plot can be projected on a canvas, but this requires the $Axes3D$ toolkit and slightly more complicated settings.
# Plotting a 3D bar plot from mpl_toolkits.mplot3d import Axes3D import numpy as np # Convert the time format into ordinary strings time_series = pd.Series(df.index.format()) fig = plt.figure(figsize=(8,6)) ax = fig.add_subplot(111, projection='3d') # Plotting the bar graph column by column for c, z in zip(['r', 'g', 'b', 'y','m'], np.arange(len(df.columns))): xs = df.index ys = df.values[:,z] ax.bar(xs, ys, zs=z, zdir='y', color=c, alpha=0.5) ax.set_zlabel('Z') ax.set_xticklabels(time_series, va = 'baseline', ha = 'right', rotation = 15) ax.set_yticks(np.arange(len(df.columns))) ax.set_yticklabels(df.columns, va = 'center', ha = 'left', rotation = -42) ax.view_init(30, -30) fig.tight_layout()
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
*** **_8.4 Reading And Writing Data To File_** Data in a **_DataFrame_** can be exported to **_csv_** (comma-separated values) and **_Excel_** files. Users can also create a **_DataFrame_** from data in **_csv_** and **_Excel_** files; the data can then be processed.
# Export data to a csv file but separated with < TAB > rather than comma # the default separation is with comma df.to_csv('Tutorial8/Kedai.txt', sep='\t') # Export to Excel file df.to_excel('Tutorial8/Kedai.xlsx', sheet_name = 'Tarikh', index = True) # Importing data from csv file (without header) from_file = pd.read_csv('Tutorial8/Malaysian_Town.txt',sep='\t',header=None) from_file.head() # Importing data from Excel file (with header (the first row) that became the column names) from_excel = pd.read_excel('Tutorial8/Malaysian_Town.xlsx','Sheet1') from_excel.head()
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Germany: LK Aurich (Niedersachsen)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Niedersachsen-LK-Aurich.ipynb)
import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview(country="Germany", subregion="LK Aurich", weeks=5); overview(country="Germany", subregion="LK Aurich"); compare_plot(country="Germany", subregion="LK Aurich", dates="2020-03-15:"); # load the data cases, deaths = germany_get_region(landkreis="LK Aurich") # get population of the region for future normalisation: inhabitants = population(country="Germany", subregion="LK Aurich") print(f'Population of country="Germany", subregion="LK Aurich": {inhabitants} people') # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 1000 rows pd.set_option("max_rows", 1000) # display the table table
_____no_output_____
CC-BY-4.0
ipynb/Germany-Niedersachsen-LK-Aurich.ipynb
oscovida/oscovida.github.io
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Niedersachsen-LK-Aurich.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Germany-Niedersachsen-LK-Aurich.ipynb
oscovida/oscovida.github.io
Investigation of No-show Appointments Data Table of Contents: Introduction, Data Wrangling, Exploratory Data Analysis, Conclusions. Introduction: The data includes information about more than 100,000 Brazilian medical appointments. It records whether the patient showed up for the appointment, as well as some characteristics of the patients and appointments. When we calculate the overall no-show rate for all records, we see that it is pretty high: above 20%. That means more than one out of five patients does not show up at all. In this project, we specifically look at the associations between the no-show rate and other variables and try to understand why the rate is at the level it is.
import pandas as pd import seaborn as sb import numpy as np import matplotlib.pyplot as plt % matplotlib inline
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
Data Wrangling
# Load your data and print out a few lines. Perform operations to inspect data # types and look for instances of missing or possibly errant data. filename = 'noshowappointments-kagglev2-may-2016.csv' df= pd.read_csv(filename) df.head() df.info() # no missing values
<class 'pandas.core.frame.DataFrame'> RangeIndex: 110527 entries, 0 to 110526 Data columns (total 14 columns): PatientId 110527 non-null float64 AppointmentID 110527 non-null int64 Gender 110527 non-null object ScheduledDay 110527 non-null object AppointmentDay 110527 non-null object Age 110527 non-null int64 Neighbourhood 110527 non-null object Scholarship 110527 non-null int64 Hipertension 110527 non-null int64 Diabetes 110527 non-null int64 Alcoholism 110527 non-null int64 Handcap 110527 non-null int64 SMS_received 110527 non-null int64 No-show 110527 non-null object dtypes: float64(1), int64(8), object(5) memory usage: 11.8+ MB
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
The data gives information about the gender and age of the patient, the neighbourhood of the hospital, whether the patient has hypertension, diabetes or alcoholism, the date and time of the appointment and of the scheduling, whether the patient is registered in the scholarship (welfare) program, and whether an SMS reminder was received. Looking at the column data types, AppointmentDay and ScheduledDay are recorded as object (string, to be more specific). Also, PatientId is recorded as float instead of integer, but I most probably will not make use of this column since it is very specific to the patient. The data seems pretty clean: there are no missing values or duplicated rows. First, I start by creating a dummy variable for the No-show variable, which makes it easier to look at the no-show rate across different groups.
df.describe() df.isnull().any().sum() # no missing value df.duplicated().any()
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
Data Cleaning A dummy variable named no_showup is created. It takes the value 1 if the patient did not show up, and 0 otherwise. I omitted the PatientId, AppointmentID and No-show columns. There are some rows with an Age value of -1, which does not make much sense, so I dropped these rows as well. Other than that, the data seems pretty clean: no missing values, no duplicated rows.
df['No-show'].unique() df['no_showup'] = np.where(df['No-show'] == 'Yes', 1, 0) df.drop(['PatientId', 'AppointmentID', 'No-show'], axis = 1, inplace = True) noshow = df.no_showup == 1 show = df.no_showup == 0 index = df[df.Age == -1].index df.drop(index, inplace = True)
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
Exploratory Data Analysis What factors are important in predicting no-show rate?
plt.figure(figsize = (10,6)) df.Age[noshow].plot(kind = 'hist', alpha= 0.5, color= 'green', bins =20, label = 'no-show'); df.Age[show].plot(kind = 'hist', alpha= 0.4, color= 'orange', bins =20, label = 'show'); plt.legend(); plt.xlabel('Age'); plt.ylabel('Number of Patients');
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
I started the exploratory data analysis by looking at the relationship between age and no_showup. By looking at the age distributions for patients who showed up and who did not, we cannot say much. There is a spike at around age 0, and the no-show count there is not that high compared to other ages, so we can infer that adults are careful about babies' appointments. As age increases, the number of patients in both groups decreases, which is plausible given general demographics. To say more about the show-up rate across different age groups, we need to look at the ratio of one group to the other. First, I created equally spaced age bins from age 0 to the maximum age of 115, stored in a column called age_bins. It shows which bin each patient's age falls in, so I can look at the no_showup rate across different age bins.
bin_edges = np.arange(0, df.Age.max()+3, 3) df['age_bins'] = pd.cut(df.Age, bin_edges) base_color = sb.color_palette()[0] age_order = df.age_bins.unique().sort_values() g= sb.FacetGrid(data= df, row= 'Gender', row_order = ['M', 'F'], height=4, aspect = 2); g = g.map(sb.barplot, 'age_bins', 'no_showup', color = base_color, ci = None, order = age_order); g.axes[0,0].set_ylabel('No-show Rate'); g.axes[1,0].set_ylabel('No-show Rate'); plt.xlabel('Age Intervals') plt.xticks(rotation = 90);
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
The no-show rate is smaller than average for babies (the (0, 3] interval). It then increases as age gets larger and peaks at around 15-18, depending on gender. After that point, the no-show rate decreases with age. So middle-aged and older people are much more careful about their doctor appointments, which is understandable: as you get older, your health may decline, you become more concerned about it, and you do not miss your appointments. Another explanation might be that as a person ages, it is more probable to have a health condition that requires close medical monitoring, which is an incentive to attend scheduled appointments. There are spikes at the end of the graphs; I suspect this is due to the small number of patients in the corresponding bins. There are only 5 people in the (114, 117] bin, which proves my suspicion right.
df.groupby('age_bins').size().sort_values().head(8) df.groupby('Gender').no_showup.mean()
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
There is not much difference across genders; the no-show rates are close.
order_scholar = [0, 1] g= sb.FacetGrid(data= df, col= 'Gender', col_order = ['M', 'F'], height=4); g = g.map(sb.barplot, 'Scholarship', 'no_showup', order = order_scholar, color = base_color, ci = None,); g.axes[0,0].set_ylabel('No-show Rate'); g.axes[0,1].set_ylabel('No-show Rate');
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
If the patient is in the Brazilian welfare program, the probability of not showing up for the appointment is larger than that of a patient who is not registered in the program. There is no significant difference between males and females.
order_hyper = [0, 1] g= sb.FacetGrid(data= df, col= 'Gender', col_order = ['M', 'F'], height=4); g = g.map(sb.barplot, 'Hipertension', 'no_showup', order = order_hyper, color = base_color, ci = None,); g.axes[0,0].set_ylabel('No-show Rate'); g.axes[0,1].set_ylabel('No-show Rate');
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
When a patient has hypertension or diabetes, she would not want to miss doctor appointments; having a disease that needs to be watched closely is an incentive to show up. Again, being male or female does not make a significant difference in the no-show rate.
order_diabetes = [0, 1] sb.barplot(data =df, x = 'Diabetes', y = 'no_showup', hue = 'Gender', ci = None, order = order_diabetes); sb.despine(); plt.ylabel('No-show Rate'); plt.legend(loc = 'lower right'); order_alcol = [0, 1] sb.barplot(data =df, x = 'Alcoholism', y = 'no_showup', hue = 'Gender', ci = None, order = order_alcol); sb.despine(); plt.ylabel('No-show Rate'); plt.legend(loc = 'lower right');
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
The story for alcoholism is a bit different. If the patient is a male with alcoholism, the probability of his not showing up is smaller than that of a male without alcoholism. On the other hand, alcoholism makes a female patient's probability of not showing up larger. I suspected that the number of females with alcoholism might simply be very small, but the counts below show that the numbers in both groups are comparable.
df.groupby(['Gender', 'Alcoholism']).size() order_handcap = [0, 1, 2, 3, 4] sb.barplot(data =df, x = 'Handcap', y = 'no_showup', hue = 'Gender', ci = None, order = order_handcap); sb.despine(); plt.ylabel('No-show Rate'); plt.legend(loc = 'lower right'); df.groupby(['Handcap', 'Gender']).size()
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
We cannot see a significant difference across the levels of the Handcap variable. The rate for level 4 among females is 1, but I do not pay attention to this since there are only 2 data points in that group. So the Handcap level does not say much when predicting whether a patient will show up.
plt.figure(figsize = (16,6)) sb.barplot(data = df, x='Neighbourhood', y='no_showup', color =base_color, ci = None); plt.xticks(rotation = 90); plt.ylabel('No-show Rate'); df.groupby('Neighbourhood').size().sort_values(ascending = True).head(10)
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
I want to see the no-show rate in different neighborhoods. There is no significant difference across neighborhoods except for ILHAS OCEÂNICAS DE TRINDADE, and there are only 2 data points from this place in the dataset, so such an exception can easily occur. Lastly, I want to look at how sending patients an SMS to remind them of their appointment affects the no-show rate.
plt.figure(figsize = (5,5)) sb.barplot(data = df, x='SMS_received', y='no_showup', color =base_color, ci = None); plt.title('No-show Rate vs SMS received'); plt.ylabel('No-show Rate');
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
The association between the SMS_received variable and the no-show rate is very counterintuitive. I would expect that when a patient receives an SMS reminder, she is more likely to go to the appointment. The graph says the exact opposite: with no SMS the rate is around 16%, whereas when an SMS is received it is more than 27%. This needs further and deeper examination. Understanding the Negative Association between No-show Rate and the SMS_received Variable
sb.barplot(data = df, x = 'SMS_received', y = 'no_showup', hue = 'Gender', ci = None); plt.title('No-show Rate vs SMS received'); plt.ylabel('No-show Rate'); plt.legend(loc ='lower right');
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
Gender does not have a significant impact on the rate, with or without SMS. Below I look at how the no-show rate changes with the time to the appointment day. I convert ScheduledDay and AppointmentDay to datetime. There is no hour information in the AppointmentDay variable: it contains 00:00:00 for all rows, whereas the ScheduledDay column includes the hour. A new variable named time_to_app represents the time difference between AppointmentDay and ScheduledDay. It is supposed to be positive, but because AppointmentDay records midnight for all appointments, time_to_app is negative when both dates fall on the same day. For example, if the patient schedules at 10 am for an appointment at 3 pm the same day, time_to_app is -10 hours (displayed by pandas as -1 days + 14 hours), since midnight rather than 3 pm is recorded in AppointmentDay. A quick check of this sign convention follows.
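To make the sign convention concrete, here is a quick check (with made-up timestamps) of how pandas displays such a negative time difference:

```python
import pandas as pd

scheduled = pd.Timestamp("2016-05-10 10:00:00")    # hypothetical same-day scheduling time
appointment = pd.Timestamp("2016-05-10 00:00:00")  # appointment hour truncated to midnight

print(appointment - scheduled)  # -1 days +14:00:00, i.e. minus 10 hours
```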
df['ScheduledDay'] = pd.to_datetime(df['ScheduledDay']) df['AppointmentDay'] = pd.to_datetime(df['AppointmentDay']) df['time_to_app']= df['AppointmentDay'] - df['ScheduledDay'] import datetime as dt rows_to_drop = df[df.time_to_app < dt.timedelta(days = -1)].index df.drop(rows_to_drop, inplace = True)
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
All time_to_app values smaller than -1 day are dropped, since a scheduling date that falls after the appointment date points to another data error.
time_bins = [dt.timedelta(days=-1, hours= 0), dt.timedelta(days=-1, hours= 6), dt.timedelta(days=-1, hours= 12), dt.timedelta(days=-1, hours= 15), dt.timedelta(days=-1, hours = 18), dt.timedelta(days=1), dt.timedelta(days=2), dt.timedelta(days=3), dt.timedelta(days=7), dt.timedelta(days=15), dt.timedelta(days=30), dt.timedelta(days=90), dt.timedelta(days=180)] df['time_bins'] = pd.cut(df['time_to_app'], time_bins) df.groupby('time_bins').size()
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
I created bins for the time_to_app variable. They are not equally spaced: I noticed that there is a significant number of patients in the (-1 days, 0 days] bin, so I partitioned it into smaller time bins to see the full picture. The number of points in each bin is given above. I then group the data by time_bins and look at the no-show rate.
plt.figure(figsize =(9,6)) sb.barplot(data= df, y ='time_bins', x = 'no_showup', hue = 'SMS_received', ci = None); plt.xlabel('No-show Rate'); plt.ylabel('Time to Appointment');
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
When a patient schedules an appointment for the same day (represented by the first four rows in the graph above), the no-show rate is much smaller than the overall average of more than 20%. If patients schedule an appointment for the same day (i.e. they schedule several hours before the appointment hour), they show up with more than 95% probability. And unless there are more than 2 days to the appointment at the time it is scheduled, the patient does not receive an SMS reminder. This explains the counterintuitive negative association between the no-show rate and the SMS_received variable: all patients who schedule an appointment for the same day fall into the no-SMS group, with a very low no-show rate and a high number of patients, and they pull the overall no-show rate of that group down substantially. As a result, the rate for the no-SMS group ends up much smaller than the rate for patients who receive an SMS. We can see the effect of SMS on the grouped data in the graph: SMS lowers the no-show rate in every group that contains both values of the SMS_received variable. For instance, for the (3 days, 7 days] group the no-show rate is a bit higher than 27% with no SMS, whereas it is 24% when an SMS is sent. As the time to the appointment gets larger, the SMS becomes more effective: it lowers the no-show rate by about 3, 5.5 and 7.7 percentage points when there are 3-7, 7-15 and 30-90 days to the appointment at scheduling time, respectively. We can see the overall effect of SMS on the no-show rate by taking only those groups that contain both SMS and no-SMS patients. Excluding time bins smaller than 2 days, the rate is about 33% with no SMS and 28% with an SMS sent. It is still quite interesting that patients attend the appointment with high probability when it is scheduled for the same day, while the no-show rate jumps abruptly from below 5% to above 20% even when the schedule day and the appointment day are only 1 day apart.
sms_sent = df[( df.AppointmentDay - df.ScheduledDay) >= dt.timedelta(days = 2) ] sms_sent.groupby('SMS_received').no_showup.mean()
_____no_output_____
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
Test for Embedding, to later move it into a layer
import numpy as np # Set-up numpy generator for random numbers random_number_generator = np.random.default_rng() # First tokenize the protein sequence (or any sequence) in kmers. def tokenize(protein_seqs, kmer_sz): kmers = set() # Loop over protein sequences for protein_seq in protein_seqs: # Loop over the whole sequence for i in range(len(protein_seq) - (kmer_sz - 1)): # Add kmers to the set, thus only unique kmers will remain kmers.add(protein_seq[i: i + kmer_sz]) # Map kmers for one hot-encoding kmer_to_id = dict() id_to_kmer = dict() for ind, kmer in enumerate(kmers): kmer_to_id[kmer] = ind id_to_kmer[ind] = kmer vocab_sz = len(kmers) assert vocab_sz == len(kmer_to_id.keys()) # Tokenize the protein sequence to integers tokenized = [] for protein_seq in protein_seqs: sequence = [] for i in range(len(protein_seq) - (kmer_sz -1)): # Convert kmer to integer kmer = protein_seq[i: i + kmer_sz] sequence.append(kmer_to_id[kmer]) tokenized.append(sequence) return tokenized, vocab_sz, kmer_to_id, id_to_kmer # Embedding dictionary to embed the tokenized sequence def embed(embedding_dim, vocab_sz, rng): embedding = {} for i in range(vocab_sz): # Use random number generator to fill the embedding with embedding_dim random numbers embedding[i] = rng.random(size=(embedding_dim, 1)) return embedding if __name__ == '__main__': # Globals KMER_SIZE = 3 # Choose a Kmer_size (this is a hyperparameter which can be optimized) EMBEDDING_DIM = 10 # Also a hyperparameter # Store myoglobin protein sequence in a list of protein sequences protein_seqs = ['MGLSDGEWQLVLNVWGKVEADIPGHGQEVLIRLFKGHPETLEKFDKFKHLKSEDEMKASEDLKKHGATVLTALGGILKKKGHHEAEIKPLAQSHATKHKIPVKYLEFISECIIQVLQSKHPGDFGADAQGAMNKALELFRKDMASNYKELGFQG'] # Tokenize the protein sequence tokenized_seqs, vocab_sz, kmer_to_id, id_to_kmer = tokenize(protein_seqs, KMER_SIZE) embedding = embed(EMBEDDING_DIM, vocab_sz, random_number_generator) assert vocab_sz == len(embedding) # Embed the tokenized protein sequence for protein_seq in tokenized_seqs: for token in protein_seq: print(embedding[token]) break # Embedding matrix to embed the tokenized sequence def embed(embedding_dim, vocab_sz, rng): embedding = rng.random(size=(embedding_dim, vocab_sz)) return embedding emb = embed(EMBEDDING_DIM, vocab_sz, random_number_generator) emb.shape # First tokenize the protein sequence (or any sequence) in kmers. def tokenize(protein_seqs, kmer_sz): kmers = set() # Loop over protein sequences for protein_seq in protein_seqs: # Loop over the whole sequence for i in range(len(protein_seq) - (kmer_sz - 1)): # Add kmers to the set, thus only unique kmers will remain kmers.add(protein_seq[i: i + kmer_sz]) # Map kmers for one hot-encoding kmer_to_id = dict() id_to_kmer = dict() for ind, kmer in enumerate(kmers): kmer_to_id[kmer] = ind id_to_kmer[ind] = kmer vocab_sz = len(kmers) assert vocab_sz == len(kmer_to_id.keys()) # Tokenize the protein sequence to a one-hot-encoded matrix tokenized = [] for protein_seq in protein_seqs: sequence = [] for i in range(len(protein_seq) - (kmer_sz -1)): # Convert kmer to integer kmer = protein_seq[i: i + kmer_sz] # One hot encode the kmer x = kmer_to_id[kmer] x_vec = np.zeros((vocab_sz, 1)) x_vec[x] = 1 sequence.append(x_vec) tokenized.append(sequence) return tokenized, vocab_sz, kmer_to_id, id_to_kmer # Tokenize the protein sequence tokenized_seqs, vocab_sz, kmer_to_id, id_to_kmer = tokenize(protein_seqs, KMER_SIZE) for tokenized_seq in tokenized_seqs: y = np.dot(emb, tokenized_seq) y.shape
_____no_output_____
MIT
additional/notebooks/embedding.ipynb
Mees-Molenaar/protein_location
Spectral encoding of categorical features. About a year ago I was working on a regression model which had over a million features. Needless to say, the training was super slow, and the model was overfitting a lot. After investigating this issue, I realized that most of the features were created using 1-hot encoding of the categorical features, and some of them had tens of thousands of unique values. The problem of mapping categorical features to a lower-dimensional space is not new. Recently, one of the popular ways to deal with it is using entity embedding layers of a neural network. However, that method assumes that neural networks are used. What if we decided to use tree-based algorithms instead? In this case we can use Spectral Graph Theory methods to create a low dimensional embedding of the categorical features. The idea came from spectral word embedding, spectral clustering and spectral dimensionality reduction algorithms. If we can define a similarity measure between different values of the categorical feature, we can use spectral analysis methods to find a low dimensional representation of the categorical feature. From the similarity function (or kernel function) we can construct an adjacency matrix, which is a symmetric matrix where the ij element is the value of the kernel function between category values i and j:$$ A_{ij} = K(i,j) \tag{1}$$It is very important that we only need a kernel function, not a high-dimensional representation. This means that the 1-hot encoding step is not necessary here. Also, for kernel-based machine learning methods the categorical variable encoding step is not necessary either, because what matters is the kernel function between two points, which can be constructed from the individual kernel functions. Once the adjacency matrix is constructed, we can construct a degree matrix:$$ D_{ij} = \delta_{ij} \sum_{k}{A_{ik}} \tag{2} $$Here $\delta$ is the Kronecker delta symbol. The Laplacian matrix is the difference between the two:$$ L = D - A \tag{3} $$And the normalized Laplacian matrix is defined as:$$ \mathscr{L} = D^{-\frac{1}{2}} L D^{-\frac{1}{2}} \tag{4} $$Following spectral graph theory, we proceed with the eigendecomposition of the normalized Laplacian matrix. The number of zero eigenvalues corresponds to the number of connected components. In our case, let's assume that our categorical feature has two sets of values that are completely dissimilar, meaning that the kernel function $K(i,j)$ is zero if $i$ and $j$ belong to different groups. In this case we will have two zero eigenvalues of the normalized Laplacian matrix. If there is only one connected component, we will have only one zero eigenvalue. Normally it is uninformative and is dropped to prevent multicollinearity of features; however, we can keep it if we are planning to use tree-based models. The lower eigenvalues correspond to "smooth" eigenvectors (or modes) that follow the similarity function more closely. We want to keep only these eigenvectors and drop the eigenvectors with higher eigenvalues, because the latter are more likely to represent noise. It is very common to look for a gap in the matrix spectrum and pick the eigenvalues below the gap. The resulting truncated eigenvectors can be normalized and represent embeddings of the categorical feature values. As an example, let's consider the day of week. 1-hot encoding assumes every day is equally similar to any other day ($K(i,j) = 1$). This is not a realistic assumption, because we know that days of the week are different.
For example, bar attendance spikes on Fridays and Saturdays (at least in the USA) because the following day is a weekend. Label encoding is also incorrect, because it makes the "distance" between Monday and Wednesday twice as large as that between Monday and Tuesday, and the "distance" between Sunday and Monday six times as large, even though the days are next to each other. By the way, label encoding corresponds to the kernel $K(i, j) = exp(-\gamma |i-j|)$.
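As a quick illustration (not part of the original notebook) of the kernel implied by label encoding, the adjacency matrix produced by $K(i, j) = exp(-\gamma |i-j|)$ for the seven day labels can be computed directly; $\gamma = 1$ is an arbitrary choice for display:

```python
import numpy as np

gamma = 1.0
days = np.arange(7)  # Mon=0, ..., Sun=6 under label encoding
A_label = np.exp(-gamma * np.abs(days[:, None] - days[None, :]))
np.fill_diagonal(A_label, 0)  # zero diagonal, matching the adjacency matrices used below
print(A_label.round(3))
```

The Sunday/Monday entry is the smallest off-diagonal value even though the two days are adjacent in the week, which is exactly the artifact described above.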
import numpy as np import pandas as pd np.set_printoptions(linewidth=130) def normalized_laplacian(A): 'Compute normalized Laplacian matrix given the adjacency matrix' d = A.sum(axis=0) D = np.diag(d) L = D-A D_rev_sqrt = np.diag(1/np.sqrt(d)) return D_rev_sqrt @ L @ D_rev_sqrt
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
We will consider an example where the weekdays are similar to each other but differ a lot from the weekend.
#The adjacency matrix for days of the week A_dw = np.array([[0,10,9,8,5,2,1], [0,0,10,9,5,2,1], [0,0,0,10,8,2,1], [0,0,0,0,10,2,1], [0,0,0,0,0,5,3], [0,0,0,0,0,0,10], [0,0,0,0,0,0,0]]) A_dw = A_dw + A_dw.T A_dw #The normalized Laplacian matrix for days of the week L_dw_noem = normalized_laplacian(A_dw) L_dw_noem #The eigendecomposition of the normalized Laplacian matrix sz, sv = np.linalg.eig(L_dw_noem) sz
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
Notice that the eigenvalues are not ordered here. Let's plot them, ignoring the uninformative zero.
%matplotlib inline from matplotlib import pyplot as plt import seaborn as sns sns.stripplot(data=sz[1:], jitter=False, );
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
We can see a pretty substantial gap between the first eigenvalue and the rest of the eigenvalues. If this does not give enough model performance, you can include the second eigenvalue, because the gap between it and the higher eigenvalues is also quite substantial. Let's print all eigenvectors:
sv
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
Look at the second eigenvector. The weekend values differ noticeably from the weekdays, and Friday is close to zero. This reflects the transitional role of Friday, which, being a weekday, is also the beginning of the weekend. If we pick the two lowest non-zero eigenvalues, our categorical feature encoding results in these category vectors:
#Picking only two eigenvectors category_vectors = sv[:,[1,3]] category_vectors category_vector_frame=pd.DataFrame(category_vectors, index=['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'], columns=['col1', 'col2']).reset_index() sns.scatterplot(data=category_vector_frame, x='col1', y='col2', hue='index');
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
In the plot above we see that Monday and Tuesday, and also Saturday and Sunday, are clustered close together, while Wednesday, Thursday and Friday are far apart. Learning the kernel function: In the previous example we assumed that the similarity function is given. Sometimes this is the case and it can be defined based on business rules. However, it may also be possible to learn it from data. One of the ways to compute the kernel is using the [Wasserstein distance](https://en.wikipedia.org/wiki/Wasserstein_metric). It is a good way to tell how far apart two distributions are. The idea is to estimate the data distribution (including the target variable, but excluding the categorical variable) for each value of the categorical variable. If the distributions for two values are similar, then the divergence will be small and the similarity value will be large. As a measure of similarity I choose the RBF kernel (Gaussian radial basis function):$$ A_{ij} = exp(-\gamma W(i, j)^2) \tag{5}$$Where $W(i,j)$ is the Wasserstein distance between the data distributions for the categories i and j, and $\gamma$ is a hyperparameter that has to be tuned. To try this approach we will use the [liquor sales data set](https://www.kaggle.com/residentmario/iowa-liquor-sales/downloads/iowa-liquor-sales.zip/1). To keep the file small I removed some columns and aggregated the data.
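As a toy illustration of the building blocks in Eq. (5) before applying them to the real data (the sample values and $\gamma$ below are made up):

```python
import numpy as np
from scipy.stats import wasserstein_distance

log_sales_i = np.array([1.2, 1.5, 1.7, 2.0])  # hypothetical log-sales for category i
log_sales_j = np.array([1.1, 1.6, 1.8, 2.1])  # hypothetical log-sales for category j

w = wasserstein_distance(log_sales_i, log_sales_j)
gamma = 100
similarity = np.exp(-gamma * w ** 2)  # Eq. (5): small distance -> similarity close to 1
print(w, similarity)
```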
liq = pd.read_csv('Iowa_Liquor_agg.csv', dtype={'Date': 'str', 'Store Number': 'str', 'Category': 'str', 'orders': 'int', 'sales': 'float'}, parse_dates=True) liq.Date = pd.to_datetime(liq.Date) liq.head()
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
Since we care about sales, let's encode the day of week using the information from the sales column. Let's check the histogram first:
sns.distplot(liq.sales, kde=False);
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
We see that the distribution is very skewed, so let's use the log of the sales column instead.
sns.distplot(np.log10(1+liq.sales), kde=False);
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
This is much better, so we will use the log of sales for our distribution.
liq["log_sales"] = np.log10(1+liq.sales)
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
Here we will follow [this blog](https://amethix.com/entropy-in-machine-learning/) for the computation of the divergence between distributions. Also note that since there are no liquor sales on Sunday, we consider only six days of the week.
from scipy.stats import wasserstein_distance from numpy import histogram from scipy.stats import iqr def dw_data(i): return liq[liq.Date.dt.dayofweek == i].log_sales def wass_from_data(i,j): return wasserstein_distance(dw_data(i), dw_data(j)) if i > j else 0.0 distance_matrix = np.fromfunction(np.vectorize(wass_from_data), (6,6)) distance_matrix += distance_matrix.T distance_matrix
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
As we already mentioned, the hyperparameter $\gamma$ has to be tuned. Here we just pick the value that will give a plausible result
gamma = 100 kernel = np.exp(-gamma * distance_matrix**2) np.fill_diagonal(kernel, 0) kernel norm_lap = normalized_laplacian(kernel) sz, sv = np.linalg.eig(norm_lap) sz sns.stripplot(data=sz[1:], jitter=False, );
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
Ignoring the zero eigenvalue, we can see that there is a bigger gap between the first eigenvalue and the rest of the eigenvalues, even though the values are all in the range between 1 and 1.3. Looking at the eigenvectors,
sv
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
Ultimately, the number of eigenvectors to use is another hyperparameter that should be optimized on a supervised learning task; a compact helper for this encoding step is sketched below. The Category field is another candidate for spectral analysis, and is probably a better choice since it has more unique values.
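Before turning to the Category field, here is a compact helper (a sketch only, not part of the original notebook) that consolidates the steps above: build the RBF kernel, form the normalized Laplacian, and keep the $k$ eigenvectors with the smallest non-zero eigenvalues. It assumes the `normalized_laplacian` function and a precomputed `distance_matrix` from earlier cells, and both $k$ and $\gamma$ remain hyperparameters to tune.

```python
import numpy as np

def spectral_encode(distance_matrix, gamma, k, drop_first=True):
    kernel = np.exp(-gamma * distance_matrix ** 2)
    np.fill_diagonal(kernel, 0)
    norm_lap = normalized_laplacian(kernel)  # defined in an earlier cell
    # eigh returns eigenvalues in ascending order for a symmetric matrix
    eigenvalues, eigenvectors = np.linalg.eigh(norm_lap)
    start = 1 if drop_first else 0  # optionally drop the uninformative zero mode
    return eigenvectors[:, start:start + k]

# e.g. a 2-dimensional encoding of the Category feature:
# category_embedding = spectral_encode(distance_matrix, gamma=100, k=2)
```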
len(liq.Category.unique()) unique_categories = liq.Category.unique() def dw_data_c(i): return liq[liq.Category == unique_categories[int(i)]].log_sales def wass_from_data_c(i,j): return wasserstein_distance(dw_data_c(i), dw_data_c(j)) if i > j else 0.0 #WARNING: THIS WILL TAKE A LONG TIME distance_matrix = np.fromfunction(np.vectorize(wass_from_data_c), (107,107)) distance_matrix += distance_matrix.T distance_matrix def plot_eigenvalues(gamma): "Eigendecomposition of the kernel and plot of the eigenvalues" kernel = np.exp(-gamma * distance_matrix**2) np.fill_diagonal(kernel, 0) norm_lap = normalized_laplacian(kernel) sz, sv = np.linalg.eig(norm_lap) sns.stripplot(data=sz[1:], jitter=True, ); plot_eigenvalues(100);
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
We can see that a lot of eigenvalues are grouped around the 1.1 mark. The eigenvalues below that cluster can be used for encoding the Category feature. Please also note that this method is highly sensitive to the selection of the hyperparameter $\gamma$. For illustration, let me pick a higher and a lower gamma:
plot_eigenvalues(500); plot_eigenvalues(10)
_____no_output_____
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
Tuples: In Python, tuples are very similar to lists; however, unlike lists they are *immutable*, meaning they cannot be changed. You would use tuples to represent things that shouldn't change, such as days of the week or dates on a calendar. In this section, we will get a brief overview of the following: 1.) Constructing Tuples 2.) Basic Tuple Methods 3.) Immutability 4.) When to Use Tuples. You'll already have an intuition for tuples based on what you've learned about lists; we can treat them very similarly, with the major distinction being that tuples are immutable. Constructing Tuples: Tuples are constructed with () and elements separated by commas. For example:
# Create a tuple t = (1,2,3) # Check len just like a list len(t) # Can also mix object types t = ('one',2) # Show t # Use indexing just like we did in lists t[0] # Slicing just like a list t[-1]
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb
vivekparasharr/Learn-Programming
Basic Tuple MethodsTuples have built-in methods, but not as many as lists do. Let's look at two of them:
# Use .index to enter a value and return the index t.index('one') # Use .count to count the number of times a value appears t.count('one')
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb
vivekparasharr/Learn-Programming
ImmutabilityIt can't be stressed enough that tuples are immutable. To drive that point home:
t[0]= 'change'
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb
vivekparasharr/Learn-Programming
Because of this immutability, tuples can't grow. Once a tuple is made we can not add to it.
t.append('nope')
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb
vivekparasharr/Learn-Programming
Multivariate SuSiE and ENLOC model Aim This notebook aims to demonstrate a workflow for generating posterior inclusion probabilities (PIPs) from GWAS summary statistics using SuSiE regression and constructing SNP signal clusters from global eQTL analysis data obtained from multivariate SuSiE models. Methods overview: This procedure assumes that molecular phenotype summary statistics and GWAS summary statistics are aligned and harmonized to have consistent allele coding (see [this module](../../misc/summary_stats_merger.html) for implementation details). Both molecular phenotype QTL and GWAS should be fine-mapped beforehand using mvSusiE or SuSiE. We further assume (and require) that molecular phenotype and GWAS data come from the same population ancestry. Violations of this assumption may not cause an error in the computational workflow, but the results obtained may not be valid. Input 1) GWAS summary statistics with the following columns: - chr: chromosome number - bp: base pair position - a1: effect allele - a2: other allele - beta: effect size - se: standard error of beta - z: z score 2) eQTL data from the multivariate SuSiE model with the following columns: - chr: chromosome number - bp: base pair position - a1: effect allele - a2: other allele - pip: posterior inclusion probability 3) LD correlation matrix Output Intermediate files: 1) GWAS PIP file with the following columns - var_id - ld_block - snp_pip - block_pip 2) eQTL annotation file with the following columns - chr - bp - var_id - a1 - a2 - annotations, in the format: `gene:cs_num@tissue=snp_pip[cs_pip:cs_total_snps]` Final outputs: 1) Enrichment analysis result prefix.enloc.enrich.rst: estimated enrichment parameters and standard errors. 2) Signal-level colocalization result prefix.enloc.sig.out: the main output from the colocalization analysis with the following format - column 1: signal cluster name (from eQTL analysis) - column 2: number of member SNPs - column 3: cluster PIP of eQTLs - column 4: cluster PIP of GWAS hits (without eQTL prior) - column 5: cluster PIP of GWAS hits (with eQTL prior) - column 6: regional colocalization probability (RCP) 3) SNP-level colocalization result prefix.enloc.snp.out: SNP-level colocalization output with the following format - column 1: signal cluster name - column 2: SNP name - column 3: SNP-level PIP of eQTLs - column 4: SNP-level PIP of GWAS (without eQTL prior) - column 5: SNP-level PIP of GWAS (with eQTL prior) - column 6: SNP-level colocalization probability 4) Sorted list of colocalization signals. Takes into consideration 3 situations: 1) "Major" and "minor" alleles flipped 2) Different strand but same variant 3) Remove variants with A/T and C/G alleles due to ambiguity Minimal working example
sos run mvenloc.ipynb merge \ --cwd output \ --eqtl-sumstats .. \ --gwas-sumstats .. sos run mvenloc.ipynb eqtl \ --cwd output \ --sumstats-file .. \ --ld-region .. sos run mvenloc.ipynb gwas \ --cwd output \ --sumstats-file .. \ --ld-region .. sos run mvenloc.ipynb enloc \ --cwd output \ --eqtl-pip .. \ --gwas-pip ..
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Summary
head enloc.enrich.out head enloc.sig.out head enloc.snp.out
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Command interface
sos run mvenloc.ipynb -h
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Implementation
[global] parameter: cwd = path parameter: container = ""
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Step 0: data formatting Extract common SNPs between the GWAS summary statistics and the eQTL data
[merger] # eQTL summary statistics as a list of RData parameter: eqtl_sumstats = path # GWAS summary stats in gz format parameter: gwas_sumstats = path input: eqtl_sumstats, gwas_sumstats output: f"{cwd:a}/{eqtl_sumstats:bn}.standardized.gz", f"{cwd:a}/{gwas_sumstats:bn}.standardized.gz" R: expand = "${ }" ### # functions ### allele.qc = function(a1,a2,ref1,ref2) { # a1 and a2 are the first data-set # ref1 and ref2 are the 2nd data-set # Make all the alleles into upper-case, as A,T,C,G: a1 = toupper(a1) a2 = toupper(a2) ref1 = toupper(ref1) ref2 = toupper(ref2) # Strand flip, to change the allele representation in the 2nd data-set strand_flip = function(ref) { flip = ref flip[ref == "A"] = "T" flip[ref == "T"] = "A" flip[ref == "G"] = "C" flip[ref == "C"] = "G" flip } flip1 = strand_flip(ref1) flip2 = strand_flip(ref2) snp = list() # Remove strand ambiguous SNPs (scenario 3) snp[["keep"]] = !((a1=="A" & a2=="T") | (a1=="T" & a2=="A") | (a1=="C" & a2=="G") | (a1=="G" & a2=="C")) # Remove non-ATCG coding snp[["keep"]][ a1 != "A" & a1 != "T" & a1 != "G" & a1 != "C" ] = F snp[["keep"]][ a2 != "A" & a2 != "T" & a2 != "G" & a2 != "C" ] = F # as long as scenario 1 is involved, sign_flip will return TRUE snp[["sign_flip"]] = (a1 == ref2 & a2 == ref1) | (a1 == flip2 & a2 == flip1) # as long as scenario 2 is involved, strand_flip will return TRUE snp[["strand_flip"]] = (a1 == flip1 & a2 == flip2) | (a1 == flip2 & a2 == flip1) # remove other cases, eg, tri-allelic, one dataset is A C, the other is A G, for example. exact_match = (a1 == ref1 & a2 == ref2) snp[["keep"]][!(exact_match | snp[["sign_flip"]] | snp[["strand_flip"]])] = F return(snp) } # Extract information from RData eqtl.split = function(eqtl){ rows = length(eqtl) chr = vector(length = rows) pos = vector(length = rows) a1 = vector(length = rows) a2 = vector(length = rows) for (i in 1:rows){ split1 = str_split(eqtl[i], ":") split2 = str_split(split1[[1]][2], "_") chr[i]= split1[[1]][1] pos[i] = split2[[1]][1] a1[i] = split2[[1]][2] a2[i] = split2[[1]][3] } eqtl.df = data.frame(eqtl,chr,pos,a1,a2) } remove.dup = function(df){ df = df %>% arrange(PosGRCh37, -N) df = df[!duplicated(df$PosGRCh37),] return(df) } ### # Code ### # gene regions: # 1 = ENSG00000203710 # 2 = ENSG00000064687 # 3 = ENSG00000203710 # eqtl gene.name = scan(${_input[0]:r}, what='character') # initial filter of gwas variants that are in eqtl gwas = gwas_sumstats gwas_filter = gwas[which(gwas$id %in% var),] # create eqtl df eqtl.df = eqtl.split(eqtl$var) # allele flip f_gwas = gwas %>% filter(chr %in% eqtl.df$chr & PosGRCh37 %in% eqtl.df$pos) eqtl.df.f = eqtl.df %>% filter(pos %in% f_gwas$PosGRCh37) # check if there are duplicate pos length(unique(f_gwas$PosGRCh37)) # multiple snps with same pos dup.pos = f_gwas %>% group_by(PosGRCh37) %>% filter(n() > 1) f_gwas = remove.dup(f_gwas) qc = allele.qc(f_gwas$testedAllele, f_gwas$otherAllele, eqtl.df.f$a1, eqtl.df.f$a2) keep = as.data.frame(qc$keep) sign = as.data.frame(qc$sign_flip) strand = as.data.frame(qc$strand_flip) # sign flip f_gwas$z[qc$sign_flip] = -1 * f_gwas$z[qc$sign_flip] f_gwas$testedAllele[qc$sign_flip] = eqtl.df.f$a1[qc$sign_flip] f_gwas$otherAllele[qc$sign_flip] = eqtl.df.f$a2[qc$sign_flip] f_gwas$testedAllele[qc$strand_flip] = eqtl.df.f$a1[qc$strand_flip] f_gwas$otherAllele[qc$strand_flip] = eqtl.df.f$a2[qc$strand_flip] # remove ambigiuous if ( sum(!qc$keep) > 0 ) { eqtl.df.f = eqtl.df.f[qc$keep,] f_gwas = f_gwas[qc$keep,] }
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Extract common SNPs between the summary statistics and the LD matrix
[eqtl_1, gwas_1 (filter LD file and sumstat file)] parameter: sumstat_file = path # LD and region information: chr, start, end, LD file parameter: ld_region = path input: sumstat_file, for_each = 'ld_region' output: f"{cwd:a}/{sumstat_file:bn}_{region[0]}_{region[1]}_{region[2]}.z.rds", f"{cwd:a}/{sumstat_file:bn}_{region[0]}_{region[1]}_{region[2]}.ld.rds" R: # FIXME: need to filter both ways for sumstats and for LD # lds filtered eqtl_id = which(var %in% eqtl.df.f$eqtl) ld_f = ld[eqtl_id, eqtl_id] # ld missing miss = which(is.na(ld_f), arr.ind=TRUE) miss_r = unique(as.data.frame(miss)$row) miss_c = unique(as.data.frame(miss)$col) total_miss = unique(union(miss_r,miss_c)) # FIXME: LD should not have missing data if properly processed by our pipeline # In the future we should throw an error when it happens if (length(total_miss)!=0){ ld_f2 = ld_f[-total_miss,] ld_f2 = ld_f2[,-total_miss] dim(ld_f2) }else{ld_f2 = ld_f} f_gwas.f = f_gwas %>% filter(id %in% eqtl_id.f$eqtl)
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Step 1: fine-mapping
[eqtl_2, gwas_2 (finemapping)] # FIXME: RDS file should have included region information output: f"{_input[0]:nn}.susieR.rds", f"{_input[0]:nn}.susieR_plot.rds" R: susie_results = susieR::susie_rss(z = f_gwas.f$z,R = ld_f2, check_prior = F) susieR::susie_plot(susie_results,"PIP") susie_results$z = f_gwas.f$z susieR::susie_plot(susie_results,"z_original")
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Step 2: fine-mapping results processing Construct eQTL annotation file using eQTL SNP PIPs and credible sets
[eqtl_3 (create signal cluster using CS)] output: f"{_input[0]:nn}.enloc_annot.gz" R: cs = eqtl[["sets"]][["cs"]][["L1"]] o_id = which(var %in% eqtl_id.f$eqtl) pip = eqtl$pip[o_id] eqtl_annot = cbind(eqtl_id.f, pip) %>% mutate(gene = gene.name,cluster = -1, cluster_pip = 0, total_snps = 0) for(snp in cs){ eqtl_annot$cluster[snp] = 1 eqtl_annot$cluster_pip[snp] = eqtl[["sets"]][["coverage"]] eqtl_annot$total_snps[snp] = length(cs) } eqtl_annot1 = eqtl_annot %>% filter(cluster != -1)%>% mutate(annot = sprintf("%s:%d@=%e[%e:%d]",gene,cluster,pip,cluster_pip,total_snps)) %>% select(c(chr,pos,eqtl,a1,a2,annot)) # FIXME: repeats whole process (extracting+fine-mapping+cs creation) 3 times before this next step eqtl_annot_comb = rbind(eqtl_annot3, eqtl_annot1, eqtl_annot2) # FIXME: write to a zip file write.table(eqtl_annot_comb, file = "eqtl.annot.txt", col.names = T, row.names = F, quote = F)
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Export GWAS PIP
[gwas_3 (format PIP into enloc GWAS input)] output: f"{_input[0]:nn}.enloc_gwas.gz" R: gwas_annot1 = f_gwas.f %>% mutate(pip = susie_results$pip) # FIXME: repeat whole process (extracting common snps + fine-mapping) 3 times before the next steps gwas_annot_comb = rbind(gwas_annot3, gwas_annot1, gwas_annot2) gwas_loc_annot = gwas_annot_comb %>% select(id, chr, PosGRCh37,z) write.table(gwas_loc_annot, file = "loc.gwas.txt", col.names = F, row.names = F, quote = F) bash: perl format2torus.pl loc.gwas.txt > loc2.gwas.txt R: loc = data.table::fread("loc2.gwas.txt") loc = loc[["V2"]] gwas_annot_comb2 = gwas_annot_comb %>% select(id, chr, PosGRCh37,pip) gwas_annot_comb2 = cbind(gwas_annot_comb2, loc) %>% select(id, loc, pip) write.table(gwas_annot_comb2, file = "gwas.pip.txt", col.names = F, row.names = F, quote = F) bash: perl format2torus.pl gwas.pip.txt | gzip --best > gwas.pip.gz
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Step 3: Colocalization with FastEnloc
[enloc] # eQTL summary statistics as a list of RData # FIXME: to replace later parameter: eqtl_pip = path # GWAS summary stats in gz format parameter: gwas_pip = path input: eqtl_pip, gwas_pip output: f"{cwd:a}/{eqtl_pip:bnn}.{gwas_pip:bnn}.xx.gz" bash: fastenloc -eqtl eqtl.annot.txt.gz -gwas gwas.pip.txt.gz sort -grk6 prefix.enloc.sig.out | gzip --best > prefix.enloc.sig.sorted.gz rm -f prefix.enloc.sig.out
_____no_output_____
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Guided Investigation - Anomaly Lookup__Notebook Version:__ 1.0__Python Version:__ Python 3.6 (including Python 3.6 - AzureML)__Required Packages:__ azure 4.0.0, azure-cli-profile 2.1.4__Platforms Supported:__ - Azure Notebooks Free Compute - Azure Notebook on DSVM __Data Source Required:__ - Log Analytics tables Description: Gain insights into the possible root cause of an alert by searching for related anomalies on the corresponding entities around the alert's time. This notebook will provide valuable leads for an alert's investigation, listing all suspicious increases in event counts or their properties around the time of the alert, and linking to the corresponding raw records in Log Analytics for the investigator to focus on and interpret. When you switch between Azure Notebooks Free Compute and Data Science Virtual Machine (DSVM), you may need to select the Python version: please select Python 3.6 for Free Compute, and Python 3.6 - AzureML for DSVM. Table of Contents: 1. Initialize Azure Resource Management Clients 2. Looking up for anomaly entities 1. Initialize Azure Resource Management Clients
# only run once !pip install --upgrade Azure-Sentinel-Utilities !pip install azure-cli-core # User Input and Save to Environmental store import os from SentinelWidgets import WidgetViewHelper env_dir = %env helper = WidgetViewHelper() # Enter Tenant Domain helper.set_env(env_dir, 'tenant_domain') # Enter Azure Subscription Id helper.set_env(env_dir, 'subscription_id') # Enter Azure Resource Group helper.set_env(env_dir, 'resource_group') env_dir = %env if 'tenant_domain' in env_dir: tenant_domain = env_dir['tenant_domain'] if 'subscription_id' in env_dir: subscription_id = env_dir['subscription_id'] if 'resource_group' in env_dir: resource_group = env_dir['resource_group'] from azure.loganalytics import LogAnalyticsDataClient from azure.loganalytics.models import QueryBody from azure.mgmt.loganalytics import LogAnalyticsManagementClient import SentinelAzure from SentinelAnomalyLookup import AnomalyFinder, AnomalyLookupViewHelper from pandas.io.json import json_normalize import sys import timeit import datetime as dt import pandas as pd import copy from IPython.display import HTML # Authentication to Log Analytics from azure.common.client_factory import get_client_from_cli_profile from azure.common.credentials import get_azure_cli_credentials # please enter your tenant domain below, for Microsoft, using: microsoft.onmicrosoft.com !az login --tenant $tenant_domain la_client = get_client_from_cli_profile(LogAnalyticsManagementClient, subscription_id = subscription_id) la = SentinelAzure.azure_loganalytics_helper.LogAnalyticsHelper(la_client) creds, _ = get_azure_cli_credentials(resource="https://api.loganalytics.io") la_data_client = LogAnalyticsDataClient(creds)
_____no_output_____
MIT
Notebooks/Guided Investigation - Anomaly Lookup.ipynb
CrisRomeo/Azure-Sentinel
2. Looking up for anomaly entities
# Select a workspace selected_workspace = WidgetViewHelper.select_log_analytics_workspace(la) display(selected_workspace) import ipywidgets as widgets workspace_id = la.get_workspace_id(selected_workspace.value) #DateTime format: 2019-07-15T07:05:20.000 q_timestamp = widgets.Text(value='2019-09-15',description='DateTime: ') display(q_timestamp) #Entity format: computer q_entity = widgets.Text(value='computer',description='Entity for search: ') display(q_entity) anomaly_lookup = AnomalyFinder(workspace_id, la_data_client) selected_tables = WidgetViewHelper.select_multiple_tables(anomaly_lookup) display(selected_tables) # This action may take a few minutes or more, please be patient. start = timeit.default_timer() anomalies, queries = anomaly_lookup.run(q_timestamp.value, q_entity.value, list(selected_tables.value)) display(anomalies) if queries is not None: url = WidgetViewHelper.construct_url_for_log_analytics_logs(tenant_domain, subscription_id, resource_group, selected_workspace.value) WidgetViewHelper.display_html(WidgetViewHelper.copy_to_clipboard(url, queries, 'Add queries to clipboard and go to Log Analytics')) print('==================') print('Elapsed time: ', timeit.default_timer() - start, ' seconds')
_____no_output_____
MIT
Notebooks/Guided Investigation - Anomaly Lookup.ipynb
CrisRomeo/Azure-Sentinel
4 - Train models and make predictions Motivation- **`tf.keras`** API offers built-in functions for training, validation and prediction.- Those functions are easy to use and enable you to train any ML model.- They also give you a high level of customizability. Objectives- Understand the common training workflow in TensorFlow.- Set an optimizer, a loss functions, and metrics with `Model.compile()`- Create custom losses and metrics from scratch- Train your model with `Model.fit()`- Evaluate your model with `Model.evaluate()`- Make predictions with `Model.predict()`- Discover useful callbacks during training like checkpointing and learning rate scheduling- Create custom callbacks to get 100% control on your training- Practise what you learned on a concrete example
import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt tf.__version__
_____no_output_____
Apache-2.0
4 - Train models and make predictions.ipynb
oyiakoumis/tensorflow2-course
Table of contents:
* [Overview](#overview)
* [Part 1: Setting an optimizer, a loss function, and metrics](#part-1)
* [Part 2: Training models and make predictions](#part-2)
* [Part 3: Using callbacks](#part-3)
* [Part 4: Exercise](#part-4)
* [Summary](#summary)
* [Where to go next](#next)

Overview
- Model training and evaluation work exactly the same way whether your model is a Sequential model, a model built with the Functional API, or a model written from scratch via model subclassing.
- Here's what the typical end-to-end workflow looks like:
  - Define the optimizer, training loss, and evaluation metrics (via `Model.compile()`)
  - Train your model on your training data (via `Model.fit()`)
  - Validate on a holdout set generated from the original training data
  - Evaluate the model on the test data (via `Model.evaluate()`)

In the next sections we will use the **MNIST dataset** to explain in detail how to train a model with the `keras.Model` methods listed above. As a reminder from chapter 2, the **MNIST dataset** is a large dataset of handwritten digits. Each image is a 28x28 matrix with values between 0 and 255.

![mnist.png](./ressources/mnist.png)

The following code cells build a `tf.data` pipeline for the MNIST dataset, splitting it into a training set, a validation set and a test set (resp. 60%, 20%, and 20%), and build a simple artificial neural network (ANN) classifier; a minimal compile/fit/evaluate sketch follows right after them.
# Load the MNIST dataset
train, test = tf.keras.datasets.mnist.load_data()

# Overview of the dataset:
images, labels = train
print(type(images), type(labels))
print(images.shape, labels.shape)

# First 9 images of the training set:
plt.figure(figsize=(3,3))
for i in range(9):
    plt.subplot(3,3,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(images[i], cmap=plt.cm.binary)
plt.show()

# creates tf.data.Dataset
train_ds = tf.data.Dataset.from_tensor_slices(train)
test_ds = tf.data.Dataset.from_tensor_slices(test)

# split train into train and validation:
# take the first 20% as the validation set *before* skipping it for training,
# and cast the count to int (skip/take expect an integer)
num_val = int(train_ds.cardinality().numpy() * 0.2)
val_ds = train_ds.take(num_val)
train_ds = train_ds.skip(num_val)

def configure_dataset(ds, is_training=True):
    if is_training:
        ds = ds.shuffle(48000).repeat()
    ds = ds.batch(64)
    ds = ds.map(lambda image, label: (image/255, label), num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.prefetch(tf.data.AUTOTUNE)
    return ds

train_ds = configure_dataset(train_ds, is_training=True)
val_ds = configure_dataset(val_ds, is_training=False)

# Build the model:
model = keras.Sequential([
    keras.Input(shape=(28, 28), name="digits"),
    layers.Flatten(),
    layers.Dense(64, activation="relu", name="dense_1"),
    layers.Dense(64, activation="relu", name="dense_2"),
    layers.Dense(10, activation="softmax", name="predictions"),
])

model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= flatten (Flatten) (None, 784) 0 _________________________________________________________________ dense_1 (Dense) (None, 64) 50240 _________________________________________________________________ dense_2 (Dense) (None, 64) 4160 _________________________________________________________________ predictions (Dense) (None, 10) 650 ================================================================= Total params: 55,050 Trainable params: 55,050 Non-trainable params: 0 _________________________________________________________________
Apache-2.0
4 - Train models and make predictions.ipynb
oyiakoumis/tensorflow2-course
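To close the loop on the workflow described in the overview, here is a minimal compile/fit/evaluate sketch for the model built above. The optimizer, number of epochs, and metric choices are illustrative assumptions, not settings taken from the course; it assumes `model`, `train_ds`, `val_ds`, and `test` from the previous cell.

# Minimal sketch (illustrative settings): configure, train, evaluate, predict
model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.SparseCategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=3,                      # small, just for illustration
    steps_per_epoch=48000 // 64,   # train_ds repeats, so the steps per epoch must be given
)

# Evaluate on the held-out test set (apply the same 0-255 -> 0-1 scaling)
x_test, y_test = test
model.evaluate(x_test / 255.0, y_test, batch_size=64)

# Predict class probabilities for a few test images
probs = model.predict(x_test[:5] / 255.0)
print(probs.argmax(axis=1), y_test[:5])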
CLIP GradCAM Colab

This Colab notebook uses [GradCAM](https://arxiv.org/abs/1610.02391) on OpenAI's [CLIP](https://openai.com/blog/clip/) model to produce a heatmap highlighting which regions in an image activate the most for a given caption.

**Note:** Currently this only works with the ResNet variants of CLIP. ViT support coming soon.
#@title Install dependencies #@markdown Please execute this cell by pressing the _Play_ button #@markdown on the left. #@markdown **Note**: This installs the software on the Colab #@markdown notebook in the cloud and not on your computer. %%capture !pip install ftfy regex tqdm matplotlib opencv-python scipy scikit-image !pip install git+https://github.com/openai/CLIP.git import numpy as np import torch import os import torch.nn as nn import torch.nn.functional as F import cv2 import urllib.request import matplotlib.pyplot as plt import clip from PIL import Image from skimage import transform as skimage_transform from scipy.ndimage import filters #@title Helper functions #@markdown Some helper functions for overlaying heatmaps on top #@markdown of images and visualizing with matplotlib. def normalize(x: np.ndarray) -> np.ndarray: # Normalize to [0, 1]. x = x - x.min() if x.max() > 0: x = x / x.max() return x # Modified from: https://github.com/salesforce/ALBEF/blob/main/visualization.ipynb def getAttMap(img, attn_map, blur=True): if blur: attn_map = filters.gaussian_filter(attn_map, 0.02*max(img.shape[:2])) attn_map = normalize(attn_map) cmap = plt.get_cmap('jet') attn_map_c = np.delete(cmap(attn_map), 3, 2) attn_map = 1*(1-attn_map**0.7).reshape(attn_map.shape + (1,))*img + \ (attn_map**0.7).reshape(attn_map.shape+(1,)) * attn_map_c return attn_map def viz_attn(img, attn_map, blur=True): fig, axes = plt.subplots(1, 2, figsize=(10, 5)) axes[0].imshow(img) axes[1].imshow(getAttMap(img, attn_map, blur)) for ax in axes: ax.axis("off") plt.show() def load_image(img_path, resize=None): image = Image.open(image_path).convert("RGB") if resize is not None: image = image.resize((resize, resize)) return np.asarray(image).astype(np.float32) / 255. #@title GradCAM: Gradient-weighted Class Activation Mapping #@markdown Our gradCAM implementation registers a forward hook #@markdown on the model at the specified layer. This allows us #@markdown to save the intermediate activations and gradients #@markdown at that layer. #@markdown To visualize which parts of the image activate for #@markdown a given caption, we use the caption as the target #@markdown label and backprop through the network using the #@markdown image as the input. #@markdown In the case of CLIP models with resnet encoders, #@markdown we save the activation and gradients at the #@markdown layer before the attention pool, i.e., layer4. class Hook: """Attaches to a module and records its activations and gradients.""" def __init__(self, module: nn.Module): self.data = None self.hook = module.register_forward_hook(self.save_grad) def save_grad(self, module, input, output): self.data = output output.requires_grad_(True) output.retain_grad() def __enter__(self): return self def __exit__(self, exc_type, exc_value, exc_traceback): self.hook.remove() @property def activation(self) -> torch.Tensor: return self.data @property def gradient(self) -> torch.Tensor: return self.data.grad # Reference: https://arxiv.org/abs/1610.02391 def gradCAM( model: nn.Module, input: torch.Tensor, target: torch.Tensor, layer: nn.Module ) -> torch.Tensor: # Zero out any gradients at the input. if input.grad is not None: input.grad.data.zero_() # Disable gradient settings. requires_grad = {} for name, param in model.named_parameters(): requires_grad[name] = param.requires_grad param.requires_grad_(False) # Attach a hook to the model at the desired layer. assert isinstance(layer, nn.Module) with Hook(layer) as hook: # Do a forward and backward pass. 
output = model(input) output.backward(target) grad = hook.gradient.float() act = hook.activation.float() # Global average pool gradient across spatial dimension # to obtain importance weights. alpha = grad.mean(dim=(2, 3), keepdim=True) # Weighted combination of activation maps over channel # dimension. gradcam = torch.sum(act * alpha, dim=1, keepdim=True) # We only want neurons with positive influence so we # clamp any negative ones. gradcam = torch.clamp(gradcam, min=0) # Resize gradcam to input resolution. gradcam = F.interpolate( gradcam, input.shape[2:], mode='bicubic', align_corners=False) # Restore gradient settings. for name, param in model.named_parameters(): param.requires_grad_(requires_grad[name]) return gradcam #@title Run #@markdown #### Image & Caption settings image_url = 'https://images2.minutemediacdn.com/image/upload/c_crop,h_706,w_1256,x_0,y_64/f_auto,q_auto,w_1100/v1554995050/shape/mentalfloss/516438-istock-637689912.jpg' #@param {type:"string"} image_caption = 'the cat' #@param {type:"string"} #@markdown --- #@markdown #### CLIP model settings clip_model = "RN50" #@param ["RN50", "RN101", "RN50x4", "RN50x16"] saliency_layer = "layer4" #@param ["layer4", "layer3", "layer2", "layer1"] #@markdown --- #@markdown #### Visualization settings blur = True #@param {type:"boolean"} device = "cuda" if torch.cuda.is_available() else "cpu" model, preprocess = clip.load(clip_model, device=device, jit=False) # Download the image from the web. image_path = 'image.png' urllib.request.urlretrieve(image_url, image_path) image_input = preprocess(Image.open(image_path)).unsqueeze(0).to(device) image_np = load_image(image_path, model.visual.input_resolution) text_input = clip.tokenize([image_caption]).to(device) attn_map = gradCAM( model.visual, image_input, model.encode_text(text_input).float(), getattr(model.visual, saliency_layer) ) attn_map = attn_map.squeeze().detach().cpu().numpy() viz_attn(image_np, attn_map, blur)
_____no_output_____
MIT
demos/CLIP_GradCAM_Visualization.ipynb
AdMoR/clipit
Capstone Project - Flight Delays

Do weather events have an impact on flight delays (Brazil)?

It is important to see the step-by-step dataset-cleaning process in this notebook: [https://github.com/davicsilva/dsintensive/blob/master/notebooks/flightDelayPrepData_v2.ipynb](https://github.com/davicsilva/dsintensive/blob/master/notebooks/flightDelayPrepData_v2.ipynb)
from datetime import datetime # Pandas and NumPy import pandas as pd import numpy as np # Matplotlib for additional customization from matplotlib import pyplot as plt %matplotlib inline # Seaborn for plotting and styling import seaborn as sns # 1. Flight delay: any flight with (real_departure - planned_departure >= 15 minutes) # 2. The Brazilian Federal Agency for Civil Aviation (ANAC) does not define exactly what is a "flight delay" (in minutes) # 3. Anyway, the ANAC has a resolution for this subject: https://goo.gl/YBwbMy (last access: nov, 15th, 2017) # --- # DELAY, for this analysis, is defined as greater than 15 minutes (local flights only) DELAY = 15
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
1 - Local flights dataset. For now, only flights from January to September 2017

**A note about the date columns in this dataset**
* In the original dataset (a CSV file from ANAC), the dates were not in ISO 8601 format (e.g. '2017-10-31 09:03:00')
* To fix this I used a regex (regular expression) to transform these columns directly in the CSV file
* The original date format was "31/10/2017 09:03" (October 31st, 2017, 09:03)
#[flights] dataset_01 => all "Active Regular Flights" from 2017, from january to september #source: http://www.anac.gov.br/assuntos/dados-e-estatisticas/historico-de-voos #Last access this website: nov, 14th, 2017 flights = pd.read_csv('data/arf2017ISO.csv', sep = ';', dtype = str) flights['departure-est'] = flights[['departure-est']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1) flights['departure-real'] = flights[['departure-real']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1) flights['arrival-est'] = flights[['arrival-est']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1) flights['arrival-real'] = flights[['arrival-real']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1) # Departure and Arrival columns: from 'object' to 'date' format flights['departure-est'] = pd.to_datetime(flights['departure-est'], errors='ignore') flights['departure-real'] = pd.to_datetime(flights['departure-real'], errors='ignore') flights['arrival-est'] = pd.to_datetime(flights['arrival-est'], errors='ignore') flights['arrival-real'] = pd.to_datetime(flights['arrival-real'], errors='ignore') # translate the flight status from portuguese to english flights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace("REALIZADO", "ACCOMPLISHED"), axis=1) flights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace("CANCELADO", "CANCELED"), axis=1) flights.head() flights.size flights.to_csv("flights_csv.csv")
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
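Using the DELAY threshold defined at the top of this notebook and the departure columns created above, here is a small sketch that flags delayed departures. The 'delay_minutes' and 'delayed' column names are introduced only for illustration, and the sketch assumes the departure columns parsed correctly as datetimes.

# Sketch: flag flights whose real departure is >= DELAY (15) minutes after the planned one
flights['delay_minutes'] = (flights['departure-real'] - flights['departure-est']).dt.total_seconds() / 60
flights['delayed'] = flights['delay_minutes'] >= DELAY

# Share of delayed departures among accomplished flights (canceled flights have no real departure)
accomplished = flights[flights['flight-status'] == 'ACCOMPLISHED']
accomplished['delayed'].mean()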
Some EDA tasks
# See: https://stackoverflow.com/questions/37287938/sort-pandas-dataframe-by-value # df_departures = flights.groupby(['airport-A']).size().reset_index(name='number_departures') df_departures.sort_values(by=['number_departures'], ascending=False, inplace=True) df_departures
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
2 - Local airports (list of all ~600 Brazilian public airports)

Source: https://goo.gl/mNFuPt (an XLS spreadsheet in Portuguese; last accessed on Nov 15th, 2017)
# Airports dataset: all brazilian public airports (updated until october, 2017) airports = pd.read_csv('data/brazilianPublicAirports-out2017.csv', sep = ';', dtype= str) airports.head() # Merge "flights" dataset with "airports" in order to identify # local flights (origin and destination are in Brazil) flights = pd.merge(flights, airports, left_on="airport-A", right_on="airport", how='left') flights = pd.merge(flights, airports, left_on="airport-B", right_on="airport", how='left') flights.tail()
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
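Because `airports` contains only Brazilian public airports, a flight whose origin and destination both matched in the left merges above is a local (domestic) flight. A small sketch, assuming pandas kept the duplicated `airport` column under its default `_x`/`_y` suffixes:

# Sketch: keep only local flights, i.e. rows where both endpoints matched a Brazilian public airport
local_flights = flights[flights['airport_x'].notnull() & flights['airport_y'].notnull()]
print(len(flights), len(local_flights))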
3 - List of two-letter codes used when there was a flight delay (departure)

I found two lists that define the two-letter codes used by the aircraft crew to justify a flight delay: a short one and a long one.

Source: https://goo.gl/vUC8BX (last accessed: Nov 15th, 2017)
# ------------------------------------------------------------------ # List of codes (two letters) used to justify a delay on the flight # - delayCodesShortlist.csv: list with YYY codes # - delayCodesLongList.csv: list with XXX codes # ------------------------------------------------------------------ delaycodes = pd.read_csv('data/delayCodesShortlist.csv', sep = ';', dtype = str) delaycodesLongList = pd.read_csv('data/delayCodesLonglist.csv', sep = ';', dtype = str) delaycodes.head()
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
4 - The weather data from https://www.wunderground.com/history

From this website I captured sample data for a local airport (Campinas, SP, Brazil) from January to September 2017.

The website presents the data like this (see [https://goo.gl/oKwzyH](https://goo.gl/oKwzyH)):
# Weather sample: load the CSV with weather historical data (from Campinas, SP, Brazil, 2017) weather = pd.read_csv('data/DataScience-Intensive-weatherAtCampinasAirport-2017-Campinas_Airport_2017Weather.csv', \ sep = ',', dtype = str) weather["date"] = weather["year"].map(str) + "-" + weather["month"].map(str) + "-" + weather["day"].map(str) weather["date"] = pd.to_datetime(weather['date'],errors='ignore') weather.head()
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
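To relate flight delays to weather events, the flights can be joined to the daily weather records on the calendar date. A rough sketch, assuming Campinas departures are identified by the airport code 'SBKP' in the `airport-A` column (a hypothetical value here — check the codes actually used in the data) and that the departure columns parsed as datetimes:

# Sketch: join Campinas departures with the daily weather records on the calendar date
campinas = flights[flights['airport-A'] == 'SBKP'].copy()       # 'SBKP' is an assumed code
campinas['date'] = campinas['departure-est'].dt.normalize()     # truncate the timestamp to the day

flights_weather = pd.merge(campinas, weather, on='date', how='left')
flights_weather.head()

The weather columns in `flights_weather` can then be compared against the `delayed` flag computed earlier.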
Request workspace add
t0 = time.time() ekos = EventHandler(**paths) request = ekos.test_requests['request_workspace_add_1'] response_workspace_add = ekos.request_workspace_add(request) ekos.write_test_response('request_workspace_add_1', response_workspace_add) # request = ekos.test_requests['request_workspace_add_2'] # response_workspace_add = ekos.request_workspace_add(request) # ekos.write_test_response('request_workspace_add_2', response_workspace_add) print('-'*50) print('Time for request: {}'.format(time.time()-t0))
2018-09-20 19:02:54,811 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:02:54,814 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:02:55,637 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8230469226837158 2018-09-20 19:02:55,671 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.8610491752624512 2018-09-20 19:02:55,684 event_handler.py 50 f DEBUG Start: "request_workspace_add" 2018-09-20 19:02:55,689 event_handler.py 4257 request_workspace_add DEBUG Start: request_workspace_add 2018-09-20 19:02:55,713 event_handler.py 422 copy_workspace DEBUG Trying to copy workspace "default_workspace". Copy has alias "New test workspace" 2018-09-20 19:02:55,844 event_handler.py 2984 load_workspace DEBUG Trying to load new workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace"
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Update workspace uuid in test requests
update_workspace_uuid_in_test_requests()
2018-09-20 19:02:56,883 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:02:56,886 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:02:57,796 event_handler.py 128 __init__ DEBUG Time for mapping: 0.9100518226623535 2018-09-20 19:02:57,854 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.9720554351806641
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request workspace import default data
# ekos = EventHandler(**paths) # # When copying data the first time all sources has status=0, i.e. no data will be loaded. # request = ekos.test_requests['request_workspace_import_default_data'] # response_import_data = ekos.request_workspace_import_default_data(request) # ekos.write_test_response('request_workspace_import_default_data', response_import_data)
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Import data from sharkweb
ekos = EventHandler(**paths) request = ekos.test_requests['request_sharkweb_import'] response_sharkweb_import = ekos.request_sharkweb_import(request) ekos.write_test_response('request_sharkweb_import', response_sharkweb_import) ekos.data_params ekos.selection_dicts # ekos = EventHandler(**paths) # ekos.mapping_objects['sharkweb_mapping'].df
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request data source list/edit
ekos = EventHandler(**paths) request = ekos.test_requests['request_workspace_data_sources_list'] response = ekos.request_workspace_data_sources_list(request) ekos.write_test_response('request_workspace_data_sources_list', response) request = response request['data_sources'][0]['status'] = False request['data_sources'][1]['status'] = False request['data_sources'][2]['status'] = False request['data_sources'][3]['status'] = False # request['data_sources'][4]['status'] = True # Edit data source response = ekos.request_workspace_data_sources_edit(request) ekos.write_test_response('request_workspace_data_sources_edit', response)
2018-09-20 19:31:23,369 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:31:23,373 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:31:24,259 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8860509395599365 2018-09-20 19:31:24,295 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.9250528812408447 2018-09-20 19:31:24,301 event_handler.py 50 f DEBUG Start: "request_workspace_data_sources_list" 2018-09-20 19:31:24,305 event_handler.py 4480 request_workspace_data_sources_list DEBUG Start: request_workspace_data_sources_list 2018-09-20 19:31:24,342 event_handler.py 2991 load_workspace DEBUG Trying to load new workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace" 2018-09-20 19:31:24,845 event_handler.py 3009 load_workspace INFO Workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace loaded." 2018-09-20 19:31:24,858 event_handler.py 54 f DEBUG Stop: "request_workspace_data_sources_list". Time for running method was 0.5530316829681396 2018-09-20 19:31:24,864 event_handler.py 50 f DEBUG Start: "request_workspace_data_sources_edit" 2018-09-20 19:31:24,869 event_handler.py 4438 request_workspace_data_sources_edit DEBUG Start: request_workspace_data_sources_list 2018-09-20 19:31:24,910 event_handler.py 3002 load_workspace DEBUG Workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace" is already loaded. Set reload=True if you want to reload the workspace. 2018-09-20 19:31:25,055 workspaces.py 1842 load_all_data DEBUG Data has been loaded from existing all_data.pickle file. 2018-09-20 19:31:25,058 event_handler.py 50 f DEBUG Start: "request_workspace_data_sources_list" 2018-09-20 19:31:25,063 event_handler.py 4480 request_workspace_data_sources_list DEBUG Start: request_workspace_data_sources_list 2018-09-20 19:31:25,099 event_handler.py 3002 load_workspace DEBUG Workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace" is already loaded. Set reload=True if you want to reload the workspace.
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset add
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_add_1'] response_subset_add = ekos.request_subset_add(request) ekos.write_test_response('request_subset_add_1', response_subset_add) update_subset_uuid_in_test_requests(subset_alias='mw_subset')
2018-09-20 19:05:16,853 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:05:16,857 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:05:17,716 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8590488433837891 2018-09-20 19:05:17,754 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.9010515213012695 2018-09-20 19:05:17,789 event_handler.py 2984 load_workspace DEBUG Trying to load new workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace" 2018-09-20 19:05:18,278 event_handler.py 3002 load_workspace INFO Workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace loaded."
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset get data filter
ekos = EventHandler(**paths) update_subset_uuid_in_test_requests(subset_alias='mw_subset') request = ekos.test_requests['request_subset_get_data_filter'] response_subset_get_data_filter = ekos.request_subset_get_data_filter(request) ekos.write_test_response('request_subset_get_data_filter', response_subset_get_data_filter) # import re # string = """{ # "workspace_uuid": "52725df4-b4a0-431c-a186-5e542fc6a3a4", # "data_sources": [ # { # "status": true, # "loaded": false, # "filename": "physicalchemical_sharkweb_data_all_2013-2014_20180916.txt", # "datatype": "physicalchemical" # } # ] # }""" # r = re.sub('"workspace_uuid": ".{36}"', '"workspace_uuid": "new"', string)
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset set data filter
ekos = EventHandler(**paths) update_subset_uuid_in_test_requests(subset_alias='mw_subset') request = ekos.test_requests['request_subset_set_data_filter'] response_subset_set_data_filter = ekos.request_subset_set_data_filter(request) ekos.write_test_response('request_subset_set_data_filter', response_subset_set_data_filter)
2018-09-20 13:54:00,112 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 13:54:00,112 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 13:54:00,912 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8000011444091797 2018-09-20 13:54:00,942 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.8300011157989502 2018-09-20 13:54:00,942 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 13:54:00,952 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 13:54:01,762 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8100011348724365 2018-09-20 13:54:01,792 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.8500008583068848 2018-09-20 13:54:01,822 event_handler.py 2887 load_workspace DEBUG Trying to load new workspace "fccc7645-8501-4541-975b-bdcfb40a5092" with alias "New test workspace" 2018-09-20 13:54:02,262 event_handler.py 2905 load_workspace INFO Workspace "fccc7645-8501-4541-975b-bdcfb40a5092" with alias "New test workspace loaded." 2018-09-20 13:54:02,452 event_handler.py 50 f DEBUG Start: "request_subset_set_data_filter" 2018-09-20 13:54:02,452 event_handler.py 3249 request_subset_set_data_filter DEBUG Start: request_subset_get_indicator_settings
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset get indicator settings
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_get_indicator_settings'] # request = ekos.test_requests['request_subset_get_indicator_settings_no_areas'] # print(request['subset']['subset_uuid']) # request['subset']['subset_uuid'] = 'fel' # print(request['subset']['subset_uuid']) response_subset_get_indicator_settings = ekos.request_subset_get_indicator_settings(request) ekos.write_test_response('request_subset_get_indicator_settings', response_subset_get_indicator_settings)
2018-09-20 06:50:41,643 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 06:50:41,643 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 06:50:42,330 event_handler.py 128 __init__ DEBUG Time for mapping: 0.6864011287689209 2018-09-20 06:50:42,361 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.7176012992858887 2018-09-20 06:50:42,361 event_handler.py 50 f DEBUG Start: "request_subset_get_indicator_settings" 2018-09-20 06:50:42,361 event_handler.py 3416 request_subset_get_indicator_settings DEBUG Start: request_subset_get_indicator_settings 2018-09-20 06:50:42,392 event_handler.py 2887 load_workspace DEBUG Trying to load new workspace "1a349dfd-5e08-4617-85a8-5bdde050a4ee" with alias "New test workspace" 2018-09-20 06:50:42,798 event_handler.py 2905 load_workspace INFO Workspace "1a349dfd-5e08-4617-85a8-5bdde050a4ee" with alias "New test workspace loaded." 2018-09-20 06:50:42,829 workspaces.py 1842 load_all_data DEBUG Data has been loaded from existing all_data.pickle file.
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset set indicator settings
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_set_indicator_settings'] response_subset_set_indicator_settings = ekos.request_subset_set_indicator_settings(request) ekos.write_test_response('request_subset_set_indicator_settings', response_subset_set_indicator_settings)
2018-09-20 12:09:08,454 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 12:09:08,454 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 12:09:09,234 event_handler.py 128 __init__ DEBUG Time for mapping: 0.780001163482666 2018-09-20 12:09:09,264 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.8100008964538574 2018-09-20 12:09:09,284 event_handler.py 50 f DEBUG Start: "request_subset_set_indicator_settings" 2018-09-20 12:09:09,294 event_handler.py 3627 request_subset_set_indicator_settings DEBUG Start: request_subset_set_indicator_settings 2018-09-20 12:09:09,324 event_handler.py 2887 load_workspace DEBUG Trying to load new workspace "1a349dfd-5e08-4617-85a8-5bdde050a4ee" with alias "New test workspace" 2018-09-20 12:09:09,444 logger.py 85 add_log DEBUG 2018-09-20 12:09:09,444 logger.py 86 add_log DEBUG ======================================================================================================================== 2018-09-20 12:09:09,454 logger.py 87 add_log DEBUG ### Log added for log_id "cc264e56-f958-4ec4-932d-bc0cc1d2caf8" at locaton: D:\git\ekostat_calculator\workspaces\1a349dfd-5e08-4617-85a8-5bdde050a4ee\log\subset_cc264e56-f958-4ec4-932d-bc0cc1d2caf8.log 2018-09-20 12:09:09,454 logger.py 88 add_log DEBUG ------------------------------------------------------------------------------------------------------------------------ 2018-09-20 12:09:09,544 logger.py 85 add_log DEBUG 2018-09-20 12:09:09,554 logger.py 86 add_log DEBUG ======================================================================================================================== 2018-09-20 12:09:09,554 logger.py 87 add_log DEBUG ### Log added for log_id "default_subset" at locaton: D:\git\ekostat_calculator\workspaces\1a349dfd-5e08-4617-85a8-5bdde050a4ee\log\subset_default_subset.log 2018-09-20 12:09:09,564 logger.py 88 add_log DEBUG ------------------------------------------------------------------------------------------------------------------------
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset calculate status
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_calculate_status'] response = ekos.request_subset_calculate_status(request) ekos.write_test_response('request_subset_calculate_status', response)
2018-09-20 19:05:31,914 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:05:31,917 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:05:32,790 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8740499019622803 2018-09-20 19:05:32,826 event_handler.py 133 __init__ DEBUG Time for initiating EventHandler: 0.9130520820617676 2018-09-20 19:05:32,831 event_handler.py 50 f DEBUG Start: "request_subset_calculate_status" 2018-09-20 19:05:32,837 event_handler.py 3296 request_subset_calculate_status DEBUG Start: request_subset_calculate_status 2018-09-20 19:05:32,871 event_handler.py 2984 load_workspace DEBUG Trying to load new workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace" 2018-09-20 19:05:33,359 event_handler.py 3002 load_workspace INFO Workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace loaded." 2018-09-20 19:05:33,453 workspaces.py 1842 load_all_data DEBUG Data has been loaded from existing all_data.pickle file. 2018-09-20 19:05:33,493 event_handler.py 2995 load_workspace DEBUG Workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace" is already loaded. Set reload=True if you want to reload the workspace. 2018-09-20 19:05:33,530 event_handler.py 2995 load_workspace DEBUG Workspace "a377ee26-cd2d-411b-999c-073cd7a3dbd4" with alias "New test workspace" is already loaded. Set reload=True if you want to reload the workspace.
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset result get
# Run the workspace-result request and save the test response
ekos = EventHandler(**paths)
request = ekos.test_requests['request_workspace_result']
response_workspace_result = ekos.request_workspace_result(request)
ekos.write_test_response('request_workspace_result', response_workspace_result)

# Inspect the data for one indicator in one water body
response_workspace_result['subset']['a4e53080-2c68-40d5-957f-8cc4dbf77815']['result']['SE552170-130626']['result']['indicator_din_winter']['data']

# Request a time-series dict for a given water body and indicator
workspace_uuid = 'fccc7645-8501-4541-975b-bdcfb40a5092'
subset_uuid = 'a4e53080-2c68-40d5-957f-8cc4dbf77815'
result = ekos.dict_data_timeseries(workspace_uuid=workspace_uuid,
                                   subset_uuid=subset_uuid,
                                   viss_eu_cd='SE575150-162700',
                                   element_id='indicator_din_winter')

print(result['datasets'][0]['x'])
print()
print(result['y'])

for k in range(len(result['datasets'])):
    print(result['datasets'][k]['x'])

import datetime

# NOTE: `all_dates` (sorted list of datetimes) and `date_to_y` (dict mapping each
# dataset index to a {date: value} dict) are assumed to have been built from
# `result` beforehand; they are not defined in this cell.

# Extend the date list with the first day of every month inside the data range
start_year = all_dates[0].year
end_year = all_dates[-1].year + 1

date_intervall = []
for year in range(start_year, end_year+1):
    for month in range(1, 13):
        d = datetime.datetime(year, month, 1)
        if d >= all_dates[0] and d <= all_dates[-1]:
            date_intervall.append(d)

extended_dates = sorted(set(all_dates + date_intervall))

# Loop over the extended dates: label only the month boundaries on the x-axis
# and fill missing y-values with None
new_x = []
new_y = dict((item, []) for item in date_to_y)
for date in extended_dates:
    if date in date_intervall:
        new_x.append(date.strftime('%y-%b'))
    else:
        new_x.append('')
    for i in new_y:
        new_y[i].append(date_to_y[i].get(date, None))

# new_y = {}
# for i in date_to_y:
#     new_y[i] = []
#     for date in all_dates:
#         d = date_to_y[i].get(date)
#         if d:
#             new_y[i].append(d)
#         else:
#             new_y[i].append(None)

new_y[0]

# A simple list with the first day of every month for 2011-2013
year_list = range(2011, 2013+1)
month_list = range(1, 13)
date_list = []
for year in year_list:
    for month in month_list:
        date_list.append(datetime.datetime(year, month, 1))
date_list

# Leftover scratch fragments from the original notebook (they reference
# undefined names, so they are kept here only as comments):
# a
# y[3][i]
# sorted(pd.to_datetime(df['SDATE']))
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Notebook adapted from http://www.pieriandata.com

NumPy Indexing and Selection

In this lecture we will discuss how to select elements or groups of elements from an array.
import numpy as np #Creating sample array arr = np.arange(0,11) #Show arr
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
Bracket Indexing and Selection

The simplest way to pick one or more elements of an array looks very similar to indexing Python lists:
#Get a value at an index arr[8] #Get values in a range arr[1:5] #Get values in a range arr[0:5] # l = ['a', 'b', 'c'] # l[0:2]
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
Broadcasting

NumPy arrays differ from normal Python lists because of their ability to broadcast. With lists, you can only reassign parts of a list with new parts of the same size and shape. That is, if you wanted to replace the first 5 elements in a list with a new value, you would have to pass in a new 5-element list. With NumPy arrays, you can broadcast a single value across a larger set of values:
l = list(range(10)) l l[0:5] = [100,100,100,100,100] l #Setting a value with index range (Broadcasting) arr[0:5]=100 #Show arr # Reset array, we'll see why I had to reset in a moment arr = np.arange(0,11) #Show arr #Important notes on Slices slice_of_arr = arr[0:6] #Show slice slice_of_arr #Change Slice slice_of_arr[:]=99 #Show Slice again slice_of_arr
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
Now note the changes also occur in our original array!
arr
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
Data is not copied, it's a view of the original array! This avoids memory problems!
#To get a copy, need to be explicit arr_copy = arr.copy() arr_copy
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
Indexing a 2D array (matrices)

The general format is **arr_2d[row][col]** or **arr_2d[row,col]**. I recommend using the comma notation for clarity.
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45])) #Show arr_2d #Indexing row arr_2d[1] # Format is arr_2d[row][col] or arr_2d[row,col] # Getting individual element value arr_2d[1][0] # Getting individual element value arr_2d[1,0] # 2D array slicing #Shape (2,2) from top right corner arr_2d[:2,1:] #Shape bottom row arr_2d[2] #Shape bottom row arr_2d[2,:]
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
More Indexing Help

Indexing a 2D matrix can be a bit confusing at first, especially when you start to add in a step size (a short step-size example is shown at the end of this section). Try a Google image search for *NumPy indexing* to find useful images, like this one:

Image source: http://www.scipy-lectures.org/intro/numpy/numpy.html

Conditional Selection

This is a very fundamental concept that will directly translate to pandas later on, so make sure you understand this part!

Let's briefly go over how to use brackets for selection based on comparison operators.
arr = np.arange(1,11) arr arr > 4 bool_arr = arr>4 bool_arr arr[bool_arr] arr[arr>2] x = 2 arr[arr>x]
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
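As promised above, here is a quick illustration of slicing with a step size, which works the same way as for Python lists (start:stop:step):

arr = np.arange(0, 11)
print(arr[::2])    # every other element
print(arr[::-1])   # the array reversed
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))
arr_2d[::2, ::2]   # every other row and every other column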
Chapter 4: Linear models

[Link to outline](https://docs.google.com/document/d/1fwep23-95U-w1QMPU31nOvUnUXE2X3s_Dbk5JuLlKAY/edit#heading=h.9etj7aw4al9w)

Concept map:
![concepts_LINEARMODELS.png](attachment:c335ebb2-f116-486c-8737-22e517de3146.png)

Notebook setup
import numpy as np import pandas as pd import scipy as sp import seaborn as sns from scipy.stats import uniform, norm # notebooks figs setup %matplotlib inline import matplotlib.pyplot as plt sns.set(rc={'figure.figsize':(8,5)}) blue, orange = sns.color_palette()[0], sns.color_palette()[1] # silence annoying warnings import warnings warnings.filterwarnings('ignore')
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
4.1 Linear models for the relationship between two numeric variables

- def'n linear model: **y ~ m*x + b**, a.k.a. linear regression
- Amy has collected a new dataset:
  - Instead of receiving a fixed amount of stats training (100 hours), **each employee now receives a variable amount of stats training (anywhere from 0 hours to 100 hours)**
  - Amy has collected ELV values after one year as previously
- Goal: find the best-fit line for the relationship $\textrm{ELV} \sim \beta_0 + \beta_1\!*\!\textrm{hours}$
- Limitation: **we assume the change in ELV is proportional to the number of hours** (i.e. a linear relationship). Other types of hours-ELV relationship are possible, but we will not be able to model them correctly (see figure below).

New dataset
- The `hours` column contains the `x` values (how many hours of statistics training the employee received)
- The `ELV` column contains the `y` values (the employee ELV after one year)

![excel_file_for_linear_models.png](attachment:71dfeb87-78ec-4523-94fa-7df9a6db4aec.png)
# Load data into a pandas dataframe df2 = pd.read_excel("data/ELV_vs_hours.ods", sheet_name="Data") # df2 df2.describe() # plot ELV vs. hours data sns.scatterplot(x='hours', y='ELV', data=df2) # linear model plot (preview) # sns.lmplot(x='hours', y='ELV', data=df2, ci=False)
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Types of linear relationship between input and output

Different possible relationships between the number of hours of stats training and ELV gains:

![figures/ELV_as_function_of_stats_hours.png](figures/ELV_as_function_of_stats_hours.png)

4.2 Fitting linear models

- Main idea: use the `fit` method from `statsmodels.ols` and a formula (approach 1)
- Visual inspection
- Results of the linear model fit are:
  - `beta0` = $\beta_0$ = baseline ELV (y-intercept)
  - `beta1` = $\beta_1$ = increase in ELV for each additional hour of stats training (slope)
- Five more alternative fitting methods (bonus material):
  2. fit using statsmodels `OLS`
  3. solution using `linregress` from `scipy`
  4. solution using `optimize` from `scipy`
  5. linear algebra solution using `numpy`
  6. solution using the `LinearRegression` model from scikit-learn

Using the statsmodels formula API

The `statsmodels` Python library offers a convenient way to specify a statistical model as a "formula" that describes the relationship we're looking for.

Mathematically, the linear model is written:

$\large \textrm{ELV} \ \ \sim \ \ \beta_0\cdot 1 \ + \ \beta_1\cdot\textrm{hours}$

and the formula is:

`ELV ~ 1 + hours`

Note the variables $\beta_0$ and $\beta_1$ are omitted, since the whole point of fitting a linear model is to find these coefficients. The parameters of the model are:
- Instead of $\beta_0$, the constant parameter will be called `Intercept`
- Instead of a new name for $\beta_1$, we'll call it the `hours` coefficient (i.e. the coefficient associated with the `hours` variable in the model)
import statsmodels.formula.api as smf model = smf.ols('ELV ~ 1 + hours', data=df2) result = model.fit() # extact the best-fit model parameters beta0, beta1 = result.params beta0, beta1 # data points sns.scatterplot(x='hours', y='ELV', data=df2) # linear model for data x = df2['hours'].values # input = hours ymodel = beta0 + beta1*x # output = ELV sns.lineplot(x, ymodel) result.summary()
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Alternative model fitting methods

2. fit using statsmodels [`OLS`](https://www.statsmodels.org/stable/generated/statsmodels.regression.linear_model.OLS.html)
3. solution using [`linregress`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html) from `scipy`
4. solution using [`minimize`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) from `scipy`
5. [linear algebra](https://numpy.org/doc/stable/reference/routines.linalg.html) solution using `numpy`
6. solution using the [`LinearRegression`](https://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares) model from scikit-learn

Data pre-processing

The `statsmodels` formula `ols` approach we used above was able to get the data directly from the dataframe `df2`, but some of the other model fitting methods require the data to be provided as regular arrays: the x-values and the y-values.
# extract hours and ELV data from df2 x = df2['hours'].values # hours data as an array y = df2['ELV'].values # ELV data as an array x.shape, y.shape # x
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Two of the approaches required "packaging" the x-values along with a column of ones, to form a matrix (called a design matrix). Luckily `statsmodels` provides a convenient function for this:
import statsmodels.api as sm # add a column of ones to the x data X = sm.add_constant(x) X.shape # X
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
____ 2. fit using statsmodels OLS
model2 = sm.OLS(y, X) result2 = model2.fit() # result2.summary() result2.params
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
____ 3. solution using `linregress` from `scipy`
from scipy.stats import linregress result3 = linregress(x, y) result3.intercept, result3.slope
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
____ 4. Using an optimization approach
from scipy.optimize import minimize def sse(beta, x=x, y=y): """Compute the sum-of-squared-errors objective function.""" sumse = 0.0 for xi, yi in zip(x, y): yi_pred = beta[0] + beta[1]*xi ei = (yi_pred-yi)**2 sumse += ei return sumse result4 = minimize(sse, x0=[0,0]) beta0, beta1 = result4.x beta0, beta1
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
____ 5. Linear algebra solution

We obtain the least squares solution using the Moore–Penrose inverse formula:

$$ \large \vec{\beta} = (X^{\sf T} X)^{-1}X^{\sf T}\; \vec{y}$$
# 5. linear algebra solution using `numpy` import numpy as np result5 = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) beta0, beta1 = result5 beta0, beta1
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
_____ Using scikit-learn
# 6. solution using `LinearRegression` from scikit-learn from sklearn import linear_model model6 = linear_model.LinearRegression() model6.fit(x[:,np.newaxis], y) model6.intercept_, model6.coef_
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
4.3 Interpreting linear models

- model fit checks
  - $R^2$ [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination) = the proportion of the variation in the dependent variable that is predictable from the independent variable
  - plot of residuals (a short residual-plot sketch follows the summary below)
  - many others: see the [scikit-learn docs](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics)
- hypothesis tests
  - is the slope zero or nonzero? (and its confidence interval)
  - caution: we cannot make any cause-and-effect claims; only a correlation
- Predictions
  - given the best-fit model obtained from the data, we can make predictions (interpolations), e.g., what is the expected ELV after 50 hours of stats training?

Interpreting the results

Let's review some of the other data included in the `result.summary()` report for the linear model fit we did earlier.
result.summary()
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
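One of the model-fit checks listed above is a plot of the residuals. Here is a short sketch using the fitted `result` object (statsmodels exposes the residuals and fitted values as `result.resid` and `result.fittedvalues`):

# Residuals vs. fitted values: for a good linear fit the residuals should
# scatter around zero with no obvious pattern
sns.scatterplot(x=result.fittedvalues, y=result.resid)
plt.axhline(0, color='gray', linestyle='--')
plt.xlabel('fitted ELV')
plt.ylabel('residual')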
Model parameters
beta0, beta1 = result.params result.params
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
The $R^2$ coefficient of determination

$R^2 = 1$ corresponds to perfect prediction
result.rsquared
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Hypothesis testing for slope coefficient

Is there a non-zero slope coefficient?

- **null hypothesis $H_0$**: `hours` has no effect on `ELV`, which is equivalent to $\beta_1 = 0$:

  $$ \large H_0: \qquad \textrm{ELV} \sim \mathcal{N}(\color{red}{\beta_0}, \sigma^2) \qquad \qquad \qquad $$

- **alternative hypothesis $H_A$**: `hours` has an effect on `ELV`, and the slope is not zero, $\beta_1 \neq 0$:

  $$ \large H_A: \qquad \textrm{ELV} \sim \mathcal{N}\left( \color{blue}{\beta_0 + \beta_1\!\cdot\!\textrm{hours}}, \ \sigma^2 \right) $$
# p-value under the null hypotheis of zero slope or "no effect of `hours` on `ELV`" result.pvalues.loc['hours'] # 95% confidence interval for the hours-slope parameter # result.conf_int() CI_hours = list(result.conf_int().loc['hours']) CI_hours
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
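The p-value and confidence interval above come from a t-test on the `hours` coefficient. As a sanity check, here is a sketch of the same computation done by hand from the estimated slope, its standard error, and the residual degrees of freedom:

from scipy.stats import t as tdist

# t statistic = estimated slope / standard error of the slope
t_hours = result.params['hours'] / result.bse['hours']

# two-sided p-value under the null hypothesis beta_1 = 0
pvalue = 2 * tdist.sf(abs(t_hours), df=result.df_resid)
t_hours, pvalue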
Predictions using the model

We can use the model we obtained to predict (interpolate) the ELV for future employees.
sns.scatterplot(x='hours', y='ELV', data=df2) ymodel = beta0 + beta1*x sns.lineplot(x, ymodel)
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
What ELV can we expect from a new employee that takes 50 hours of stats training?
result.predict({'hours':[50]}) result.predict({'hours':[100]})
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
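Beyond point predictions, statsmodels can also report the uncertainty around them. A short sketch using `get_prediction`, which returns both the confidence interval for the mean ELV and the wider prediction interval for an individual employee at 50 hours of training:

pred = result.get_prediction({'hours': [50]})
pred.summary_frame(alpha=0.05)   # mean, 95% CI for the mean, and 95% prediction interval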