News_analysis.ipynb
###Markdown
News analysis

Notebook description

This notebook aims to analyze the correlation between news about Bitcoin and the Bitcoin price over the years.

Data overview

The dataset used in the notebook's charts is the result of merging numerous public datasets with automated crawling; those datasets' sources are listed in the README file. Each post has been classified using the fast classifier from the Flair framework (read the README for details about the license).

Basic Analysis

To achieve uniformity between charts, a project-wide color palette has been used.
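As a rough illustration of the classification step mentioned above, here is a minimal sketch of scoring a single headline with Flair; the model name `'sentiment-fast'` and the example text are assumptions and may differ from what the project actually used (see the README).

```python
# Sketch only: load a pre-trained Flair sentiment classifier and score one headline.
# 'sentiment-fast' is an assumed model name; the project's actual setup may differ.
from flair.data import Sentence
from flair.models import TextClassifier

classifier = TextClassifier.load('sentiment-fast')

sentence = Sentence('Bitcoin hits a new all-time high')  # example text, not from the dataset
classifier.predict(sentence)

label = sentence.labels[0]  # POSITIVE or NEGATIVE, with a confidence score
signed_score = label.score if label.value == 'POSITIVE' else -label.score
print(label.value, round(signed_score, 3))
```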
###Code
from palette import palette
import ipywidgets as widgets
from ipywidgets import interact, interact_manual, IntSlider
base_path = './Datasets/'
###Output
_____no_output_____
###Markdown
For consistency with the Twitter analysis, the posts' info has been grouped by date (view for grouping details).
###Code
daily_csv = base_path + 'news_daily_info.csv'
import pandas as pd
raw_df = pd.read_csv(daily_csv).dropna()
def avg_sentiment(group) -> float:
total = sum(group['count'])
sentiment = sum(group['count'] * group['signed_score'])/total
return sentiment
def score_to_label(score) -> str:
if score == 0:
return 'NEUTRAL'
return 'POSITIVE' if score > 0 else 'NEGATIVE'
def normalize(value: float, range_min: float, range_max: float) -> float:
return (value-range_min)/(range_max-range_min)
def normalize_series(series, series_min=None, series_max=None) -> pd.Series:
if series_min is None:
series_min = min(series)
if series_max is None:
series_max = max(series)
return series.apply(lambda x: normalize(x, series_min, series_max))
raw_df['label'] = raw_df['label'].apply(lambda x: x.replace('"', ''))
raw_df['signed_score'] = raw_df['conf'] * raw_df['label'].apply(lambda x: 1 if x == 'POSITIVE' else -1)
###Output
_____no_output_____
###Markdown
The common date range is calculated by intersecting the market dataframe with the news one.
###Code
market_daily_csv = base_path + '/market_daily_info.csv'
market_dates = pd.read_csv(market_daily_csv).dropna()['date']
dates_min = max([min(market_dates), min(raw_df['date'])])
dates_max = min([max(market_dates), max(raw_df['date'])])
dates = pd.concat([market_dates, raw_df['date']])
dates = dates.drop_duplicates().sort_values()
dates = dates[(dates_min <= dates) & (dates <= dates_max)]
raw_df = raw_df[(raw_df['date'] >= dates_min) & (raw_df['date'] <= dates_max)]
date_grouped = raw_df.groupby('date')
daily_df = pd.DataFrame(index=raw_df['date'].drop_duplicates())
daily_df['sentiment'] = date_grouped.apply(avg_sentiment)
daily_df['norm_sent'] = normalize_series(daily_df['sentiment'], -1, 1)
daily_df['label'] = daily_df['sentiment'].apply(score_to_label)
daily_df['count'] = date_grouped.apply(lambda x: sum(x['count']))
daily_df['norm_count'] = normalize_series(daily_df['count'], 0)
negatives = raw_df[raw_df['label'] == 'NEGATIVE'][['date', 'count']]
negatives.columns= ['date', 'negatives']
positives = raw_df[raw_df['label'] == 'POSITIVE'][['date', 'count']]
positives.columns= ['date', 'positives']
daily_df = daily_df.merge(negatives, on='date')
daily_df = daily_df.merge(positives, on='date')
daily_df = daily_df.drop_duplicates(subset=['date'])
daily_df = daily_df[(daily_df['date'] >= dates_min) & (daily_df['date'] <= dates_max)]
daily_df['norm_sent'] = normalize_series(daily_df['sentiment'], -1, 1)
###Output
_____no_output_____
###Markdown
As for the news, the market info is day-grained.
###Code
market_daily_csv = base_path+ 'market_daily_info.csv'
market_raw_df = pd.read_csv(market_daily_csv)
market_raw_df = market_raw_df.dropna()
market_df = pd.DataFrame(dates, columns=['date'])
market_df = market_df.merge(market_raw_df, on='date')
market_df['mid_price'] = (market_df['high'] + market_df['low'])/2
market_df['norm_mid_price'] = normalize_series(market_df['mid_price'])
market_df = market_df[(dates_min <= market_df['date']) & (market_df['date']<= dates_max)]
import altair as alt
alt.data_transformers.enable('json')
###Output
_____no_output_____
###Markdown
Monthly volume
###Code
daily_df['month'] = daily_df['date'].apply(lambda x: x[:-3])
monthly_volume = daily_df[['month', 'count']].groupby(by='month', as_index=False).mean()
plot_title = alt.TitleParams('Monthly volume', subtitle='Average daily volume per month')
volume_chart = alt.Chart(monthly_volume, title=plot_title).mark_area().encode(alt.X('yearmonth(month):T', title='Date'),
                                                                              alt.Y('count', title='Volume'),
                                                                              color=alt.value(palette['news']))
volume_chart
###Output
_____no_output_____
###Markdown
There is a pattern in the daily volume: there are fewer than 40 news items per day (excluding 2021), and in August the count drops below 10 per day. This reflects the crawling method and the nature of news: for each day only the first Google News page was crawled and, normally, in August, when people are on holiday, interest in financial topics drops.

Basic data exploration

Sentiment
###Code
sent_rounded = daily_df[['norm_sent']].copy()
sent_rounded['norm_sent'] = sent_rounded['norm_sent'].apply(lambda x: round(x, 2))
alt.Chart(sent_rounded, title='Sentiment dispersion').mark_boxplot(color=palette['news']).encode(alt.X('norm_sent', title='Normalized sentiment')).properties(height=200)
###Output
_____no_output_____
###Markdown
The IQR shows that the classifier had some trouble classifying news, or that the news items are simply impartial (from experience, Bitcoin-related news is not impartial). Contrary to what happens for the other media, the news sentiment spans the whole classification range.
###Code
sent_dist = sent_rounded.groupby('norm_sent', as_index=False).size()
sent_dist.columns = ['norm_sent', 'count']
alt.Chart(sent_dist, title='Sentiment distribution').mark_area().encode(alt.X('norm_sent', title='Normalized sentiment'), alt.Y('count', title='Count'), color=alt.value(palette['news']))
###Output
_____no_output_____
###Markdown
The sentiment distribution is similar to a normal distribution but wider, so the Pearson correlation could be useful.

Volume
###Code
alt.Chart(daily_df, title='Volume dispersion').mark_boxplot(color=palette['news']).encode(alt.X('count', title=None)).properties(height=200)
###Output
_____no_output_____
###Markdown
As expected, there are a few outliers (2021 news).
###Code
volumes = daily_df[['count']]
volumes_dist = volumes.groupby('count', as_index=False).size()
volumes_dist.columns = ['volume', 'count']
alt.Chart(volumes_dist, title='Volume distribution').mark_line().encode(alt.X('volume', title=None), alt.Y('count', title='Count'), color=alt.value(palette['news']))
###Output
_____no_output_____
###Markdown
The volume distribution is clearly skewed, with most of the mass on the left, so Pearson correlation can't be applied to evaluate the volume correlation.

Sentiment analysis
###Code
domain = [0, 1]
color_range = [palette['negative'], palette['positive']]
time_selector = alt.selection(type='interval', encodings=['x'])
gradient = alt.Color('norm_sent', scale=alt.Scale(domain=domain, range=color_range), title='Normalized sentiment')
price_chart = alt.Chart(market_df).mark_line(color=palette['strong_price']).encode(
x=alt.X('yearmonthdate(date):T',
scale=alt.Scale(domain=time_selector),
title=None),
y=alt.Y('mid_price', title='Mid price')
)
dummy_df = daily_df[['date']].copy()
dummy_df['value'] = 0.5
dummy_chart = alt.Chart(dummy_df).mark_line(color=palette['smooth_neutral']).encode(alt.X('yearmonthdate(date):T',
scale=alt.Scale(domain=time_selector),
axis=None),
alt.Y('value',
scale=alt.Scale(domain=[0,1]),
axis=None))
plot_title = alt.TitleParams('Normalized sentiment vs Bitcoin price', subtitle='0:= negative, 1:= positive')
histogram = alt.Chart(daily_df, title=plot_title).mark_bar().encode(alt.X('yearmonthdate(date):T',
bin=alt.Bin(maxbins=100, extent=time_selector),
scale=alt.Scale(domain=time_selector),
axis=alt.Axis(labelOverlap='greedy', labelSeparation=6)),
alt.Y('norm_sent',
scale=alt.Scale(domain=[0,1]),
title='Normalized sentiment'),
color=gradient)
selection_plot = alt.Chart(daily_df).mark_bar().encode(alt.X('yearmonthdate(date):T',
bin=alt.Bin(maxbins=100),
title='Date',
axis=alt.Axis(labelOverlap='greedy', labelSeparation=6)),
alt.Y('norm_sent', title=None),
color=gradient).add_selection(time_selector).properties(height=50)
((histogram + dummy_chart + price_chart).resolve_scale(y='independent') & selection_plot).configure_axisRight(titleColor=palette['strong_price'])
###Output
_____no_output_____
###Markdown
Bar binning has some problems plotting the sentiment when it is in [-1, 1]; for that reason, the interactive version uses the normalized sentiment and the static one uses the original sentiment values.
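For reference, the normalization used here (`normalize_series(daily_df['sentiment'], -1, 1)`) maps a sentiment score $s \in [-1, 1]$ to

$$\text{norm\_sent} = \frac{s - (-1)}{1 - (-1)} = \frac{s + 1}{2} \in [0, 1],$$

so 0.5 corresponds to a neutral score.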
###Code
domain = [-1, 1]
gradient = alt.Color('sentiment', scale=alt.Scale(domain=domain, range=color_range), title='Sentiment')
price_chart = alt.Chart(market_df).mark_line(color=palette['strong_price']).encode(
x=alt.X('yearmonthdate(date):T'),
y=alt.Y('mid_price', title='Mid price')
)
plot_title = alt.TitleParams('Static sentiment vs Bitcoin price', subtitle='-1:= negative, 1:= positive')
histogram = alt.Chart(daily_df, title=plot_title).mark_bar().encode(alt.X('yearmonthdate(date):T', title='Date'),
alt.Y('sentiment',
scale=alt.Scale(domain=[-1,1]),
title='Sentiment'),
color=gradient)
(histogram + price_chart).resolve_scale(y='independent').configure_axisRight(titleColor=palette['strong_price'])
###Output
_____no_output_____
###Markdown
Plotting the sentiment in [-1, 1] makes it immediately clear whether the price direction matches the sentiment direction. It is hard to see a pattern in the news sentiment: nearby days have opposite sentiments, which suggests that the news is closer to hypotheses than to truth or facts.

Correlation

To measure the correlation between sentiment and price, two approaches will be used:
- TLCC (Time Lagged Cross-Correlation): a measure of the correlation of the whole time series, given a list of time offsets
- Windowed TLCC: the time series are lagged as in the first case, but the correlation is calculated for each window; this is useful to understand the correlation "direction" (and so the time series' roles) over time

TLCC
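Before the real computation below, a toy sketch of the TLCC idea on synthetic data (none of these series come from the project): shift one series by each candidate offset, correlate it with the other, and the offset with the highest correlation suggests the lag between the two.

```python
# Toy TLCC sketch on synthetic data: series b is series a delayed by 3 steps plus noise,
# so the lagged correlation peaks at offset = 3.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
a = pd.Series(rng.normal(size=200)).rolling(5, min_periods=1).mean()  # smooth random series
b = a.shift(3) + rng.normal(scale=0.1, size=200)                      # delayed, noisy copy

tlcc = {offset: a.corr(b.shift(-offset)) for offset in range(-10, 11)}
best_offset = max(tlcc, key=tlcc.get)
print(f'best offset: {best_offset}, correlation: {tlcc[best_offset]:.3f}')
```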
###Code
methods = ['pearson', 'kendall', 'spearman']
offsets = list(range(-150, 151)) # list of days offset to test
correlations = []
sent_vs_price = pd.DataFrame(daily_df['date'], columns=['date'])
sent_vs_price['sent'] = daily_df['norm_sent']
sent_vs_price = sent_vs_price.merge(market_df[['date', 'mid_price']], on='date')
for method in methods:
method_correlations = [(method, offset, sent_vs_price['sent'].corr(sent_vs_price['mid_price'].shift(-offset), method=method))
for offset in offsets]
correlations.extend(method_correlations)
correlations_df = pd.DataFrame(correlations, columns=['method', 'offset', 'correlation'])
spearman_correlations = correlations_df[correlations_df['method'] == 'spearman']
max_corr = max(spearman_correlations['correlation'])
max_corr_offset = spearman_correlations[spearman_correlations['correlation'] == max_corr]['offset'].iloc[0]
min_corr = min(spearman_correlations['correlation'])
min_corr_offset = spearman_correlations[spearman_correlations['correlation'] == min_corr]['offset'].iloc[0]
max_corr_text = f'Max correlation ({round(max_corr, 3)}) with an offset of {max_corr_offset} days'
min_corr_text = f'Min correlation ({round(min_corr, 3)}) with an offset of {min_corr_offset} days'
plot_title = alt.TitleParams('News sentiment correlations', subtitle=['Positive offset: looking future prices',max_corr_text, min_corr_text])
corr_chart = alt.Chart(correlations_df, title=plot_title).mark_line().encode(alt.X('offset', title='Offset days'),
alt.Y('correlation', title='Correlation'),
alt.Color('method', title='Method'))
corr_chart
###Output
_____no_output_____
###Markdown
In this case, the Pearson correlation could be considered reliable, but its trend is similar to the other two. The correlation staying near 0 reflects the inconsistency of the sentiment across days.

WTLCC

For simplicity, the next charts will visualize the WTLCC using the Spearman correlation only.
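In formula form (with the notation used by the code below: window size $W$, window index $w$, offset $\tau$, sentiment $s_t$ and mid price $p_t$), each heatmap cell is

$$\rho_w(\tau) = \operatorname{corr}_{\text{Spearman}}\left(\{s_t\},\ \{p_{t+\tau}\}\right), \qquad t \in [\,wW,\ (w+1)W\,),$$

since shifting the price series by $-\tau$ pairs $s_t$ with $p_{t+\tau}$.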
###Code
from math import ceil
def get_window(series: pd.Series, window) -> pd.Series:
return series.iloc[window[0]: window[1]]
def windowed_corr(first: pd.Series, second: pd.Series) -> list:
windows = [(window * window_size, (window * window_size)+window_size) for window in range(ceil(len(second)/window_size))]
windows_corr = [get_window(first, window).corr(get_window(second, window), method = 'spearman') for window in windows]
return windows_corr, windows
offsets = list(range(-66, 67, 4)) # reduced offsets for better visualization
window_size = 120 # one window = one quarter
windowed_correlations = []
for offset in offsets:
windows_corr, windows = windowed_corr(sent_vs_price['sent'], sent_vs_price['mid_price'].shift(-offset))
for window, window_corr in enumerate(windows_corr):
windowed_correlations.append((window, window_corr, offset))
windowed_correlations_df = pd.DataFrame(windowed_correlations, columns=['window', 'correlation', 'offset'])
plot_title = alt.TitleParams('Windowed lagged correlation sentiment/price', subtitle=['Positive offset: looking future prices',
'-1:= price as master, 1:= sentiment as master'])
color = alt.Color('correlation', scale=alt.Scale(domain=[-1, 1], range=[palette['negative'], palette['positive']]), title='Correlation')
alt.Chart(windowed_correlations_df, height=800, width=800, title=plot_title).mark_rect().encode(alt.X('window:O', title=f'Window ({window_size} days)'), alt.Y('offset:O', title='Offset days'), color)
from math import ceil
def get_window(series: pd.Series, window) -> pd.Series:
return series.iloc[window[0]: window[1]]
def windowed_corr(first: pd.Series, second: pd.Series) -> list:
windows = [(window * window_size, (window * window_size)+window_size) for window in range(ceil(len(second)/window_size))]
windows_corr = [get_window(first, window).corr(get_window(second, window), method = 'spearman') for window in windows]
return windows_corr, windows
offsets = list(range(-66, 67, 4)) # reduced offsets for better visualization
window_size = 60 # one window = two months
windowed_correlations = []
for offset in offsets:
windows_corr, windows = windowed_corr(sent_vs_price['sent'], sent_vs_price['mid_price'].shift(-offset))
for window, window_corr in enumerate(windows_corr):
windowed_correlations.append((window, window_corr, offset))
windowed_correlations_df = pd.DataFrame(windowed_correlations, columns=['window', 'correlation', 'offset'])
plot_title = alt.TitleParams('Windowed lagged correlation sentiment/price', subtitle='-1:= price as master, 1:= sentiment as master')
color = alt.Color('correlation', scale=alt.Scale(domain=[-1, 1], range=[palette['negative'], palette['positive']]), title='Correlation')
alt.Chart(windowed_correlations_df, height=800, width=800, title=plot_title).mark_rect().encode(alt.X('window:O', title=f'Window ({window_size} days)'), alt.Y('offset:O', title='Offset days'), color)
###Output
_____no_output_____
###Markdown
The heatmaps show, again, that the sentiment is too volatile to be useful.

Interactive WTLCC
###Code
from math import ceil
def get_window(series: pd.Series, window) -> pd.Series:
return series.iloc[window[0]: window[1]]
def windowed_corr(first: pd.Series, second: pd.Series, window_size: int) -> list:
windows = [(window * window_size, (window * window_size)+window_size) for window in range(ceil(len(second)/window_size))]
windows_corr = [get_window(first, window).corr(get_window(second, window), method = 'spearman') for window in windows]
return windows_corr, windows
def get_plot(window_size=60):
size = window_size
windowed_correlations = []
for offset in offsets:
windows_corr, windows = windowed_corr(sent_vs_price['sent'], sent_vs_price['mid_price'].shift(-offset), size)
for window, window_corr in enumerate(windows_corr):
windowed_correlations.append((window, window_corr, offset))
windowed_correlations_df = pd.DataFrame(windowed_correlations, columns=['window', 'correlation', 'offset'])
plot_title = alt.TitleParams('Windowed lagged correlation sentiment/price', subtitle=['Positive offset: looking future prices',
'-1:= price as master, 1:= sentiment as master'])
color = alt.Color('correlation', scale=alt.Scale(domain=[-1, 1], range=[palette['negative'], palette['positive']]), title='Correlation')
return alt.Chart(windowed_correlations_df, height=750, width=750, title=plot_title).mark_rect().encode(alt.X('window:O', title=f'Window ({window_size} days)'), alt.Y('offset:O', title='Offset days'), color)
interact(get_plot, window_size=IntSlider(value=60, min=5, max=365, step=1, continuous_update=False, description='Window size'))
offsets = list(range(-66, 67, 4)) # reduced offsets for better visualization
###Output
_____no_output_____
###Markdown
Volume analysis

Another aspect of the data is the volume; in other words: does it matter whether people speak well or badly about Bitcoin, or is it enough that people speak at all?
###Code
time_selector = alt.selection(type='interval', encodings=['x'])
dummy_df = pd.DataFrame({'date': [min(daily_df['date']), max(daily_df['date'])], 'count': [0, 0]})
zero_line = alt.Chart(dummy_df).mark_line(color='grey').encode(x=alt.X('yearmonthdate(date):T'), y=alt.Y('count'))
price = alt.Chart(market_df).mark_line(color=palette['price']).encode(
x=alt.X('yearmonthdate(date):T',
scale=alt.Scale(domain=time_selector),
title=None),
y=alt.Y('mid_price', title='Mid price')
)
plot_title = alt.TitleParams('Volume vs Bitcoin price')
histogram = alt.Chart(daily_df, title=plot_title).mark_bar(color=palette['news']).encode(alt.X('yearmonthdate(date):T',
bin=alt.Bin(maxbins=100, extent=time_selector),
scale=alt.Scale(domain=time_selector),
axis=alt.Axis(labelOverlap='greedy', labelSeparation=6)),
alt.Y('count',
title='Volume'))
histogram_reg = histogram.transform_regression('date', 'count', method='poly', order=9).mark_line(color=palette['strong_neutral_1'])
volume_chart = histogram + zero_line
price_reg = price.transform_regression('date', 'mid_price', method='poly', order=9).mark_line(color=palette['strong_price'])
price_chart = price
selection_plot = alt.Chart(daily_df).mark_bar(color=palette['news']).encode(alt.X('yearmonthdate(date):T',
bin=alt.Bin(maxbins=100),
title='Date',
axis=alt.Axis(labelOverlap='greedy', labelSeparation=6)),
alt.Y('count', title=None)).add_selection(time_selector).properties(height=50)
(alt.layer(volume_chart, price_chart).resolve_scale(y='independent') & selection_plot).configure_axisRight(titleColor=palette['price']).configure_axisLeft(titleColor=palette['news'])
###Output
_____no_output_____
###Markdown
The regularity of the news crawling makes the volume analysis useless.

Correlation

TLCC
###Code
methods = ['pearson', 'kendall', 'spearman']
offsets = list(range(-150, 151)) # list of days offset to test
correlations = []
volume_vs_price = pd.DataFrame(daily_df['date'], columns=['date'])
volume_vs_price['volume'] = daily_df['count']
volume_vs_price = volume_vs_price.merge(market_df[['date', 'mid_price']], on='date')
for method in methods:
    method_correlations = [(method, offset, volume_vs_price['volume'].corr(volume_vs_price['mid_price'].shift(-offset), method=method))
                           for offset in offsets]
correlations.extend(method_correlations)
correlations_df = pd.DataFrame(correlations, columns=['method', 'offset', 'correlation'])
spearman_correlations = correlations_df[correlations_df['method'] == 'spearman']
max_corr = max(spearman_correlations['correlation'])
max_corr_offset = spearman_correlations[spearman_correlations['correlation'] == max_corr]['offset'].iloc[0]
min_corr = min(spearman_correlations['correlation'])
min_corr_offset = spearman_correlations[spearman_correlations['correlation'] == min_corr]['offset'].iloc[0]
max_corr_text = f'Max correlation ({round(max_corr, 3)}) with an offset of {max_corr_offset} days'
min_corr_text = f'Min correlation ({round(min_corr, 3)}) with an offset of {min_corr_offset} days'
plot_title = alt.TitleParams('News volume correlations', subtitle=['Positive offset: looking future prices',max_corr_text, min_corr_text])
corr_chart = alt.Chart(correlations_df, title=plot_title).mark_line().encode(alt.X('offset', title='Offset days'),
alt.Y('correlation', title='Correlation'),
alt.Color('method', title='Method'))
corr_chart
###Output
_____no_output_____
###Markdown
As anticipated, the volume correlation is too close to 0 to be considered informative.

WTLCC
###Code
from math import ceil
def get_window(series: pd.Series, window) -> pd.Series:
return series.iloc[window[0]: window[1]]
def windowed_corr(first: pd.Series, second: pd.Series) -> list:
windows = [(window * window_size, (window * window_size)+window_size) for window in range(ceil(len(second)/window_size))]
windows_corr = [get_window(first, window).corr(get_window(second, window), method = 'spearman') for window in windows]
return windows_corr, windows
offsets = list(range(-66, 67, 4)) # reduced offsets for better visualization
window_size = 120 # one window = one quarter
windowed_correlations = []
for offset in offsets:
windows_corr, windows = windowed_corr(volume_vs_price['volume'], volume_vs_price['mid_price'].shift(-offset))
for window, window_corr in enumerate(windows_corr):
windowed_correlations.append((window, window_corr, offset))
windowed_correlations_df = pd.DataFrame(windowed_correlations, columns=['window', 'correlation', 'offset'])
plot_title = alt.TitleParams('Windowed lagged correlation volume/price', subtitle='-1:= price as master, 1:= volume as master')
color = alt.Color('correlation', scale=alt.Scale(domain=[-1, 1], range=[palette['negative'], palette['positive']]), title='Correlation')
alt.Chart(windowed_correlations_df, height=800, width=800, title=plot_title).mark_rect().encode(alt.X('window:O', title=f'Window ({window_size} days)'), alt.Y('offset:O', title='Offset days'), color)
###Output
_____no_output_____
###Markdown
For each offset, some windows are positively correlated while others are negatively correlated; another demonstration that, with this dataset, the volume correlation analysis is useless.

Interactive WTLCC
###Code
from math import ceil
def get_window(series: pd.Series, window) -> pd.Series:
return series.iloc[window[0]: window[1]]
def windowed_corr(first: pd.Series, second: pd.Series, window_size: int) -> list:
windows = [(window * window_size, (window * window_size)+window_size) for window in range(ceil(len(second)/window_size))]
windows_corr = [get_window(first, window).corr(get_window(second, window), method = 'spearman') for window in windows]
return windows_corr, windows
def get_plot(window_size=60):
size = window_size
windowed_correlations = []
for offset in offsets:
windows_corr, windows = windowed_corr(volume_vs_price['volume'], volume_vs_price['mid_price'].shift(-offset), size)
for window, window_corr in enumerate(windows_corr):
windowed_correlations.append((window, window_corr, offset))
windowed_correlations_df = pd.DataFrame(windowed_correlations, columns=['window', 'correlation', 'offset'])
plot_title = alt.TitleParams('Windowed lagged correlation volume/price', subtitle=['Positive offset: looking future prices',
'-1:= price as master, 1:= volume as master'])
color = alt.Color('correlation', scale=alt.Scale(domain=[-1, 1], range=[palette['negative'], palette['positive']]), title='Correlation')
return alt.Chart(windowed_correlations_df, height=750, width=750, title=plot_title).mark_rect().encode(alt.X('window:O', title=f'Window ({window_size} days)'), alt.Y('offset:O', title='Offset days'), color)
interact(get_plot, window_size=IntSlider(value=60, min=5, max=365, step=1, continuous_update=False, description='Window size'))
offsets = list(range(-66, 67, 4)) # reduced offsets for better visualization
###Output
_____no_output_____
Colab-Super-SloMo.ipynb
###Markdown
Slow Motion with Super SloMo

This notebook uses [Super SloMo](https://arxiv.org/abs/1712.00080) from the open source project [avinashpaliwal/Super-SloMo](https://github.com/avinashpaliwal/Super-SloMo) to slow down a given video. It is a modification of [this Colab Notebook](https://colab.research.google.com/github/tugstugi/dl-colab-notebooks/blob/master/notebooks/SuperSloMo.ipynb#scrollTo=P7eRRjlYaV1s) by styler00dollar aka "sudo rm -rf / --no-preserve-root#8353" on Discord.

This version:
- allows using Google Drive for your own videos
- uses CRF inside the ffmpeg command for better space usage
- makes a custom ffmpeg command possible
- includes experimental audio support
- removes the .mkv input restriction and supports different filetypes

May be implemented:
- different output formats

Interesting things:
- Can do 1080p without crashing (Dain can only do ~900p with 16GB VRAM)
- 1080p 6x works with Super-SloMo

Simple Tutorial:
- Run cells with the play buttons that are visible on the left side of the code/text. ```[ ]``` indicates a play button.

Check GPU
###Code
!nvidia-smi
#@markdown # Install avinashpaliwal/Super-SloMo
import os
from os.path import exists, join, basename, splitext, dirname
git_repo_url = 'https://github.com/styler00dollar/Colab-Super-SloMo'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
# clone and install dependencies
!git clone -q --depth 1 {git_repo_url}
!pip install -q youtube-dl
ffmpeg_path = !which ffmpeg
ffmpeg_path = dirname(ffmpeg_path[0])
import sys
sys.path.append(project_name)
from IPython.display import YouTubeVideo
# Download pre-trained Model
def download_from_google_drive(file_id, file_name):
# download a file from the Google Drive link
!rm -f ./cookie
!curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id={file_id}" > /dev/null
confirm_text = !awk '/download/ {print $NF}' ./cookie
confirm_text = confirm_text[0]
!curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm={confirm_text}&id={file_id}" -o {file_name}
pretrained_model = 'SuperSloMo.ckpt'
if not exists(pretrained_model):
download_from_google_drive('1IvobLDbRiBgZr3ryCRrWL8xDbMZ-KnpF', pretrained_model)
###Output
_____no_output_____
###Markdown
Super SloMo on a YouTube Video
###Code
#@markdown #### Example URL: https://www.youtube.com/watch?v=P3lXKxOkxbg
YOUTUBE_ID = 'P3lXKxOkxbg' #@param{type:"string"}
YouTubeVideo(YOUTUBE_ID)
###Output
_____no_output_____
###Markdown
Info: 0 fps means that the video path is wrong or that you need to wait a bit for Google Drive to sync and try again.
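A minimal sanity check (a sketch, not part of the original notebook) that fails fast with a clear message instead of silently producing a 0 fps target:

```python
# Sketch: abort early if OpenCV cannot open the file or reports 0 fps.
import cv2

cap = cv2.VideoCapture('/content/youtube.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
if not cap.isOpened() or fps == 0:
    raise RuntimeError('Could not read the video: wrong path, or Google Drive has not synced yet.')
```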
###Code
%cd /content/
!rm -df youtube.mp4
# download the youtube with the given ID
!youtube-dl -f 'bestvideo[ext=mp4]' --output "youtube.%(ext)s" https://www.youtube.com/watch?v=$YOUTUBE_ID
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture('/content/youtube.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Configure
SLOW_MOTION_FACTOR = 3 #@param{type:"number"}
FPS_FACTOR = 3 #@param{type:"number"}
# You can change the final FPS manually
TARGET_FPS = fps*FPS_FACTOR
#TARGET_FPS = 90
print("Target FPS")
print(TARGET_FPS)
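# Worked example of the arithmetic above (illustrative numbers, not from this run):
# a 30 fps source with SLOW_MOTION_FACTOR = 3 yields 3x as many frames after interpolation.
# Encoding them at TARGET_FPS = 30 * 3 = 90 fps keeps the original playback speed (just smoother),
# while encoding them at a lower rate (e.g. the original 30 fps) gives a 3x slow-motion clip.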
#@markdown # Creating video with sound
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/youtube.mp4 --sf {SLOW_MOTION_FACTOR} --fps {TARGET_FPS} --output /content/output.mp4
!youtube-dl -x --audio-format aac https://www.youtube.com/watch?v=$YOUTUBE_ID --output /content/output-audio.aac
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -i /content/output-audio.aac -shortest -crf 18 /content/output.mp4
#@markdown # Creating video without sound
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/youtube.mp4 --sf {SLOW_MOTION_FACTOR} --fps {TARGET_FPS} --output /content/output.mkv
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -crf 18 /content/output.mp4
###Output
_____no_output_____
###Markdown
Now you can play back the video with the last cell or copy the video back to Google Drive.
###Code
#@markdown # [Optional] Copy video result to ```"Google Drive/output.mp4"```
# Connect Google Drive
from google.colab import drive
drive.mount('/content/drive')
print('Google Drive connected.')
# Copy video back to Google Drive
!cp /content/output.mp4 "/content/drive/My Drive/output.mp4"
###Output
_____no_output_____
###Markdown
Super SloMo on a Google Drive Video

The default input path is ```"Google Drive/input.mp4"```. You can change the path if you want. Just change the file extension if you have a different format.
###Code
#@markdown ## Mount Google Drive and configure paths
# Connect Google Drive
from google.colab import drive
drive.mount('/content/drive')
print('Google Drive connected.')
# Configuration. "My Drive" represents your Google Drive.
# Input file:
INPUT_FILEPATH = "/content/drive/My Drive/input.mp4" #@param{type:"string"}
# Output file path. MP4 is recommended. Another extension will need further code changes.
OUTPUT_FILE_PATH = "/content/drive/My Drive/output.mp4" #@param{type:"string"}
###Output
_____no_output_____
###Markdown
Info: 0 fps means that the video path is wrong or that you need to wait a bit for Google Drive to sync and try again.
###Code
#@markdown ## [Experimental] Create Video with sound
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture(f'{INPUT_FILEPATH}')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Configure
#@markdown ## Configuration
SLOW_MOTION_FACTOR = 3 #@param{type:"number"}
FPS_FACTOR = 3 #@param{type:"number"}
# You can change the final FPS manually
TARGET_FPS = fps*FPS_FACTOR
#TARGET_FPS = 90
print("Target FPS")
print(TARGET_FPS)
# Copy video from Google Drive
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/input{file_extention} --sf {SLOW_MOTION_FACTOR}
%shell ffmpeg -i /content/input{file_extention} -acodec copy /content/output-audio.aac
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -i /content/output-audio.aac -shortest -crf 18 /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## Create video without sound
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture(f'{INPUT_FILEPATH}')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
#@markdown ## Configuration
SLOW_MOTION_FACTOR = 3 #@param{type:"number"}
FPS_FACTOR = 3 #@param{type:"number"}
# You can change the final FPS manually
TARGET_FPS = fps*FPS_FACTOR
#TARGET_FPS = 90
print("Target FPS")
print(TARGET_FPS)
# Copy video from Google Drive
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
!cd '{project_name}' && python video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint ../{pretrained_model} --video /content/input{file_extention} --sf {SLOW_MOTION_FACTOR}
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -crf 18 /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## [Experimental] Create video with sound and removed duplicate frames
import cv2
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
# Get amount frames
cap = cv2.VideoCapture("/content/input.mp4")
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture('/content/input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# Configure
SLOW_MOTION_FACTOR = 6 #@param{type:"number"}
FPS_FACTOR = 6 #@param{type:"number"}
TARGET_FPS = fps*FPS_FACTOR
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/input.mp4 --sf {SLOW_MOTION_FACTOR} --remove_duplicate True
%shell ffmpeg -i /content/input{file_extention} -acodec copy /content/output-audio.aac
amount_files_created = len(os.listdir(('/content/Colab-Super-SloMo/tmp')))
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -i /content/output-audio.aac -shortest -crf 18 /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## Create video without sound and removed duplicate frames
import cv2
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
# Get amount frames
cap = cv2.VideoCapture("/content/input.mp4")
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture('/content/input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# Configure
SLOW_MOTION_FACTOR = 6 #@param{type:"number"}
FPS_FACTOR = 6 #@param{type:"number"}
TARGET_FPS = fps*FPS_FACTOR
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/input.mp4 --sf {SLOW_MOTION_FACTOR} --remove_duplicate True
amount_files_created = len(os.listdir(('/content/Colab-Super-SloMo/tmp')))
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {amount_files_created/(length/fps)} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -crf 18 /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## Preview the result within Colab
#@markdown #### Don't try this with big files. It will crash Colab. Small files like 10mb are ok.
def show_local_mp4_video(file_name, width=640, height=480):
import io
import base64
from IPython.display import HTML
video_encoded = base64.b64encode(io.open(file_name, 'rb').read())
return HTML(data='''<video width="{0}" height="{1}" alt="test" controls>
<source src="data:video/mp4;base64,{2}" type="video/mp4" />
</video>'''.format(width, height, video_encoded.decode('ascii')))
show_local_mp4_video('/content/output.mp4', width=960, height=720)
###Output
_____no_output_____
###Markdown
Slow Motion with Super SloMo

This notebook uses [Super SloMo](https://arxiv.org/abs/1712.00080) from the open source project [avinashpaliwal/Super-SloMo](https://github.com/avinashpaliwal/Super-SloMo) to slow down a given video. It is a modification of [this Colab Notebook](https://colab.research.google.com/github/tugstugi/dl-colab-notebooks/blob/master/notebooks/SuperSloMo.ipynb#scrollTo=P7eRRjlYaV1s) by styler00dollar aka "sudo rm -rf / --no-preserve-root#8353" on Discord.

This version:
- allows using Google Drive for your own videos
- uses CRF inside the ffmpeg command for better space usage
- makes a custom ffmpeg command possible
- includes experimental audio support
- removes the .mkv input restriction and supports different filetypes

May be implemented:
- different output formats

Interesting things:
- Can do 1080p without crashing (Dain can only do ~900p with 16GB VRAM)
- 1080p 6x works with Super-SloMo

Simple Tutorial:
- Run cells with the play buttons that are visible on the left side of the code/text. ```[ ]``` indicates a play button.

Check GPU
###Code
!nvidia-smi
#@markdown # Install avinashpaliwal/Super-SloMo
import os
from os.path import exists, join, basename, splitext, dirname
git_repo_url = 'https://github.com/styler00dollar/Colab-Super-SloMo'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
# clone and install dependencies
!git clone -q --depth 1 {git_repo_url}
!pip install -q youtube-dl
ffmpeg_path = !which ffmpeg
ffmpeg_path = dirname(ffmpeg_path[0])
import sys
sys.path.append(project_name)
from IPython.display import YouTubeVideo
# Download pre-trained Model
def download_from_google_drive(file_id, file_name):
# download a file from the Google Drive link
!rm -f ./cookie
!curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id={file_id}" > /dev/null
confirm_text = !awk '/download/ {print $NF}' ./cookie
confirm_text = confirm_text[0]
!curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm={confirm_text}&id={file_id}" -o {file_name}
pretrained_model = 'SuperSloMo.ckpt'
if not exists(pretrained_model):
download_from_google_drive('1IvobLDbRiBgZr3ryCRrWL8xDbMZ-KnpF', pretrained_model)
###Output
_____no_output_____
###Markdown
Super SloMo on a YouTube Video
###Code
#@markdown #### Example URL: https://www.youtube.com/watch?v=P3lXKxOkxbg
YOUTUBE_ID = 'P3lXKxOkxbg' #@param{type:"string"}
YouTubeVideo(YOUTUBE_ID)
###Output
_____no_output_____
###Markdown
Info: 0 fps means that the video path is wrong or that you need to wait a bit for Google Drive to sync and try again.
###Code
%cd /content/
!rm -df youtube.mp4
# download the youtube with the given ID
!youtube-dl -f 'bestvideo[ext=mp4]' --output "youtube.%(ext)s" https://www.youtube.com/watch?v=$YOUTUBE_ID
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture('/content/youtube.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Configure
SLOW_MOTION_FACTOR = 3 #@param{type:"number"}
FPS_FACTOR = 3 #@param{type:"number"}
# You can change the final FPS manually
TARGET_FPS = fps*FPS_FACTOR
#TARGET_FPS = 90
print("Target FPS")
print(TARGET_FPS)
#@markdown # Creating video with sound
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/youtube.mp4 --sf {SLOW_MOTION_FACTOR} --fps {TARGET_FPS} --output /content/output.mp4
!youtube-dl -x --audio-format aac https://www.youtube.com/watch?v=$YOUTUBE_ID --output /content/output-audio.aac
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -i /content/output-audio.aac -shortest -crf 18 -pix_fmt yuv420p /content/output.mp4
#@markdown # Creating video without sound
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/youtube.mp4 --sf {SLOW_MOTION_FACTOR} --fps {TARGET_FPS} --output /content/output.mkv
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -crf 18 -pix_fmt yuv420p /content/output.mp4
###Output
_____no_output_____
###Markdown
Now you can play back the video with the last cell or copy the video back to Google Drive.
###Code
#@markdown # [Optional] Copy video result to ```"Google Drive/output.mp4"```
# Connect Google Drive
from google.colab import drive
drive.mount('/content/drive')
print('Google Drive connected.')
# Copy video back to Google Drive
!cp /content/output.mp4 "/content/drive/My Drive/output.mp4"
###Output
_____no_output_____
###Markdown
Super SloMo on a Google Drive Video

The default input path is ```"Google Drive/input.mp4"```. You can change the path if you want. Just change the file extension if you have a different format.
###Code
#@markdown ## Mount Google Drive and configure paths
# Connect Google Drive
from google.colab import drive
drive.mount('/content/drive')
print('Google Drive connected.')
# Configuration. "My Drive" represents your Google Drive.
# Input file:
INPUT_FILEPATH = "/content/drive/My Drive/input.mp4" #@param{type:"string"}
# Output file path. MP4 is recommended. Another extension will need further code changes.
OUTPUT_FILE_PATH = "/content/drive/My Drive/output.mp4" #@param{type:"string"}
###Output
_____no_output_____
###Markdown
Info: 0 fps means that the video path is wrong or that you need to wait a bit for Google Drive to sync and try again.
###Code
#@markdown ## [Experimental] Create Video with sound
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture(f'{INPUT_FILEPATH}')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Configure
#@markdown ## Configuration
SLOW_MOTION_FACTOR = 3 #@param{type:"number"}
FPS_FACTOR = 3 #@param{type:"number"}
# You can change the final FPS manually
TARGET_FPS = fps*FPS_FACTOR
#TARGET_FPS = 90
print("Target FPS")
print(TARGET_FPS)
# Copy video from Google Drive
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/input{file_extention} --sf {SLOW_MOTION_FACTOR}
%shell ffmpeg -i /content/input{file_extention} -acodec copy /content/output-audio.aac
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -i /content/output-audio.aac -shortest -crf 18 -pix_fmt yuv420p /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## Create video without sound
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture(f'{INPUT_FILEPATH}')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
#@markdown ## Configuration
SLOW_MOTION_FACTOR = 3 #@param{type:"number"}
FPS_FACTOR = 3 #@param{type:"number"}
# You can change the final FPS manually
TARGET_FPS = fps*FPS_FACTOR
#TARGET_FPS = 90
print("Target FPS")
print(TARGET_FPS)
# Copy video from Google Drive
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
!cd '{project_name}' && python video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint ../{pretrained_model} --video /content/input{file_extention} --sf {SLOW_MOTION_FACTOR}
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -crf 18 -pix_fmt yuv420p /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## [Experimental] Create video with sound and removed duplicate frames
import cv2
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
# Get amount frames
cap = cv2.VideoCapture("/content/input.mp4")
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture('/content/input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# Configure
SLOW_MOTION_FACTOR = 6 #@param{type:"number"}
FPS_FACTOR = 6 #@param{type:"number"}
TARGET_FPS = fps*FPS_FACTOR
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/input.mp4 --sf {SLOW_MOTION_FACTOR} --remove_duplicate True
%shell ffmpeg -i /content/input{file_extention} -acodec copy /content/output-audio.aac
amount_files_created = len(os.listdir(('/content/Colab-Super-SloMo/tmp')))
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -i /content/output-audio.aac -shortest -crf 18 -pix_fmt yuv420p /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## Create video without sound and removed duplicate frames
import cv2
file_extention = os.path.splitext(INPUT_FILEPATH)[1]
!cp '{INPUT_FILEPATH}' /content/input{file_extention}
# Get amount frames
cap = cv2.VideoCapture("/content/input.mp4")
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# Detecting FPS of input file.
import os
import cv2
cap = cv2.VideoCapture('/content/input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
print("Detected FPS: ")
print(fps)
# Deleting old video, if it exists
if os.path.exists("/content/output.mp4"):
os.remove("/content/output.mp4")
# Configure
SLOW_MOTION_FACTOR = 6 #@param{type:"number"}
FPS_FACTOR = 6 #@param{type:"number"}
TARGET_FPS = fps*FPS_FACTOR
!python /content/Colab-Super-SloMo/video_to_slomo.py --ffmpeg {ffmpeg_path} --checkpoint /content/SuperSloMo.ckpt --video /content/input.mp4 --sf {SLOW_MOTION_FACTOR} --remove_duplicate True
amount_files_created = len(os.listdir(('/content/Colab-Super-SloMo/tmp')))
# You can change these ffmpeg parameter
%shell ffmpeg -y -r {amount_files_created/(length/fps)} -f image2 -pattern_type glob -i '/content/Colab-Super-SloMo/tmp/*.png' -crf 18 -pix_fmt yuv420p /content/output.mp4
# Copy video back to Google Drive
!cp /content/output.mp4 '{OUTPUT_FILE_PATH}'
#@markdown ## Preview the result within Colab
#@markdown #### Don't try this with big files. It will crash Colab. Small files like 10mb are ok.
def show_local_mp4_video(file_name, width=640, height=480):
import io
import base64
from IPython.display import HTML
video_encoded = base64.b64encode(io.open(file_name, 'rb').read())
return HTML(data='''<video width="{0}" height="{1}" alt="test" controls>
<source src="data:video/mp4;base64,{2}" type="video/mp4" />
</video>'''.format(width, height, video_encoded.decode('ascii')))
show_local_mp4_video('/content/output.mp4', width=960, height=720)
###Output
_____no_output_____
Capstone_1/Capstone_I_workbook_v3.ipynb
###Markdown
Springboard Capstone Project 1 -- Report

[Dataset](https://www.kaggle.com/wendykan/lending-club-loan-data) used for the project

Import necessary modules
###Code
import numpy as np
from numpy import loadtxt
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
pd.set_option('max_columns', None)
from scipy import stats
from scipy.stats import probplot
from scipy.stats.mstats import zscore
import statsmodels.stats.api as sms
import nltk
import collections as co
from collections import Counter
from collections import OrderedDict
from sklearn import datasets
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE, ADASYN
from imblearn.over_sampling import RandomOverSampler
from xgboost import XGBClassifier
#from dask_ml.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
[Google Colab](https://colab.research.google.com/) specific section to run XGBoost
###Code
!pip install -U -q imbalanced-learn
!pip install -U -q PyDrive
# Code to read csv file into colaboratory:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
#2. Get the file
downloaded = drive.CreateFile({'id':'1iZ6FE7L8uAXOIi92PT9sosHcUUA2m8Cs'})
downloaded.GetContentFile('loan.csv')
#read loans.csv as a dataframe
loans_df = pd.read_csv('loan.csv',low_memory=False, engine='c')
###Output
_____no_output_____
###Markdown
Read file and create dataframe
###Code
loans_df = pd.read_csv('~/Downloads/tanay/data_springboard/loan.csv',low_memory=False, engine='c')
###Output
_____no_output_____
###Markdown
Inspect Dataframe
###Code
loans_df.describe()
loans_df['loan_status'].value_counts()
sns.set(style='darkgrid')
_=sns.countplot(y='loan_status', data=loans_df, order = loans_df['loan_status'].value_counts().index, orient='h')
###Output
_____no_output_____
###Markdown
Creation of new features by transforming existing features
###Code
#define a function to classify loan status into one of the following bins ('Fully Paid', 'Default', 'Current')
def loan_status_bin(text):
if text in ('Fully Paid', 'Does not meet the credit policy. Status:Fully Paid'):
return 'Fully Paid'
elif text in ('Current', 'Issued'):
return 'Current'
elif text in ('Charged Off', 'Default', 'Does not meet the credit policy. Status:Charged Off', 'Late (16-30 days)', 'Late (31-120 days)', 'In Grace Period'):
return 'Default'
    else:
        return 'UNKNOWN BIN'
#create a new attribute 'loan_status_bin' in the dataframe
loans_df['loan_status_bin']=loans_df['loan_status'].apply(loan_status_bin)
loans_df['loan_status_bin'].unique()
sns.set(style='darkgrid')
_=sns.countplot(x='loan_status_bin', data=loans_df, order = loans_df['loan_status_bin'].value_counts().index)
# Fill null annual income with the median annual income
loans_df['annual_inc'] = loans_df['annual_inc'].fillna(loans_df['annual_inc'].median())
loans_df[loans_df['annual_inc'].isnull()==True]['annual_inc'].count()
# create a new dataframe for Fully Paid loans
loans_df_fp=loans_df[loans_df['loan_status_bin']=='Fully Paid']
# create a new dataframe for Default loans
loans_df_def=loans_df[loans_df['loan_status_bin']=='Default']
print('For Default loans, mean annual income is {0}, standard deviation is {1}, size of dataframe is {2}'.format(loans_df_def['annual_inc'].mean(), loans_df_def['annual_inc'].std(), len(loans_df_def['annual_inc'])))
print('For Fully Paid loans, mean annual income is {0}, standard deviation is {1}, size of dataframe is {2}'.format(loans_df_fp['annual_inc'].mean(), loans_df_fp['annual_inc'].std(), len(loans_df_fp['annual_inc'])))
#define a function to convert grade into numerical values
def credit_grade(grade):
if grade in ('A'):
return 1
elif grade in ('B'):
return 2
elif grade in ('C'):
return 3
elif grade in ('D'):
return 4
elif grade in ('E'):
return 5
elif grade in ('F'):
return 6
elif grade in ('G'):
return 7
    else:
        return 99
#create a new attribute 'credit_grade' in the dataframe
loans_df['credit_grade']=loans_df['grade'].apply(credit_grade)
loans_df['credit_grade'].unique()
loans_df['application_type'].unique()
def derived_income(x, y, z):
if x == 'INDIVIDUAL':
return y
elif x == 'JOINT':
return z
    else:
        return 0
#create a feature derived income, which chooses between annual income & joint annual income based on the application type
loans_df['derived_income']=loans_df.apply(lambda x: derived_income(x['application_type'], x['annual_inc'], x['annual_inc_joint']), axis=1)
def derived_dti(x, y, z):
if x == 'INDIVIDUAL':
return y
elif x == 'JOINT':
return z
    else:
        return 0
#create a feature derived DTI, which chooses between DTI & joint DTI based on the application type
loans_df['derived_dti']=loans_df.apply(lambda x: derived_dti(x['application_type'], x['dti'], x['dti_joint']), axis=1)
# create a feature which tracks the ratio of installment to derived income
loans_df['inst_inc_ratio']=loans_df['installment']/ (loans_df['derived_income'] /12)
###Output
_____no_output_____
###Markdown
Model Training

Features used in the current modelling:
* loan_amnt
* credit_grade
* int_rate
* derived_income
* derived_dti
* inst_inc_ratio

Training and Test Datasets

When fitting models, we would like to ensure two things:
* We have found the best model (in terms of model parameters).
* The model is highly likely to generalize, i.e. perform well on unseen data.

Purpose of splitting data into training/testing sets: we built our model with the requirement that the model fit the data well. As a side effect, the model will fit THIS dataset well. What about new data? We wanted the model for predictions, right? One simple solution is to leave out some data (for testing) and train the model on the rest. This also leads directly to the idea of cross-validation, covered in the next section.

First, we try a basic Logistic Regression:
* Split the data into a training and test (hold-out) set
* Train on the training set, and test for accuracy on the testing set
###Code
#create a dataframe which has Fully Paid and Default loans in it to be used for training.
loans_df_fp_def=loans_df[loans_df['loan_status_bin'].isin(['Fully Paid', 'Default'])]
sns.set(style='darkgrid')
_=sns.countplot(x='loan_status_bin', data=loans_df_fp_def, order = loans_df_fp_def['loan_status_bin'].value_counts().index)
###Output
_____no_output_____
###Markdown
Split the data into a training and test set.
###Code
X, y = loans_df_fp_def[['loan_amnt', 'credit_grade', 'int_rate', 'derived_income', 'derived_dti', 'inst_inc_ratio']].values, (loans_df_fp_def.loan_status_bin).values
Xlr, Xtestlr, ylr, ytestlr = train_test_split(X, y, random_state=5, stratify=y)
loans_df_curr=loans_df[loans_df['loan_status_bin'].isin(['Current'])]
sns.set(style='darkgrid')
_=sns.countplot(x='loan_status_bin', data=loans_df_curr, order = loans_df_curr['loan_status_bin'].value_counts().index)
###Output
_____no_output_____
###Markdown
Build a logistic regression classifier (naive)
###Code
clf = LogisticRegression()
# Fit the model on the trainng data.
clf.fit(Xlr, ylr)
# Print the accuracy from the testing data.
print(accuracy_score(clf.predict(Xtestlr), ytestlr))
###Output
0.7567005845421086
###Markdown
Tuning the Classifier

The model has some hyperparameters we can tune for hopefully better performance. For tuning the parameters of your model, you will use a mix of *cross-validation* and *grid search*. In Logistic Regression, the most important parameter to tune is the *regularization parameter* `C`. Note that the regularization parameter is not always part of the logistic regression model. The regularization parameter is used to control for unlikely high regression coefficients, and in other cases can be used when data is sparse, as a method of feature selection.

You will now implement some code to perform model tuning and select the regularization parameter $C$.

We use the following `cv_score` function to perform K-fold cross-validation and apply a scoring function to each test fold. In this incarnation, we use the accuracy score as the default scoring function.
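For reference, a sketch of how `C` enters scikit-learn's L2-penalized logistic regression (labels encoded as $y_i \in \{-1, +1\}$); `C` scales the data-fit term, so a smaller `C` means stronger regularization:

$$\min_{w,\,b}\ \ \frac{1}{2}\,w^\top w \;+\; C \sum_{i=1}^{n} \log\!\left(1 + e^{-y_i\,(x_i^\top w + b)}\right)$$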
###Code
def cv_score(clf, x, y, score_func=accuracy_score):
result = 0
nfold = 5
for train, test in KFold(nfold).split(x): # split data into train/test groups, 5 times
clf.fit(x[train], y[train]) # fit
result += score_func(clf.predict(x[test]), y[test]) # evaluate score function on held-out data
return result / nfold # average
###Output
_____no_output_____
###Markdown
Below is an example of using the `cv_score` function for a basic logistic regression model with its default parameters.
###Code
clf1 = LogisticRegression()
score = cv_score(clf1, Xlr, ylr)
print(score)
###Output
0.7566957734959467
###Markdown
Exercise: Implement the following search procedure to find a good model.

For the given list of possible values of `C` below, for each C:
* Create a logistic regression model with that value of C
* Find the average score for this model using the `cv_score` function **only on the training set** `(Xlr, ylr)`

Pick the C with the highest average score.

Your goal is to find the best model parameters based *only* on the training set, without showing the model the test set at all (which is why the test set is also called a *hold-out* set).
###Code
#the grid of parameters to search over
Cs = [0.001, 0.1, 1, 10, 100]
max_score=0
for C in Cs:
clf2 = LogisticRegression(C=C)
score = cv_score(clf2, Xlr, ylr)
if score > max_score:
max_score = score
best_C =C
print ('max_score: {0}, best_C: {1}'.format(max_score, best_C))
###Output
max_score: 0.7566957734959467, best_C: 0.001
###Markdown
Use the C you obtained from the procedure earlier and train a Logistic Regression on the training data. Then calculate the accuracy on the test data.
###Code
clf3=LogisticRegression(C=best_C)
clf3.fit(Xlr, ylr)
ypred=clf3.predict(Xtestlr)
print('accuracy score: ', accuracy_score(ypred, ytestlr), '\n')
###Output
accuracy score: 0.7567005845421086
###Markdown
Black Box Grid Search in `sklearn` on Logistic Regression Use scikit-learn's [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html) tool to perform cross-validation and grid search. * Instead of writing your own loops above to iterate over the model parameters, use GridSearchCV to find the best model over the training set. * Does it give you the same best value of `C`? * How does the model you've obtained perform on the test set?
###Code
clf4=LogisticRegression()
parameters = {"C": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100]}
fitmodel = GridSearchCV(clf4, param_grid=parameters, cv=5, scoring="accuracy", return_train_score=True)
fitmodel.fit(Xlr, ylr)
fitmodel.best_estimator_, fitmodel.best_params_, fitmodel.best_score_, fitmodel.cv_results_
clf5=LogisticRegression(C=fitmodel.best_params_['C'])
clf5.fit(Xlr, ylr)
ypred=clf5.predict(Xtestlr)
print('accuracy score: ', accuracy_score(ypred, ytestlr))
print('The new value of the C is: ', fitmodel.best_params_['C'])
###Output
accuracy score: 0.7567005845421086
The new value of the C is: 0.0001
###Markdown
Decision Tree Classifier (naive)
###Code
# fit a CART model to the data
clf_dt = DecisionTreeClassifier()
clf_dt.fit(Xlr, ylr)
print(clf_dt)
# make predictions
ypred = clf_dt.predict(Xtestlr)
# summarize the fit of the model
print(metrics.classification_report(ytestlr, ypred))
print(metrics.confusion_matrix(ytestlr, ypred))
###Output
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best')
precision recall f1-score support
Default 0.31 0.33 0.32 16857
Fully Paid 0.78 0.77 0.77 52428
avg / total 0.67 0.66 0.66 69285
[[ 5532 11325]
[12153 40275]]
###Markdown
Resampling of Data Since we already know that the number of observations for **Default** is less than half that of **Fully Paid**, and we can see the impact of this uneven distribution in the precision and other metrics of the classification report, it becomes imperative to balance the classes by employing a resampling technique. Resampling using SMOTE
###Code
X_resampled, y_resampled = SMOTE().fit_sample(Xlr, ylr)
print(sorted(Counter(y_resampled).items()))
X_test_resampled, y_test_resampled = SMOTE().fit_sample(Xtestlr, ytestlr)
print(sorted(Counter(y_test_resampled).items()))
###Output
[('Default', 52428), ('Fully Paid', 52428)]
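###Markdown
An alternative worth comparing (an addition, not from the original notebook): instead of generating synthetic rows, many scikit-learn estimators can reweight the classes directly via `class_weight='balanced'`. A minimal sketch on the original (unresampled) training data, assuming `Xlr`, `ylr` and `metrics` from above:
###Code
# Class reweighting instead of oversampling
clf_weighted = LogisticRegression(class_weight='balanced')
clf_weighted.fit(Xlr, ylr)
print(metrics.classification_report(ytestlr, clf_weighted.predict(Xtestlr)))
###Output
_____no_output_____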
###Markdown
Training a classifier (logistic regression) using SMOTE resampled data
###Code
clf_smote = LogisticRegression().fit(X_resampled, y_resampled)
print(clf_smote)
# make predictions
ypred = clf_smote.predict(Xtestlr)
# summarize the fit of the model
print(metrics.classification_report(ytestlr, ypred))
print(metrics.confusion_matrix(ytestlr, ypred))
print('accuracy score: ', accuracy_score(ypred, ytestlr), '\n')
###Output
accuracy score: 0.518741430324024
###Markdown
Training Decision tree (CART) using SMOTE sampled data
###Code
# fit a CART model to the data
clf_dt_smote = DecisionTreeClassifier()
clf_dt_smote.fit(X_resampled, y_resampled)
print(clf_dt_smote)
# make predictions
ypred = clf_dt_smote.predict(Xtestlr)
# summarize the fit of the model
print(metrics.classification_report(ytestlr, ypred))
print(metrics.confusion_matrix(ytestlr, ypred))
###Output
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best')
precision recall f1-score support
Default 0.31 0.34 0.32 16857
Fully Paid 0.78 0.75 0.76 52428
avg / total 0.66 0.65 0.66 69285
[[ 5774 11083]
[13089 39339]]
###Markdown
Training Random forest using SMOTE sampled data
###Code
clf_rf_1 = RandomForestClassifier(max_depth=5, random_state=0)
clf_rf_1.fit(X_resampled, y_resampled)
print(clf_rf_1.feature_importances_)
print(clf_rf_1)
# make predictions
ypred = clf_rf_1.predict(Xtestlr)
# summarize the fit of the model
print(metrics.classification_report(ytestlr, ypred))
print(metrics.confusion_matrix(ytestlr, ypred))
###Output
precision recall f1-score support
Default 0.38 0.54 0.45 16857
Fully Paid 0.83 0.71 0.77 52428
avg / total 0.72 0.67 0.69 69285
[[ 9143 7714]
[14987 37441]]
###Markdown
Hyperparameter Tuning Hyperparameter tuning of the Random Forest using Randomized Search CV
###Code
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 5)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 100, num = 5)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
print(random_grid)
# Use the random grid to search for best hyperparameters
# First create the base model to tune
clf_rf_2 = RandomForestClassifier()
# Random search of parameters, using 3 fold cross validation,
# search across 10 different combinations (n_iter=10), and use all available cores
rf_random = RandomizedSearchCV(estimator = clf_rf_2, param_distributions = random_grid, n_iter = 10, cv = 3, verbose=2, random_state=42, n_jobs = -1)
# Fit the random search model
rf_random.fit(X_resampled, y_resampled)
rf_random.best_estimator_
rf_random.cv_results_
###Output
_____no_output_____
###Markdown
Hyperparameter tuning of the Random Forest using Grid Search CV
###Code
# Create the parameter grid based on the results of random search
param_grid = {
'bootstrap': [False],
'max_depth': [45, 55, 65, 70],
'max_features': ['sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 3, 4],
'n_estimators': [1900, 2000, 2100]
}
# Create a base model
rf = RandomForestClassifier(random_state = 42)
print(param_grid)
# Instantiate the grid search model
grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,
cv = 3, n_jobs = -1, return_train_score=True, verbose=4)
# Fit the grid search to the data
grid_search.fit(X_resampled, y_resampled)
###Output
_____no_output_____
###Markdown
Classification (Random Forest) with Scaling
###Code
# Setup the pipeline steps: steps
steps = [('scaler', StandardScaler()),
         ('rfc', RandomForestClassifier())]
# Training data: X_resampled, y_resampled; test data: Xtestlr, ytestlr
# Create the pipeline: pipeline
pipeline = Pipeline(steps)
# Fit the pipeline (scaler + random forest) to the resampled training set
rf_scaled = pipeline.fit(X_resampled, y_resampled)
# Fit a random forest directly on the unscaled resampled training set
rf_unscaled = RandomForestClassifier().fit(X_resampled, y_resampled)
# Compute and print metrics
print('Accuracy with Scaling: {}'.format(rf_scaled.score(Xtestlr, ytestlr)))
print('Accuracy without Scaling: {}'.format(rf_unscaled.score(Xtestlr, ytestlr)))
###Output
Accuracy with Scaling: 0.6875947174713142
Accuracy without Scaling: 0.6887926679656491
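###Markdown
The two accuracies above are nearly identical because tree ensembles split on feature thresholds, so monotonic rescaling of the inputs barely affects the fitted trees; the small gap is mostly run-to-run randomness since no `random_state` is fixed. A standalone sketch of this scale-invariance with a single decision tree (an addition, not from the original notebook):
###Code
# Verify that standardising the features leaves a decision tree's predictions unchanged
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_toy = rng.rand(200, 3)
y_toy = (X_toy[:, 0] > 0.5).astype(int)
scaler = StandardScaler().fit(X_toy)
tree_raw = DecisionTreeClassifier(random_state=0).fit(X_toy, y_toy)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(scaler.transform(X_toy), y_toy)
print((tree_raw.predict(X_toy) == tree_scaled.predict(scaler.transform(X_toy))).all())
###Output
_____no_output_____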
###Markdown
Gradient Boosting using XGBoost XGBoost classifier (naive)
###Code
model = XGBClassifier()
model.fit(X_resampled, y_resampled)
# make predictions for test data
# Xtestlr, ytestlr
y_pred = model.predict(Xtestlr)
# evaluate predictions
accuracy = accuracy_score(ytestlr, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
model.feature_importances_
###Output
_____no_output_____
###Markdown
XGBoost using early stopping
###Code
# fit model on training data
xgb = XGBClassifier()
eval_set = [(Xtestlr, ytestlr)]
xgb.fit(X_resampled, y_resampled, early_stopping_rounds=10, eval_metric="logloss", eval_set=eval_set, verbose=True)
# make predictions for test data
y_pred = xgb.predict(Xtestlr)
# evaluate predictions
accuracy = accuracy_score(ytestlr, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
###Output
Accuracy: 73.41%
###Markdown
Hyper Parameter tuning for XGBoost
###Code
n_estimators = [50, 100, 150, 200]
max_depth = [2, 4, 6, 8]
param_grid = dict(max_depth=max_depth, n_estimators=n_estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
grid_search = GridSearchCV(xgb, param_grid, scoring="neg_log_loss", n_jobs=-1, cv=kfold,
verbose=1)
grid_result = grid_search.fit(X_resampled, y_resampled)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
###Output
_____no_output_____
###Markdown
OOB Errors for Random Forests
###Code
print(__doc__)
RANDOM_STATE = 123
# NOTE: Setting the `warm_start` construction parameter to `True` disables
# support for parallelized ensembles but is necessary for tracking the OOB
# error trajectory during training.
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, oob_score=True,
max_features="sqrt",
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 15
max_estimators = 1500
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(X_resampled, y_resampled)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
###Output
Automatically created module for IPython interactive environment
###Markdown
Predict loan status for loans with status = 'Current'
###Code
loans_df_curr['loan_status_pred']=clf_rf_1.predict(loans_df_curr[['loan_amnt', 'credit_grade', 'int_rate', 'derived_income', 'derived_dti', 'inst_inc_ratio']])
sns.set(style='darkgrid')
_=sns.countplot(x='loan_status_pred', data=loans_df_curr, order = loans_df_curr['loan_status_pred'].value_counts().index, hue='credit_grade')
###Output
_____no_output_____
|
notebooks/face_recognition.ipynb
|
###Markdown
Train Data
###Code
%%time
img_size = 64
samples = glob.glob('./lfwcrop_grey/faces/*')
data = []
print('Point 1')
for s in samples:
img = mpimg.imread(s)
data.append(np.expand_dims(img, 0))
%%time
data2 = []
for img in glob.glob("./FilmsFaceDatabase/s*/*.pgm"):
img_read = mpimg.imread(img)
img_read = cv2.resize(img_read, (64, 64))
data2.append(np.expand_dims(img_read, 0))
%%time
data3 = []
for img in glob.glob("./ufi-cropped/train/s*/*.pgm"):
n = mpimg.imread(img)
n = cv2.resize(n, (64, 64))
data3.append(np.expand_dims(n,0))
%%time
data4 = []
for img in glob.glob("./UTKFace/*"):
n = mpimg.imread(img)
    n = cv2.cvtColor(n, cv2.COLOR_BGR2RGB) # convert the image to RGB
    n = cv2.cvtColor(n, cv2.COLOR_RGB2GRAY) # convert the image to grayscale
n = cv2.resize(n, (64, 64))
data4.append(np.expand_dims(n,0))
full_data = data+data2+data3+data4
faces = np.concatenate(full_data, axis=0)
faces.shape
faces = np.expand_dims(faces, -1)
# prepare data
faces = faces / 255.
faces.shape
###Output
_____no_output_____
###Markdown
Fit autoencoder
###Code
# encoder
input_ = Input((64, 64, 1)) # 64
x = Conv2D(filters=8, kernel_size=2, strides=2, activation='relu')(input_) # 32
x = Conv2D(filters=16, kernel_size=2, strides=2, activation='relu')(x) # 16
x = Conv2D(filters=32, kernel_size=2, strides=2, activation='relu')(x) # 8
x = Conv2D(filters=64, kernel_size=2, strides=2, activation='relu')(x) # 4
x = Conv2D(filters=128, kernel_size=2, strides=2, activation='relu')(x) # 2
flat = Flatten()(x)
latent = Dense(128)(flat)
# decoder
reshape = Reshape((2,2,32)) #2
conv_2t_1 = Conv2DTranspose(filters=128, kernel_size=2, strides=2, activation='relu') # 4
conv_2t_2 = Conv2DTranspose(filters=64, kernel_size=2, strides=2, activation='relu') # 8
conv_2t_3 = Conv2DTranspose(filters=32, kernel_size=2, strides=2, activation='relu') # 16
conv_2t_4 = Conv2DTranspose(filters=16, kernel_size=2, strides=2, activation='relu') # 32
conv_2t_5 = Conv2DTranspose(filters=1, kernel_size=2, strides=2, activation='sigmoid') # 64
x = reshape(latent)
x = conv_2t_1(x)
x = conv_2t_2(x)
x = conv_2t_3(x)
x = conv_2t_4(x)
decoded = conv_2t_5(x) # 64
autoencoder = Model(input_, decoded)
encoder = Model(input_, latent)
decoder_input = Input((128,))
x_ = reshape(decoder_input)
x_ = conv_2t_1(x_)
x_ = conv_2t_2(x_)
x_ = conv_2t_3(x_)
x_ = conv_2t_4(x_)
decoded_ = conv_2t_5(x_) # 64
decoder = Model(decoder_input, decoded_)
autoencoder.summary()
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(faces, faces, epochs=10)
###Output
Epoch 1/10
37094/37094 [==============================] - 21s 573us/step - loss: 0.5981
Epoch 2/10
37094/37094 [==============================] - 23s 615us/step - loss: 0.5981
Epoch 3/10
37094/37094 [==============================] - 21s 575us/step - loss: 0.5981
Epoch 4/10
37094/37094 [==============================] - 23s 608us/step - loss: 0.5981
Epoch 5/10
37094/37094 [==============================] - 21s 571us/step - loss: 0.5981
Epoch 6/10
37094/37094 [==============================] - 21s 564us/step - loss: 0.5981
Epoch 7/10
37094/37094 [==============================] - 21s 565us/step - loss: 0.5981
Epoch 8/10
37094/37094 [==============================] - 21s 567us/step - loss: 0.5981
Epoch 9/10
37094/37094 [==============================] - 21s 563us/step - loss: 0.5981
Epoch 10/10
37094/37094 [==============================] - 21s 565us/step - loss: 0.5981
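###Markdown
The loss above barely moves across epochs, so a quick visual check of the reconstructions can help diagnose whether the autoencoder is learning anything. A minimal sketch (added here, not part of the original notebook), assuming `matplotlib.pyplot` is available in this environment:
###Code
# Plot a few inputs next to their reconstructions
import matplotlib.pyplot as plt

recon = autoencoder.predict(faces[:5])
fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i in range(5):
    axes[0, i].imshow(faces[i].squeeze(), cmap='gray')   # original face
    axes[1, i].imshow(recon[i].squeeze(), cmap='gray')   # reconstruction
    axes[0, i].axis('off'); axes[1, i].axis('off')
plt.show()
###Output
_____no_output_____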
###Markdown
Save model
###Code
# save weights
encoder.save_weights('encoder_weights_mri.h5')
decoder.save_weights('decoder_weights_mri.h5')
# save architecture
json_encoder = encoder.to_json()
json_decoder = decoder.to_json()
with open('encoder_mri_json.txt', 'w') as file:
file.write(json_encoder)
with open('decoder_mri_json.txt', 'w') as file:
file.write(json_decoder)
###Output
_____no_output_____
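###Markdown
For completeness, a sketch of restoring the saved models (an addition, not from the original notebook). It assumes the standalone `keras` package; adjust the import to `tensorflow.keras.models` if that is what the notebook's import cell actually uses.
###Code
# Rebuild the architectures from JSON and reload the weights
from keras.models import model_from_json

with open('encoder_mri_json.txt') as file:
    encoder_loaded = model_from_json(file.read())
encoder_loaded.load_weights('encoder_weights_mri.h5')

with open('decoder_mri_json.txt') as file:
    decoder_loaded = model_from_json(file.read())
decoder_loaded.load_weights('decoder_weights_mri.h5')
###Output
_____no_output_____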
|
examples/rossler-attractor.ipynb
|
###Markdown
Rössler attractor See https://en.wikipedia.org/wiki/R%C3%B6ssler_attractor \begin{cases} \frac{dx}{dt} = -y - z \\ \frac{dy}{dt} = x + ay \\ \frac{dz}{dt} = b + z(x-c) \end{cases}
###Code
%matplotlib ipympl
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
from functools import lru_cache
import mpl_interactions.ipyplot as iplt
###Output
_____no_output_____
###Markdown
Define function to plot Projecting on axes: The Rössler attractor is a 3-dimensional system, but as 3D plots are not yet supported by `mpl_interactions` we will only visualize the `x` and `y` components. **Note:** Matplotlib supports 3D plots, but `mpl_interactions` does not yet support them. That makes this a great place to contribute to `mpl_interactions` if you're interested in doing so. If you want to have a crack at it feel free to comment on https://github.com/ianhi/mpl-interactions/issues/89 and `@ianhi` will be happy to help you through the process. Caching: One thing to note here is that `mpl_interactions` will cache function calls for a given set of parameters so that the same function isn't called multiple times if you are plotting it on multiple axes. However, that cache will not persist as the parameters are modified. So here we build in our own cache to speed up execution. kwarg collisions: We can't use the `c` argument to `f` as `c` is reserved by `plot` (and `scatter` and other functions) in matplotlib to control the colors of the plot.
###Code
t_span = [0, 500]
t_eval = np.linspace(0, 500, 1550)
x0 = 0
y0 = 0
z0 = 0
cache = {}
def f(a, b, c_):
def deriv(t, cur_pos):
x, y, z = cur_pos
dxdt = -y - z
dydt = x + a * y
dzdt = b + z * (x - c_)
return [dxdt, dydt, dzdt]
id_ = (float(a), float(b), float(c_))
if id_ not in cache:
out = solve_ivp(deriv, t_span, y0=[x0, y0, z0], t_eval=t_eval).y[:2]
cache[id_] = out
else:
out = cache[id_]
return out.T # requires shape (N, 2)
fig, ax = plt.subplots()
controls = iplt.plot(
f,
".-",
a=(0.05, 0.3, 1000),
b=0.2,
c_=(1, 20), # we can't use `c` because that is a kwarg for matplotlib that controls color
parametric=True,
alpha=0.5,
play_buttons=True,
play_button_pos="left",
ylim="auto",
xlim="auto",
)
###Output
_____no_output_____
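###Markdown
The manual `cache` dict above provides the same memoisation that `functools.lru_cache` (imported at the top but otherwise unused) would give. A minimal sketch of the same idea with `lru_cache` (an addition, not from the original notebook; `solve` and `f_cached` are illustrative names), assuming the parameters stay hashable floats:
###Code
@lru_cache(maxsize=None)
def solve(a, b, c_):
    # integrate the Rossler system once per unique (a, b, c_) triple
    def deriv(t, cur_pos):
        x, y, z = cur_pos
        return [-y - z, x + a * y, b + z * (x - c_)]
    return solve_ivp(deriv, t_span, y0=[x0, y0, z0], t_eval=t_eval).y[:2]

def f_cached(a, b, c_):
    return solve(float(a), float(b), float(c_)).T  # shape (N, 2) for parametric plotting
###Output
_____no_output_____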
###Markdown
Coloring by time pointWhen we plot using `plot` we can't choose colors for individual points, so we can use the `scatter` function to color the points by the time point they have.
###Code
# use a different argument for c because `c` is an argument to plt.scatter
out = widgets.Output()
display(out)
def f(a, b, c_):
def deriv(t, cur_pos):
x, y, z = cur_pos
dxdt = -y - z
dydt = x + a * y
dzdt = b + z * (x - c_)
return [dxdt, dydt, dzdt]
id_ = (float(a), float(b), float(c_))
if id_ not in cache:
out = solve_ivp(deriv, t_span, y0=[0, 1, 0], t_eval=t_eval).y[:2]
cache[id_] = out
else:
out = cache[id_]
return out.T # requires shape (N, 2)
fig, ax = plt.subplots()
controls = iplt.scatter(
f,
a=(0.05, 0.3, 1000),
b=0.2,
c_=(1, 20),
parametric=True,
alpha=0.5,
play_buttons=True,
play_button_pos="left",
s=8,
c=t_eval,
)
controls = iplt.plot(
f,
"-",
controls=controls,
parametric=True,
alpha=0.5,
ylim="auto",
xlim="auto",
)
plt.colorbar().set_label("Time Point")
plt.tight_layout()
###Output
_____no_output_____
|
LeetCode/Leet Code.ipynb
|
###Markdown
Reverse Integer
###Code
a = int(input('Enter the number'))
# Method 1
astr = str(a)
for i in range(len(astr)-1,-1,-1):
print(astr[i],end='')
print()
# Method 2
ast = str(a)
print(ast[::-1])
###Output
321
321
###Markdown
Given two sorted arrays nums1 and nums2 of size m and n respectively, return the median of the two sorted arrays.
###Code
# Method 1
nums1 = [1,3]
nums2 = [2,7]
mergeList = sorted(nums1 + nums2)
# Median: middle element for odd length, mean of the two middle elements for even length
mid = len(mergeList) // 2
s = mergeList[mid] if len(mergeList) % 2 else (mergeList[mid - 1] + mergeList[mid]) / 2
print(f'Median of two list is {s}')
# Method 2
import numpy as np
med = np.median(mergeList)
print(f'Median of two list using Numpy package is {med}')
###Output
Median of two list is 2.5
Median of two list using Numpy package is 2.5
###Markdown
Longest Palindromic Substring (1) Initialize variable revs_number = 0 (2) Loop while number > 0 (a) Multiply revs_number by 10 and add the remainder of number divided by 10 to revs_number: revs_number = revs_number*10 + number%10 (b) Divide number by 10 (3) Return revs_number
###Code
# Part 1 to check if the string is palindrome or not
def checkPali(s):
return s == s[::-1]
inString = str(input())
inString2 = checkPali(inString)
if inString2:
print(f'{inString} is a palindrome')
else:
print(f'{inString} is not a palindrome')
# Part 2 to check longest palindromic string
cout = 0
def recur_reverse(num):
global cout # We can use it out of the function
if (num > 0):
Reminder = num % 10
cout = (cout * 10) + Reminder
recur_reverse(num // 10)
return cout
cout = recur_reverse(int(inString))
if cout == int(inString):
    print(f'{inString} is a longest palindrome')
else:
print(f'{inString} is not a longest palindrome')
###Output
1234321 is a palindrome
1234321 is a longest palindrome
###Markdown
Implement atoi which converts a string to an integer. Sample 1Input: str = "42"Output: 42 Sample 2Input: str = " -42"Output: -42Explanation: The first non-whitespace character is '-', which is the minus sign. Then take as many numerical digits as possible, which gets 42. Sample 3Input: str = "4193 with words"Output: 4193Explanation: Conversion stops at digit '3' as the next character is not a numerical digit. Sample 4Input: str = "words and 987"Output: 0Explanation: The first non-whitespace character is 'w', which is not a numerical digit or a +/- sign. Therefore no valid conversion could be performed. Sample 5Input: str = "-91283472332"Output: -2147483648Explanation: The number "-91283472332" is out of the range of a 32-bit signed integer. Thefore INT_MIN (−231) is returned.
###Code
# Method 1
import re
# string = '4193 words'
# def atoi1(string):
# a = string.split( )
# for i in a:
# if re.search('^[0-9]*$',i):
# # print(i) # To check the value
# if a.index(i) == string.index(i):
# out = i
# else:
# out = 0
# return out
# print(f'Method 1 output: {atoi1("-91283472332")}')
# Method 2
def atoi2(string):
return int(string)
print('Method 2 output is',atoi2('-91283472332'))
# Method 3
def atoi3(s):
for i in s.split():
if i.isdigit():
out = int(i)
return out
print(f'Method 3 output is {atoi3("Aneruth has 2 dogs.")}')
def atoi4(s):
xc = []
for i in s.strip():
if i.isdigit():
xc.append(i)
out = int(''.join(xc))
else:
out = 0
return out
print(f'Method 4 output: {atoi4(" 42")}')
def atoi5(s):
match = re.match(r'^\s*([+-]?\d+)', s)
return min(max((int(match.group(1)) if match else 0), -2**31), 2**31 - 1)
print(f'Method 4 output: {atoi5(" 42")}')
###Output
Method 2 output is -91283472332
Method 3 output is 2
Method 4 output: 42
Method 4 output: 42
###Markdown
Two SumGiven an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.You may assume that each input would have exactly one solution, and you may not use the same element twice.You can return the answer in any order.
###Code
aList = [1,2,3]
target = 5
for i in range(len(aList)): # Traverse the list
for j in range(i+1,len(aList)): # To get the pair for the list
tot = aList[i] + aList[j] # sum up the pair
if tot == target: # to check if the pair sum is same as target element
print(f'The pair is {[i,j]}') # to print the pair of which the sum is equivalent to target
print(f'Sum of the pair is {tot}')
###Output
The pair is [1, 2]
Sum of the pair is 5
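###Markdown
The nested loops above are O(n^2). A single-pass dictionary lookup finds the pair in O(n) -- a sketch added for comparison (not part of the original notebook; `two_sum` is an illustrative name):
###Code
def two_sum(nums, target):
    seen = {}                      # value -> index
    for i, v in enumerate(nums):
        if target - v in seen:     # the complement was already visited
            return [seen[target - v], i]
        seen[v] = i
    return []

print(two_sum([1, 2, 3], 5))       # [1, 2]
###Output
_____no_output_____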
###Markdown
Add Two NumbersYou are given two non-empty linked lists representing two non-negative integers. The digits are stored in reverse order, and each of their nodes contains a single digit. Add the two numbers and return the sum as a linked list.You may assume the two numbers do not contain any leading zero, except the number 0 itself.
###Code
l1 = [2,4,3]
l2 = [5,6,4]
# This is applicable only for list of same length
def find_sum_of_two_nos(l1,l2):
st1,st2 = '',''
for i,j in zip(l1,l2): # zips the two list
st1 += str(i)
st2 += str(j)
# To change it to int
st1_to_int = int(st1[::-1])
st2_to_int = int(st2[::-1])
sum_of_two_nos = st1_to_int + st2_to_int
sum_lst = [int(q) for q in str(sum_of_two_nos)]
return sum_lst[::-1]
find_sum_of_two_nos(l1,l2)
###Output
_____no_output_____
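###Markdown
As noted in the code, the approach above assumes both lists have the same length. A sketch that also handles unequal lengths by converting each reversed digit list to an integer first (an addition, not from the original notebook):
###Code
def add_two_numbers(l1, l2):
    # digits are stored least-significant first, so reverse before joining
    to_int = lambda digits: int(''.join(map(str, digits[::-1])))
    total = to_int(l1) + to_int(l2)
    return [int(d) for d in str(total)][::-1]

print(add_two_numbers([2, 4, 3], [5, 6, 4]))   # [7, 0, 8]
print(add_two_numbers([9, 9], [1]))            # [0, 0, 1]
###Output
_____no_output_____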
###Markdown
Longest Substring Without Repeating CharactersGiven a string s, find the length of the longest substring without repeating characters.----------------------------------------------------------------------------------------Example 1:Input: s = "abcabcbb"Output: 3Explanation: The answer is "abc", with the length of 3.----------------------------------------------------------------------------------------Example 2:Input: s = "bbbbb"Output: 1Explanation: The answer is "b", with the length of 1.----------------------------------------------------------------------------------------Example 3:Input: s = "pwwkew"Output: 3Explanation: The answer is "wke", with the length of 3.Notice that the answer must be a substring, "pwke" is a subsequence and not a substring.----------------------------------------------------------------------------------------Example 4:Input: s = ""Output: 0
###Code
def subString(string):
new_dict = {}
max_len,idx = 0,0
for i in range(len(string)):
if string[i] in new_dict and idx <= new_dict[string[i]]:
idx = new_dict[string[i]] + 1
else:
max_len = max(max_len,i-idx+1)
new_dict[string[i]] = i
return max_len
print(subString('abcabcbb'))
print(subString('bbbbb'))
print(subString('pwwkew'))
print(subString(''))
###Output
3
1
3
0
###Markdown
sWAP cASE You are given a string and your task is to swap cases. In other words, convert all lowercase letters to uppercase letters and vice versa. For example: Pythonist 2 → pYTHONIST 2
###Code
s = 'Pythonist 2'
s.swapcase()
###Output
_____no_output_____
###Markdown
String Split and Join Example: >>> a = "this is a string" >>> a = a.split(" ") # a is converted to a list of strings. >>> print a [['this', 'is', 'a', 'string']] Sample Input: this is a string Sample Output: this-is-a-string
###Code
s1 = 'this is a string'
s1 = '-'.join(s1.split(' '))
print(s1)
###Output
this-is-a-string
###Markdown
Shift the index position of even numbers to the n+1th index position. Test Case: [[0,1,2,3,4,5]] --> [[1,0,3,2,5,4]]
###Code
qList = [0,1,2,3,4,5]
# Method 1
def odd_even(x):
odds = sorted(filter(lambda n: n % 2 == 1, x))
evens = sorted(filter(lambda n: n % 2 == 0, x))
pairList = zip(odds, evens)
return [n for t in pairList for n in t]
print('Method 1',odd_even(qList))
# Method 2
eve_lst = qList[0:len(qList):2]
odd_lst = qList[1:len(qList):2]
pair = zip(odd_lst,eve_lst)
print('Method 2',[n for t in pair for n in t])
# Method 3
xz,kd = [],[]
for i in qList:
xz.append(str(i))
separator = ''
z = separator.join(xz)
ze = z[0:len(z):2]
zo = z[1:len(z):2]
zz = zip(zo,ze)
for i in zz:
for j in i:
kd.append(j)
print('Method 3',kd)
###Output
Method 1 [1, 0, 3, 2, 5, 4]
Method 2 [1, 0, 3, 2, 5, 4]
Method 3 ['1', '0', '3', '2', '5', '4']
###Markdown
Pair the first element with every 4th element from its index position and the subsequent mappings. Example: Input : [['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o']] Output : [[['a', 'e', 'i', 'm']],[['b', 'f', 'j', 'n']],[['c', 'g', 'k', 'o']],[['e', 'i', 'm']]]
###Code
long_list = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o']
split1 = long_list[0:len(long_list):4]
split2 = long_list[1:len(long_list):4]
split3 = long_list[2:len(long_list):4]
split4 = long_list[4:len(long_list):4]
[split1] + [split2] + [split3] + [split4]
###Output
_____no_output_____
###Markdown
Multiply StringsGiven two non-negative integers num1 and num2 represented as strings, return the product of num1 and num2, also represented as a string.Note: You must not use any built-in BigInteger library or convert the inputs to integer directly.Example 1:Input: num1 = "2", num2 = "3"Output: "6"Example 2:Input: num1 = "123", num2 = "456"Output: "56088"
###Code
a = '123'
b = '456'
k = map(int,a)
q = map(int,b)
def mn(a,b):
for i,j in zip(k,q):
x1 = i*j
return (str(x1))
mn(k,q)
###Output
_____no_output_____
###Markdown
Palindrome in both Decimal and Binary Given a number N, check whether N is a palindrome in both of its formats (decimal and binary). Example 1: Input: N = 7 Output: "Yes" Explanation: 7 is a palindrome in its decimal and also in its binary form (111), so the answer is "Yes". Example 2: Input: N = 12 Output: "No" Explanation: 12 is not a palindrome in its decimal or its binary form (1100), so the answer is "No".
###Code
def binaryPali(num, ds=None):
    # collect binary digits most-significant first; a fresh list per call avoids shared state between calls
    if ds is None:
        ds = []
    if num >= 1:
        binaryPali(num // 2, ds)
        ds.append(num % 2)
        # print(num % 2, end = '')
    return ds
# binaryPali(12)
def checkPali(func):
if func[::-1] == func:
print('Yes')
else:
print('Nope')
checkPali(binaryPali(7))
checkPali(binaryPali(12))
###Output
Yes
Nope
###Markdown
Matching Pair Given a set of numbers from 1 to N, each number is exactly present twice so there are N pairs. In the worst-case scenario, how many numbers X should be picked and removed from the set until we find a matching pair?Example 1:Input: N = 1Output: 2Explanation: When N=1 Then there is one pair and a matching pair can be extracted in 2 Draws.Example 2:Input: N = 2Output: 3Explanation: When N=2 then there are 2 pairs, let them be {1,2,1,2} and a matching pair will be made in 3 draws.
###Code
def findPairs(lst, K):
res = []
while lst:
num = lst.pop()
diff = K - num
if diff in lst:
res.append((diff, num))
res.reverse()
return res
# Driver code
lst = [1, 2,1,2]
K = 2
print(findPairs(lst, K))
###Output
[(1, 1)]
###Markdown
Remove Duplicates from Sorted ArrayGiven a sorted array nums, remove the duplicates in-place such that each element appears only once and returns the new length.Do not allocate extra space for another array, you must do this by modifying the input array in-place with O(1) extra memory.Clarification:Confused why the returned value is an integer but your answer is an array?Note that the input array is passed in by reference, which means a modification to the input array will be known to the caller as well. Example 1:Input: nums = [[1,1,2]]Output: 2, nums = [[1,2]]Explanation: Your function should return length = 2, with the first two elements of nums being 1 and 2 respectively. It doesn't matter what you leave beyond the returned length. Example 2:Input: nums = [[0,0,1,1,1,2,2,3,3,4]]Output: 5, nums = [[0,1,2,3,4]]Explanation: Your function should return length = 5, with the first five elements of nums being modified to 0, 1, 2, 3, and 4 respectively. It doesn't matter what values are set beyond the returned length.
###Code
# Method 1
def delDups(l):
l.sort()
dup = list(set(l))
return len(dup)
print(f'Method 1 Output:{delDups([0,0,1,1,1,2,2,3,3,4])}')
# Method 2
def removeDuplicate(aList):
aList.sort()
i = 0
for j in range(1,len(aList)):
if aList[j] != aList[i]:
i += 1
aList[i] = aList[j]
return i + 1
print(f'Method 2 Output:{removeDuplicate([0,0,1,1,1,2,2,3,3,4])}')
# Method 3
def remove_dup(nums):
nums[:] = sorted(list(set(nums)))
return len(nums)
print(f'Method 3 Output:{remove_dup([0,0,1,1,1,2,2,3,3,4])}')
###Output
Method 1 Output:5
Method 2 Output:5
Method 3 Output:5
###Markdown
Remove ElementGiven an array nums and a value val, remove all instances of that value in-place and return the new length.Do not allocate extra space for another array, you must do this by modifying the input array in-place with O(1) extra memory.The order of elements can be changed. It doesn't matter what you leave beyond the new length. Example 1:Input: nums = [[3,2,2,3]], val = 3Output: 2, nums = [[2,2]]Explanation: Your function should return length = 2, with the first two elements of nums being 2.It doesn't matter what you leave beyond the returned length. For example if you return 2 with nums = [[2,2,3,3]] or nums = [[2,2,0,0]], your answer will be accepted. Example 2:Input: nums = [[0,1,2,2,3,0,4,2]], val = 2Output: 5, nums = [[0,1,4,0,3]]Explanation: Your function should return length = 5, with the first five elements of nums containing 0, 1, 3, 0, and 4. Note that the order of those five elements can be arbitrary. It doesn't matter what values are set beyond the returned length.
###Code
# Method 1
def xuz(aList,val):
pit = [value for value in aList if value != val]
return len(pit)
print(f'Output from method 1: {xuz([0,1,2,2,3,0,4,2],2)}')
# Method 2
def removeElement(nums, val):
output = 0
for j in range(len(nums)):
if nums[j] != val:
nums[output] = nums[j]
output += 1
return output
print(f'Output from method 2: {removeElement([0,1,2,2,3,0,4,2],2)}')
###Output
Output from method 1: 5
Output from method 2: 5
###Markdown
Yet to see this Roman to Integer Roman numerals are represented by seven different symbols: I, V, X, L, C, D and M.Symbol ValueI 1V 5X 10L 50C 100D 500M 1000For example, 2 is written as II in Roman numeral, just two one's added together. 12 is written as XII, which is simply X + II. The number 27 is written as XXVII, which is XX + V + II.Roman numerals are usually written largest to smallest from left to right. However, the numeral for four is not IIII. Instead, the number four is written as IV. Because the one is before the five we subtract it making four. The same principle applies to the number nine, which is written as IX. There are six instances where subtraction is used:I can be placed before V (5) and X (10) to make 4 and 9. X can be placed before L (50) and C (100) to make 40 and 90. C can be placed before D (500) and M (1000) to make 400 and 900.Given a roman numeral, convert it to an integer. Example 1:Input: s = "III"Output: 3Example 2:Input: s = "IV"Output: 4Example 3:Input: s = "IX"Output: 9Example 4:Input: s = "LVIII"Output: 58Explanation: L = 50, V= 5, III = 3.Example 5:Input: s = "MCMXCIV"Output: 1994Explanation: M = 1000, CM = 900, XC = 90 and IV = 4.
###Code
def roman_to_int(s):
rom_val = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
int_val = 0
for i in range(len(s)):
if i > 0 and rom_val[s[i]] > rom_val[s[i - 1]]:
int_val += rom_val[s[i]] - 2 * rom_val[s[i - 1]]
else:
int_val += rom_val[s[i]]
return int_val
roman_to_int('XX')
###Output
_____no_output_____
###Markdown
Yet to see this Minimum Index Sum of Two ListsSuppose Andy and Doris want to choose a restaurant for dinner, and they both have a list of favorite restaurants represented by strings.You need to help them find out their common interest with the least list index sum. If there is a choice tie between answers, output all of them with no order requirement. You could assume there always exists an answer.Example 1:Input: list1 = [["Shogun","Tapioca Express","Burger King","KFC"]], list2 = [["Piatti","The Grill at Torrey Pines","Hungry Hunter Steakhouse","Shogun"]]Output: [["Shogun"]]Explanation: The only restaurant they both like is "Shogun".Example 2:Input: list1 = [["Shogun","Tapioca Express","Burger King","KFC"]], list2 = [["KFC","Shogun","Burger King"]]Output: [["Shogun"]]Explanation: The restaurant they both like and have the least index sum is "Shogun" with index sum 1 (0+1).Example 3:Input: list1 = [["Shogun","Tapioca Express","Burger King","KFC"]], list2 = [["KFC","Burger King","Tapioca Express","Shogun"]]Output: [["KFC","Burger King","Tapioca Express","Shogun"]]Example 4:Input: list1 = [["Shogun","Tapioca Express","Burger King","KFC"]], list2 = [["KNN","KFC","Burger King","Tapioca Express","Shogun"]]Output: [["KFC","Burger King","Tapioca Express","Shogun"]]Example 5:Input: list1 = [["KFC"]], list2 = [["KFC"]]Output: [["KFC"]]
###Code
aL = ["KNN","KFC","Burger King","Tapioca Express","Shogun"]
bL = ["Piatti","The Grill at Torrey Pines","Shogun","KFC"]
# xx = []
# for i in aL:
# if i in bL:
# if i not in xx:
# xx.append(i)
# else:
# for j in bL:
# # if aL.index(i) == bL.index(j):
# # x.append(i)
# if list(set(i)) == list(set(j)):
# xx.append(j)
# print(xx)
def minIndex(l1,l2):
empty = []
for i in range(len(l1)):
for j in range(i,len(l2)-1):
if l1.index(l1[i]) < l2.index(l2[j]):
empty.append(l1[i])
# out = empty
# else:
# empty.append(l2[j])
# out = list(set(empty))
return empty
minIndex(aL,bL)
list(set(aL).intersection(set(bL)))
###Output
_____no_output_____
###Markdown
Search Insert PositionGiven a sorted array of distinct integers and a target value, return the index if the target is found. If not, return the index where it would be if it were inserted in order. Example 1:Input: nums = [[1,3,5,6]], target = 5Output: 2 Example 2:Input: nums = [[1,3,5,6]], target = 2Output: 1 Example 3:Input: nums = [[1,3,5,6]], target = 7Output: 4 Example 4:Input: nums = [[1,3,5,6]], target = 0Output: 0 Example 5:Input: nums = [[1]], target = 0Output: 0
###Code
# Method 1
def lookup(lt,tar):
if tar in lt:
output = lt.index(tar)
elif tar == 0 and tar not in lt:
output = 0
elif tar not in lt:
lt.append(tar)
output = lt.index(tar)
return output
print(f'Method 1 output: {lookup([1,3,5,6],0)}')
# Method 2
def lookup2(nums,tar):
while tar in nums:
out = nums.index(tar)
else:
if tar == 0 and tar not in nums:
out = 0
elif tar not in nums:
nums.append(tar)
out = nums.index(tar)
return out
print(f'Method 2 output: {lookup2([1,3,5,6],0)}')
###Output
Method 1 output: 0
Method 2 output: 0
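###Markdown
Both methods above mutate the input list when the target is missing. The standard-library `bisect` module returns the insertion index directly in O(log n) -- a sketch (not part of the original notebook):
###Code
from bisect import bisect_left

def search_insert(nums, target):
    return bisect_left(nums, target)   # index of target, or where it would be inserted

print(search_insert([1, 3, 5, 6], 5))  # 2
print(search_insert([1, 3, 5, 6], 2))  # 1
print(search_insert([1, 3, 5, 6], 7))  # 4
print(search_insert([1, 3, 5, 6], 0))  # 0
###Output
_____no_output_____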
###Markdown
Yet to see this Plus One Given a non-empty array of decimal digits representing a non-negative integer, increment one to the integer.The digits are stored such that the most significant digit is at the head of the list, and each element in the array contains a single digit.You may assume the integer does not contain any leading zero, except the number 0 itself. Example 1:Input: digits = [[1,2,3]]Output: [[1,2,4]]Explanation: The array represents the integer 123.Example 2:Input: digits = [[4,3,2,1]]Output: [[4,3,2,2]]Explanation: The array represents the integer 4321.Example 3:Input: digits = [[0]]Output: [[1]]
###Code
# Method 1
def plusOne(digits):
digits[-1] = digits[-1] + 1
return digits
# Method 2
dk = []
def inc_last(lst):
lst[-1] = lst[-1] + 1
return [int(x) if x.isdigit() else x for z in lst for x in str(z)]
print(f'Output from method 1 {plusOne([9])}')
print(f'Output from method 2 {inc_last([9,9])}')
###Output
Output from method 1 [10]
Output from method 2 [9, 1, 0]
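###Markdown
Both methods above only increment the last digit, so an input like [9] yields [10] instead of the expected [1,0]. A carry-handling sketch (an addition, not from the original notebook):
###Code
def plus_one(digits):
    carry = 1
    for i in range(len(digits) - 1, -1, -1):     # add from the least-significant digit
        digits[i] += carry
        carry, digits[i] = divmod(digits[i], 10)
    return ([1] + digits) if carry else digits

print(plus_one([1, 2, 3]))   # [1, 2, 4]
print(plus_one([9, 9]))      # [1, 0, 0]
###Output
_____no_output_____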
###Markdown
Diagonal Traverse Given a matrix of M x N elements (M rows, N columns), return all elements of the matrix in diagonal order.
###Code
def mat_traverse(mat):
if not mat or not mat[0]:
return []
rows,cols = len(mat),len(mat[0])
diag = [[] for _ in range(rows + cols - 1)]
for i in range(rows):
for j in range(cols):
diag[i + j].append(mat[i][j])
res = diag[0] # Since the first element starts with first value.
for i in range(1,len(diag)):
if i % 2 == 1:
res.extend(diag[i])
else:
res.extend(diag[i][::-1])
return res
mat_traverse([[1,2,3,4],[5,6,7,8],[9,10,11,12]])
###Output
_____no_output_____
###Markdown
Yet to see this Power of TwoGiven an integer n, return true if it is a power of two. Otherwise, return false.An integer n is a power of two, if there exists an integer x such that n == pow(2,x).Example 1:Input: n = 1Output: trueExplanation: 2 ** 0 = 1Example 2:Input: n = 16Output: trueExplanation: 2 ** 4 = 16Example 3:Input: n = 3Output: falseExample 4:Input: n = 4Output: trueExample 5:Input: n = 5Output: false
###Code
def power_two(num):
inp,out = 1,1
while inp <= num:
if inp == num:
return True
inp = 2 ** out
out += 1
return False
power_two(4)
###Output
_____no_output_____
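###Markdown
The loop above works; a common constant-time alternative relies on the fact that a power of two has exactly one set bit, so `n & (n - 1)` clears it to zero. A sketch (not from the original notebook):
###Code
def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

print([is_power_of_two(n) for n in (1, 3, 4, 5, 16)])   # [True, False, True, False, True]
###Output
_____no_output_____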
###Markdown
Median of Two Sorted ArraysGiven two sorted arrays nums1 and nums2 of size m and n respectively, return the median of the two sorted arrays.Follow up: The overall run time complexity should be O(log (m+n)).Example 1:Input: nums1 = [[1,3]], nums2 = [[2]]Output: 2.00000Explanation: merged array = [[1,2,3]] and median is 2.Example 2:Input: nums1 = [[1,2]], nums2 = [[3,4]]Output: 2.50000Explanation: merged array = [[1,2,3,4]] and median is (2 + 3) / 2 = 2.5.Example 3:Input: nums1 = [[0,0]], nums2 = [[0,0]]Output: 0.00000Example 4:Input: nums1 = [], nums2 = [[1]]Output: 1.00000Example 5:Input: nums1 = [[2]], nums2 = []Output: 2.00000
###Code
# Method 1
def median_sorted(l1,l2):
l1.extend(l2)
if len(l1) % 2 ==0:
median = (l1[len(l1) // 2 - 1] + l1[len(l1) // 2 ]) / 2.0
print(f'Median from method 1: {median}')
else:
median = l1[len(l1) // 2]
print(f'Median from method 1: {median}')
median_sorted([1,2],[3,4])
# Method 2
import numpy as np
def findMedianSortedArrays(nums1,nums2):
mergeList = sorted(nums1 + nums2)
MedianOfSortedArray = np.median(mergeList)
return MedianOfSortedArray
print(f'Output from method 2: {findMedianSortedArrays([1,2],[3,4])}')
# Method 3
def med(x,y):
x.extend(y)
if len(x) == 1:
median = float(x[0])
elif len(x) % 2 != 0:
median = float(x[len(x)//2])
else:
median = (x[len(x)//2]+x[(len(x)//2)-1])/2
return median
print(f'Output from method 3: {med([1,2],[3,4])}')
al = [1,2,3]
al.extend([4,5])
al
###Output
_____no_output_____
###Markdown
Length of Last WordGiven a string s consists of some words separated by spaces, return the length of the last word in the string. If the last word does not exist, return 0.A word is a maximal substring consisting of non-space characters only. Example 1:Input: s = "Hello World"Output: 5Example 2:Input: s = " "Output: 0
###Code
def len_of_last(s) -> int:
dum = s.split()
if dum:
print(len(dum[-1]))
elif " ":
print(0)
else:
print(len(s))
len_of_last(" ")
###Output
0
###Markdown
YET TO SEE THIS First Bad VersionYou are a product manager and currently leading a team to develop a new product. Unfortunately, the latest version of your product fails the quality check. Since each version is developed based on the previous version, all the versions after a bad version are also bad.Suppose you have n versions [[1, 2, ..., n]] and you want to find out the first bad one, which causes all the following ones to be bad.You are given an API bool isBadVersion(version) which returns whether version is bad. Implement a function to find the first bad version. You should minimize the number of calls to the API.Example 1:Input: n = 5, bad = 4Output: 4Explanation:call isBadVersion(3) -> falsecall isBadVersion(5) -> truecall isBadVersion(4) -> trueThen 4 is the first bad version.Example 2:Input: n = 1, bad = 1Output: 1
###Code
def isBad(n):
left,right = 1,n
while (left < right):
        mid = left + (right - left) // 2
if isBadVersion(mid):
right = mid
else:
left = mid + 1
return left
###Output
_____no_output_____
###Markdown
Search in Rotated Sorted ArrayYou are given an integer array nums sorted in ascending order (with distinct values), and an integer target.Suppose that nums is rotated at some pivot unknown to you beforehand (i.e., [[0,1,2,4,5,6,7]] might become [[4,5,6,7,0,1,2]]).If target is found in the array return its index, otherwise, return -1. Example 1:Input: nums = [[4,5,6,7,0,1,2]], target = 0Output: 4Example 2:Input: nums = [[4,5,6,7,0,1,2]], target = 3Output: -1Example 3:Input: nums = [[1]], target = 0Output: -1
###Code
def searchRotated(nums,target):
if target in nums:idx = nums.index(target)
else:idx = -1
return idx
print(f'Method 1 output: {searchRotated([4,5,6,7,0,1,2],3)}')
###Output
Method 1 output: -1
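###Markdown
The linear `index` lookup above returns the right answer but ignores the structure of the rotated array. The intended O(log n) approach decides at each step which half is sorted -- a sketch (an addition, not from the original notebook):
###Code
def search_rotated(nums, target):
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[lo] <= nums[mid]:                 # left half is sorted
            if nums[lo] <= target < nums[mid]:
                hi = mid - 1
            else:
                lo = mid + 1
        else:                                     # right half is sorted
            if nums[mid] < target <= nums[hi]:
                lo = mid + 1
            else:
                hi = mid - 1
    return -1

print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))   # 4
print(search_rotated([4, 5, 6, 7, 0, 1, 2], 3))   # -1
###Output
_____no_output_____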
###Markdown
Yet to see this Container With Most WaterGiven n non-negative integers $a_{1}$, $a_{2}$, ..., $a_{n}$ , where each represents a point at coordinate (i, $a_{i}$). n vertical lines are drawn such that the two endpoints of the line i is at (i, $a_{i}$) and (i, 0). Find two lines, which, together with the x-axis forms a container, such that the container contains the most water.Notice that you may not slant the container.Example 1:Input: height = [[1,8,6,2,5,4,8,3,7]]Output: 49Explanation: The above vertical lines are represented by array [[1,8,6,2,5,4,8,3,7]]. In this case, the max area of water (blue section) the container can contain is 49.Example 2:Input: height = [[1,1]]Output: 1Example 3:Input: height = [[4,3,2,1,4]]Output: 16Example 4:Input: height = [[1,2,1]]Output: 2
###Code
# def maxArea(heights):
# if len(heights) > 3:
# max_value = max(heights)
# sliced_list = heights[heights.index(max_value) + 1:]
# out = len(sliced_list) ** 2
# else:
# min_list_val = max(heights)
# sliced_list_val = heights[heights.index(min_list_val):]
# # out =
# return min_list_val
# maxArea([1,2,1])
def maxArea(height):
left, right = 0, len(height)-1
dummy_value = 0
while left < right:
dummy_value = max(dummy_value, (right-left) * min(height[left], height[right]))
if height[left] < height[right]:
left+=1
else:
right-=1
return dummy_value
maxArea([1,1])
###Output
_____no_output_____
###Markdown
3SumGiven an array nums of n integers, are there elements a, b, c in nums such that a + b + c = 0? Find all unique triplets in the array which gives the sum of zero.Notice that the solution set must not contain duplicate triplets.Example 1:Input: nums = [[-1,0,1,2,-1,-4]] | Output: [[[-1,-1,2]],[[-1,0,1]]]Example 2:Input: nums = [] | Output: []Example 3:Input: nums = [[0]] | Output: []
###Code
# Method 1 (Leet Code 315 / 318 test cases passed.)
def threeSum(aList):
aList.sort()
res = []
for i in range(len(aList)):
for j in range(i+1,len(aList)):
for k in range(j+1,len(aList)):
a,b,c = aList[i],aList[j],aList[k]
if a+b+c == 0 and (a,b,c):
res.append([a,b,c])
return res
print(f'Method 1 output: {threeSum([0])}')
###Output
Method 1 output: []
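###Markdown
The triple loop above is O(n^3) and also emits duplicate triplets when values repeat. A sort-plus-two-pointer sketch runs in O(n^2) and skips duplicates (an addition, not from the original notebook):
###Code
def three_sum(nums):
    nums.sort()
    res = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:      # skip duplicate anchors
            continue
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < 0:
                lo += 1
            elif s > 0:
                hi -= 1
            else:
                res.append([nums[i], nums[lo], nums[hi]])
                while lo < hi and nums[lo] == nums[lo + 1]:
                    lo += 1
                while lo < hi and nums[hi] == nums[hi - 1]:
                    hi -= 1
                lo += 1
                hi -= 1
    return res

print(three_sum([-1, 0, 1, 2, -1, -4]))   # [[-1, -1, 2], [-1, 0, 1]]
###Output
_____no_output_____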
###Markdown
Sqrt(x)Given a non-negative integer x, compute and return the square root of x.Since the return type is an integer, the decimal digits are truncated, and only the integer part of the result is returned.Example 1:Input: x = 4Output: 2Example 2:Input: x = 8Output: 2Explanation: The square root of 8 is 2.82842..., and since the decimal part is truncated, 2 is returned.
###Code
def sqrt(x):
return int(pow(x,1/2))
sqrt(8)
###Output
_____no_output_____
###Markdown
Climbing StairsYou are climbing a staircase. It takes n steps to reach the top.Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?Example 1:Input: n = 2Output: 2Explanation: There are two ways to climb to the top.1. 1 step + 1 step2. 2 stepsExample 2:Input: n = 3Output: 3Explanation: There are three ways to climb to the top.1. 1 step + 1 step + 1 step2. 1 step + 2 steps3. 2 steps + 1 step
###Code
# Method 1 recursive
def climb(x):
if x == 1 or x ==2 or x == 3:
z = x
else:
z = climb(x-1) + climb(x-2)
return z
print(f'Output for method 1: {climb(1)}')
# Method 2
def climbStairs(n):
    if n == 1: return 1
first,second = 1,2
for i in range(3,n+1):
third = first + second
first,second = second,third
out = second
return out
print(f'Output for method 2: {climbStairs(1)}')
###Output
Output for method 1: 1
Output for method 2: 1
###Markdown
Merge Sorted ArrayGiven two sorted integer arrays nums1 and nums2, merge nums2 into nums1 as one sorted array.The number of elements initialized in nums1 and nums2 are m and n respectively. You may assume that nums1 has a size equal to m + n such that it has enough space to hold additional elements from nums2.Example 1:Input: nums1 = [[1,2,3,0,0,0]], m = 3, nums2 = [[2,5,6]], n = 3Output: [[1,2,2,3,5,6]]Example 2:Input: nums1 = [[1]], m = 1, nums2 = [], n = 0Output: [[1]]
###Code
# Method 1
def merge_sorted(n1,n2,m,n):
n3 = n1[:m]
n3.extend(n2[:n])
n3.sort()
return n3
print(f'Output from method 1: {merge_sorted([1,2,3,0,0,0],[2,5,6],3,3)}')
# Method 2
def m(n1,n2,m,n):
if n == 0: return n1
n1[-(len(n2)):] = n2
n1.sort()
return n1
print(f'Output from method 2: {m([1,2,3,0,0,0],[2,5,6],3,3)}')
###Output
Output from method 1: [1, 2, 2, 3, 5, 6]
Output from method 2: [1, 2, 2, 3, 5, 6]
###Markdown
Pascal's TriangleGiven a non-negative integer numRows, generate the first numRows of Pascal's triangle.Example:Input: 5Output:[ [[1]], [[1,1]], [[1,2,1]], [[1,3,3,1]], [[1,4,6,4,1]]]
###Code
def pascal(n):
cs = []
for i in range(1,n + 1):
c = 1
for j in range(1, i+1):
# print(c,end=' ')
c = int(c * (i-j)/j)
cs.append(c)
return cs
pascal(5)
# Pascal function
def printPascal(n):
for line in range(1, n + 1):
C = 1
for i in range(1, line + 1):
# The first value in a
# line is always 1
print(C, end = " ")
C = int(C * (line - i) / i)
print("")
# Driver code
n = 5
printPascal(n)
###Output
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
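###Markdown
The problem asks for the rows themselves as a list of lists; both snippets above only print or flatten the values. A sketch that builds each row from the previous one (an addition, not from the original notebook):
###Code
def generate(num_rows):
    triangle = []
    for i in range(num_rows):
        row = [1] * (i + 1)
        for j in range(1, i):                                 # interior entries
            row[j] = triangle[i - 1][j - 1] + triangle[i - 1][j]
        triangle.append(row)
    return triangle

print(generate(5))   # [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]]
###Output
_____no_output_____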
###Markdown
Valid PalindromeGiven a string, determine if it is a palindrome, considering only alphanumeric characters and ignoring cases.Note: For the purpose of this problem, we define empty string as valid palindrome.Example 1:Input: "A man, a plan, a canal: Panama"Output: trueExample 2:Input: "race a car"Output: false
###Code
def valid_palindrome(s):
temp = "".join([i.lower() for i in s if i.isalnum()])
return temp[::-1] == temp
print(f'Method 1 output: {valid_palindrome("race a car")}')
###Output
Method 1 output: False
###Markdown
Single NumberGiven a non-empty array of integers nums, every element appears twice except for one. Find that single one.Follow up: Could you implement a solution with a linear runtime complexity and without using extra memory?Example 1:Input: nums = [[2,2,1]]Output: 1Example 2:Input: nums = [[4,1,2,1,2]]Output: 4Example 3:Input: nums = [[1]]Output: 1
###Code
# Method 1
def single_num(lt):
dup = []
for i in lt:
if i not in dup:
dup.append(i)
else:
dup.remove(i)
return dup.pop()
print(single_num([2,2,1]))
# Method 2
def single_num2(lt):
return 2 * sum(set(lt)) - sum(lt) # Considers the set of the give list by taking it's sum and subtracts the sum of total list
print(single_num2([2,2,1]))
###Output
1
1
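###Markdown
The follow-up asks for linear time without extra memory; XOR-ing all elements cancels the pairs and leaves the single value. A sketch (not part of the original notebook):
###Code
from functools import reduce
from operator import xor

def single_number(nums):
    return reduce(xor, nums)

print(single_number([4, 1, 2, 1, 2]))   # 4
###Output
_____no_output_____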
###Markdown
Single Number IIGiven an integer array nums where every element appears three times except for one, which appears exactly once. Find the single element and return it.Example 1:Input: nums = [[2,2,3,2]] | Output: 3Example 2:Input: nums = [[0,1,0,1,0,1,99]] | Output: 99
###Code
# Method 1
def singleNumber_2(nums):
for i in nums:
if nums.count(i) == 1:
output = i
return output
print(f'Method 1 output: {singleNumber_2([2,2,1])}')
###Output
Method 1 output: 1
###Markdown
Two Sum II - Input array is sorted Given an array of integers numbers that is already sorted in ascending order, find two numbers such that they add up to a specific target number.Return the indices of the two numbers (1-indexed) as an integer array answer of size 2, where 1 <= answer[0] < answer[1] <= numbers.length.You may assume that each input would have exactly one solution and you may not use the same element twice.Example 1:Input: numbers = [[2,7,11,15]], target = 9Output: [[1,2]]Explanation: The sum of 2 and 7 is 9. Therefore index1 = 1, index2 = 2.Example 2:Input: numbers = [[2,3,4]], target = 6Output: [[1,3]]Example 3:Input: numbers = [[-1,0]], target = -1Output: [[1,2]]
###Code
def two_sumII(lt,tar):
for i in range(len(lt)):
for j in range(i+1,len(lt)):
tot = lt[i] + lt[j]
if tar == tot:
print([i+1,j+1])
# print(c)
two_sumII([2,7,11,15],9) # Solves 18/19 cases in leet code.
###Output
[1, 2]
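###Markdown
Since the input is already sorted, a two-pointer scan finds the pair in O(n), which may be why the quadratic version above only clears 18 of 19 LeetCode cases. A sketch (an addition, not from the original notebook):
###Code
def two_sum_sorted(numbers, target):
    lo, hi = 0, len(numbers) - 1
    while lo < hi:
        s = numbers[lo] + numbers[hi]
        if s == target:
            return [lo + 1, hi + 1]   # 1-indexed positions
        if s < target:
            lo += 1
        else:
            hi -= 1
    return []

print(two_sum_sorted([2, 7, 11, 15], 9))   # [1, 2]
###Output
_____no_output_____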
###Markdown
Yet to see this XOR Operation in an ArrayGiven an integer n and an integer start.Define an array nums where nums[[i]] = start + 2*i (0-indexed) and n == nums.length.Return the bitwise XOR of all elements of nums. Example 1:Input: n = 5, start = 0Output: 8Explanation: Array nums is equal to [[0, 2, 4, 6, 8]] where (0 ^ 2 ^ 4 ^ 6 ^ 8) = 8.Where "^" corresponds to bitwise XOR operator.Example 2:Input: n = 4, start = 3Output: 8Explanation: Array nums is equal to [[3, 5, 7, 9]] where (3 ^ 5 ^ 7 ^ 9) = 8.Example 3:Input: n = 1, start = 7Output: 7Example 4:Input: n = 10, start = 5Output: 2
###Code
def xor(n,start):
pointer = 0
for i in range(n):
pointer ^= (start + 2*i)
return pointer
xor(5,0)
###Output
_____no_output_____
###Markdown
Buddy StringsGiven two strings A and B of lowercase letters, return true if you can swap two letters in A so the result is equal to B, otherwise, return false.Swapping letters is defined as taking two indices i and j (0-indexed) such that i != j and swapping the characters at A[[i]] and A[[j]]. For example, swapping at indices 0 and 2 in "abcd" results in "cbad". Example 1:Input: A = "ab", B = "ba"Output: trueExplanation: You can swap A[[0]] = 'a' and A[[1]] = 'b' to get "ba", which is equal to B.Example 2:Input: A = "ab", B = "ab"Output: falseExplanation: The only letters you can swap are A[[0]] = 'a' and A[[1]] = 'b', which results in "ba" != B.Example 3:Input: A = "aa", B = "aa"Output: trueExplanation: You can swap A[[0]] = 'a' and A[[1]] = 'a' to get "aa", which is equal to B.Example 4:Input: A = "aaaaaaabc", B = "aaaaaaacb"Output: trueExample 5:Input: A = "", B = "aa"Output: false
###Code
# Method 1 (20/29 Test case pass)
def buddy(A,B):
if A[::-1] == B:
x = True
else:
x = False
return x
# print(buddy('abab','abab'))
# Method 2 (19/29 Test case pass)
def l(a,b):
# Case 1
if ((len(a) and len(b)) >= 4) and ((len(a) or len(b)) < 20000):
z = []
x = ''
w = int(len(a)/2)
a1 = a[:w:]
a2 = a[len(a1):]
for i in a2:
z.append(i)
eve = z[0:len(z):2]
odd = z[1:len(z)]
for j in zip(eve,odd):
for k in j:
x += k
y = a1 + x
if y == b:
out = True
else:
out = False
# Case 2
elif (a == "") and (b == ""):
out = False
# Case 3
else:
if a[::-1] == b:
out = True
else:
out = False
return out
l("abab","abab")
###Output
_____no_output_____
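###Markdown
Both attempts above miss several edge cases (equal strings need a repeated letter to swap, and exactly two mismatched positions must swap into each other). A compact check covering those cases (a sketch, not part of the original notebook):
###Code
def buddy_strings(a, b):
    if len(a) != len(b):
        return False
    if a == b:
        return len(set(a)) < len(a)            # need a repeated letter to swap
    diff = [(x, y) for x, y in zip(a, b) if x != y]
    return len(diff) == 2 and diff[0] == diff[1][::-1]

print(buddy_strings("ab", "ba"))               # True
print(buddy_strings("ab", "ab"))               # False
print(buddy_strings("aa", "aa"))               # True
print(buddy_strings("aaaaaaabc", "aaaaaaacb")) # True
print(buddy_strings("", "aa"))                 # False
###Output
_____no_output_____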
###Markdown
Squares of a Sorted ArrayGiven an integer array nums sorted in non-decreasing order, return an array of the squares of each number sorted in non-decreasing order.Example 1:Input: nums = [[-4,-1,0,3,10]]Output: [[0,1,9,16,100]]Explanation: After squaring, the array becomes [[16,1,0,9,100]]. After sorting, it becomes [[0,1,9,16,100]].Example 2:Input: nums = [[-7,-3,2,3,11]]Output: [[4,9,9,49,121]]
###Code
# Method 1
def sq_array(s):
x = [i ** 2 for i in s]
x.sort()
return x
print(f'Method 1 output: {sq_array([-4,-1,0,3,10])}')
# Method 2
def sq2_array(s):
return sorted([i*i for i in s])
print(f'Method 2 output: {sq2_array([-4,-1,0,3,10])}')
###Output
Method 1 output: [0, 1, 9, 16, 100]
Method 2 output: [0, 1, 9, 16, 100]
###Markdown
Number of 1 BitsWrite a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).Note:Note that in some languages such as Java, there is no unsigned integer type. In this case, the input will be given as a signed integer type. It should not affect your implementation, as the integer's internal binary representation is the same, whether it is signed or unsigned.In Java, the compiler represents the signed integers using 2's complement notation. Therefore, in Example 3, the input represents the signed integer. -3. Example 1:Input: n = 00000000000000000000000000001011;Output: 3Explanation: The input binary string 00000000000000000000000000001011 has a total of three '1' bits.Example 2:Input: n = 00000000000000000000000010000000;Output: 1Explanation: The input binary string 00000000000000000000000010000000 has a total of one '1' bit.Example 3:Input: n = 11111111111111111111111111111101;Output: 31Explanation: The input binary string 11111111111111111111111111111101 has a total of thirty one '1' bits.
###Code
def count1(s):
cnt = 0
for i in s: # In leet code IDE we need to change the s as bin(s) to convert input to binary number.
if '1' in i:
cnt += 1
return cnt
count1('00000000000000000000000000001011')
###Output
_____no_output_____
###Markdown
Count PrimesCount the number of prime numbers less than a non-negative number, n.Example 1:Input: n = 10Output: 4Explanation: There are 4 prime numbers less than 10, they are 2, 3, 5, 7.Example 2:Input: n = 0Output: 0Example 3:Input: n = 1Output: 0
###Code
# Method 1
def primeCount(n):
primeList = []
    for i in range(2, n):
        for j in range(2, int(i ** 0.5) + 1):
            if (i % j) == 0: break
else:
primeList.append(i)
return len(primeList)
print(f'Method 1 output: {primeCount(499979)}')
def primeCoun(n):
primeList = []
for i in range(2,int(n**0.5)):
for j in range(2,i,2):
primeList.append(i)
return len(primeList)
primeCoun(499979)
###Output
_____no_output_____
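###Markdown
Trial division gets slow for large n, and the second scratch attempt above does not actually test primality. A Sieve of Eratosthenes counts the primes below n in O(n log log n) -- a sketch (an addition, not from the original notebook):
###Code
def count_primes(n):
    if n < 3:
        return 0
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i:n:i] = [False] * len(range(i * i, n, i))   # mark multiples of i
    return sum(sieve)

print(count_primes(10))   # 4
###Output
_____no_output_____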
###Markdown
Number of Segments in a StringYou are given a string s, return the number of segments in the string. A segment is defined to be a contiguous sequence of non-space characters.Example 1:Input: s = "Hello, my name is John"Output: 5Explanation: The five segments are [["Hello,", "my", "name", "is", "John"]]Example 2:Input: s = "Hello"Output: 1Example 3:Input: s = "love live! mu'sic forever"Output: 4Example 4:Input: s = ""Output: 0
###Code
def segment(s):
return len(s.split())
segment('Aneruth Mohanasundaram')
###Output
_____no_output_____
###Markdown
Power of ThreeGiven an integer n, return true if it is a power of three. Otherwise, return false.An integer n is a power of three, if there exists an integer x such that n == $3^{x}$. Example 1:Input: n = 27Output: trueExample 2:Input: n = 0Output: falseExample 3:Input: n = 9Output: trueExample 4:Input: n = 45Output: false
###Code
# Method 1 (Leet code 14059 / 21038 test cases passed.)
def power_three(x):
if x > 1:
if x % 3 == 0:
out = True
else:
out = False
return out
print(f'Method 1 output: {power_three(45)}')
# Method 2
from math import *
def isPowerOfThree(n):
if (log10(n) / log10(3)) % 1 == 0:
out = True
else:
out = False
return out
print(f'Method 2 output: {isPowerOfThree(125)}')
###Output
Method 1 output: True
Method 2 output: False
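###Markdown
Method 1 only tests divisibility by 3 (hence `True` for 45), and the logarithm in Method 2 can suffer from floating-point error on some inputs. Repeated division avoids both issues -- a sketch (not part of the original notebook):
###Code
def is_power_of_three(n):
    if n < 1:
        return False
    while n % 3 == 0:
        n //= 3
    return n == 1

print([is_power_of_three(n) for n in (27, 0, 9, 45)])   # [True, False, True, False]
###Output
_____no_output_____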
###Markdown
Largest Rectangle in HistogramGiven n non-negative integers representing the histogram's bar height where the width of each bar is 1, find the area of largest rectangle in the histogram.Example 1:Input: heights = [[2,1,5,6,2,3]]Output: 10Example 2:Input: heights = [[2,4]]Output: 4
###Code
# Method 1
def rectangle_histogram(h):
h.sort()
if len(h) == 1: out = h[0]
elif len(h) <= 2:
if h[0] < h[1]:out = max(h)
elif h[0] == h[1]:out = sum(h)
else:
first,second = h[-1], h[-2]
third = first - second
out = second + (first - third)
return out
# rectangle_histogram([2,1,5,6,2,3])
output = [[2,1,5,6,2,3],[2,4],[1,1],[0,9],[4,2]]
for i in output:
print(f'Method 1 output is {rectangle_histogram(i)}')
###Output
Method 1 output is 10
Method 1 output is 4
Method 1 output is 2
Method 1 output is 9
Method 1 output is 4
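###Markdown
The method above sorts the heights, so it only handles the listed test cases; here is a sketch of the standard stack-based O(n) approach (largestRectangleArea is an illustrative name, not the original notebook's code).
###Code
def largestRectangleArea(heights):
    stack = [-1]                                   # sentinel index
    best = 0
    for i, h in enumerate(heights):
        while stack[-1] != -1 and heights[stack[-1]] >= h:
            height = heights[stack.pop()]          # this bar's rectangle ends just before i
            width = i - stack[-1] - 1
            best = max(best, height * width)
        stack.append(i)
    n = len(heights)
    while stack[-1] != -1:                         # remaining bars extend to the right edge
        height = heights[stack.pop()]
        width = n - stack[-1] - 1
        best = max(best, height * width)
    return best
largestRectangleArea([2,1,5,6,2,3]), largestRectangleArea([2,4])
###Output
_____no_output_____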
###Markdown
Binary SearchGiven a sorted (in ascending order) integer array nums of n elements and a target value, write a function to search target in nums. If target exists, then return its index, otherwise return -1.Example 1:Input: nums = [[-1,0,3,5,9,12]], target = 9Output: 4Explanation: 9 exists in nums and its index is 4Example 2:Input: nums = [[-1,0,3,5,9,12]], target = 2Output: -1Explanation: 2 does not exist in nums so return -1
###Code
num12 = [-1,2,4,7,9]
target = 9
out = num12.index(target) if target in num12 else -1   # linear lookup; a true binary search is sketched below
out
###Output
_____no_output_____
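###Markdown
Since the problem is literally named Binary Search, here is a minimal O(log n) sketch for reference (binary_search is an assumed name, not LeetCode's required signature).
###Code
def binary_search(nums, target):
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1          # target can only be in the right half
        else:
            hi = mid - 1          # target can only be in the left half
    return -1
binary_search([-1,0,3,5,9,12], 9), binary_search([-1,0,3,5,9,12], 2)
###Output
_____no_output_____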
###Markdown
Add Digits Given a non-negative integer num, repeatedly add all its digits until the result has only one digit.Example:Input: 38Output: 2 Explanation: The process is like: 3 + 8 = 11, 1 + 1 = 2. Since 2 has only one digit, return it.
###Code
# Method 1
def add_digi(n):
x = [int(i) for i in str(n)]
sum_x = sum(x)
x1 = [int(i) for i in str(sum_x)]
sum_x1 = sum(x1)
zx = [int(i) for i in str(sum_x1)]
return sum(zx)
print(f'Value from method 1 is {add_digi(199)}')
# Method 2
def addDigits(num: int):
if num == 0:
return 0
if num % 9 == 0:
return 9
return num % 9
print(f'Value from method 2 is {addDigits(199)}')
###Output
Value from method 1 is 1
Value from method 2 is 1
###Markdown
Yet to see this Super PowYour task is to calculate $a^{b}$ mod 1337 where a is a positive integer and b is an extremely large positive integer given in the form of an array.Example 1:Input: a = 2, b = [[3]]Output: 8Example 2:Input: a = 2, b = [[1,0]]Output: 1024Example 3:Input: a = 1, b = [[4,3,3,8,5,2]]Output: 1Example 4:Input: a = 2147483647, b = [[2,0,0]]Output: 1198
###Code
# Method 1 (builds the full power first, so it is not advisable for very long digit lists)
def super_pow(b,n):
    x = int(''.join(map(str,n)))
    return pow(b,x) % 1337
print(f'Method 1 output: {super_pow(7,[1,7])}')
# Method 2: join the digit list into one integer exponent, then let Python's
# built-in three-argument pow do fast modular exponentiation (handles the huge list below)
def boo(num, exp_digits):
    exponent = int(''.join(map(str, exp_digits)))
    return pow(num, exponent, 1337)
boo(78267,
[1,7,7,4,3,1,7,0,1,4,4,9,2,8,5,0,0,9,3,1,2,5,9,6,0,9,9,0,9,6,0,5,3,7,9,8,8,9,8,2,5,4,1,9,3,8,0,5,9,5,6,1,1,8,9,3,7,8,5,8,5,5,3,0,4,3,1,5,4,1,7,9,6,8,8,9,8,0,6,7,8,3,1,1,1,0,6,8,1,1,6,6,9,1,8,5,6,9,0,0,1,7,1,7,7,2,8,5,4,4,5,2,9,6,5,0,8,1,0,9,5,8,7,6,0,6,1,8,7,2,9,8,1,0,7,9,4,7,6,9,2,3,1,3,9,9,6,8,0,8,9,7,7,7,3,9,5,5,7,4,9,8,3,0,1,2,1,5,0,8,4,4,3,8,9,3,7,5,3,9,4,4,9,3,3,2,4,8,9,3,3,8,2,8,1,3,2,2,8,4,2,5,0,6,3,0,9,0,5,4,1,1,8,0,4,2,5,8,2,4,2,7,5,4,7,6,9,0,8,9,6,1,4,7,7,9,7,8,1,4,4,3,6,4,5,2,6,0,1,1,5,3,8,0,9,8,8,0,0,6,1,6,9,6,5,8,7,4,8,9,9,2,4,7,7,9,9,5,2,2,6,9,7,7,9,8,5,9,8,5,5,0,3,5,8,9,5,7,3,4,6,4,6,2,3,5,2,3,1,4,5,9,3,3,6,4,1,3,3,2,0,0,4,4,7,2,3,3,9,8,7,8,5,5,0,8,3,4,1,4,0,9,5,5,4,4,9,7,7,4,1,8,7,5,2,4,9,7,9,1,7,8,9,2,4,1,1,7,6,4,3,6,5,0,2,1,4,3,9,2,0,0,2,9,8,4,5,7,3,5,8,2,3,9,5,9,1,8,8,9,2,3,7,0,4,1,1,8,7,0,2,7,3,4,6,1,0,3,8,5,8,9,8,4,8,3,5,1,1,4,2,5,9,0,5,3,1,7,4,8,9,6,7,2,3,5,5,3,9,6,9,9,5,7,3,5,2,9,9,5,5,1,0,6,3,8,0,5,5,6,5,6,4,5,1,7,0,6,3,9,4,4,9,1,3,4,7,7,5,8,2,0,9,2,7,3,0,9,0,7,7,7,4,1,2,5,1,3,3,6,4,8,2,5,9,5,0,8,2,5,6,4,8,8,8,7,3,1,8,5,0,5,2,4,8,5,1,1,0,7,9,6,5,1,2,6,6,4,7,0,9,5,6,9,3,7,8,8,8,6,5,8,3,8,5,4,5,8,5,7,5,7,3,2,8,7,1,7,1,8,7,3,3,6,2,9,3,3,9,3,1,5,1,5,5,8,1,2,7,8,9,2,5,4,5,4,2,6,1,3,6,0,6,9,6,1,0,1,4,0,4,5,5,8,2,2,6,3,4,3,4,3,8,9,7,5,5,9,1,8,5,9,9,1,8,7,2,1,1,8,1,5,6,8,5,8,0,2,4,4,7,8,9,5,9,8,0,5,0,3,5,5,2,6,8,3,4,1,4,7,1,7,2,7,5,8,8,7,2,2,3,9,2,2,7,3,2,9,0,2,3,6,9,7,2,8,0,8,1,6,5,2,3,0,2,0,0,0,9,2,2,2,3,6,6,0,9,1,0,0,3,5,8,3,2,0,3,5,1,4,1,6,8,7,6,0,9,8,0,1,0,4,5,6,0,2,8,2,5,0,2,8,5,2,3,0,2,6,7,3,0,0,2,1,9,0,1,9,9,2,0,1,6,7,7,9,9,6,1,4,8,5,5,6,7,0,6,1,7,3,5,9,3,9,0,5,9,2,4,8,6,6,2,2,3,9,3,5,7,4,1,6,9,8,2,6,9,0,0,8,5,7,7,0,6,0,5,7,4,9,6,0,7,8,4,3,9,8,8,7,4,1,5,6,0,9,4,1,9,4,9,4,1,8,6,7,8,2,5,2,3,3,4,3,3,1,6,4,1,6,1,5,7,8,1,9,7,6,0,8,0,1,4,4,0,1,1,8,3,8,3,8,3,9,1,6,0,7,1,3,3,4,9,3,5,2,4,2,0,7,3,3,8,7,7,8,8,0,9,3,1,2,2,4,3,3,3,6,1,6,9,6,2,0,1,7,5,6,2,5,3,5,0,3,2,7,2,3,0,3,6,1,7,8,7,0,4,0,6,7,6,6,3,9,8,5,8,3,3,0,9,6,7,1,9,2,1,3,5,1,6,3,4,3,4,1,6,8,4,2,5])
###Output
_____no_output_____
###Markdown
Transpose MatrixGiven a 2D integer array matrix, return the transpose of matrix.The transpose of a matrix is the matrix flipped over its main diagonal, switching the matrix's row and column indices.Example 1:Input: matrix = [[[1,2,3]],[[4,5,6]],[[7,8,9]]]Output: [[[1,4,7]],[[2,5,8]],[[3,6,9]]]Example 2:Input: matrix = [[[1,2,3]],[[4,5,6]]]Output: [[[1,4]],[[2,5]],[[3,6]]]
###Code
# Method 1 Using numpy
import numpy as np
def transpose(matrix):
return np.transpose(matrix)
print('Output from method 1\n'+ f'{transpose([[1,2,3],[4,5,6],[7,8,9]])}')
# Method 2
def transpose2(matrix):
zx = list(map(list,zip(*matrix)))
return zx
print('Output from method 2\n' + f'{transpose2([[1,2,3],[4,5,6],[7,8,9]])}')
###Output
Output from method 1
[[1 4 7]
[2 5 8]
[3 6 9]]
Output from method 2
[[1, 4, 7], [2, 5, 8], [3, 6, 9]]
###Markdown
Reshape the MatrixIn MATLAB, there is a very useful function called 'reshape', which can reshape a matrix into a new one with different size but keep its original data.You're given a matrix represented by a two-dimensional array, and two positive integers r and c representing the row number and column number of the wanted reshaped matrix, respectively.The reshaped matrix need to be filled with all the elements of the original matrix in the same row-traversing order as they were.If the 'reshape' operation with given parameters is possible and legal, output the new reshaped matrix; Otherwise, output the original matrix.Example 1:Input: nums = [[[1,2]], [[3,4]]]r = 1, c = 4Output: [[1,2,3,4]]Explanation:The row-traversing of nums is [[1,2,3,4]]. The new reshaped matrix is a 1 * 4 matrix, fill it row by row by using the previous list.Example 2:Input: nums = [[[1,2]], [[3,4]]]r = 2, c = 4Output: [[[1,2]], [[3,4]]]Explanation:There is no way to reshape a 2 * 2 matrix to a 2 * 4 matrix. So output the original matrix.
###Code
# Method 1 using numpy
def matrix_reshape(mat,r,c):
return np.reshape(mat,(r,c))
matrix_reshape([[1,2], [3,4]],1,4)
###Output
_____no_output_____
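###Markdown
np.reshape raises an error when the new shape does not fit, but the problem asks for the original matrix in that case; a small pure-Python sketch with that fallback (matrix_reshape_plain is an assumed name).
###Code
def matrix_reshape_plain(mat, r, c):
    flat = [x for row in mat for x in row]      # row-major flatten
    if r * c != len(flat):                      # reshape impossible: return the original matrix
        return mat
    return [flat[i*c:(i+1)*c] for i in range(r)]
matrix_reshape_plain([[1,2],[3,4]], 1, 4), matrix_reshape_plain([[1,2],[3,4]], 2, 4)
###Output
_____no_output_____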
###Markdown
Array Partition IGiven an integer array nums of 2n integers, group these integers into n pairs ($a_{1}$, $b_{1}$), ($a_{2}$, $b_{2}$), ..., ($a_{n}$, $b_{n}$) such that the sum of min($a_{i}$,$b_{i}$) for all i is maximized. Return the maximized sum. Example 1:Input: nums = [[1,4,3,2]]Output: 4Explanation: All possible pairings (ignoring the ordering of elements) are:1. (1, 4), (2, 3) -> min(1, 4) + min(2, 3) = 1 + 2 = 32. (1, 3), (2, 4) -> min(1, 3) + min(2, 4) = 1 + 2 = 33. (1, 2), (3, 4) -> min(1, 2) + min(3, 4) = 1 + 3 = 4So the maximum possible sum is 4.Example 2:Input: nums = [[6,2,6,5,1,2]]Output: 9Explanation: The optimal pairing is (2, 1), (2, 5), (6, 6). min(2, 1) + min(2, 5) + min(6, 6) = 1 + 2 + 6 = 9.
###Code
def ary_part(n):
n.sort()
return sum(n[::2])
ary_part([6,2,6,5,1,2])
###Output
_____no_output_____
###Markdown
Student Attendance Record IYou are given a string representing an attendance record for a student. The record only contains the following three characters:1. 'A' : Absent.2. 'L' : Late.3. 'P' : Present.A student could be rewarded if his attendance record doesn't contain more than one 'A' (absent) or more than two continuous 'L' (late).You need to return whether the student could be rewarded according to his attendance record.Example 1:Input: "PPALLP"Output: TrueExample 2:Input: "PPALLL"Output: False
###Code
def check_record(s): # (Leet Code 65 / 113 test cases passed.)
c = s.strip()
if c.count('A') > 1 or ('LLL') in s:
out = False
else:
out = True
return out
check_record("PPALLP")
###Output
_____no_output_____
###Markdown
Yet to see this Longest Common PrefixWrite a function to find the longest common prefix string amongst an array of strings.If there is no common prefix, return an empty string "".Example 1:Input: strs = [["flower","flow","flight"]]Output: "fl"Example 2:Input: strs = [["dog","racecar","car"]]Output: ""Explanation: There is no common prefix among the input strings.
###Code
def LCP1(s):
for i in range(len(s[0])): # looping through the list and checking each element present
        c = s[0][i] # take the i-th character of the first string as the prefix candidate
for j in range(1,len(s)): # iterates with the length of string with starting index value as 1
if i == len(s[j]) or s[j][i] != c: # if first element is not same as the next one then it breaks
return s[0][:i]
return s[0]
print(f'Method 1 output is: {LCP1(["flower","flow","flight"])}')
def LCP2(s):
if len(s) == 0: return ""
pref = s[0] # Making the prefix as "dog"
for i in range(1,len(s)): # looping through the list and checking
while s[i].find(pref)!= 0: # It checks each list element with find(prefix) values and if its not equal to 0 then it updates prefix
pref = pref[:len(pref)-1] # it updates the prefix and checks it with other list values.
return pref
print(f'Method 2 output is: {LCP2(["dog","racecar","car"])}')
###Output
Method 1 output is: fl
Method 2 output is:
###Markdown
Factorial Trailing ZeroesGiven an integer n, return the number of trailing zeroes in n!.Follow up: Could you write a solution that works in logarithmic time complexity?Example 1:Input: n = 3Output: 0Explanation: 3! = 6, no trailing zero.Example 2:Input: n = 5Output: 1Explanation: 5! = 120, one trailing zero.Example 3:Input: n = 0Output: 0
###Code
def trailingZeroes(n: int) -> int:
    out = 0
    if n > 0:
        fact = 1
        for i in range(1, n + 1):
            fact = fact * i
        for j in str(fact):        # scan the digits of n! left to right
            if j == '0':
                out += 1
            else:
                out = 0            # any non-zero digit resets the run, so out ends as the trailing-zero count
    return out
trailingZeroes(10)
###Output
_____no_output_____
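###Markdown
For the logarithmic-time follow-up, count factors of 5 instead of building n! (a sketch, not part of the original notebook; trailingZeroes_fast is an assumed name): every trailing zero comes from a factor of 10 = 2 * 5, and factors of 5 are the scarce ones.
###Code
def trailingZeroes_fast(n):
    count, power = 0, 5
    while power <= n:
        count += n // power    # multiples of 5, then 25, then 125, ... each contribute one more factor of 5
        power *= 5
    return count
[trailingZeroes_fast(x) for x in (3, 5, 0, 10, 100)]
###Output
_____no_output_____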
###Markdown
Number ComplementGiven a positive integer num, output its complement number. The complement strategy is to flip the bits of its binary representation.Example 1:Input: num = 5Output: 2Explanation: The binary representation of 5 is 101 (no leading zero bits), and its complement is 010. So you need to output 2.Example 2:Input: num = 1Output: 0Explanation: The binary representation of 1 is 1 (no leading zero bits), and its complement is 0. So you need to output 0.
###Code
def complement(num):
    z = bin(num).replace('0b','')          # binary string without the '0b' prefix
    return int('1'*len(z),2) - num         # subtracting from the all-ones mask flips every bit
complement(5)
###Output
_____no_output_____
###Markdown
SubsetsGiven an integer array nums of unique elements, return all possible subsets (the power set).The solution set must not contain duplicate subsets. Return the solution in any order. Example 1:Input: nums = [[1,2,3]]Output: [[],[[1]],[[2]],[[1,2]],[[3]],[[1,3]],[[2,3]],[[1,2,3]]]Example 2:Input: nums = [[0]]Output: [[],[[0]]]
###Code
# Method 1 okish answer (82%)
def powerset(s):
x = len(s)
ax = []
for i in range(1 << x):
        out = [s[j] for j in range(x) if (i & (1 << j))] # bit j of i decides whether s[j] belongs to this subset; i runs over all 2^x bit masks
ax.append(out)
return ax
print(f'Method 1 output is :{powerset([4])}')
# Method 2 (Best 99.80%)
def subsets(nums):
from itertools import combinations
res = [[]]
res.extend([j for i in range(0,len(nums)) for j in list(map(list,combinations(nums,i+1)))])
return res
print(f'Method 2 output is :{subsets([4])}')
###Output
Method 1 output is :[[], [4]]
Method 2 output is :[[], [4]]
###Markdown
First Missing PositiveGiven an unsorted integer array nums, find the smallest missing positive integer.Example 1:Input: nums = [[1,2,0]] |Output: 3Example 2:Input: nums = [[3,4,-1,1]] |Output: 2Example 3:Input: nums = [[7,8,9,11,12]] |Output: 1
###Code
# Method 1
# def missing_element(nums):
# if nums == []:
# out = 1
# max_element = max(nums)
# value = []
# for j in range(1,max_element):
# if j not in nums:
# value.append(j)
# out = value[0]
# elif len(nums) <= 3:
# out = max_element + 1
# return out
# print(f'Output for method 1: {missing_element([0])}')
# Method 2
def missing_element2(nu):
max_element = max(nu)
empty = ''
for j in range(1,max_element):
if j not in nu:
empty = j
break
if empty == '':
empty = max_element + 1
elif not nu:
empty = 0
return empty
print(f'Output for method 2: {missing_element2([3,4,-1,1])}')
# Method 3
def missing_element3(nu):
max_e = None
if len(nu) == 0:
empty = 0
for i in nu:
if (max_e is None or i > max_e): max_e = i
empty = ''
for j in range(1,max_e):
if j not in nu:
empty = j
break
if empty == '':
empty = max_e + 1
return empty
print(f'Output for method 3: {missing_element3([0])}')
###Output
Output for method 3: 1
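###Markdown
A simpler sketch for reference (not part of the original notebook): collect the values in a set and walk up from 1. This uses O(n) extra space; the O(1)-space variant of the problem swaps values into their matching indices instead.
###Code
def first_missing_positive(nums):
    seen = set(nums)
    k = 1
    while k in seen:      # first positive integer not present
        k += 1
    return k
[first_missing_positive(a) for a in ([1,2,0], [3,4,-1,1], [7,8,9,11,12])]
###Output
_____no_output_____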
###Markdown
Yet to see this Edit DistanceGiven two strings word1 and word2, return the minimum number of operations required to convert word1 to word2.You have the following three operations permitted on a word:1.Insert a character2.Delete a character3.Replace a characterExample 1:Input: word1 = "horse", word2 = "ros" | Output: 3Explanation: horse -> rorse (replace 'h' with 'r')rorse -> rose (remove 'r')rose -> ros (remove 'e')Example 2:Input: word1 = "intention", word2 = "execution" | Output: 5Explanation: intention -> inention (remove 't')inention -> enention (replace 'i' with 'e')enention -> exention (replace 'n' with 'x')exention -> exection (replace 'n' with 'c')exection -> execution (insert 'u')
###Code
# Method 1
def edit_dist(word1,word2): # (Leet Code 190 / 1146 test cases passed.)
w1,w2 = set(word1),set(word2)
out = 0
if word1 == '' or word2 == '':
out = len(word1) or len(word2)
elif word1 == word2:
out = 0
elif (len(word1) or len(word2)) == 1:
out = len(w1 or w2)
else:
out1 = w1 & w2
out = len(out1)
return out
print(f'Output from method 1: {edit_dist("ab","bc")}')
# Method 2: classic prefix DP, memoised with lru_cache
from functools import lru_cache
def minDistance(s1: str, s2: str) -> int:
    @lru_cache(maxsize=None)
    def f(i, j):
        if i == 0 and j == 0: return 0
        if i == 0 or j == 0: return i or j
        if s1[i - 1] == s2[j - 1]:
            return f(i - 1, j - 1)
        return min(1 + f(i, j - 1), 1 + f(i - 1, j), 1 + f(i - 1, j - 1))
    m, n = len(s1), len(s2)
    return f(m, n)
# print(f'Output from method 2: {minDistance("ab","bc")}')
###Output
Output from method 1: 1
###Markdown
Yet to see this Interleaving StringGiven strings s1, s2, and s3, find whether s3 is formed by an interleaving of s1 and s2.An interleaving of two strings s and t is a configuration where they are divided into non-empty substrings such that:s = s1 + s2 + ... + snt = t1 + t2 + ... + tm|n - m| <= 1The interleaving is s1 + t1 + s2 + t2 + s3 + t3 + ... or t1 + s1 + t2 + s2 + t3 + s3 + ...Note: a + b is the concatenation of strings a and b.Example 1:Input: s1 = "aabcc", s2 = "dbbca", s3 = "aadbbcbcac" | Output: trueExample 2:Input: s1 = "aabcc", s2 = "dbbca", s3 = "aadbbbaccc" | Output: falseExample 3:Input: s1 = "", s2 = "", s3 = "" | Output: true
###Code
# Partial Output
def find_foo(s1,s2,s3):
    if len(s1) == len(s3) and s2 == '':
out = True
if len(s1) == len(s2):
s11,s12,s13 = s1[:2],s1[2:4],s1[4]
s21,s22,s23 = s2[:2],s2[2:4],s2[4]
String = s11 + s21 + s12 + s22 + s23 + s13
if String == s3:
out = True
else:
out = False
return out
print(f'Test Case 1 : {find_foo("aabcc","dbbca","aadbbcbcac")}')
print(f'Test Case 2 : {find_foo("aabcc","dbbca","aadbbbaccc")}')
# print(f'Test Case 3 : {find_foo("","","")}')
###Output
Test Case 1 : True
Test Case 2 : False
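###Markdown
The partial attempt above only covers the two sample cases; here is a sketch of the standard memoised DP (isInterleave is an assumed name, not the original notebook's code).
###Code
from functools import lru_cache
def isInterleave(s1, s2, s3):
    if len(s1) + len(s2) != len(s3):
        return False
    @lru_cache(maxsize=None)
    def dp(i, j):                               # can s3[i+j:] be built from s1[i:] and s2[j:]?
        if i == len(s1) and j == len(s2):
            return True
        take1 = i < len(s1) and s1[i] == s3[i + j] and dp(i + 1, j)
        take2 = j < len(s2) and s2[j] == s3[i + j] and dp(i, j + 1)
        return take1 or take2
    return dp(0, 0)
isInterleave("aabcc","dbbca","aadbbcbcac"), isInterleave("aabcc","dbbca","aadbbbaccc"), isInterleave("","","")
###Output
_____no_output_____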
###Markdown
Yet to see this Spiral MatrixGiven an m x n matrix, return all elements of the matrix in spiral order.Example 1:Input: matrix = [[[1,2,3]],[[4,5,6]],[[7,8,9]]]Output: [[1,2,3,6,9,8,7,4,5]]Example 2:Input: matrix = [[[1,2,3,4]],[[5,6,7,8]],[[9,10,11,12]]]Output: [[1,2,3,4,8,12,11,10,9,5,6,7]]
###Code
# def spiral_matrix(matrix):
# start_row,end_row,start_col,end_col = 0,len(matrix),0,len(matrix[0])
# out = []
# while start_col < end_col or start_row < end_row:
# # Right
# if start_row < end_row:
# for i in range(start_col,end_col):
# out.extend(matrix[start_row][i])
# start_row += 1
# # Down
# if start_col < end_col:
# for i in range(start_row,end_row):
# out.extend(matrix[i][end_col])
# end_col -= 1
# # left
# if start_row < end_row:
# for i in range(start_col-1,end_col-1,-1):
# out.extend(matrix[end_row][i])
# end_row -= 1
# # Top
# if start_col < end_col:
# for i in range(end_row-1,start_row-1,-1):
# out.extend(matrix[i][start_col])
# start_col += 1
# return out
# print(spiral_matrix([[1,2,3],[4,5,6],[7,8,9]]))
###Output
_____no_output_____
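###Markdown
The commented attempt above has off-by-one indexing on the column and row bounds; a compact sketch that peels the top row and rotates the remainder counter-clockwise (spiral_order is an assumed name; note it consumes its input list).
###Code
def spiral_order(matrix):
    out = []
    while matrix:
        out += matrix.pop(0)                                  # take the top row
        matrix = [list(row) for row in zip(*matrix)][::-1]    # rotate the rest counter-clockwise
    return out
spiral_order([[1,2,3],[4,5,6],[7,8,9]]), spiral_order([[1,2,3,4],[5,6,7,8],[9,10,11,12]])
###Output
_____no_output_____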
###Markdown
Search a 2D MatrixWrite an efficient algorithm that searches for a value in an m x n matrix. This matrix has the following properties:Integers in each row are sorted from left to right.The first integer of each row is greater than the last integer of the previous row.Example 1:Input: matrix = [[[1,3,5,7]],[[10,11,16,20]],[[23,30,34,60]]], target = 3Output: trueExample 2:Input: matrix = [[[1,3,5,7]],[[10,11,16,20]],[[23,30,34,60]]], target = 13Output: false
###Code
# Method 1
def lookupMatrix1(matrix,target):
empty = []
for i in matrix:
for j in i:
empty.append(j)
if target in empty:
out = True
else:
out = False
return out
print(f'Method 1 output is: {lookupMatrix1([[1,3,5,7],[10,11,16,20],[23,30,34,60]],13)}')
# Method 2 (Better complexity compared to previous version)
def lookupMatrix2(matrix,target):
output = [j for i in matrix for j in i]
if target in output: return True
return False
print(f'Method 2 output is: {lookupMatrix2([[1,3,5,7],[10,11,16,20],[23,30,34,60]],13)}')
###Output
Method 1 output is: False
Method 2 output is: False
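###Markdown
Both methods above scan every cell; since the rows chain into one sorted sequence, a binary search over virtual indices is what "efficient" is hinting at (a sketch; searchMatrix is an assumed name).
###Code
def searchMatrix(matrix, target):
    m, n = len(matrix), len(matrix[0])
    lo, hi = 0, m * n - 1                       # virtual indices over the flattened matrix
    while lo <= hi:
        mid = (lo + hi) // 2
        val = matrix[mid // n][mid % n]
        if val == target:
            return True
        if val < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False
searchMatrix([[1,3,5,7],[10,11,16,20],[23,30,34,60]], 3), searchMatrix([[1,3,5,7],[10,11,16,20],[23,30,34,60]], 13)
###Output
_____no_output_____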
###Markdown
Search a 2D Matrix IIWrite an efficient algorithm that searches for a target value in an m x n integer matrix. The matrix has the following properties: - Integers in each row are sorted in ascending from left to right. - Integers in each column are sorted in ascending from top to bottom.Example 1:Input: matrix = [[[1,4,7,11,15]],[[2,5,8,12,19]],[[3,6,9,16,22]],[[10,13,14,17,24]],[[18,21,23,26,30]]], target = 5 | Output: trueExample 2:Input: matrix = [[[1,4,7,11,15]],[[2,5,8,12,19]],[[3,6,9,16,22]],[[10,13,14,17,24]],[[18,21,23,26,30]]], target = 20 | Output: false
###Code
# Method 1 (brute-force scan of every cell)
def lookupMatrixVersion2_1(matrix,target):
    for row in matrix:
        for val in row:
            if val == target: return True
    return False
print(f'Method 1 output: {lookupMatrixVersion2_1([[1,1]],-5)}')
# Method 2 (Okish solution)
def lookupMatrixVersion2_2(matrix,target):
return target in sum(matrix,[])
print(f'Method 2 output: {lookupMatrixVersion2_2([[1,1]],-5)}')
sum([[-1,-1]],[])
###Output
_____no_output_____
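###Markdown
A sketch of the classic staircase search, which exploits both sort orders and runs in O(m + n) (searchMatrixII is an assumed name, not part of the original notebook).
###Code
def searchMatrixII(matrix, target):
    if not matrix or not matrix[0]:
        return False
    r, c = 0, len(matrix[0]) - 1            # start at the top-right corner
    while r < len(matrix) and c >= 0:
        if matrix[r][c] == target:
            return True
        if matrix[r][c] > target:
            c -= 1                          # too big: everything below in this column is bigger too
        else:
            r += 1                          # too small: everything left in this row is smaller too
    return False
grid = [[1,4,7,11,15],[2,5,8,12,19],[3,6,9,16,22],[10,13,14,17,24],[18,21,23,26,30]]
searchMatrixII(grid, 5), searchMatrixII(grid, 20)
###Output
_____no_output_____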
###Markdown
Yet to see this CombinationsGiven two integers n and k, return all possible combinations of k numbers out of 1 ... n.You may return the answer in any order.Example 1:Input: n = 4, k = 2Output:[ [[2,4]], [[3,4]], [[2,3]], [[1,2]], [[1,3]], [[1,4]],]Example 2:Input: n = 1, k = 1Output: [[[1]]]
###Code
def combinations(n,k):
out = []
for i in range(1,n+1):
for j in range(1,k+2):
if i!=j:
if i>j:
out.extend([[j,i]])
elif n == k:
out = [[n]]
return out
combinations(1,1)
n = 4
k = 2
out = []
emp1 = [i for i in range(1,n+1)]
emp2 = [j for j in range(1,k+1)]
# pair = [[a,b] for a in emp1 for b in emp2]
# for a in range(len(emp1)):
# for b in range(len(emp2)):
# # print(emp1[a],emp2[b])
# if emp1[a] != emp2[b]:
# out.extend([[emp1[a],emp2[b]]])
# out
clist = [(i,j) for i in emp1 for j in emp2]
clist
###Output
_____no_output_____
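###Markdown
The scratch work above does not yet produce the required k-element combinations; itertools.combinations gives them directly (a minimal sketch; combine is an assumed name, and the import is aliased so it does not shadow the function defined above).
###Code
from itertools import combinations as it_combinations
def combine(n, k):
    return [list(c) for c in it_combinations(range(1, n + 1), k)]
combine(4, 2), combine(1, 1)
###Output
_____no_output_____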
###Markdown
Reverse Words in a String Given an input string s, reverse the order of the words.A word is defined as a sequence of non-space characters. The words in s will be separated by at least one space.Return a string of the words in reverse order concatenated by a single space.Note that s may contain leading or trailing spaces or multiple spaces between two words. The returned string should only have a single space separating the words. Do not include any extra spaces.Example 1:Input: s = "the sky is blue"Output: "blue is sky the"Example 2:Input: s = " hello world "Output: "world hello"Explanation: Your reversed string should not contain leading or trailing spaces.Example 3:Input: s = "a good example"Output: "example good a"Explanation: You need to reduce multiple spaces between two words to a single space in the reversed string.Example 4:Input: s = " Bob Loves Alice "Output: "Alice Loves Bob"Example 5:Input: s = "Alice does not even like bob"Output: "bob like even not does Alice"
###Code
def rev_string(s):
k = s.split()
zx = k[::-1]
output = ' '.join(zx)
return output
rev_string(" Bob Loves Alice ")
###Output
_____no_output_____
###Markdown
Divide Two IntegersGiven two integers dividend and divisor, divide two integers without using multiplication, division, and mod operator.Return the quotient after dividing dividend by divisor.The integer division should truncate toward zero, which means losing its fractional part. For example, truncate(8.345) = 8 and truncate(-2.7335) = -2.Example 1:Input: dividend = 10, divisor = 3 | Output: 3Explanation: 10/3 = truncate(3.33333..) = 3.Example 2:Input: dividend = 7, divisor = -3 | Output: -2Explanation: 7/-3 = truncate(-2.33333..) = -2.Example 3:Input: dividend = 0, divisor = 1 | Output: 0Example 4:Input: dividend = 1, divisor = 1 | Output: 1
###Code
def int_div(a,b): # (Leet Code 988 / 989 test cases passed.)
return int(a/b)
int_div(10,3)
###Output
_____no_output_____
###Markdown
Yet to se this Remove Duplicate Letters or Smallest-subsequence of Distinct CharactersGiven a string s, remove duplicate letters so that every letter appears once and only once. You must make sure your result is the smallest in lexicographical order among all possible results.Note: This question is the same as 1081: https://leetcode.com/problems/smallest-subsequence-of-distinct-characters/ Example 1:Input: s = "bcabc" | Output: "abc"Example 2:Input: s = "cbacdcbc" | Output: "acdb"
###Code
def delDups(l):
zx = []
for i in l:
if i not in zx:
zx.append(i)
eve_lst = zx[0:len(zx):2]
eve = set(eve_lst)
odd_lst = zx[1:len(zx):2]
odd = set(odd_lst)
ass = eve.union(odd)
return ass
delDups("cbacdcbc")
###Output
_____no_output_____
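###Markdown
delDups above returns a set, so it cannot guarantee the lexicographically smallest order the problem asks for; a sketch of the usual monotonic-stack solution (removeDuplicateLetters is an assumed name).
###Code
def removeDuplicateLetters(s):
    last = {ch: i for i, ch in enumerate(s)}     # last index where each character occurs
    stack = []
    for i, ch in enumerate(s):
        if ch in stack:
            continue
        # pop larger characters that will still appear later; this keeps the result smallest
        while stack and stack[-1] > ch and last[stack[-1]] > i:
            stack.pop()
        stack.append(ch)
    return ''.join(stack)
removeDuplicateLetters("bcabc"), removeDuplicateLetters("cbacdcbc")
###Output
_____no_output_____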
###Markdown
To count $ in a 2D array for a certain range.
###Code
inp = [['$','.','.','.'],['.','.','$','$'],['$','.','$','.'],['.','.','.','.'],['$','.','.','$']]
def count_dollor(aList,start_pos,end_pos):
cout = 0
x = [j for i in aList for j in i]
for k in x[start_pos:end_pos]:
if '$' in k:
cout += 1
return cout
count_dollor(inp,2,15)
###Output
_____no_output_____
###Markdown
Yet to see this Find First and Last Position of Element in Sorted ArrayGiven an array of integers nums sorted in ascending order, find the starting and ending position of a given target value.If target is not found in the array, return [[-1, -1]].Example 1:Input: nums = [[5,7,7,8,8,10]], target = 8 | Output: [[3,4]]Example 2:Input: nums = [[5,7,7,8,8,10]], target = 6 |Output: [[-1,-1]]Example 3:Input: nums = [], target = 0 | Output: [[-1,-1]]
###Code
# Method 1
def searchRange(nums, target): # (Leet Code 42 / 88 test cases passed.)
output = []
if len(nums) == 1 and target in nums:
output.extend([0,0])
elif target in nums:
for i,j in enumerate(nums):
if j == target:
output.append(i)
else:
output.extend([-1,-1])
return output
print(f'Method 1 output: {searchRange([5,7,7,8,8,10],8)}')
# Method 2
def ane(aList,target):
for i in range(len(aList)):
if aList[i] == target:
left = i
break
else :
left = [-1,-1]
for j in range(len(aList)-1,-1,-1):
if aList[j] == target:
right = j
break
return [left,right]
print(f'Method 2 output: {ane([5,7,7,8,8,10],8)}')
###Output
Method 1 output: [3, 4]
Method 2 output: [3, 4]
###Markdown
Majority ElementGiven an array nums of size n, return the majority element.The majority element is the element that appears more than ⌊n / 2⌋ times. You may assume that the majority element always exists in the array. Example 1:Input: nums = [[3,2,3]] |Output: 3Example 2:Input: nums = [[2,2,1,1,1,2,2]] |Output: 2
###Code
# Method 1
# def major_element(aList): # (Leet code 30 / 46 test cases passed.)
# z = [i for i in aList if aList.count(i) > 1]
# return z
# print(f'Output from method 1: {major_element([8,8,7,7,7])}')
# Method 2
def most_common(lst):
return max(set(lst), key=lst.count)
print(f'Output from Method 2: {most_common([8,8,7,7,7])}')
# Method 3
def find(aList):
return sorted(aList)[len(aList)//2]
print(f'Output from Method 3: {find([8,8,7,7,7])}')
# Method 4
def find2(alist):
myDic = dict().fromkeys(list(set(alist)),0)
for i in alist:
if i in myDic: myDic[i] += 1
else: myDic[i] = 0
return max(myDic, key=myDic.get)
print(f'Output from Method 4: {find2([8,8,7,7,7])}')
###Output
Output from Method 2: 7
Output from Method 3: 7
Output from Method 4: 7
###Markdown
Yet to see this Minimum Number of Removals to Make Mountain ArrayYou may recall that an array arr is a mountain array if and only if:arr.length >= 3There exists some index i (0-indexed) with 0 < i < arr.length - 1 such that:arr[[0]] < arr[[1]] < ... < arr[[i - 1]] < arr[[i]]arr[[i]] > arr[[i + 1]] > ... > arr[[arr.length - 1]]Given an integer array nums, return the minimum number of elements to remove to make nums a mountain array.Example 1:Input: nums = [[1,3,1]] | Output: 0Explanation: The array itself is a mountain array so we do not need to remove any elements.Example 2:Input: nums = [[2,1,1,5,6,2,3,1]] | Output: 3Explanation: One solution is to remove the elements at indices 0, 1, and 5, making the array nums = [1,5,6,3,1].Example 3:Input: nums = [[4,3,2,1,1,2,3,1]] |Output: 4Example 4:Input: nums = [[1,2,3,4,4,3,2,1]] | Output: 1
###Code
# l = [2,1,1,5,6,2,3,1]
# for i in range(len(l)-1):
# # print (l[i],l[i+1])
# if l[i]!=l[i+1]:
# b.append(l[i+1])
# org = len(l)
# org2 = len(b)
# o = org - org2
# b
# b = [4,3,2,1,1,2,3,1]
# out = []
# middle = max(b)
# for i in range(len(b)):
# if b[i] < b[i-1]:
# out.extend([b[i]])
# # elif b[i] > b[i-1]:
# # out.extend([b[i]])
# output = len(b) - (len(b) - len(out))
# b = [1,3,1]
# for i,j in enumerate(b):
# if b[i] < b[j] and b[i] > b[j-1]:
# print('hi')
###Output
_____no_output_____
###Markdown
Maximum SubarrayGiven an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.Example 1:Input: nums = [[-2,1,-3,4,-1,2,1,-5,4]] | Output: 6 | Explanation: [[4,-1,2,1]] has the largest sum = 6.Example 2:Input: nums = [[1]] |Output: 1Example 3:Input: nums = [[0]] | Output: 0Example 4:Input: nums = [[-1]] | Output: -1Example 5:Input: nums = [[-100000]] | Output: -100000
###Code
# Method 1
def max_sub1(aList):
if len(aList) <= 2:
for i in aList:
if i == aList[0]:
out = max(aList)
elif i:
out = sum(aList)
else:
ax = list(set(aList))
ax.sort()
ane = ax[len(ax)//2:]
out = sum(ane)
return out
print(f'Method 1 output: {max_sub1([3,1])}')
# Method 2
def max_sub2(nums):
n = len(nums)
dp = [0] * n
dp[0] = nums[0]
for i in range(1,n):
dp[i] = max(dp[i-1] + nums[i], nums[i])
return max(dp)
print(f'Method 2 output: {max_sub2([3,1])}')
# Method 3
def max_sub3(nums):
current = maxSubarray = nums[0]
for i in nums[1:]:
current = max(i,current+i)
maxSubarray = max(maxSubarray,current)
return maxSubarray
print(f'Method 3 output: {max_sub3([3,1])}')
###Output
Method 1 output: 4
Method 2 output: 4
Method 3 output: 4
###Markdown
Maximum Product SubarrayGiven an integer array nums, find the contiguous subarray within an array (containing at least one number) which has the largest product.Example 1:Input: [[2,3,-2,4]] | Output: 6 | Explanation: [[2,3]] has the largest product 6.Example 2:Input: [[-2,0,-1]] | Output: 0 | Explanation: The result cannot be 2, because [[-2,-1]] is not a subarray.
###Code
def sub_prod(lt):
out,max_val,min_val = lt[0],lt[0],lt[0]
for i in range(1,len(lt)):
        if lt[i] < 0:   # a negative factor swaps which running product is the largest
max_val,min_val = min_val,max_val
max_val = max(lt[i],max_val * lt[i])
min_val = min(lt[i],min_val * lt[i])
out = max(out,max_val)
return out
sub_prod([-2,0,-1])
###Output
_____no_output_____
###Markdown
Yet to see this Maximum Product of Three NumbersGiven an integer array nums, find three numbers whose product is maximum and return the maximum product.Example 1:Input: nums = [[1,2,3]] | Output: 6Example 2:Input: nums = [[1,2,3,4]] | Output: 24Example 3:Input: nums = [[-1,-2,-3]] | Output: -6
###Code
# Method 1
def max_prod(nums):
nums.sort() # If our list is not sorted
return max(nums[-1] *nums[0] * nums[1], nums[-1] *nums[-2] * nums[-3])
print(f'Method 1 output: {max_prod([-100,-98,-1,2,3,4])}')
def max_prod2(A):
A.sort()
if A[-1] < 0:prod = (A[-1] * A[-2] * A[-3])
elif A[0] * A[1] > A[-2] * A[-3]:prod = (A[-1] * A[1] * A[0])
else:prod = (A[-1] * A[-2] * A[-3])
return prod
print(f'Method 2 output: {max_prod2([-100,-98,-1,2,3,4])}')
###Output
Method 1 output: 39200
Method 2 output: 39200
###Markdown
Contains DuplicateGiven an array of integers, find if the array contains any duplicates.Your function should return true if any value appears at least twice in the array, and it should return false if every element is distinct.Example 1:Input: [[1,2,3,1]] | Output: trueExample 2:Input: [[1,2,3,4]] | Output: falseExample 3:Input: [[1,1,1,3,3,4,3,2,4,2]] | Output: true
###Code
# Method 1
def contains_duplicate(nums):
    output = False
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):   # compare each pair only once, with i < j
            if nums[i] == nums[j]:
                output = True
    return output
print(f'Method 1 output: {contains_duplicate([1,1,1,3,3,4,3,2,4,2])}')
# Method 2
def dups(nums):
return len(list(set(nums))) != len(nums)
print(f'Method 2 output: {dups([1,1,1,3,3,4,3,2,4,2])}')
# Method 3
def dups3(nums):
    for i in nums:
        if nums.count(i) >= 2:
            return True          # stop at the first repeated value
    return False
print(f'Method 3 output: {dups3([1,1,1,3,3,4,3,2,4,2])}')
###Output
Method 1 output: True
Method 2 output: True
Method 3 output: True
###Markdown
Yet to Check this Contains Duplicate II Given an integer array nums and an integer k, return true if there are two distinct indices i and j in the array such that nums[[i]] == nums[[j]] and abs(i - j) <= k.Example 1:Input: nums = [[1,2,3,1]], k = 3 | Output: trueExample 2:Input: nums = [[1,0,1,1]], k = 1 | Output: trueExample 3:Input: nums = [[1,2,3,1,2,3]], k = 2 | Output: false
###Code
def conDupII(nums,k):
    cv = {}                                  # value -> index of its most recent occurrence
    for i, v in enumerate(nums):
        if v in cv and i - cv[v] <= k:
            return True
        cv[v] = i                            # record/update the latest index unconditionally
    return False
conDupII([1,2,3,1,2,3],2)
b = [1,2,3,1,2,3]
we = []
for idx,i in enumerate(b):
if b.count(i) >= 2:
we.append(idx)
print(we)
res = [x-y for x, y in zip(b, b[1:])]
sum(res)
# Earlier attempt, kept for reference as comments (it would shadow the fixed conDupII above if run):
# def conDupII(nums,k):
#     vb = []
#     for idx,i in enumerate(nums):
#         if nums.count(i) >= 2:
#             vb.append(i)
#     for j in range(len(vb)):
#         for k in range(1,len(vb)):
#             sub = vb[i] - vb[j]
#             if sub <= k:
#                 o = sub
#             else:
#                 o = sub
#     return o
# conDupII([1,2,3,1],3)
###Output
_____no_output_____
###Markdown
Implement strStr()Return the index of the first occurrence of needle in haystack, or -1 if needle is not part of haystack.Clarification:What should we return when needle is an empty string? This is a great question to ask during an interview.For the purpose of this problem, we will return 0 when needle is an empty string. This is consistent to C's strstr() and Java's indexOf(). Example 1:Input: haystack = "hello", needle = "ll" | Output: 2Example 2:Input: haystack = "aaaaa", needle = "bba" | Output: -1Example 3:Input: haystack = "", needle = "" | Output: 0
###Code
# Method 1
def imp_str(str1,str2):
    out = 0 if not str2 else -1      # empty needle -> 0; otherwise default to -1 until a match is found
    for i in range(len(str1)):
        for j in range(len(str2)):
            if str2[j] in str1[i]:
                out = str1.find(str2)
    return out
print(f'Method 1 output: {imp_str("bba","ba")}')
# Method 2
def imp_str2(str1,str2):
return str1.index(str2) if str2 in str1 else -1
print(f'Method 2 output: {imp_str2("","")}')
###Output
Method 1 output: 1
Method 2 output: 0
###Markdown
Third Maximum NumberGiven integer array nums, return the third maximum number in this array. If the third maximum does not exist, return the maximum number.Example 1:Input: nums = [[3,2,1]] |Output: 1Explanation: The third maximum is 1.Example 2:Input: nums = [[1,2]] | Output: 2Explanation: The third maximum does not exist, so the maximum (2) is returned instead.Example 3:Input: nums = [[2,2,3,1]] |Output: 1Explanation: Note that the third maximum here means the third maximum distinct number.Both numbers with value 2 are both considered as second maximum.
###Code
InputCheck = [[3,2,1],[1,2],[2,2,3,1],[5,2,2],[3,2,1],[5,2,4,1,3,6,0]]
# Method 1
def thirdMax(nums):
nums = set(nums)
if len(nums) < 3:
return max(nums)
nums -= {max(nums)}
nums -= {max(nums)}
return max(nums)
method_1_Out = []
for i in InputCheck:
method_1_Out.append(thirdMax(i))
print(f'Method 1 output: {method_1_Out}')
# Method 2
def thirdMax2(aList):
setVal = list(set(aList))
setVal.sort()
if len(setVal) <= 2: out = max(setVal)
else: out = setVal[-3]
return out
method_2_Out = []
for i in InputCheck:
method_2_Out.append(thirdMax2(i))
print(f'Method 2 output: {method_2_Out}')
# Method 3
def thirdMax3(nums):
if len(set(nums)) < 3:
out = max(nums)
else:
out = sorted(set(nums))[-3]
return out
method_3_Out = []
for i in InputCheck:
method_3_Out.append(thirdMax3(i))
print(f'Method 3 output: {method_3_Out}')
###Output
Method 1 output: [1, 2, 1, 5, 1, 4]
Method 2 output: [1, 2, 1, 5, 1, 4]
Method 3 output: [1, 2, 1, 5, 1, 4]
###Markdown
Yet to see this Combination SumGiven an array of distinct integers candidates and a target integer target, return a list of all unique combinations of candidates where the chosen numbers sum to target. You may return the combinations in any order.The same number may be chosen from candidates an unlimited number of times. Two combinations are unique if the frequency of at least one of the chosen numbers is different.It is guaranteed that the number of unique combinations that sum up to target is less than 150 combinations for the given input.Example 1:Input: candidates = [[2,3,6,7]], target = 7 | Output: [[[2,2,3]],[[7]]]Explanation:2 and 3 are candidates, and 2 + 2 + 3 = 7. Note that 2 can be used multiple times.7 is a candidate, and 7 = 7.These are the only two combinations.Example 2:Input: candidates = [[2,3,5]], target = 8 | Output: [[[2,2,2,2]],[[2,3,3]],[[3,5]]]Example 3:Input: candidates = [[2]], target = 1 | Output: []Example 4:Input: candidates = [[1]], target = 1 | Output: [[[1]]]Example 5:Input: candidates = [[1]], target = 2 | Output: [[[1,1]]]
###Code
def combinations_sum(candidates,target):
out = []
for i in range(0,len(candidates)):
for j in range(i-1,len(candidates)):
for k in range(j+1,len(candidates)):
tot = candidates[i] + candidates[j] + candidates[k]
if(i==j&j!=k&k!=i):
if tot == target:
out.extend([[candidates[i],candidates[j],candidates[k]]])
return out
combinations_sum([2,3,6,7],7)
###Output
_____no_output_____
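###Markdown
The triple loop above only looks at three-element combinations; a sketch of the usual backtracking, where staying on the same index allows a candidate to be reused (combinationSum is an assumed name).
###Code
def combinationSum(candidates, target):
    res = []
    def backtrack(start, remaining, path):
        if remaining == 0:
            res.append(path[:])                             # found a combination summing to target
            return
        for i in range(start, len(candidates)):
            if candidates[i] <= remaining:
                path.append(candidates[i])
                backtrack(i, remaining - candidates[i], path)   # i (not i+1) allows reuse
                path.pop()
    backtrack(0, target, [])
    return res
combinationSum([2,3,6,7], 7), combinationSum([2,3,5], 8)
###Output
_____no_output_____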
###Markdown
Yet to see this Letter Combinations of a Phone NumberGiven a string containing digits from 2-9 inclusive, return all possible letter combinations that the number could represent. Return the answer in any order.A mapping of digit to letters (just like on the telephone buttons) is given below. Note that 1 does not map to any letters.Example 1:Input: digits = "23" | Output: [["ad","ae","af","bd","be","bf","cd","ce","cf"]]Example 2:Input: digits = "" | Output: []Example 3:Input: digits = "2" | Output: [["a","b","c"]]
###Code
def phone(str1):
    digits = list(str1.strip())
    di = {"2":['a','b','c'],"3":['d','e','f'],"4":['g','h','i'],"5":['j','k','l'],"6":['m','n','o'],"7":['p','q','r','s'],"8":['t','u','v'],"9":['w','x','y','z']}
    if not digits:
        return []
    val = [""]
    for d in digits:                 # extend every partial combination with each letter of this digit
        cal = []
        for j in val:
            for k in di[d]:
                cal.append(j+k)
        val = cal
    return val
phone('23')
###Output
_____no_output_____
###Markdown
Yet to see this Top K Frequent WordsGiven a non-empty list of words, return the k most frequent elements.Your answer should be sorted by frequency from highest to lowest. If two words have the same frequency, then the word with the lower alphabetical order comes first.Example 1:Input: [["i", "love", "leetcode", "i", "love", "coding"]], k = 2Output: [["i", "love"]]Explanation: "i" and "love" are the two most frequent words. Note that "i" comes before "love" due to a lower alphabetical order.Example 2:Input: [["the", "day", "is", "sunny", "the", "the", "the", "sunny", "is", "is"]], k = 4Output: [["the", "is", "sunny", "day"]]Explanation: "the", "is", "sunny" and "day" are the four most frequent words, with the number of occurrence being 4, 3, 2 and 1 respectively.
###Code
from collections import Counter
ut = ["the", "day", "is", "sunny", "the", "the", "the", "sunny", "is", "is"]
k = 4
def topKFrequent(words, k):
    counts = Counter(words)
    # sort by descending frequency, then alphabetically to break ties
    return sorted(counts, key=lambda w: (-counts[w], w))[:k]
topKFrequent(ut, k)
###Output
_____no_output_____
###Markdown
Find All Numbers Disappeared in an ArrayGiven an array of integers where 1 ≤ a[[i]] ≤ n (n = size of array), some elements appear twice and others appear once.Find all the elements of [[1, n]] inclusive that do not appear in this array.Could you do it without extra space and in O(n) runtime? You may assume the returned list does not count as extra space.Example:Input: [[4,3,2,7,8,2,3,1]] | Output:[[5,6]]
###Code
# method 1
def missElement(aList):
if len(aList) == 0:
out = []
for i in range(len(aList)):
for j in range(i+1,len(aList)):
if aList[i] == aList[j]:
out = aList[j]+ 1
else:
out = [i for i in range(1,max(aList)) if i not in aList]
return [out]
print(f'Method 1 output: {missElement([1,1])}')
# method 2
def missElement2(nums):
return list(set(range(1,len(nums)+1))-set(nums))
print(f'Method 2 output: {missElement2([1,1])}')
###Output
Method 1 output: [2]
Method 2 output: [2]
###Markdown
Kth Missing Positive Number Given an array arr of positive integers sorted in a strictly increasing order, and an integer k.Find the kth positive integer that is missing from this array. Example 1:Input: arr = [[2,3,4,7,11]], k = 5 | Output: 9Explanation: The missing positive integers are [[1,5,6,8,9,10,12,13,...]]. The 5th missing positive integer is 9.Example 2:Input: arr = [[1,2,3,4]], k = 2 | Output: 6Explanation: The missing positive integers are [[5,6,7,...]]. The 2nd missing positive integer is 6.
###Code
# method 1
def kthMissingPositiveNumber(nums,k):
out = [i for i in range(1,max(nums)+k+1) if i not in nums]
return out[k-1]
print(f'Method 1 output: {kthMissingPositiveNumber([1,2,3,4,5,6],5)}')
###Output
Method 1 output: 11
###Markdown
Text Alignment
###Code
thickness = 5 #This must be an odd number
c = '1'
#Top Cone
for i in range(thickness):
print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))
#Top Pillars
for i in range(thickness+1):
print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
#Middle Belt
for i in range((thickness+1)//2):
print((c*thickness*5).center(thickness*6))
#Bottom Pillars
for i in range(thickness+1):
print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))
#Bottom Cone
for i in range(thickness):
print(((c*(thickness-i-1)).rjust(thickness)+c+(c*(thickness-i-1)).ljust(thickness)).rjust(thickness*6))
###Output
1
111
11111
1111111
111111111
11111 11111
11111 11111
11111 11111
11111 11111
11111 11111
11111 11111
1111111111111111111111111
1111111111111111111111111
1111111111111111111111111
11111 11111
11111 11111
11111 11111
11111 11111
11111 11111
11111 11111
111111111
1111111
11111
111
1
###Markdown
Best Time to Buy and Sell StockYou are given an array prices where prices[i] is the price of a given stock on the ith day.You want to maximize your profit by choosing a single day to buy one stock and choosing a different day in the future to sell that stock.Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0. Example 1:Input: prices = [[7,1,5,3,6,4]] | Output: 5Explanation: Buy on day 2 (price = 1) and sell on day 5 (price = 6), profit = 6-1 = 5.Note that buying on day 2 and selling on day 1 is not allowed because you must buy before you sell.Example 2:Input: prices = [[7,6,4,3,1]] | Output: 0Explanation: In this case, no transactions are done and the max profit = 0.
###Code
# Method 1 (Leetcode 134 / 210 test cases passed.)
def maxProfit(prices):
min_element = min(prices)
# for i in range(prices.index(min_element),len(prices)):
if min_element == prices[-1]:
profit = 0
else:
aList = prices[min_element:]
max_element = max(aList)
profit = max_element - min_element
return profit
print(f'Method 1 output: {maxProfit([7,1,5,3,6,4])}')
# Method 2
def maxProfit2(price):
    min_price = 10000
    max_profit = 0
    for i in range(len(price)):
        if price[i] < min_price:
            min_price = price[i]
        elif (price[i] - min_price > max_profit):
            max_profit = price[i] - min_price
    return max_profit
print(f'Method 2 output: {maxProfit2([7,1,5,3,6,4])}')
# Method 3
def maxProfit3(price):
max_profit = 0
for i in range(len(price)-1):
for j in range(i+1,len(price)):
profit = price[j] - price[i]
if profit > max_profit:
max_profit = profit
return max_profit
print(f'Method 3 output: {maxProfit3([7,1,5,3,6,4])}')
# Method 4 (Vichu Annas' Logic)
def maxProfit4(prices):
if not prices:
return 0
profit = 0
buy_stock = prices[0]
for i in range(len(prices)):
if buy_stock > prices[i]:
buy_stock = prices[i]
profit = max((prices[i] - buy_stock, profit))
return profit
print(f'Method 4 output: {maxProfit4([7,1,5,3,6,4])}')
###Output
Method 1 output: 5
Method 2 output: 5
Method 3 output: 5
Method 4 output: 5
###Markdown
Excel Sheet Column NumberGiven a string columnTitle that represents the column title as appear in an Excel sheet, return its corresponding column number.For example:A -> 1B -> 2C -> 3...Z -> 26AA -> 27AB -> 28 ... Example 1:Input: columnTitle = "A" | Output: 1Example 2:Input: columnTitle = "AB" | Output: 28Example 3:Input: columnTitle = "ZY" | Output: 701Example 4:Input: columnTitle = "FXSHRXW" | Output: 2147483647
###Code
def excel_sheet(s):
a = 0
for i in s:
a = a * 26 + ord(i) - ord('A') + 1
return a
excel_sheet("FXSHRXW")
###Output
_____no_output_____
###Markdown
Shuffle StringGiven a string s and an integer array indices of the same length.The string s will be shuffled such that the character at the ith position moves to indices[i] in the shuffled string.Return the shuffled string.Example 1:Input: s = "codeleet", indices = [[4,5,6,7,0,2,1,3]] | Output: "leetcode"Explanation: As shown, "codeleet" becomes "leetcode" after shuffling.Example 2:Input: s = "abc", indices = [[0,1,2]] | Output: "abc"Explanation: After shuffling, each character remains in its position.Example 3:Input: s = "aiohn", indices = [[3,1,4,2,0]] | Output: "nihao"
###Code
def shuffleString(s,indices):
output = [''] * len(s)
for i in range(len(s)):
output[indices[i]] = s[i]
return ''.join(output)
shuffleString("codeleet", [4,5,6,7,0,2,1,3])
###Output
_____no_output_____
###Markdown
Intersection of Two ArraysGiven two integer arrays nums1 and nums2, return an array of their intersection. Each element in the result must be unique and you may return the result in any order.Example 1:Input: nums1 = [[1,2,2,1]], nums2 = [[2,2]] | Output: [[2]]Example 2:Input: nums1 = [[4,9,5]], nums2 = [[9,4,9,8,4]] | Output: [[9,4]]Explanation: [[4,9]] is also accepted.
###Code
def intersectionArray(str1,str2):
return list(set(str1).intersection(str2))
intersectionArray([1,2,2,1],[2,2])
###Output
_____no_output_____
###Markdown
DI String MatchGiven a string S that only contains "I" (increase) or "D" (decrease), let N = S.length.Return any permutation A of [[0, 1, ..., N]] such that for all i = 0, ..., N-1:If S[[i]] == "I", then A[[i]] < A[[i+1]]If S[[i]] == "D", then A[[i]] > A[[i+1]] Example 1:Input: "IDID" | Output: [[0,4,1,3,2]]Example 2:Input: "III" | Output: [[0,1,2,3]]Example 3:Input: "DDI" | Output: [[3,2,0,1]]Expalination: [[--I->--D->--I->--D->-]] [[0--->4--->1--->3--->2]][[0-I->4-D->1-I->3-D->2]] Multiply StringsGiven two non-negative integers num1 and num2 represented as strings, return the product of num1 and num2, also represented as a string.Note: You must not use any built-in BigInteger library or convert the inputs to integer directly.Example 1:Input: num1 = "2", num2 = "3" | Output: "6"Example 2:Input: num1 = "123", num2 = "456" | Output: "56088"
###Code
# Method 1 (Your runtime beats 55.55 % of python3 submissions)
def multiply(num1,num2):
def helper(num):
res = 0
for i in num:
res *= 10
for j in '0123456789':
res += i > j
return res
return helper(num1) * helper(num2)
# multiply('2','3')
print(f'Method 1 output: {multiply("2","3")}')
# Method 2 (Your runtime beats 98.42 % of python3 submissions)
def strToInt(num1,num2):
r,q = 0,0
c = {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9}
for i in num1:
r = 10*r + c[i]
for j in num2:
q = 10*q + c[j]
return str(r*q)
print(f'Method 2 output: {strToInt("2","3")}')
###Output
Method 1 output: 6
Method 2 output: 6
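###Markdown
The cell above covers Multiply Strings, but the DI String Match half of the description has no code yet; a minimal two-pointer greedy sketch (diStringMatch is an assumed name, not part of the original notebook).
###Code
def diStringMatch(S):
    lo, hi = 0, len(S)
    out = []
    for ch in S:
        if ch == 'I':
            out.append(lo); lo += 1    # use the smallest remaining value so the next one is larger
        else:
            out.append(hi); hi -= 1    # use the largest remaining value so the next one is smaller
    out.append(lo)                     # lo == hi here
    return out
diStringMatch("IDID"), diStringMatch("III"), diStringMatch("DDI")
###Output
_____no_output_____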
###Markdown
Add BinaryGiven two binary strings a and b, return their sum as a binary string.Example 1:Input: a = "11", b = "1" | Output: "100"Example 2:Input: a = "1010", b = "1011" | Output: "10101"
###Code
# Method 1 (Your runtime beats 90.90 % of python3 submissions)
def binSum(a,b):
return bin(int(a,2) + int(b,2))[2:]
binSum('1010','1011')
###Output
_____no_output_____
###Markdown
Add to Array-Form of IntegerFor a non-negative integer X, the array-form of X is an array of its digits in left to right order. For example, if X = 1231, then the array form is [[1,2,3,1]].Given the array-form A of a non-negative integer X, return the array-form of the integer X+K.Example 1:Input: A = [[1,2,0,0]], K = 34 | Output: [[1,2,3,4]]Explanation: 1200 + 34 = 1234Example 2:Input: A = [[2,7,4]], K = 181 | Output: [[4,5,5]]Explanation: 274 + 181 = 455Example 3:Input: A = [[2,1,5]], K = 806 | Output: [[1,0,2,1]]Explanation: 215 + 806 = 1021Example 4:Input: A = [[9,9,9,9,9,9,9,9,9,9]], K = 1 | Output: [[1,0,0,0,0,0,0,0,0,0,0]]Explanation: 9999999999 + 1 = 10000000000
###Code
# Method 1 (Your runtime beats 51.58 % of python3 submissions)
def arrToInt(aList,k):
b = [str(i) for i in aList]
c = int(''.join(b)) + k
d = [int(i) for i in ''.join(str(c))]
return d
print(f'Method 1 output: {arrToInt([9,9,9,9,9,9,9,9,9,9],1)}')
###Output
Method 1 output: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
###Markdown
Remove Duplicates from Sorted Array IIGiven a sorted array nums, remove the duplicates in-place such that duplicates appeared at most twice and return the new length.Do not allocate extra space for another array; you must do this by modifying the input array in-place with O(1) extra memory.Clarification:Confused why the returned value is an integer, but your answer is an array?Note that the input array is passed in by reference, which means a modification to the input array will be known to the caller.Example 1:Input: nums = [[1,1,1,2,2,3]] |Output: 5, nums = [[1,1,2,2,3]]Explanation: Your function should return length = 5, with the first five elements of nums being 1, 1, 2, 2 and 3 respectively. It doesn't matter what you leave beyond the returned length.Example 2:Input: nums = [[0,0,1,1,1,1,2,3,3]] | Output: 7, nums = [[0,0,1,1,2,3,3]]Explanation: Your function should return length = 7, with the first seven elements of nums being modified to 0, 0, 1, 1, 2, 3 and 3 respectively. It doesn't matter what values are set beyond the returned length.
###Code
# Method 1 (Leet Code 118 / 164 test cases passed)
def removeDuplicates(nums):
for i in nums:
if nums.count(i) > 2:
nums.remove(i)
return len(nums)
print(f'Method 1 output: {removeDuplicates([-50,-50,-49,-48,-47,-47,-47,-46,-45,-43,-42,-41,-40,-40,-40,-40,-40,-40,-39,-38,-38,-38,-38,-37,-36,-35,-34,-34,-34,-33,-32,-31,-30,-28,-27,-26,-26,-26,-25,-25,-24,-24,-24,-22,-22,-21,-21,-21,-21,-21,-20,-19,-18,-18,-18,-17,-17,-17,-17,-17,-16,-16,-15,-14,-14,-14,-13,-13,-12,-12,-10,-10,-9,-8,-8,-7,-7,-6,-5,-4,-4,-4,-3,-1,1,2,2,3,4,5,6,6,7,8,8,9,9,10,10,10,11,11,12,12,13,13,13,14,14,14,15,16,17,17,18,20,21,22,22,22,23,23,25,26,28,29,29,29,30,31,31,32,33,34,34,34,36,36,37,37,38,38,38,39,40,40,40,41,42,42,43,43,44,44,45,45,45,46,47,47,47,47,48,49,49,49,50])}')
# Method 2
def foo(nums):
for val in set(nums):
while nums.count(val) > 2:
nums.remove(val)
return len(nums)
print(f'Method 2 output: {foo([-50,-50,-49,-48,-47,-47,-47,-46,-45,-43,-42,-41,-40,-40,-40,-40,-40,-40,-39,-38,-38,-38,-38,-37,-36,-35,-34,-34,-34,-33,-32,-31,-30,-28,-27,-26,-26,-26,-25,-25,-24,-24,-24,-22,-22,-21,-21,-21,-21,-21,-20,-19,-18,-18,-18,-17,-17,-17,-17,-17,-16,-16,-15,-14,-14,-14,-13,-13,-12,-12,-10,-10,-9,-8,-8,-7,-7,-6,-5,-4,-4,-4,-3,-1,1,2,2,3,4,5,6,6,7,8,8,9,9,10,10,10,11,11,12,12,13,13,13,14,14,14,15,16,17,17,18,20,21,22,22,22,23,23,25,26,28,29,29,29,30,31,31,32,33,34,34,34,36,36,37,37,38,38,38,39,40,40,40,41,42,42,43,43,44,44,45,45,45,46,47,47,47,47,48,49,49,49,50])}')
###Output
Method 1 output: 137
Method 2 output: 136
###Markdown
Merge Intervals Given an array of intervals where intervals[[i]] = [[starti, endi]], merge all overlapping intervals, and return an array of the non-overlapping intervals that cover all the intervals in the input.Example 1:Input: intervals = [[[1,3]],[[2,6]],[[8,10]],[[15,18]]] | Output: [[[1,6]],[[8,10]],[[15,18]]]Explanation: Since intervals [[1,3]] and [[2,6]] overlaps, merge them into [[1,6]].Example 2:Input: intervals = [[[1,4]],[[4,5]]] | Output: [[[1,5]]]Explanation: Intervals [[1,4]] and [[4,5]] are considered overlapping.
###Code
# Method 1 (Your runtime beats 75.54 % of python3 submissions)
def intervalMerge(aList):
r = []
    for i in sorted(aList, key=lambda x: x[0]):   # sort by interval start
if r and i[0] <= r[-1][1]:
r[-1][1] = max(r[-1][1],i[-1])
else:
r.append(i)
return r
intervalMerge([[1,3],[2,6],[8,10],[15,18]])
###Output
_____no_output_____
###Markdown
Valid NumberA valid number can be split up into these components (in order): 1. A decimal number or an integer. 2. (Optional) An 'e' or 'E', followed by an integer.A decimal number can be split up into these components (in order): 1. (Optional) A sign character (either '+' or '-'). 2. One of the following formats: at least one digit followed by a dot '.'; at least one digit followed by a dot '.' followed by at least one digit; or a dot '.' followed by at least one digit.An integer can be split up into these components (in order): 1. (Optional) A sign character (either '+' or '-'). 2. At least one digit.For example, all the following are valid numbers: [["2", "0089", "-0.1", "+3.14", "4.", "-.9", "2e10", "-90E3", "3e+7", "+6e-1", "53.5e93", "-123.456e789"]], while the following are not valid numbers: [["abc", "1a", "1e", "e3", "99e2.5", "--6", "-+3", "95a54e53"]].Given a string s, return true if s is a valid number.Example 1:Input: s = "0" | Output: trueExample 2:Input: s = "e" | Output: falseExample 3:Input: s = "." | Output: falseExample 4:Input: s = ".1" | Output: true
###Code
def validNum(s):
try:
if 'inf' in s.lower() or s.isalpha():
return False
if float(s) or float(s) >= 0:
return True
except:
return False
validNum('inf')
###Output
_____no_output_____
###Markdown
Move ZeroesGiven an integer array nums, move all 0's to the end of it while maintaining the relative order of the non-zero elements.Note: that you must do this in-place without making a copy of the array. Example 1:Input: nums = [[0,1,0,3,12]] | Output: [[1,3,12,0,0]]Example 2:Input: nums = [[0]] | Output: [[0]]
###Code
# Method 1
def moveZeros(l):
for i in l:
if i == 0:
l.append(l.pop(l.index(i)))
return l
print(f'Method 1 output: {moveZeros([0,1,0,3,12])}')
###Output
Method 1 output: [1, 3, 12, 0, 0]
###Markdown
Reverse Vowels of a StringGiven a string s, reverse only all the vowels in the string and return it.The vowels are 'a', 'e', 'i', 'o', and 'u', and they can appear in both cases.Example 1:Input: s = "hello" | Output: "holle"Example 2:Input: s = "leetcode" | Output: "leotcede"
###Code
import re
# Method 1
def revVowels(s):
vowels = re.findall('(?i)[aeiou]', s)
return re.sub('(?i)[aeiou]', lambda m: vowels.pop(), s)
print(f'Method 1 output: {revVowels("hello")}')
###Output
Method 1 output: holle
###Markdown
Maximum Product of Two Elements in an ArrayGiven the array of integers nums, you will choose two different indices i and j of that array. Return the maximum value of (nums[[i]]-1)*(nums[[j]]-1).Example 1:Input: nums = [[3,4,5,2]] | Output: 12 Explanation: If you choose the indices i=1 and j=2 (indexed from 0), you will get the maximum value, that is, (nums[1]-1)*(nums[2]-1) = (4-1)*(5-1) = 3*4 = 12. Example 2:Input: nums = [[1,5,4,5]] | Output: 16Explanation: Choosing the indices i=1 and j=3 (indexed from 0), you will get the maximum value of (5-1)*(5-1) = 16.Example 3:Input: nums = [[3,7]] | Output: 12
###Code
# Method 1
def prod_two(aList):
aList.sort()
a,b = aList[-1],aList[-2]
prod = (a-1) * (b-1)
return prod
print(f'Method 1 output: {prod_two([3,8,5,2])}')
# Method 2
def prod_two2(aList):
a,b = 0,0
for i in aList:
if i > a:
b = a
a = i
elif i > b and i <= a:b = i
return (a-1) * (b-1)
print(f'Method 2 output: {prod_two2([3,8,5,2])}')
###Output
Method 1 output: 28
Method 2 output: 28
###Markdown
Product of Array Except SelfGiven an integer array nums, return an array answer such that answer[[i]] is equal to the product of all the elements of nums except nums[i].The product of any prefix or suffix of nums is guaranteed to fit in a 32-bit integer.Example 1:Input: nums = [[1,2,3,4]] | Output: [[24,12,8,6]]Example 2:Input: nums = [[-1,1,0,-3,3]] | Output: [[0,0,9,0,0]]
###Code
from functools import reduce
# Method 1 (Not feasible for larger numbers)
def ProdArray1(nums):
return [reduce(lambda x,y:x*y,nums[:i]+nums[i+1:]) for i in range(len(nums))]
print(f'Method 1 output: {ProdArray1([1,2,3,4])}')
# Method 2
def ProdArray2(nums):
result = [1] * len(nums)
    for i in range(1, len(nums)):
        result[i] = nums[i-1] * result[i-1]   # product of everything to the left of index i
right_prod = 1
for i in range(len(nums)-1, -1, -1):
result[i] *= right_prod
right_prod *= nums[i]
return result
print(f'Method 2 output: {ProdArray2([-1,1,0,-3,3])}')
###Output
Method 1 output: [24, 12, 8, 6]
Method 2 output: [0, 0, 9, 0, 0]
###Markdown
Valid ParenthesesGiven a string s containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid.An input string is valid if:Open brackets must be closed by the same type of brackets.Open brackets must be closed in the correct order. Example 1:Input: s = "()" | Output: trueExample 2:Input: s = "()[]{}" | Output: trueExample 3:Input: s = "(]" | Output: falseExample 4:Input: s = "([)]" | Output: falseExample 5:Input: s = "{[]}" | Output: true
###Code
# Method 1 (Leetcode 67 / 91 test cases passed originally; the early return below handles unmatched closing brackets)
def validParenthesis(string):
stack = []
open_brackets = ["(","[","{"]
close_brackets = [")","]","}"]
if len(string) == 1:out = False
else:
for i in string:
if i in open_brackets: # To check if our paraenthesis present in our list and push it to our stack
stack.append(i)
elif i in close_brackets:
if (len(stack) > 0) and (open_brackets[close_brackets.index(i)] == stack[len(stack) -1]):
stack.pop()
else:
                    return False   # a closing bracket with no matching opener cannot be valid
if len(stack) == 0:
out = True
else:
out = False
return out
print(f'Method 1 output: {validParenthesis("{({[]})}")}')
###Output
Method 1 output: True
###Markdown
Truncate SentenceA sentence is a list of words that are separated by a single space with no leading or trailing spaces. Each of the words consists of only uppercase and lowercase English letters (no punctuation).For example, "Hello World", "HELLO", and "hello world hello world" are all sentences.You are given a sentence s and an integer k. You want to truncate s such that it contains only the first k words. Return s after truncating it.Example 1:Input: s = "Hello how are you Contestant", k = 4 | Output: "Hello how are you"Explanation:The words in s are [["Hello", "how" "are", "you", "Contestant"]].The first 4 words are [["Hello", "how", "are", "you"]].Hence, you should return "Hello how are you".Example 2:Input: s = "What is the solution to this problem", k = 4 | Output: "What is the solution"Explanation:The words in s are [["What", "is" "the", "solution", "to", "this", "problem"]].The first 4 words are [["What", "is", "the", "solution"]].Hence, you should return "What is the solution".Example 3:Input: s = "chopper is not a tanuki", k = 5 | Output: "chopper is not a tanuki"
###Code
# Method 1 (Leet code accuracy 96%)
def truncateSentence(s,k):
return " ".join(s.split()[:k])
print(f'Method 1 output is {truncateSentence("Hello how are you Contestant",4)}')
###Output
Method 1 output is Hello how are you
###Markdown
Determine Color of a Chessboard SquareYou are given coordinates, a string that represents the coordinates of a square of the chessboard. Below is a chessboard for your reference.Return true if the square is white, and false if the square is black.The coordinate will always represent a valid chessboard square. The coordinate will always have the letter first, and the number second. Example 1:Input: coordinates = "a1" | Output: falseExplanation: From the chessboard above, the square with coordinates "a1" is black, so return false.Example 2:Input: coordinates = "h3" | Output: trueExplanation: From the chessboard above, the square with coordinates "h3" is white, so return true.Example 3:Input: coordinates = "c7" | Output: false Sign of the Product of an ArrayThere is a function signFunc(x) that returns:1 if x is positive.-1 if x is negative.0 if x is equal to 0.You are given an integer array nums. Let product be the product of all values in the array nums.Return signFunc(product). Example 1:Input: nums = [[-1,-2,-3,-4,3,2,1]] | Output: 1Explanation: The product of all values in the array is 144, and signFunc(144) = 1Example 2:Input: nums = [[1,5,0,2,-3]] | Output: 0Explanation: The product of all values in the array is 0, and signFunc(0) = 0Example 3:Input: nums = [[-1,1,-1,1,-1]] | Output: -1Explanation: The product of all values in the array is -1, and signFunc(-1) = -1
###Code
# Method 1 using numpy package (Leetcode 30 / 74 test cases passed.)
inputList = [[-1,-2,-3,-4,3,2,1],[1,5,0,2,-3],[-1,1,-1,1,-1]]
import numpy as np
def arrayProd(nums):
    if np.prod(nums) > 0:out = 1
elif np.prod(nums) < 0:out = -1
else:out = 0
return out
print('Method 1 checking........')
method1_ans = [arrayProd(i) for i in inputList]
print(f'Method 1 check done and value is {method1_ans}')
print()
# Method 2 (Most precise solution and the optimal one, 100% efficiency)
def arrayProd2(nums):
res = 1
for i in nums:
res = res * i
    if res > 0:out = 1
elif res < 0:out = -1
else:out = 0
return out
print('Method 2 checking.......')
method2_ans = [arrayProd2(i) for i in inputList]
print(f'Method 2 check done and value is {method2_ans}')
###Output
Method 1 checking........
Method 1 check done and value is [1, 0, -1]
Method 2 checking.......
Method 2 check done and value is [1, 0, -1]
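###Markdown
The chessboard-colour question in the cell above is not solved anywhere in this notebook; below is a minimal sketch, assuming the usual parity rule (the problem states "a1" is black, so a square is white when its file index and rank have different parities). The helper name squareIsWhite is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): 'a1' is black, so a square is
# white exactly when the file letter index and the rank digit have different parities.
def squareIsWhite(coordinates):
    col = ord(coordinates[0]) - ord('a') + 1  # 'a' -> 1, ..., 'h' -> 8
    row = int(coordinates[1])
    return (col + row) % 2 == 1
print(f'Sketch output: {[squareIsWhite(c) for c in ["a1","h3","c7"]]}')
###Output
Sketch output: [False, True, False]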
###Markdown
Number of Different Integers in a StringYou are given a string word that consists of digits and lowercase English letters.You will replace every non-digit character with a space. For example, "a123bc34d8ef34" will become " 123 34 8 34". Notice that you are left with some integers that are separated by at least one space: "123", "34", "8", and "34".Return the number of different integers after performing the replacement operations on word.Two integers are considered different if their decimal representations without any leading zeros are different. Example 1:Input: word = "a123bc34d8ef34" | Output: 3Explanation: The three different integers are "123", "34", and "8". Notice that "34" is only counted once.Example 2:Input: word = "leet1234code234" | Output: 2Example 3:Input: word = "a1b01c001" | Output: 1Explanation: The three integers "1", "01", and "001" all represent the same integer becausethe leading zeros are ignored when comparing their decimal values.
###Code
# Method 1
def findInt(string):
numVal = "0123456789"
for i in range(len(string)):
if string[i] not in numVal:
string = string.replace(string[i]," ")
splitVal = string.split()
for i in range(len(splitVal)):
splitVal[i] = int(splitVal[i])
out = set(splitVal)
return len(out)
print(f'Method 1 output is {findInt("a123bc34d8ef34")}')
###Output
Method 1 output is 3
###Markdown
Second Largest Digit in a StringGiven an alphanumeric string s, return the second largest numerical digit that appears in s, or -1 if it does not exist.An alphanumeric string is a string consisting of lowercase English letters and digits.Example 1:Input: s = "dfa12321afd" | Output: 2Explanation: The digits that appear in s are [[1, 2, 3]]. The second largest digit is 2.Example 2:Input: s = "abc1111" |Output: -1Explanation: The digits that appear in s are [[1]]. There is no second largest digit.
###Code
# Method 1 using regular expression package (Your runtime beats 80.39 % of python3 submissions.)
import re
def secondLargestDigi(string):
    val = re.findall('[0-9]',string)
output = list(set(val))
output.sort()
if len(output) <= 1: out = -1
else: out = int(output[-2])
return out
print(f'Method 1 output is {secondLargestDigi("abc1111")}')
###Output
Method 1 output is -1
###Markdown
Sum of Unique ElementsYou are given an integer array nums. The unique elements of an array are the elements that appear exactly once in the array.Return the sum of all the unique elements of nums.Example 1:Input: nums = [[1,2,3,2]] | Output: 4Explanation: The unique elements are [[1,3]], and the sum is 4.Example 2:Input: nums = [[1,1,1,1,1]] | Output: 0Explanation: There are no unique elements, and the sum is 0.Example 3:Input: nums = [[1,2,3,4,5]] | Output: 15Explanation: The unique elements are [[1,2,3,4,5]], and the sum is 15.
###Code
# Method 1
def uniqueElements(nums):
    if len(nums) <= 1: out = sum(nums)
    else:
        uniqueVal = list(set(nums))
        # Iterate over a copy so that removing items does not skip elements
        for i in list(uniqueVal):
            if nums.count(i) >= 2:
                uniqueVal.remove(i)
        out = sum(uniqueVal)
    return out
print(f'Method 1 output {uniqueElements([60,89,2,15,29,47,42])}')
# Method 2
def uniqueElements2(nums):
uniq = []
[uniq.append(num) for num in nums if nums.count(num) == 1]
return sum(uniq)
print(f'Method 2 output {uniqueElements2([60,89,2,15,29,47,42])}')
###Output
Method 1 output 284
Method 2 output 284
###Markdown
Count of Matches in TournamentYou are given an integer n, the number of teams in a tournament that has strange rules:If the current number of teams is even, each team gets paired with another team. A total of n / 2 matches are played, and n / 2 teams advance to the next round.If the current number of teams is odd, one team randomly advances in the tournament, and the rest gets paired. A total of (n - 1) / 2 matches are played, and (n - 1) / 2 + 1 teams advance to the next round.Return the number of matches played in the tournament until a winner is decided. Example 1:Input: n = 7 |Output: 6Explanation: Details of the tournament: - 1st Round: Teams = 7, Matches = 3, and 4 teams advance. - 2nd Round: Teams = 4, Matches = 2, and 2 teams advance. - 3rd Round: Teams = 2, Matches = 1, and 1 team is declared the winner.Total number of matches = 3 + 2 + 1 = 6.Example 2:Input: n = 14 | Output: 13Explanation: Details of the tournament: - 1st Round: Teams = 14, Matches = 7, and 7 teams advance. - 2nd Round: Teams = 7, Matches = 3, and 4 teams advance. - 3rd Round: Teams = 4, Matches = 2, and 2 teams advance. - 4th Round: Teams = 2, Matches = 1, and 1 team is declared the winner.Total number of matches = 7 + 3 + 2 + 1 = 13.
###Code
# Method 1
def recur(n):
return n-1 # Since all the sum returns total value - 1
print(f'Method 1 output {recur(7)}')
# Method 2
# def recur2(n):
#     count = 0
#     while n != 1:
#         match = n //2
#         count += match
#         n -= match
#     return count
# print(f'Method 2 output {recur2(7)}')
###Output
Method 1 output 6
###Markdown
Count Odd Numbers in an Interval RangeGiven two non-negative integers low and high. Return the count of odd numbers between low and high (inclusive).Example 1:Input: low = 3, high = 7 | Output: 3Explanation: The odd numbers between 3 and 7 are [[3,5,7]].Example 2:Input: low = 8, high = 10 | Output: 1Explanation: The odd numbers between 8 and 10 are [[9]].
###Code
# Method 1
def countOdd(low,high):
    # Half of the interval length, plus one extra odd number
    # whenever either endpoint is odd.
    out = (high - low) // 2
    if low % 2 == 1 or high % 2 == 1:
        out += 1
    return out
print(f'Method 1 output {countOdd(800445804,979430543)}')
# Method 2
def countOdd2(a,b):
o = [i for i in range(a,b+1)]
v = [j for j in o if j % 2 == 1]
return len(v)
print(f'Method 2 output {countOdd2(14,17)}')
###Output
Method 1 output 89492370
Method 2 output 2
###Markdown
Water BottlesGiven numBottles full water bottles, you can exchange numExchange empty water bottles for one full water bottle.The operation of drinking a full water bottle turns it into an empty bottle.Return the maximum number of water bottles you can drink.Example 1:Input: numBottles = 9, numExchange = 3 | Output: 13Explanation: You can exchange 3 empty bottles to get 1 full water bottle. Number of water bottles you can drink: 9 + 3 + 1 = 13.Example 2:Input: numBottles = 15, numExchange = 4 | Output: 19Explanation: You can exchange 4 empty bottles to get 1 full water bottle. Number of water bottles you can drink: 15 + 3 + 1 = 19.Example 3:Input: numBottles = 5, numExchange = 5 | Output: 6Example 4:Input: numBottles = 2, numExchange = 3 | Output: 2
###Code
# Method 1 (simulate the exchanges round by round)
def waterBottles(num,ex):
    total, empty = num, num       # drink the initial bottles first
    while empty >= ex:
        new = empty // ex         # full bottles gained from the exchange
        total += new
        empty = empty % ex + new  # leftover empties plus the newly drunk ones
    return total
print(f'Method 1 output {waterBottles(20,4)}')
# Method 2
def waterBottles2(num,ex):
return num + (num -1) // (ex - 1)
print(f'Method 2 output {waterBottles2(20,4)}')
###Output
Method 1 output 26
Method 2 output 26
###Markdown
Valid Perfect SquareGiven a positive integer num, write a function which returns True if num is a perfect square else False.Follow up: Do not use any built-in library function such as sqrt.Example 1:Input: num = 16 | Output: trueExample 2:Input: num = 14 | Output: false
###Code
# Method 1 (Leet code 69 / 70 test cases passed)
def validSqrt(num):
numList = [i for i in range(1,int(2147483647 ** 0.5))]
if num ** 0.5 in numList:
out = True
else:
out = False
return out
print(f'Method 1 output {validSqrt(16)}')
# Method 2
def validSqrt2(num):
    if num ** 0.5 == int(num ** 0.5):out = True
    else: out = False
    return out
print(f'Method 2 output {validSqrt2(16)}')
###Output
Method 1 output True
Method 2 output True
###Markdown
Check if Binary String Has at Most One Segment of OnesGiven a binary string s without leading zeros, return true if s contains at most one contiguous segment of ones. Otherwise, return false. Example 1:Input: s = "1001" |Output: falseExplanation: The ones do not form a contiguous segment.Example 2:Input: s = "110" | Output: true
###Code
# Method 1
def contiguousSegment(astring):
    # Split on zeros: at most one non-empty run of ones may remain
    one_segments = [seg for seg in astring.split('0') if seg]
    return len(one_segments) <= 1
print(f'Method 1 output {contiguousSegment("110")}')
# Method 2
def contiguousSegment2(astring):
return '01' not in astring
print(f'Method 2 output {contiguousSegment2("110")}')
###Output
Method 1 output True
Method 2 output True
###Markdown
Minimum Distance to the Target ElementGiven an integer array nums (0-indexed) and two integers target and start, find an index i such that nums[[i]] == target and abs(i - start) is minimized. Note that abs(x) is the absolute value of x.Return abs(i - start).It is guaranteed that target exists in nums. Example 1:Input: nums = [[1,2,3,4,5]], target = 5, start = 3 | Output: 1Explanation: nums[[4]] = 5 is the only value equal to target, so the answer is abs(4 - 3) = 1.Example 2:Input: nums = [[1]], target = 1, start = 0 | Output: 0Explanation: nums[[0]] = 1 is the only value equal to target, so the answer is abs(0 - 0) = 0.Example 3:Input: nums = [[1,1,1,1,1,1,1,1,1,1]], target = 1, start = 0 | Output: 0Explanation: Every value of nums is 1, but nums[[0]] minimizes abs(i - start), which is abs(0 - 0) = 0.
###Code
# Method 1 (Leet code passed 67/72 use cases.)
def minDist1(alist,target,start):
return abs(alist.index(target) - start)
print(f'Method 1 output {minDist1([5,2,3,5,5],5,2)}')
# Method 2
def minDist2(alist,target,start):
    # Scan every index holding the target and keep the smallest distance to start
    out = None
    for i in range(len(alist)):
        if alist[i] == target:
            dist = abs(i - start)
            if out is None or dist < out:
                out = dist
    return out
print(f'Method 2 output {minDist2([1,2,3,4,5],5,3)}')
# Method 3 (Leet code passed 68/72 use cases.)
def minDist3(alist,target,start):
if len(list(set(alist))) == 1:out = 0
else:out = abs(alist.index(target) - start)
return out
print(f'Method 3 output {minDist3([5,2,3,5,5],5,2)}')
# Scratch: scanning the indices where the target occurs
al,tar,strt = [1,2,3,4,5],5,3
for i in range(len(al)):
for j in range(i,len(al)-1):
if al[i] == tar:
out = abs(j-strt)
out
###Output
_____no_output_____
###Markdown
Sum of Digits in Base KGiven an integer n (in base 10) and a base k, return the sum of the digits of n after converting n from base 10 to base k.After converting, each digit should be interpreted as a base 10 number, and the sum should be returned in base 10. Example 1:Input: n = 34, k = 6 | Output: 9Explanation: 34 (base 10) expressed in base 6 is 54. 5 + 4 = 9.Example 2:Input: n = 10, k = 10 | Output: 1Explanation: n is already in base 10. 1 + 0 = 1.
###Code
# Method 1
def str_base(val, base):
res = ''
while val > 0:
res = str(val % base) + res
val //= base
if res: return sum([int(i) for i in res])
return 0
print(f'Method 1 output is {str_base(10,10)}')
###Output
Method 1 output is 1
###Markdown
Single Number IIIGiven an integer array nums, in which exactly two elements appear only once and all the other elements appear exactly twice. Find the two elements that appear only once. You can return the answer in any order.Follow up: Your algorithm should run in linear runtime complexity. Could you implement it using only constant space complexity? Example 1:Input: nums = [[1,2,1,3,2,5]] | Output: [[3,5]]Explanation: [[5, 3]] is also a valid answer.Example 2:Input: nums = [[-1,0]] | Output: [[-1,0]]Example 3:Input: nums = [[0,1]] | Output: [{1,0}]
###Code
# Method 1
def singleNum1(alist):
if len(alist) == 2:return alist
else:
dummy = [i for i in alist if alist.count(i) <= 1]
return dummy
print(f'Method 1 output is {singleNum1([1,2,1,3,2,5])}')
# Method 2
def singleNum2(alist):
val = set()
for i in alist:
if i in val:val.remove(i)
else:val.add(i)
return list(val)
print(f'Method 2 output is {singleNum2([1,2,1,3,2,5])}')
###Output
Method 1 output is [3, 5]
Method 2 output is [3, 5]
###Markdown
Count Items Matching a RuleYou are given an array items, where each items[[i]] = [[typei, colori, namei]] describes the type, color, and name of the ith item. You are also given a rule represented by two strings, ruleKey and ruleValue.The ith item is said to match the rule if one of the following is true:ruleKey == "type" and ruleValue == typei.ruleKey == "color" and ruleValue == colori.ruleKey == "name" and ruleValue == namei.Return the number of items that match the given rule.Example 1:Input: items = [[["phone","blue","pixel"]],[["computer","silver","lenovo"]],[["phone","gold","iphone"]]], ruleKey = "color", ruleValue = "silver"Output: 1Explanation: There is only one item matching the given rule, which is [["computer","silver","lenovo"]].Example 2:Input: items = [[["phone","blue","pixel"]],[["computer","silver","phone"]],[["phone","gold","iphone"]]], ruleKey = "type", ruleValue = "phone"Output: 2Explanation: There are only two items matching the given rule, which are [["phone","blue","pixel"]] and [["phone","gold","iphone"]]. Note that the item [["computer","silver","phone"]] does not match.
###Code
items = [["phone","blue","pixel"],["computer","silver","phone"],["phone","gold","iphone"]]
ruleKey = "type"
ruleValue = "phone"
# Method 1
def countMatches(items, ruleKey, ruleValue):
rule = ['type','color','name']
return len([i for i in items if i[rule.index(ruleKey)] == ruleValue])
print(f'Method 1 Output is: {countMatches(items, ruleKey, ruleValue)}')
# Method 2
def countMatches2(items, ruleKey, ruleValue):
if ruleKey == "type":
idx = 0
elif ruleKey == "color":
idx = 1
else:
idx = 2
return sum(1 for i in items if i[idx] == ruleValue)
print(f'Method 2 Output is: {countMatches2(items, ruleKey, ruleValue)}')
# Method 3
def countMatches3(items, ruleKey, ruleValue):
a=['type','color', 'name']
b=a.index(ruleKey)
ans=0
for i in items:
if i[b] == ruleValue:
ans+=1
return ans
print(f'Method 3 Output is: {countMatches3(items, ruleKey, ruleValue)}')
###Output
Method 1 Output is: 2
Method 2 Output is: 2
Method 3 Output is: 2
###Markdown
Lucky Numbers in a MatrixGiven a m * n matrix of distinct numbers, return all lucky numbers in the matrix in any order.A lucky number is an element of the matrix such that it is the minimum element in its row and maximum in its column.Example 1:Input: matrix = [[[3,7,8]],[[9,11,13]],[[15,16,17]]] | Output: [[15]]Explanation: 15 is the only lucky number since it is the minimum in its row and the maximum in its columnExample 2:Input: matrix = [[[1,10,4,2]],[[9,3,8,7]],[[15,16,17,12]]] | Output: [[12]]Explanation: 12 is the only lucky number since it is the minimum in its row and the maximum in its column.Example 3:Input: matrix = [[[7,8]],[[1,2]]] | Output: [[7]]
###Code
# Example input shared by both methods (Example 2 above)
matrix = [[1,10,4,2],[9,3,8,7],[15,16,17,12]]
# Method 1
def luckyNumbers(matrix):
idx = len(matrix[0])
dup = [j for i in matrix for j in i]
dup.sort()
return [dup[-idx]]
print(f'Method 1 output is: {luckyNumbers(matrix)}')
# Method 2
def luckyNumbers2(matrix):
maxx_ele= [max(i) for i in zip(*matrix)]
min_ele = [min(i) for i in matrix]
return [i for i in min_ele if i in maxx_ele]
print(f'Method 2 output is: {luckyNumbers2(matrix)}')
###Output
Method 1 output is: [12]
Method 2 output is: [12]
###Markdown
Number of Students Doing Homework at a Given TimeGiven two integer arrays startTime and endTime and given an integer queryTime.The ith student started doing their homework at the time startTime[[i]] and finished it at time endTime[[i]].Return the number of students doing their homework at time queryTime. More formally, return the number of students where queryTime lays in the interval [[startTime[[i]], endTime[[i]]]] inclusive. Example 1:Input: startTime = [[1,2,3]], endTime = [[3,2,7]], queryTime = 4 | Output: 1Explanation: We have 3 students where:The first student started doing homework at time 1 and finished at time 3 and wasn't doing anything at time 4.The second student started doing homework at time 2 and finished at time 2 and also wasn't doing anything at time 4.The third student started doing homework at time 3 and finished at time 7 and was the only student doing homework at time 4.Example 2:Input: startTime = [[4]], endTime = [[4]], queryTime = 4 | Output: 1Explanation: The only student was doing their homework at the queryTime.Example 3:Input: startTime = [[4]], endTime = [[4]], queryTime = 5 | Output: 0Example 4:Input: startTime = [[1,1,1,1]], endTime = [[1,3,2,4]], queryTime = 7 | Output: 0Example 5:Input: startTime = [[9,8,7,6,5,4,3,2,1]], endTime = [[10,10,10,10,10,10,10,10,10]], queryTime = 5 | Output: 5
###Code
# Method 1
def busyStudent(stime,etime,query):
c = 0
if not (len(stime) and len(etime)): return 0
elif len(stime) == len(etime):
for i,j in zip(etime,stime):
if abs(i-j) >= query:
c += 1
elif i==j and i==query: c = 1
return c
print(f'Method 1 output: {busyStudent([1,2,3],[3,2,7],4)}')
# Method 2
def busyStudent2(startTime, endTime, queryTime):
ans=0
for i,j in zip(startTime,endTime):
if i<=queryTime<=j:
ans += 1
return ans
print(f'Method 2 output: {busyStudent2([1,2,3],[3,2,7],4)}')
# Method 3
def busyStudent3(startTime, endTime, queryTime):
val = 0
if not(len(startTime) and len(endTime)): return 0
else:
for i,j in zip(startTime,endTime):
if i<=queryTime<=j: val +=1
return val
print(f'Method 3 output: {busyStudent3([1,2,3],[3,2,7],4)}')
###Output
Method 1 output: 1
Method 2 output: 1
Method 3 output: 1
###Markdown
Sorting the SentenceA sentence is a list of words that are separated by a single space with no leading or trailing spaces. Each word consists of lowercase and uppercase English letters.A sentence can be shuffled by appending the 1-indexed word position to each word then rearranging the words in the sentence.For example, the sentence "This is a sentence" can be shuffled as "sentence4 a3 is2 This1" or "is2 sentence4 This1 a3".Given a shuffled sentence s containing no more than 9 words, reconstruct and return the original sentence. Example 1:Input: s = "is2 sentence4 This1 a3" | Output: "This is a sentence"Explanation: Sort the words in s to their original positions "This1 is2 a3 sentence4", then remove the numbers.Example 2:Input: s = "Myself2 Me1 I4 and3" | Output: "Me Myself and I"Explanation: Sort the words in s to their original positions "Me1 Myself2 and3 I4", then remove the numbers.
###Code
s = "is2 sentence4 This1 a3"
# Method 1 (66%. of users answer)
def sortSen1(s):
dum = s.split()
ax = [word.rstrip(word[-1]) for word in dum]
num = [int(num) for num in s if num.isdigit()]
mydict = {}
for i,j in zip(ax,num):
mydict[int(j)] = i
out = [mydict[key] for key in sorted(mydict)]
return ' '.join(out)
print(f'Method 1 output is: {sortSen1(s)}')
# Method 2 (Optimal one)
def sortSen2(s):
words = s.split()
words.sort(key=lambda w: w[-1])
return ' '.join(w[:-1] for w in words)
print(f'Method 2 output is: {sortSen2(s)}')
###Output
Method 1 output is: This is a sentence
Method 2 output is: This is a sentence
###Markdown
Merge Strings AlternatelyYou are given two strings word1 and word2. Merge the strings by adding letters in alternating order, starting with word1. If a string is longer than the other, append the additional letters onto the end of the merged string.Return the merged string.Example 1:Input: word1 = "abc", word2 = "pqr" | Output: "apbqcr"Example 2:Input: word1 = "ab", word2 = "pqrs" | Output: "apbqrs"Example 3:Input: word1 = "abcd", word2 = "pq" | Output: "apbqcd"
###Code
# Method 1
def mergeAlternately1(word1,word2):
w1 = [i for i in word1]
w2 = [j for j in word2]
ac = ''.join([j for i in [[k,q] for k,q in zip(w1,w2)] for j in i])
if len(word1) > len(word2):
for i in range(len(word1)):
if word1[i] not in ac:
ac += word1[i]
elif len(word2) > len(word1):
for i in range(len(word2)):
if word2[i] not in ac:
ac += word2[i]
return ac
print(f'Method 1 output: {mergeAlternately1("aeuh","nrt")}')
# Method 2
def mergeAlternately2(word1,word2):
s = list(word2)
for idx,word in enumerate(word1):
s.insert(idx*2,word)
return "".join(s)
print(f'Method 2 output: {mergeAlternately2("aeuh","nrt")}')
# Method 3
def mergeAlternately3(word1,word2):
if len(word1) < len(word2):
out = "".join([word1[i] + word2[i] for i in range(len(word1))]) + word2[len(word1):]
else:
out = "".join([word1[i] + word2[i] for i in range(len(word2))]) + word1[len(word2):]
return out
print(f'Method 3 output: {mergeAlternately3("aeuh","nrt")}')
###Output
Method 1 output: aneruth
Method 2 output: aneruth
Method 3 output: aneruth
###Markdown
Maximum Population YearYou are given a 2D integer array logs where each logs[[i]] = [[birthi, deathi]] indicates the birth and death years of the ith person.The population of some year x is the number of people alive during that year. The ith person is counted in year x's population if x is in the inclusive range [[birthi, deathi - 1]]. Note that the person is not counted in the year that they die.Return the earliest year with the maximum population.Example 1:Input: logs = [[[1993,1999]],[[2000,2010]]] | Output: 1993Explanation: The maximum population is 1, and 1993 is the earliest year with this population.Example 2:Input: logs = [[[1950,1961]],[[1960,1971]],[[1970,1981]]] | Output: 1960Explanation: The maximum population is 2, and it had happened in years 1960 and 1970. The earlier year between them is 1960.
###Code
# Method 1 (Leet Code 45 / 52 test cases passed.)
def population1(alist):
boo = {}
for i in range(len(alist)):
for j in range(alist[i][0],alist[i][1]):
if j in boo:
boo[j] += 1
else:
boo[j] = 1
return boo
print(f'Method 1 output is: {max(population1([[1950,1961],[1960,1971],[1970,1981]]),key=population1([[1950,1961],[1960,1971],[1970,1981]]).get)}')
# Method 2 (Leet Code accuracy 50.92%.)
def population2(alist):
boo = {}
for i in range(len(alist)):
for j in range(alist[i][0],alist[i][1]):
if j in boo:
boo[j] += 1
else:
boo[j] = 1
max_value = max([count for key,count in boo.items()])
dummy_list = [key for key,value in boo.items() if value >= max_value]
return min(dummy_list)
print(f'Method 2 output is: {population2([[1950,1961],[1960,1971],[1970,1981]])}')
###Output
Method 1 output is: 1960
Method 2 output is: 1960
###Markdown
Check if Word Equals Summation of Two WordsThe letter value of a letter is its position in the alphabet starting from 0 (i.e. 'a' -> 0, 'b' -> 1, 'c' -> 2, etc.).The numerical value of some string of lowercase English letters s is the concatenation of the letter values of each letter in s, which is then converted into an integer.For example, if s = "acb", we concatenate each letter's letter value, resulting in "021". After converting it, we get 21.You are given three strings firstWord, secondWord, and targetWord, each consisting of lowercase English letters 'a' through 'j' inclusive.Return true if the summation of the numerical values of firstWord and secondWord equals the numerical value of targetWord, or false otherwise.Example 1:Input: firstWord = "acb", secondWord = "cba", targetWord = "cdb" | Output: trueExplanation:The numerical value of firstWord is "acb" -> "021" -> 21.The numerical value of secondWord is "cba" -> "210" -> 210.The numerical value of targetWord is "cdb" -> "231" -> 231.We return true because 21 + 210 == 231.Example 2:Input: firstWord = "aaa", secondWord = "a", targetWord = "aab" | Output: falseExplanation: The numerical value of firstWord is "aaa" -> "000" -> 0.The numerical value of secondWord is "a" -> "0" -> 0.The numerical value of targetWord is "aab" -> "001" -> 1.We return false because 0 + 0 != 1.Example 3:Input: firstWord = "aaa", secondWord = "a", targetWord = "aaaa" | Output: trueExplanation: The numerical value of firstWord is "aaa" -> "000" -> 0.The numerical value of secondWord is "a" -> "0" -> 0.The numerical value of targetWord is "aaaa" -> "0000" -> 0.We return true because 0 + 0 == 0.
###Code
# Method 1 (Better Complexity compared to method 2 (77%))
import string
def wordSum(word1,word2,word3):
boo = {}
v = [i for i in list(map(str,string.ascii_lowercase))[:10]]
for i,j in zip(v,[k for k in range(len(v))]):
boo[i] = j
def wordInt(word):
ac = [boo[i] for i in word if i in boo.keys()]
return int("".join([str(i) for i in ac]))
w1 = wordInt(word1)
w2 = wordInt(word2)
w3 = wordInt(word3)
if w1+w2 == w3:out = True
else:out = False
return out
print(f'Method 1 output: {wordSum("acb","cba","cdb")}')
# Method 2 (Worst Time complexity)
def wordSum2(word1,word2,word3):
def dummy(word):
return int(''.join([str(ord(i) - ord('a')) for i in word]))
    a1,a2,a3 = dummy(word1),dummy(word2),dummy(word3)
if a1+a2 == a3:
return True
return False
print(f'Method 2 output: {wordSum2("acb","cba","cdb")}')
###Output
Method 1 output: True
Method 2 output: True
###Markdown
Replace All Digits with CharactersYou are given a 0-indexed string s that has lowercase English letters in its even indices and digits in its odd indices.There is a function shift(c, x), where c is a character and x is a digit, that returns the xth character after c.For example, shift('a', 5) = 'f' and shift('x', 0) = 'x'.For every odd index i, you want to replace the digit s[[i]] with shift(s[[i-1]], s[[i]]).Return s after replacing all digits. It is guaranteed that shift(s[[i-1]], s[[i]]) will never exceed 'z'.Example 1:Input: s = "a1c1e1" | Output: "abcdef"Explanation: The digits are replaced as follows:- s[[1]] -> shift('a',1) = 'b'- s[[3]] -> shift('c',1) = 'd'- s[[5]] -> shift('e',1) = 'f'Example 2:Input: s = "a1b2c3d4e" | Output: "abbdcfdhe"Explanation: The digits are replaced as follows:- s[[1]] -> shift('a',1) = 'b'- s[[3]] -> shift('b',2) = 'd'- s[[5]] -> shift('c',3) = 'f'- s[[7]] -> shift('d',4) = 'h'
###Code
# Method 1 (Worst Complexity 28%)
import string
def replaceString1(foo):
az = sorted(string.ascii_lowercase)
res = []
for idx,val in enumerate(foo):
if val.isdigit():
prev = foo[idx-1]
res.append(prev)
res.append(az[az.index(prev)+int(val)])
if len(foo) != len(res):
res.append(foo[-1])
return "".join(res)
print(f'Method 1 output: {replaceString1("a1b2c3d4e")}')
###Output
Method 1 output: abbdcfdhe
###Markdown
Determine if String Halves Are AlikeYou are given a string s of even length. Split this string into two halves of equal lengths, and let a be the first half and b be the second half.Two strings are alike if they have the same number of vowels ('a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U'). Notice that s contains uppercase and lowercase letters.Return true if a and b are alike. Otherwise, return false.Example 1:Input: s = "book" | Output: trueExplanation: a = "bo" and b = "ok". a has 1 vowel and b has 1 vowel. Therefore, they are alike.Example 2:Input: s = "textbook" | Output: falseExplanation: a = "text" and b = "book". a has 1 vowel whereas b has 2. Therefore, they are not alike. Notice that the vowel o is counted twice.Example 3:Input: s = "MerryChristmas" | Output: falseExample 4:Input: s = "AbCdEfGh" | Output: true
###Code
# Method 1 (Worst Time Complexity (19%))
def halveString1(s):
a = s[:len(s)//2]
b = s[len(s)//2:]
vowels = ['a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U']
res,res1 = [i for i in a if i in vowels],[i for i in b if i in vowels]
if len(res) == len(res1):
return True
return False
print(f'Method 1 output is: {halveString1("MerryChristmas")}')
# Method 2 Loading......
###Output
Method 1 output is: False
###Markdown
Yet to see this Rearrange Spaces Between WordsYou are given a string text of words that are placed among some number of spaces. Each word consists of one or more lowercase English letters and are separated by at least one space. It's guaranteed that text contains at least one word.Rearrange the spaces so that there is an equal number of spaces between every pair of adjacent words and that number is maximized. If you cannot redistribute all the spaces equally, place the extra spaces at the end, meaning the returned string should be the same length as text.Return the string after rearranging the spaces.Example 1:Input: text = " this is a sentence " | Output: "this is a sentence"Explanation: There are a total of 9 spaces and 4 words. We can evenly divide the 9 spaces between the words: 9 / (4-1) = 3 spaces.Example 2:Input: text = " practice makes perfect" | Output: "practice makes perfect "Explanation: There are a total of 7 spaces and 3 words. 7 / (3-1) = 3 spaces plus 1 extra space. We place this extra space at the end of the string.Example 3:Input: text = "hello world" | Output: "hello world"Example 4:Input: text = " walks udp package into bar a" | Output: "walks udp package into bar a "Example 5:Input: text = "a" | Output: "a"
###Code
# Method 1
def spaceRearrange1(text):
    words = text.split()
    space_count = text.count(' ')
    if len(words) == 1:
        # A single word keeps every space at the end
        return words[0] + ' ' * space_count
    gap, extra = divmod(space_count, len(words) - 1)
    return (' ' * gap).join(words) + ' ' * extra
print(f'Method 1 output is: {spaceRearrange1("hello world")}')
###Output
Method 1 output is: hello world
###Markdown
Yet to see this Determine Whether Matrix Can Be Obtained By Rotation Given two n x n binary matrices mat and target, return true if it is possible to make mat equal to target by rotating mat in 90-degree increments, or false otherwise.Example 1:Input: mat = [[[0,1]],[[1,0]]], target = [[[1,0]],[[0,1]]] | Output: trueExplanation: We can rotate mat 90 degrees clockwise to make mat equal target.Example 2:Input: mat = [[[0,1]],[[1,1]]], target = [[[1,0]],[[0,1]]] | Output: falseExplanation: It is impossible to make mat equal to target by rotating mat.Example 3:Input: mat = [[[0,0,0]],[[0,1,0]],[[1,1,1]]], target = [[[1,1,1]],[[0,1,0]],[[0,0,0]]] | Output: trueExplanation: We can rotate mat 90 degrees clockwise two times to make mat equal target.
###Code
# Method 1
import numpy as np
def matRotate(val,target):
    # Try all four 90-degree rotations of val
    return any(np.rot90(val, k).tolist() == target for k in range(4))
print(f'Method 1 output: {matRotate([[0,0],[0,1]],[[1,1,1],[0,0],[0,1]])}')
###Output
Method 1 output: False
###Markdown
yet to see Ransom NoteGiven two strings ransomNote and magazine, return true if ransomNote can be constructed from magazine and false otherwise.Each letter in magazine can only be used once in ransomNote.Example 1:Input: ransomNote = "a", magazine = "b" | Output: falseExample 2:Input: ransomNote = "aa", magazine = "ab" | Output: falseExample 3:Input: ransomNote = "aa", magazine = "aab" | Output: true
###Code
ransome,magazine,match = 'aa','aab',0
for i in ransome:
if i not in magazine:
print(False)
else:
magazine=magazine.replace(i, '1', 1)
###Output
_____no_output_____
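###Markdown
The loop above flags missing letters but never reports a final answer; below is a minimal sketch of a complete check based on collections.Counter. The helper name canConstruct is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): count letters in the magazine
# and verify the ransom note never needs more of a letter than is available.
from collections import Counter
def canConstruct(ransomNote, magazine):
    available = Counter(magazine)
    needed = Counter(ransomNote)
    return all(available[ch] >= cnt for ch, cnt in needed.items())
print(f'Sketch output: {[canConstruct("a","b"), canConstruct("aa","ab"), canConstruct("aa","aab")]}')
###Output
Sketch output: [False, False, True]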
###Markdown
First Unique Character in a StringGiven a string s, return the first non-repeating character in it and return its index. If it does not exist, return -1.Example 1:Input: s = "leetcode" | Output: 0Example 2:Input: s = "loveleetcode" | Output: 2Example 3:Input: s = "aabb" | Output: -1
###Code
# Method 1 (Worst Time complexity)
def uniqueStr1(s):
if len(list(set(s))) <= 2:return -1
if len(s) == 1: return 0
return min([s.index(i) for i in s if s.count(i) <= 1])
print(f"Method 1 output is: {uniqueStr1('cc')}")
# Method 2
import collections
def uniqueStr2(s):
dictVal = collections.Counter(s)
for index,value in enumerate(dictVal):
if dictVal[value] == 1:
return index
return -1
print(f"Method 2 output is: {uniqueStr2('cc')}")
# Method 3 (Worst Time Complexity)
def uniqueStr3(s):
for idx,val in enumerate(s):
if s.count(val) == 1:
return idx
return -1
print(f"Method 3 output is: {uniqueStr3('cc')}")
###Output
Method 1 output is: -1
Method 2 output is: -1
Method 3 output is: -1
###Markdown
Find the DifferenceYou are given two strings s and t.String t is generated by random shuffling string s and then add one more letter at a random position.Return the letter that was added to t.Example 1:Input: s = "abcd", t = "abcde" Output: "e"Explanation: 'e' is the letter that was added.Example 2:Input: s = "", t = "y" | Output: "y"Example 3:Input: s = "a", t = "aa" | Output: "a"Example 4:Input: s = "ae", t = "aea" | Output: "a"
###Code
# Method 1
def findDiff1(s,t):
return list(set([i for i in t if t.count(i) - s.count(i) == 1]))[0]
print(f'Method 1 output is {findDiff1("ae","aea")}')
###Output
Method 1 output is a
###Markdown
Path CrossingGiven a string path, where path[i] = 'N', 'S', 'E' or 'W', each representing moving one unit north, south, east, or west, respectively. You start at the origin (0, 0) on a 2D plane and walk on the path specified by path.Return true if the path crosses itself at any point, that is, if at any time you are on a location you have previously visited. Return false otherwise.Example 1:Input: path = "NES" | Output: false Explanation: Notice that the path doesn't cross any point more than once.Example 2:Input: path = "NESWW" | Output: trueExplanation: Notice that the path visits the origin twice.
###Code
# Method 1 (Leet Code 59 / 80 test cases passed.)
def pathFind1(path):
setVal = collections.Counter(path)
for key,val in setVal.items():
if val >= 2:
return True
return False
pathFind1('NESWW')
###Output
_____no_output_____
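###Markdown
Method 1 above counts repeated direction letters rather than revisited coordinates, which is why some cases fail; below is a minimal sketch that tracks visited positions instead. The helper name pathCross is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): walk the path and record every
# visited point; the path crosses itself as soon as a point is seen twice.
def pathCross(path):
    x, y = 0, 0
    visited = {(0, 0)}
    moves = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}
    for step in path:
        dx, dy = moves[step]
        x, y = x + dx, y + dy
        if (x, y) in visited:
            return True
        visited.add((x, y))
    return False
print(f'Sketch output: {[pathCross("NES"), pathCross("NESWW")]}')
###Output
Sketch output: [False, True]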
###Markdown
Counting BitsGiven an integer n, return an array ans of length n + 1 such that for each i (0 <= i <= n), ans[i] is the number of 1's in the binary representation of i.Example 1:Input: n = 2 | Output: [[0,1,1]]Explanation:0 --> 0 | 1 --> 1 | 2 --> 10Example 2:Input: n = 5 | Output: [[0,1,1,2,1,2]]Explanation:0 --> 0 | 1 --> 1 | 2 --> 10 | 3 --> 11 | 4 --> 100 | 5 --> 101
###Code
# Method 1 (Okish answer)
def bitCount1(val):
return [i.count('1') for i in list(bin(i)[2:] for i in range(val+1))]
print(f'Method 1 output: {bitCount1(2)}')
###Output
Method 1 output: [0, 1, 1]
###Markdown
Fizz BuzzGiven an integer n, return a string array answer (1-indexed) where: - answer[[i]] == "FizzBuzz" if i is divisible by 3 and 5. - answer[[i]] == "Fizz" if i is divisible by 3. - answer[[i]] == "Buzz" if i is divisible by 5. - answer[[i]] == i if non of the above conditions are true.Example 1:Input: n = 3 | Output: [["1","2","Fizz"]]Example 2:Input: n = 5 | Output: [["1","2","Fizz","4","Buzz"]]Example 3:Input: n = 15 | Output: [["1","2","Fizz","4","Buzz","Fizz","7","8","Fizz","Buzz","11","Fizz","13","14","FizzBuzz"]]
###Code
# Method 1 (Okish answer)
def fizzbuzz1(n):
op = [str(i) for i in range(1,n+1)]
for i in op:
if int(i) % 3 == 0 and int(i) % 5 == 0:
idx = op.index(i)
op[idx] = 'Fizzbuzz'
# break
elif int(i) % 3 == 0:
idx = op.index(i)
op[idx] = 'Fizz'
elif int(i) % 5 == 0:
idx = op.index(i)
op[idx] = 'Buzz'
return op
print(f'Method 1 output is: {fizzbuzz1(100)}')
###Output
Method 1 output is: ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', '', '16', '17', 'Fizz', '19', 'Buzz', 'Fizz', '22', '23', 'Fizz', 'Buzz', '26', 'Fizz', '28', '29', 'Fizzbuzz', '31', '32', 'Fizz', '34', 'Buzz', 'Fizz', '37', '38', 'Fizz', 'Buzz', '41', 'Fizz', '43', '44', 'Fizzbuzz', '46', '47', 'Fizz', '49', 'Buzz', 'Fizz', '52', '53', 'Fizz', 'Buzz', '56', 'Fizz', '58', '59', 'Fizzbuzz', '61', '62', 'Fizz', '64', 'Buzz', 'Fizz', '67', '68', 'Fizz', 'Buzz', '71', 'Fizz', '73', '74', 'Fizzbuzz', '76', '77', 'Fizz', '79', 'Buzz', 'Fizz', '82', '83', 'Fizz', 'Buzz', '86', 'Fizz', '88', '89', 'Fizzbuzz', '91', '92', 'Fizz', '94', 'Buzz', 'Fizz', '97', '98', 'Fizz', 'Buzz']
###Markdown
Hamming DistanceThe Hamming distance between two integers is the number of positions at which the corresponding bits are different.Given two integers x and y, return the Hamming distance between them.Example 1:Input: x = 1, y = 4 | Output: 2Explanation:1 (0 0 0 1)4 (0 1 0 0) ↑ ↑The above arrows point to positions where the corresponding bits are different.Example 2:Input: x = 3, y = 1 | Output: 1
###Code
# Method 1 (yet to check for better solution)
def hamming(x,y):
    get_bin = lambda val: format(val, 'b')
    num1,num2 = get_bin(x),get_bin(y)
    # Pad the shorter binary string so that the bit positions line up
    width = max(len(num1),len(num2))
    num1,num2 = num1.zfill(width),num2.zfill(width)
    distance = 0
    for i in range(width):
        if num1[i] != num2[i]:
            distance += 1
    return distance
print(f'Method 1 output : {hamming(3,4)}')
###Output
Method 1 output : 3
###Markdown
Maximum Product of Word LengthsGiven a string array words, return the maximum value of length(word[[i]]) * length(word[[j]]) where the two words do not share common letters. If no such two words exist, return 0.Example 1:Input: words = [["abcw","baz","foo","bar","xtfn","abcdef"]] | Output: 16Explanation: The two words can be "abcw", "xtfn".Example 2:Input: words = [["a","ab","abc","d","cd","bcd","abcd"]] | Output: 4Explanation: The two words can be "ab", "cd".Example 3:Input: words = [["a","aa","aaa","aaaa"]] | Output: 0Explanation: No such pair of words.
###Code
# Method 1
def maxProdWord(wordsList):
alist = []
for i in range(len(wordsList)):
for j in range(i,len(wordsList)):
s1 = set(wordsList[i])
s2 = set(wordsList[j])
x = s1.intersection(s2)
xx = list(x)
if len(xx) == 0:
alist.append(len(wordsList[i])*len(wordsList[j]))
else:
continue
if len(alist) == 0: return 0
return max(alist)
print(f'Method 1 output is: {maxProdWord(["a","ab","abc","d","cd","bcd","abcd"])}')
###Output
Method 1 output is: 4
###Markdown
Longer Contiguous Segments of Ones than ZerosGiven a binary string s, return true if the longest contiguous segment of 1s is strictly longer than the longest contiguous segment of 0s in s. Return false otherwise.For example, in s = "110100010" the longest contiguous segment of 1s has length 2, and the longest contiguous segment of 0s has length 3.Note that if there are no 0s, then the longest contiguous segment of 0s is considered to have length 0. The same applies if there are no 1s.Example 1:Input: s = "1101" | Output: trueExplanation:The longest contiguous segment of 1s has length 2: "1101"The longest contiguous segment of 0s has length 1: "1101"The segment of 1s is longer, so return true.Example 2:Input: s = "111000" | Output: falseExplanation:The longest contiguous segment of 1s has length 3: "111000"The longest contiguous segment of 0s has length 3: "111000"The segment of 1s is not longer, so return false.Example 3:Input: s = "110100010" | Output: falseExplanation:The longest contiguous segment of 1s has length 2: "110100010"The longest contiguous segment of 0s has length 3: "110100010"The segment of 1s is not longer, so return false.
###Code
# Method 1 (Leet Code 118 / 141 test cases passed.)
def checkZeroOnes1(s):
if s.count('1') > s.count('0'):
return True
return False
print(f'Method 1 output: {checkZeroOnes1("110100010")}')
# Method 2
from re import findall
def checkZeroOnes2(s):
one = len(max(findall("11*",s)))
zero = len(max(findall("00*",s)))
return one > zero
print(f'Method 2 output: {checkZeroOnes2("11110000")}')
###Output
Method 1 output: False
Method 2 output: False
###Markdown
Yet to see this Word PatternGiven a pattern and a string s, find if s follows the same pattern.Here follow means a full match, such that there is a bijection between a letter in pattern and a non-empty word in s.Example 1:Input: pattern = "abba", s = "dog cat cat dog" | Output: trueExample 2:Input: pattern = "abba", s = "dog cat cat fish" | Output: falseExample 3:Input: pattern = "aaaa", s = "dog cat cat dog" | Output: falseExample 4:Input: pattern = "abba", s = "dog dog dog dog" | Output: false Sort ColorsGiven an array nums with n objects colored red, white, or blue, sort them in-place so that objects of the same color are adjacent, with the colors in the order red, white, and blue.We will use the integers 0, 1, and 2 to represent the color red, white, and blue, respectively.You must solve this problem without using the library's sort function.Example 1:Input: nums = [[2,0,2,1,1,0]] | Output: [[0,0,1,1,2,2]]Example 2:Input: nums = [[2,0,1]] | Output: [[0,1,2]]Example 3:Input: nums = [[0]] | Output: [0]Example 4:Input: nums = [[1]] |Output: [1]
###Code
# Method 1 (Inplace order using bubble sort)
def sortColor1(nums):
for i in range(len(nums)):
for j in range(i,len(nums)):
if nums[i] > nums[j]:
nums[i],nums[j] = nums[j],nums[i]
return nums
print(f'Method 1 output: {sortColor1([2,0,1])}')
# Method 2
def sortColor2(nums):
nums.sort()
return nums
print(f'Method 2 output: {sortColor2([2,0,1])}')
###Output
Method 1 output: [0, 1, 2]
Method 2 output: [0, 1, 2]
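###Markdown
The Word Pattern problem described above has no solution cell in this notebook; below is a minimal sketch based on comparing the number of distinct letters, distinct words and distinct letter-word pairs. The helper name wordPattern is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): the pattern and the words must
# map to each other one-to-one, so the number of distinct pairs must match both sides.
def wordPattern(pattern, s):
    words = s.split()
    if len(pattern) != len(words):
        return False
    return len(set(pattern)) == len(set(words)) == len(set(zip(pattern, words)))
print(f'Sketch output: {[wordPattern("abba","dog cat cat dog"), wordPattern("abba","dog cat cat fish"), wordPattern("aaaa","dog cat cat dog"), wordPattern("abba","dog dog dog dog")]}')
###Output
Sketch output: [True, False, False, False]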
###Markdown
Yet to see this Repeated Substring PatternGiven a string s, check if it can be constructed by taking a substring of it and appending multiple copies of the substring together.Example 1:Input: s = "abab" | Output: trueExplanation: It is the substring "ab" twice.Example 2:Input: s = "aba" | Output: falseExample 3:Input: s = "abcabcabcabc" |Output: trueExplanation: It is the substring "abc" four times or the substring "abcabc" twice.
###Code
from collections import Counter
def count_substring(string):
stringDict = Counter([string[i:i+1] for i in range(len(string)-1)])
# if max(stringDict.values()) > 1: return True
return stringDict
count_substring('abcabcabcabc')
###Output
_____no_output_____
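###Markdown
The Counter scratch above only tallies single characters; below is a minimal sketch using the doubled-string trick. The helper name repeatedSubstring is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): s is built from a repeated
# substring exactly when s appears inside (s + s) with the first and last chars removed.
def repeatedSubstring(s):
    return s in (s + s)[1:-1]
print(f'Sketch output: {[repeatedSubstring("abab"), repeatedSubstring("aba"), repeatedSubstring("abcabcabcabc")]}')
###Output
Sketch output: [True, False, True]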
###Markdown
Maximum Product Difference Between Two PairsThe product difference between two pairs (a, b) and (c, d) is defined as (a * b) - (c * d).For example, the product difference between (5, 6) and (2, 7) is (5 * 6) - (2 * 7) = 16.Given an integer array nums, choose four distinct indices w, x, y, and z such that the product difference between pairs (nums[w], nums[x]) and (nums[y], nums[z]) is maximized.Return the maximum such product difference.Example 1:Input: nums = [5,6,2,7,4] | Output: 34Explanation: We can choose indices 1 and 3 for the first pair (6, 7) and indices 2 and 4 for the second pair (2, 4). The product difference is (6 * 7) - (2 * 4) = 34.Example 2:Input: nums = [4,2,5,9,7,4,8] | Output: 64Explanation: We can choose indices 3 and 6 for the first pair (9, 8) and indices 1 and 5 for the second pair (2, 4). The product difference is (9 * 8) - (2 * 4) = 64.
###Code
# Choose max two numbers and least two numbers
# Multiply max two numbers and least two numbers
# Subtract product of max numbers and least numbers
# Method 1 (Best Time Complexity but worst space complexity)
def maxProdBtwn1(nums):
nums.sort()
max1,max2,least1,least2 = nums[-1],nums[-2],nums[0],nums[1]
maxProd = max1*max2
leastProd = least1*least2
return maxProd - leastProd
print(f'Method 1 output is: {maxProdBtwn1([5,6,2,7,4])}')
# Method 2 (Better time complexity than first one)
def maxProdBtwn2(nums):
nums.sort()
return (nums[-1]*nums[-2]) - (nums[0]*nums[1])
print(f'Method 2 output is: {maxProdBtwn2([5,6,2,7,4])}')
###Output
Method 1 output is: 34
Method 2 output is: 34
###Markdown
Yet to see this Remove One Element to Make the Array Strictly IncreasingGiven a 0-indexed integer array nums, return true if it can be made strictly increasing after removing exactly one element, or false otherwise. If the array is already strictly increasing, return true.The array nums is strictly increasing if nums[i - 1] < nums[i] for each index (1 <= i < nums.length).Example 1:Input: nums = [1,2,10,5,7] | Output: trueExplanation: By removing 10 at index 2 from nums, it becomes [1,2,5,7]. [1,2,5,7] is strictly increasing, so return true.Example 2:Input: nums = [2,3,1,2] | Output: falseExplanation:[3,1,2] is the result of removing the element at index 0.[2,1,2] is the result of removing the element at index 1.[2,3,2] is the result of removing the element at index 2.[2,3,1] is the result of removing the element at index 3.No resulting array is strictly increasing, so return false.Example 3:Input: nums = [1,1,1] | Output: falseExplanation: The result of removing any element is [1,1]. [1,1] is not strictly increasing, so return false.Example 4:Input: nums = [1,2,3] | Output: trueExplanation: [1,2,3] is already strictly increasing, so return true.
###Code
# nums = [2,3,1,2]
# def boo(nums):
# out = False
# for i in range(0,len(nums)):
# if
# return out
# boo(nums)
###Output
_____no_output_____
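###Markdown
The attempt above is still commented out; below is a minimal sketch that simply tries every single removal (quadratic, but fine for small inputs). The helper name canBeIncreasing is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): remove each index in turn and
# check whether the remaining array is strictly increasing.
def canBeIncreasing(nums):
    def strictly_increasing(arr):
        return all(arr[i] < arr[i + 1] for i in range(len(arr) - 1))
    return any(strictly_increasing(nums[:i] + nums[i + 1:]) for i in range(len(nums)))
print(f'Sketch output: {[canBeIncreasing([1,2,10,5,7]), canBeIncreasing([2,3,1,2]), canBeIncreasing([1,1,1]), canBeIncreasing([1,2,3])]}')
###Output
Sketch output: [True, False, False, True]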
###Markdown
Detect CapitalWe define the usage of capitals in a word to be right when one of the following cases holds:- All letters in this word are capitals, like "USA".- All letters in this word are not capitals, like "leetcode".- Only the first letter in this word is capital, like "Google".- Given a string word, return true if the usage of capitals in it is right. Example 1:Input: word = "USA" | Output: trueExample 2:Input: word = "FlaG" | Output: false
###Code
# method 1
import re
def capital1(word):
return re.fullmatch(r"[A-Z]*|.[a-z]*",word)
print(f'Method 1 output is: {capital1("FlaG")}')
# Method 2
def capital2(word):
    # Valid usages: all upper, all lower, or first letter upper with the rest lower
    if word.isupper() or word.islower():
        return True
    return word[0].isupper() and word[1:].islower()
print(f'Method 2 output is: {capital2("anerutH")}')
###Output
Method 1 output is: None
Method 2 output is: False
###Markdown
Yet to see this Find K Closest ElementsGiven a sorted integer array arr, two integers k and x, return the k closest integers to x in the array. The result should also be sorted in ascending order.An integer a is closer to x than an integer b if:- |a - x| < |b - x|, or- |a - x| == |b - x| and a < bExample 1:Input: arr = [1,2,3,4,5], k = 4, x = 3 | Output: [1,2,3,4]Example 2:Input: arr = [1,2,3,4,5], k = 4, x = -1 |Output: [1,2,3,4]Constraints:- 1 <= k <= arr.length- 1 <= arr.length <= 104- arr is sorted in ascending order.- $-10^{4}$ <= arr[i], x <= $10^{4}$
###Code
arr,k,x = [1,2,3,4,5],4,3
output = []
for a in range(len(arr)):
for b in range(1,len(arr)) or abs(arr[a]-x) == abs(arr[b]-x) and arr[a] < arr[b]:
if abs(arr[a]-x) < abs(arr[b]-x):
output.append(arr[a])
output.extend([arr[b]])
list(set(output))
###Output
_____no_output_____
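###Markdown
The nested-loop scratch above does not yet apply the tie-breaking rule; below is a minimal sketch that sorts by distance with the value itself as tie-breaker. The helper name kClosest is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): sort by |a - x| with the value
# as a tie-breaker, keep the first k elements, then return them in ascending order.
def kClosest(arr, k, x):
    closest = sorted(arr, key=lambda a: (abs(a - x), a))[:k]
    return sorted(closest)
print(f'Sketch output: {[kClosest([1,2,3,4,5], 4, 3), kClosest([1,2,3,4,5], 4, -1)]}')
###Output
Sketch output: [[1, 2, 3, 4], [1, 2, 3, 4]]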
###Markdown
Yet to see this Find the Distance Value Between Two ArraysGiven two integer arrays arr1 and arr2, and the integer d, return the distance value between the two arrays.The distance value is defined as the number of elements arr1[i] such that there is not any element arr2[j] where |arr1[i]-arr2[j]| <= d.Example 1:Input: arr1 = [4,5,8], arr2 = [10,9,1,8], d = 2 . | Output: 2Explanation: For arr1[0]=4 we have: |4-10|=6 > d=2 |4-9|=5 > d=2 |4-1|=3 > d=2 |4-8|=4 > d=2 For arr1[1]=5 we have: |5-10|=5 > d=2 |5-9|=4 > d=2 |5-1|=4 > d=2 |5-8|=3 > d=2For arr1[2]=8 we have:|8-10|=2 <= d=2|8-9|=1 <= d=2|8-1|=7 > d=2|8-8|=0 <= d=2Example 2:Input: arr1 = [1,4,2,3], arr2 = [-4,-3,6,10,20,30], d = 3 | Output: 2Example 3:Input: arr1 = [2,1,100,3], arr2 = [-5,-2,10,-3,7], d = 6 | Output: 1
###Code
def attempt1(arr1,arr2,d):
return [abs(i-j) for i in arr1 for j in arr2 if abs(i-j)<= d]
attempt1([4,5,8],[10,9,1,8],2)
###Output
_____no_output_____
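###Markdown
attempt1 above collects the close pairs instead of counting the arr1 elements with no close partner; below is a minimal sketch of the distance value itself. The helper name distanceValue is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): count arr1 elements whose
# distance to every arr2 element is strictly greater than d.
def distanceValue(arr1, arr2, d):
    return sum(all(abs(a - b) > d for b in arr2) for a in arr1)
print(f'Sketch output: {[distanceValue([4,5,8],[10,9,1,8],2), distanceValue([1,4,2,3],[-4,-3,6,10,20,30],3), distanceValue([2,1,100,3],[-5,-2,10,-3,7],6)]}')
###Output
Sketch output: [2, 2, 1]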
###Markdown
Degree of an ArrayGiven a non-empty array of non-negative integers nums, the degree of this array is defined as the maximum frequency of any one of its elements.Your task is to find the smallest possible length of a (contiguous) subarray of nums, that has the same degree as nums.Example 1:Input: nums = [1,2,2,3,1] | Output: 2Explanation: The input array has a degree of 2 because both elements 1 and 2 appear twice.Of the subarrays that have the same degree: [1, 2, 2, 3, 1], [1, 2, 2, 3], [2, 2, 3, 1], [1, 2, 2], [2, 2, 3], [2, 2]. The shortest length is 2. So return 2.Example 2:Input: nums = [1,2,2,3,1,4,2] | Output: 6Explanation: The degree is 3 because the element 2 is repeated 3 times. So [2,2,3,1,4,2] is the shortest subarray, therefore returning 6.
###Code
# Not a bad try (yet to see the logic)
from collections import Counter
n = [1,2,2,3,1]
dic = Counter(n)
v = 1
for key,val in dic.items():
if val >= 2:
out = key*v
print(out)
###Output
2
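###Markdown
The Counter scratch above stops once the degree is known; below is a minimal sketch that also finds the shortest subarray with that degree by tracking the first occurrence of each value. The helper name findShortestSubArray is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): for every value record its count
# and first index; the answer is the shortest span among the most frequent values.
def findShortestSubArray(nums):
    first, count = {}, {}
    degree, best_len = 0, 0
    for i, v in enumerate(nums):
        first.setdefault(v, i)
        count[v] = count.get(v, 0) + 1
        span = i - first[v] + 1
        if count[v] > degree or (count[v] == degree and span < best_len):
            degree = count[v]
            best_len = span
    return best_len
print(f'Sketch output: {[findShortestSubArray([1,2,2,3,1]), findShortestSubArray([1,2,2,3,1,4,2])]}')
###Output
Sketch output: [2, 6]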
###Markdown
Yet to see this Daily TemperaturesGiven an array of integers temperatures represents the daily temperatures, return an array answer such that answer[i] is the number of days you have to wait after the ith day to get a warmer temperature. If there is no future day for which this is possible, keep answer[i] == 0 instead.Example 1:Input: temperatures = [73,74,75,71,69,72,76,73] | Output: [1,1,4,2,1,1,0,0]Example 2:Input: temperatures = [30,40,50,60] | Output: [1,1,1,0]Example 3:Input: temperatures = [30,60,90] | Output: [1,1,0]
###Code
temp = [73,74,75,71,69,72,76,73]
out = []
t = 0
for i in range(len(temp)-1):
if temp[i] < temp[i+1]:
t = temp[i]
out.append(temp.index(temp[i]) - temp.index(t))
###Output
_____no_output_____
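###Markdown
The scratch above only compares neighbouring days; below is a minimal sketch of the usual monotonic-stack approach. The helper name dailyTemperatures is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): keep a stack of indices whose
# warmer day has not been found yet; pop them whenever a warmer temperature arrives.
def dailyTemperatures(temperatures):
    answer = [0] * len(temperatures)
    stack = []  # indices still waiting for a warmer day
    for i, t in enumerate(temperatures):
        while stack and temperatures[stack[-1]] < t:
            j = stack.pop()
            answer[j] = i - j
        stack.append(i)
    return answer
print(f'Sketch output: {dailyTemperatures([73,74,75,71,69,72,76,73])}')
###Output
Sketch output: [1, 1, 4, 2, 1, 1, 0, 0]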
###Markdown
Delete Columns to Make SortedYou are given an array of n strings strs, all of the same length.The strings can be arranged such that there is one on each line, making a grid. For example, strs = ["abc", "bce", "cae"] can be arranged as:abcbcecaeYou want to delete the columns that are not sorted lexicographically. In the above example (0-indexed), columns 0 ('a', 'b', 'c') and 2 ('c', 'e', 'e') are sorted while column 1 ('b', 'c', 'a') is not, so you would delete column 1.Return the number of columns that you will delete.Example 1:Input: strs = ["cba","daf","ghi"] | Output: 1Explanation: The grid looks as follows: cba daf ghiColumns 0 and 2 are sorted, but column 1 is not, so you only need to delete 1 column.Example 2:Input: strs = ["a","b"] | Output: 0Explanation: The grid looks as follows: a bColumn 0 is the only column and is sorted, so you will not delete any columns.Example 3:Input: strs = ["zyx","wvu","tsr"] | Output: 3Explanation: The grid looks as follows: zyx wvu tsrAll 3 columns are not sorted, so you will delete all 3.
###Code
# Method 1
def delCol1(strs):
result = 0
for i in range(len(strs[0])):
temp = [x[i] for x in strs]
result += 0 if temp == sorted(temp) else 1
return result
for i in [["cba","daf","ghi"], ["a","b"],["zyx","wvu","tsr"]]:
print(f'Method 1 output is: {delCol1(i)}')
print()
# Method 2
def delCol2(strs):
return sum(list(i) != sorted(i) for i in zip(*strs))
for i in [["cba","daf","ghi"], ["a","b"],["zyx","wvu","tsr"]]:
print(f'Method 2 output is: {delCol2(i)}')
###Output
Method 1 output is: 1
Method 1 output is: 0
Method 1 output is: 3
Method 2 output is: 1
Method 2 output is: 0
Method 2 output is: 3
###Markdown
Count Binary SubstringsGive a binary string s, return the number of non-empty substrings that have the same number of 0's and 1's, and all the 0's and all the 1's in these substrings are grouped consecutively.Substrings that occur multiple times are counted the number of times they occur.Example 1:Input: s = "00110011" | Output: 6Explanation: There are 6 substrings that have equal number of consecutive 1's and 0's: "0011", "01", "1100", "10", "0011", and "01".Notice that some of these substrings repeat and are counted the number of times they occur. Also, "00110011" is not a valid substring because all the 0's (and 1's) are not grouped together.Example 2:Input: s = "10101" | Output: 4Explanation: There are 4 substrings: "10", "01", "10", "01" that have equal number of consecutive 1's and 0's.
###Code
def countBinarySubstrings(s):
    # Lengths of the consecutive runs of equal characters, e.g. "00110011" -> [2,2,2,2]
    L = list(map(len, s.replace('01', '0 1').replace('10', '1 0').split(' ')))
    # Each pair of adjacent runs contributes min(run_a, run_b) valid substrings
    res = sum(min(a,b) for a,b in zip(L, L[1:]) )
    return res
print(f'{countBinarySubstrings("00110011")}')
# Yet to check this logic
###Output
_____no_output_____
###Markdown
Custom Sort Stringorder and str are strings composed of lowercase letters. In order, no letter occurs more than once.order was sorted in some custom order previously. We want to permute the characters of str so that they match the order that order was sorted. More specifically, if x occurs before y in order, then x should occur before y in the returned string.Return any permutation of str (as a string) that satisfies this property.Example:Input: order = "cba" | str = "abcd"Output: "cbad"Explanation: "a", "b", "c" appear in order, so the order of "a", "b", "c" should be "c", "b", and "a". Since "d" does not appear in order, it can be at any position in the returned string. "dcba", "cdba", "cbda" are also valid outputs.
###Code
# Method 1 (69/162 values matches)
def customSort(s1,s2):
alist = list(map(str,s1))
for i,j in enumerate(s2):
if j not in alist: alist.insert(i,j)
return ''.join(alist)
customSort('cba','abcd')
###Output
_____no_output_____
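###Markdown
Method 1 above inserts missing letters by position and breaks down when characters repeat; below is a minimal sketch that sorts the string by each character's rank in order. The helper name customSortString is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): sort the characters of the string
# by their position in `order`; characters not in `order` get a neutral rank at the end.
def customSortString(order, s):
    rank = {ch: i for i, ch in enumerate(order)}
    return ''.join(sorted(s, key=lambda ch: rank.get(ch, len(order))))
print(f'Sketch output: {customSortString("cba","abcd")}')
###Output
Sketch output: cbad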
###Markdown
Decorators 2 - Name DirectoryLet's use decorators to build a name directory! You are given some information about people. Each person has a first name, last name, age and sex. Print their names in a specific format sorted by their age in ascending order i.e. the youngest person's name should be printed first. For two people of the same age, print them in the order of their input.For Henry Davids, the output should be:Mr. Henry DavidsFor Mary George, the output should be:Ms. Mary George
###Code
# Method 1
def nameDir(alist):
    temp = [i.split(' ') for i in alist]
    temp.sort(key=lambda person: int(person[2]))  # youngest first; sort is stable for equal ages
    return [f'Mr. {temp[i][0]} {temp[i][1]}' if temp[i][-1] == 'M' else f'Ms. {temp[i][0]} {temp[i][1]}' for i in range(len(temp))]
nameDir(['Mike Thomson 20 M','Aneruth Mohanasundaram 24 M','Andria Bustle 30 F'])
###Output
_____no_output_____
###Markdown
Valid Triangle NumberGiven an integer array nums, return the number of triplets chosen from the array that can make triangles if we take them as side lengths of a triangle.Example 1: Input: nums = [2,2,3,4] | Output: 3Explanation: Valid combinations are: 2,3,4 (using the first 2)2,3,4 (using the second 2)2,2,3Example 2:Input: nums = [4,2,3,4] | Output: 4
###Code
# Method 1
# Approach: Sum of two sides must be greater than third one
def validTriangle1(nums):
counter = 0
for i in range(len(nums)-2):
for j in range(i+1,len(nums)-1):
for k in range(j+1,len(nums)):
if nums[i] + nums[j] > nums[k] and nums[i] + nums[k] > nums[j] and nums[k] + nums[j] > nums[i]:
counter += 1
return counter
print(f'Method 1 output is: {validTriangle1([2,2,3,4])}')
###Output
Method 1 output is: 3
###Markdown
Find the Duplicate NumberGiven an array of integers nums containing n + 1 integers where each integer is in the range [1, n] inclusive.There is only one repeated number in nums, return this repeated number.You must solve the problem without modifying the array nums and uses only constant extra space.Example 1:Input: nums = [1,3,4,2,2] | Output: 2Example 2:Input: nums = [3,1,3,4,2] | Output: 3Example 3:Input: nums = [1,1] | Output: 1Example 4:Input: nums = [1,1,2] | Output: 1
###Code
# Method 1 (Not ok for mlist value greater than 100000000)
def findDuplicate(nums):
return list(set(list(filter(lambda x: nums.count(x) > 1,nums))))[0]
print(f'Method 1 output is: {findDuplicate([1,3,4,2,3,3])}')
# Method 2
from collections import Counter
def findDuplicate1(nums):
# numDict = Counter(nums)
return [item for item, count in Counter(nums).items() if count > 1][0]
print(f'Method 2 output is: {findDuplicate1([1,3,4,2,3,3])}')
###Output
Method 1 output is: 3
Method 2 output is: 3
###Markdown
Largest Odd Number in StringYou are given a string num, representing a large integer. Return the largest-valued odd integer (as a string) that is a non-empty substring of num, or an empty string "" if no odd integer exists.A substring is a contiguous sequence of characters within a string.Example 1:Input: num = "52" | Output: "5"Explanation: The only non-empty substrings are "5", "2", and "52". "5" is the only odd number.Example 2:Input: num = "4206" | Output: ""Explanation: There are no odd numbers in "4206".Example 3:Input: num = "35427" | Output: "35427"Explanation: "35427" is already an odd number.
###Code
num = '52'
for i in range(len(num)-1,-1,-1):
if i in ['1','3','5','7','9']:
print(num[:i+1])
else:
print('')
###Output
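###Markdown
The loop above compares the index i instead of the digit num[i], so it never finds an odd digit; below is a minimal sketch of the intended right-to-left scan. The helper name largestOddNumber is hypothetical.
###Code
# Hypothetical sketch (not an original notebook cell): scan from the right-most digit;
# the first odd digit found gives the longest (and therefore largest) odd prefix.
def largestOddNumber(num):
    for i in range(len(num) - 1, -1, -1):
        if num[i] in '13579':
            return num[:i + 1]
    return ''
print(f'Sketch output: {[largestOddNumber("52"), largestOddNumber("4206"), largestOddNumber("35427")]}')
###Output
Sketch output: ['5', '', '35427']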
###Markdown
Check if All the Integers in a Range Are CoveredYou are given a 2D integer array ranges and two integers left and right. Each ranges[i] = [starti, endi] represents an inclusive interval between starti and endi.Return true if each integer in the inclusive range [left, right] is covered by at least one interval in ranges. Return false otherwise.An integer x is covered by an interval ranges[i] = [starti, endi] if starti <= x <= endi.Example 1:Input: ranges = [[1,2],[3,4],[5,6]], left = 2, right = 5 | Output: trueExplanation: Every integer between 2 and 5 is covered: - 2 is covered by the first range. - 3 and 4 are covered by the second range. - 5 is covered by the third range.Example 2:Input: ranges = [[1,10],[10,20]], left = 21, right = 21 | Output: falseExplanation: 21 is not covered by any range.
###Code
# Method 1 (Okish Answer)
def checkRange(ranges,a,b):
val = [j for i in ranges for j in i]
if (a in val) and (b in val):
return True
return False
print(f'Method 1 output is: {checkRange([[1,50]],1,50)}')
# Method 2 (Best Method)
def checkRange1(ranges,a,b):
return set(i for i in range(a,b+1)).intersection(set(i for l,r in ranges for i in range(l, r+1))) == set(i for i in range(a,b+1))
print(f'Method 2 output is: {checkRange1([[1,50]],1,50)}')
# Method 3 (Okish Answer)
def checkRange2(ranges,a,b):
rangeSet = set(i for i in range(a,b+1))
valueList = set(j for i in ranges for j in i)
for i in rangeSet:
if i in valueList:
return True
return False
print(f'Method 3 output is: {checkRange2([[1,50]],1,50)}')
###Output
Method 1 output is: True
Method 2 output is: True
Method 3 output is: True
###Markdown
Find Numbers with Even Number of DigitsGiven an array nums of integers, return how many of them contain an even number of digits. Example 1:Input: nums = [12,345,2,6,7896] | Output: 2Explanation: 12 contains 2 digits (even number of digits). 345 contains 3 digits (odd number of digits). 2 contains 1 digit (odd number of digits). 6 contains 1 digit (odd number of digits). 7896 contains 4 digits (even number of digits). Therefore only 12 and 7896 contain an even number of digits.Example 2:Input: nums = [555,901,482,1771] | Output: 1 Explanation: Only 1771 contains an even number of digits.
###Code
# Method 1
def findNumbers1(nums):
return len(list(filter(lambda x: len(x) % 2 == 0, list(map(str,nums)))))
print(f'Method 1 output is: {findNumbers1([555,901,482,1771])}')
# Method 2
def findNumbers2(nums):
return len([i for i in nums if len(str(i)) % 2 == 0])
print(f'Method 2 output is: {findNumbers2([555,901,482,1771])}')
###Output
Method 1 output is: 1
Method 2 output is: 1
###Markdown
Squares of a Sorted ArrayGiven an integer array nums sorted in non-decreasing order, return an array of the squares of each number sorted in non-decreasing order. Example 1:Input: nums = [-4,-1,0,3,10] | Output: [0,1,9,16,100]Explanation: After squaring, the array becomes [16,1,0,9,100].After sorting, it becomes [0,1,9,16,100].Example 2:Input: nums = [-7,-3,2,3,11] | Output: [4,9,9,49,121]
###Code
# Method 1
def sortedSquareArray1(nums):
aList = [i**2 for i in nums]
aList.sort()
return aList
print(f'Method 1 output is: {sortedSquareArray1([-4,-1,0,3,10])}')
# Method 2
def sortedSquareArray2(nums):
return sorted([i**2 for i in nums])
print(f'Method 2 output is: {sortedSquareArray2([-4,-1,0,3,10])}')
###Output
Method 1 output is: [0, 1, 9, 16, 100]
Method 2 output is: [0, 1, 9, 16, 100]
###Markdown
Max Consecutive OnesGiven a binary array nums, return the maximum number of consecutive 1's in the array.Example 1:Input: nums = [1,1,0,1,1,1] | Output: 3Explanation: The first two digits or the last three digits are consecutive 1s. The maximum number of consecutive 1s is 3.Example 2:Input: nums = [1,0,1,1,0,1] | Output: 2
###Code
# Method 1
def maxOnes(arr):
maxVal,counter = 0,0
for i in range(len(arr)):
if arr[i] == 1:
counter += 1
maxVal = max(counter,maxVal)
else:
counter = 0
return maxVal
maxOnes([0,0,0])
###Output
_____no_output_____
###Markdown
Duplicate ZerosGiven a fixed-length integer array arr, duplicate each occurrence of zero, shifting the remaining elements to the right.Note that elements beyond the length of the original array are not written. Do the above modifications to the input array in place and do not return anything.Example 1:Input: arr = [1,0,2,3,0,4,5,0] | Output: [1,0,0,2,3,0,0,4]Explanation: After calling your function, the input array is modified to: [1,0,0,2,3,0,0,4]Example 2:Input: arr = [1,2,3] | Output: [1,2,3]Explanation: After calling your function, the input array is modified to: [1,2,3]
###Code
# Method 1
def dupZeros1(arr):
n= len(arr)
i = 0
while i < n:
if arr[i] == 0:
arr[i+1:n] = arr[i:n-1]
i += 2
else: i+=1
return arr
dupZeros1([1,0,2,3,0,4,5,0])
###Output
_____no_output_____
###Markdown
Remove ElementGiven an integer array nums and an integer val, remove all occurrences of val in nums in-place. The relative order of the elements may be changed.Since it is impossible to change the length of the array in some languages, you must instead have the result be placed in the first part of the array nums. More formally, if there are k elements after removing the duplicates, then the first k elements of nums should hold the final result. It does not matter what you leave beyond the first k elements.Return k after placing the final result in the first k slots of nums.Do not allocate extra space for another array. You must do this by modifying the input array in-place with O(1) extra memory.Example 1:Input: nums = [3,2,2,3], val = 3 | Output: 2, nums = [2,2,_,_]Explanation: Your function should return k = 2, with the first two elements of nums being 2.It does not matter what you leave beyond the returned k (hence they are underscores).Example 2:Input: nums = [0,1,2,2,3,0,4,2], val = 2 | Output: 5, nums = [0,1,4,0,3,_,_,_]Explanation: Your function should return k = 5, with the first five elements of nums containing 0, 0, 1, 3, and 4.Note that the five elements can be returned in any order.It does not matter what you leave beyond the returned k (hence they are underscores).
###Code
# Method 1
def remElement(nums,val):
    # iterate backwards so that pop() does not shift the indices still to be checked
    for idx in range(len(nums) - 1, -1, -1):
        if nums[idx] == val: nums.pop(idx)
    return len(nums)
print(f'Method 1 output is: {remElement([3,2,2,3],2)}')
###Output
Method 1 output is: 2
###Markdown
Check If N and Its Double ExistGiven an array arr of integers, check if there exists two integers N and M such that N is the double of M ( i.e. N = 2 * M).More formally check if there exists two indices i and j such that : - i != j - 0 <= i, j < arr.length - arr[i] == 2 * arr[j] Example 1:Input: arr = [10,2,5,3] | Output: trueExplanation: N = 10 is the double of M = 5,that is, 10 = 2 * 5.Example 2:Input: arr = [7,1,14,11] | Output: trueExplanation: N = 14 is the double of M = 7,that is, 14 = 2 * 7.Example 3:Input: arr = [3,1,7,11] | Output: falseExplanation: In this case does not exist N and M, such that N = 2 * M.
###Code
# Method 1
def checkNDouble1(nums):
if list(set(nums)) == [0]: return True
for i in range(len(nums)):
for j in range(len(nums)):
if nums[i] == 2*nums[j] and nums[i] != nums[j]: return True
return False
print(f'Method 1 output is: {checkNDouble1([-20,8,-6,-14,0,-19,14,4])}')
# Method 2 (Satisfies 99% test case)
def checkNDouble2(nums):
for i in range(len(nums)):
if nums[i]/2 in nums: return True
return False
print(f'Method 2 output is: {checkNDouble2([-20,8,-6,-14,0,-19,14,4])}')
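# Added sketch: a single pass with a set avoids the O(n^2) scan and also handles the
# "two distinct indices" requirement for zero correctly. Name checkNDouble3 is illustrative.
def checkNDouble3(nums):
    seen = set()
    for x in nums:
        if 2 * x in seen or (x % 2 == 0 and x // 2 in seen):
            return True
        seen.add(x)
    return False
# checkNDouble3([10,2,5,3]) -> True; checkNDouble3([3,1,7,11]) -> False; checkNDouble3([0]) -> False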
###Output
Method 1 output is: True
Method 2 output is: True
###Markdown
Move ZeroesGiven an integer array nums, move all 0's to the end of it while maintaining the relative order of the non-zero elements.Note that you must do this in-place without making a copy of the array. Example 1:Input: nums = [0,1,0,3,12] | Output: [1,3,12,0,0]Example 2:Input: nums = [0] | Output: [0]
###Code
# Method 1
def moveZerosAtLast(alist):
moveLast = 0
for i in range(len(alist)):
if alist[i] != 0:
alist[moveLast],alist[i]= alist[i],alist[moveLast]
moveLast += 1
return alist
print(f'Method 1 output is: {moveZerosAtLast([0,1,0,3,12])}')
###Output
Method 1 output is: [1, 3, 12, 0, 0]
###Markdown
Sort Array By ParityGiven an integer array nums, move all the even integers at the beginning of the array followed by all the odd integers.Return any array that satisfies this condition. Example 1:Input: nums = [3,1,2,4] | Output: [2,4,3,1]Explanation: The outputs [4,2,3,1], [2,4,1,3], and [4,2,1,3] would also be accepted.Example 2:Input: nums = [0] | Output: [0]
###Code
# Method 1
def paritySort(nums):
for i in range(len(nums)):
for j in range(i,len(nums)):
if nums[j] %2 ==0:
nums[i],nums[j] = nums[j],nums[i]
return nums
print(f'Method 1 output is: {paritySort([3,1,2,4])}')
# Method 2
def paritySort1(nums):
return [i for i in nums if i%2 == 0] + [i for i in nums if i%2 != 0]
print(f'Method 2 output is: {paritySort1([3,1,2,4])}')
###Output
Method 1 output is: [4, 2, 3, 1]
Method 2 output is: [2, 4, 3, 1]
###Markdown
Remove Duplicates from Sorted ArrayGiven an integer array nums sorted in non-decreasing order, remove the duplicates in-place such that each unique element appears only once. The relative order of the elements should be kept the same.Since it is impossible to change the length of the array in some languages, you must instead have the result be placed in the first part of the array nums. More formally, if there are k elements after removing the duplicates, then the first k elements of nums should hold the final result. It does not matter what you leave beyond the first k elements.Return k after placing the final result in the first k slots of nums.Do not allocate extra space for another array. You must do this by modifying the input array in-place with O(1) extra memory.Example 1:Input: nums = [1,1,2] | Output: 2, nums = [1,2,-]Explanation: Your function should return k = 2, with the first two elements of nums being 1 and 2 respectively.It does not matter what you leave beyond the returned k (hence they are underscores).Example 2:Input: nums = [0,0,1,1,1,2,2,3,3,4] | Output: 5, nums = [0,1,2,3,4,-,-,-,-,-]Explanation: Your function should return k = 5, with the first five elements of nums being 0, 1, 2, 3, and 4 respectively.It does not matter what you leave beyond the returned k (hence they are underscores).
###Code
# Method 1
def removeDuplicates(nums):
for i in range(len(nums) - 1, 0 , -1):
if nums[i] == nums[i-1]:
nums.pop(i)
return len(nums)
print(f'Method 1 output is: {removeDuplicates([1,1,2])}')
###Output
Method 1 output is: 2
###Markdown
Height CheckerA school is trying to take an annual photo of all the students. The students are asked to stand in a single file line in non-decreasing order by height. Let this ordering be represented by the integer array expected where expected[i] is the expected height of the ith student in line.You are given an integer array heights representing the current order that the students are standing in. Each heights[i] is the height of the ith student in line (0-indexed).Return the number of indices where heights[i] != expected[i].Example 1:Input: heights = [1,1,4,2,1,3] | Output: 3Explanation: heights: [1,1,4,2,1,3]expected: [1,1,1,2,3,4]Indices 2, 4, and 5 do not match.Example 2:Input: heights = [5,1,2,3,4] | Output: 5Explanation:heights: [5,1,2,3,4]expected: [1,2,3,4,5]All indices do not match.Example 3:Input: heights = [1,2,3,4,5] | Output: 0Explanation:heights: [1,2,3,4,5]expected: [1,2,3,4,5]All indices match.
###Code
# Method 1
def heightSort(heights):
    expected = sorted(heights)    # sort once instead of on every loop iteration
    counter = 0
    for i in range(len(expected)):
        if expected[i] != heights[i]:
            counter += 1
    return counter
print(f'Method 1 output is: {heightSort([1,1,4,2,1,3])}')
###Output
Method 1 output is: 3
###Markdown
Valid Mountain ArrayGiven an array of integers arr, return true if and only if it is a valid mountain array.Recall that arr is a mountain array if and only if: - arr.length >= 3There exists some i with 0 < i < arr.length - 1 such that: - arr[0] < arr[1] < ... < arr[i - 1] < arr[i] - arr[i] > arr[i + 1] > ... > arr[arr.length - 1]Example 1:Input: arr = [2,1] | Output: falseExample 2:Input: arr = [3,5,5] | Output: falseExample 3:Input: arr = [0,3,2,1] | Output: true
###Code
# Method 1 (51% test case passed)
def mountainArray1(nums):
''' Using three pointer approach which keeps track of previous,current and next element. '''
prev,curr,next = 0,0,0
for i in range(1,len(nums)-1):
curr,prev,next = nums[i],nums[i-1],nums[i+1]
if prev < curr and curr > next: return True
return False
print(f'Method 1 output is: {mountainArray1([0,3,2,1])}')
# Method 2
def mountainArray2(A):
if len(A) < 3: return False
flag = 0
for i in range(len(A)-1):
if A[i] == A[i+1]: return False
if flag == 0:
if A[i] > A[i+1]: flag = 1
else:
if A[i] <= A[i+1]: return False
return True if A[-2] > A[-1] and A[0] < A[1] else False
print(f'Method 2 output is: {mountainArray2([0,3,2,1])}')
###Output
Method 1 output is: True
Method 2 output is: True
###Markdown
Replace Elements with Greatest Element on Right SideGiven an array arr, replace every element in that array with the greatest element among the elements to its right, and replace the last element with -1.After doing so, return the array.Example 1:Input: arr = [17,18,5,4,6,1] | Output: [18,6,6,6,1,-1]Explanation: - index 0 --> the greatest element to the right of index 0 is index 1 (18).- index 1 --> the greatest element to the right of index 1 is index 4 (6).- index 2 --> the greatest element to the right of index 2 is index 4 (6).- index 3 --> the greatest element to the right of index 3 is index 4 (6).- index 4 --> the greatest element to the right of index 4 is index 5 (1).- index 5 --> there are no elements to the right of index 5, so we put -1.Example 2:Input: arr = [400] | Output: [-1]Explanation: There are no elements to the right of index 0.
###Code
# Method 1 (Solves but okish answer)
def replaceGreatest(arr):
for i in range(len(arr)-1):
arr_max = max(arr[i+1:])
arr[i] = max(0,arr_max)
arr[-1] = -1
return arr
print(f'Method 1 output is: {replaceGreatest([17,18,5,4,6,1])}')
# Method 2
def replaceElements(arr):
currGreatest = -1
for i in range(len(arr) - 1, -1, -1):
temp = arr[i]
arr[i] = currGreatest
if temp > currGreatest:
currGreatest = temp
return arr
print(f'Method 2 output is: {replaceElements([17,18,5,4,6,1])}')
###Output
Method 1 output is: [18, 6, 6, 6, 1, -1]
Method 2 output is: [18, 6, 6, 6, 1, -1]
###Markdown
Letter Combinations of a Phone NumberGiven a string containing digits from 2-9 inclusive, return all possible letter combinations that the number could represent. Return the answer in any order.A mapping of digit to letters (just like on the telephone buttons) is given below. Note that 1 does not map to any letters.Example 1:Input: digits = "23" | Output: ["ad","ae","af","bd","be","bf","cd","ce","cf"]Example 2:Input: digits = "" | Output: []Example 3:Input: digits = "2" | Output: ["a","b","c"]
###Code
# Method 1 (from the internet; my own approach is still to be implemented)
def phoneCombi(astring):
letterDic = {"2":["a","b","c"], "3":["d","e","f"], "4":["g","h","i"], "5":["j","k","l"], "6":["m","n","o"], "7":["p","q","r","s"], "8":["t","u","v"], "9":["w","x","y","z"]}
lenD, ans = len(astring), []
if astring == "": return []
def bfs(pos, st):
if pos == lenD: ans.append(st)
else:
letters = letterDic[astring[pos]]
for letter in letters:
bfs(pos+1,st+letter)
bfs(0,"")
return ans
phoneCombi("24")
###Output
_____no_output_____
###Markdown
Best Time to Buy and Sell Stock IIYou are given an array prices where prices[i] is the price of a given stock on the ith day.Find the maximum profit you can achieve. You may complete as many transactions as you like (i.e., buy one and sell one share of the stock multiple times).Note: You may not engage in multiple transactions simultaneously (i.e., you must sell the stock before you buy again). Example 1:Input: prices = [7,1,5,3,6,4] | Output: 7Explanation: Buy on day 2 (price = 1) and sell on day 3 (price = 5), profit = 5-1 = 4.Then buy on day 4 (price = 3) and sell on day 5 (price = 6), profit = 6-3 = 3.Example 2:Input: prices = [1,2,3,4,5] | Output: 4Explanation: Buy on day 1 (price = 1) and sell on day 5 (price = 5), profit = 5-1 = 4.Note that you cannot buy on day 1, buy on day 2 and sell them later, as you are engaging multiple transactions at the same time. You must sell before buying again.
###Code
# Method 1
def maxProfit(prices):
maxProfit = 0
for i in range(1,len(prices)):
if prices[i] > prices[i-1]:
maxProfit += prices[i] - prices[i-1]
return maxProfit
print(f'Method 1 output is: {maxProfit([7,1,5,3,6,4])}')
###Output
Method 1 output is: 7
###Markdown
Rotate ArrayGiven an array, rotate the array to the right by k steps, where k is non-negative.Example 1:Input: nums = [1,2,3,4,5,6,7], k = 3Output: [5,6,7,1,2,3,4]Explanation:rotate 1 steps to the right: [7,1,2,3,4,5,6]rotate 2 steps to the right: [6,7,1,2,3,4,5]rotate 3 steps to the right: [5,6,7,1,2,3,4]Example 2:Input: nums = [-1,-100,3,99], k = 2Output: [3,99,-1,-100]Explanation: rotate 1 steps to the right: [99,-1,-100,3]rotate 2 steps to the right: [3,99,-1,-100]
###Code
# Method 1
def rotate(nums, k):
nums[:] = nums[-k % len(nums):] + nums[:-k % len(nums)]
return nums
print(f'Method 1 output is: {rotate([1,2,3,4,5,6,7],3)}')
###Output
Method 1 output is: [5, 6, 7, 1, 2, 3, 4]
###Markdown
Reverse Words in a String IIIGiven a string s, reverse the order of characters in each word within a sentence while still preserving whitespace and initial word order.Example 1:Input: s = "Let's take LeetCode contest" | Output: "s'teL ekat edoCteeL tsetnoc"Example 2:Input: s = "God Ding" | Output: "doG gniD"
###Code
# Method 1
def reverseWords1(s):
return ' '.join(list(map(lambda x: x[::-1],s.split(' '))))
print(f'Method 1 output is: {reverseWords1("Lets take LeetCode contest")}')
###Output
Method 1 output is: steL ekat edoCteeL tsetnoc
###Markdown
Subsets IIGiven an integer array nums that may contain duplicates, return all possible subsets (the power set).The solution set must not contain duplicate subsets. Return the solution in any order.Example 1:Input: nums = [1,2,2] | Output: [[],[1],[1,2],[1,2,2],[2],[2,2]]Example 2:Input: nums = [0] | Output: [[],[0]]
###Code
# Method 1 output (15 / 20 test cases passed)
from itertools import combinations
def scrambleStr(alist):
out = [[]]
if len(alist) == 1:
out.extend([alist])
return out
asd = [list(l) for i in range(len(alist)) for l in set(combinations(alist, i+1))]
out.extend(asd)
return out
print(f'Method 1 output is: {scrambleStr([1,2,3])}')
# Method 2 (Online solution)
def scrambleStr2(nums):
nums.sort()
len_nums = len(nums)
ans = set()
for len_comb in range(1, len_nums + 1):
for c in combinations(nums, len_comb):
ans.add(c)
return [[]] + [list(tpl) for tpl in ans]
print(f'Method 2 output is: {scrambleStr2([1,2,3])}')
###Output
Method 1 output is: [[], [1], [2], [3], [2, 3], [1, 2], [1, 3], [1, 2, 3]]
Method 2 output is: [[], [1, 3], [1, 2], [2], [1, 2, 3], [2, 3], [1], [3]]
###Markdown
Next PermutationImplement next permutation, which rearranges numbers into the lexicographically next greater permutation of numbers.If such an arrangement is not possible, it must rearrange it as the lowest possible order (i.e., sorted in ascending order).The replacement must be in place and use only constant extra memory.Example 1:Input: nums = [1,2,3] | Output: [1,3,2]Example 2:Input: nums = [3,2,1] | Output: [1,2,3]Example 3:Input: nums = [1,1,5] | Output: [1,5,1]Example 4:Input: nums = [1] | Output: [1]
###Code
# Method 1
def nextPermu1(alist):
from itertools import permutations
if len(alist) == 1: return alist
if len(alist) == 0: return alist
return list(min([i for i in list(set(permutations(alist))) if i != tuple(alist)]))
print(f'Method 1 output is: {nextPermu1([1])}')
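# Added sketch: the standard in-place O(n) algorithm, since the brute force above enumerates
# every permutation and does not scale. Name nextPermutation2 is illustrative.
def nextPermutation2(nums):
    i = len(nums) - 2
    while i >= 0 and nums[i] >= nums[i + 1]:   # find the rightmost ascent
        i -= 1
    if i >= 0:
        j = len(nums) - 1
        while nums[j] <= nums[i]:              # smallest element to the right that is larger
            j -= 1
        nums[i], nums[j] = nums[j], nums[i]
    nums[i + 1:] = nums[i + 1:][::-1]          # reverse the suffix
    return nums
# nextPermutation2([1,2,3]) -> [1,3,2]; nextPermutation2([3,2,1]) -> [1,2,3]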
###Output
Method 1 output is: [1]
###Markdown
Largest NumberGiven a list of non-negative integers nums, arrange them such that they form the largest number.Note: The result may be very large, so you need to return a string instead of an integer.Example 1:Input: nums = [10,2] | Output: "210"Example 2:Input: nums = [3,30,34,5,9] | Output: "9534330"Example 3:Input: nums = [1] | Output: "1"Example 4:Input: nums = [10] | Output: "10"
###Code
# Method 1 (146 / 229 test cases passed)
def largestNumber1(alist):
from itertools import permutations
if len(set(alist)) == 1: return str(alist[0])
strList = list(map(str,alist))
return max(list(''.join(i) for i in map(list,permutations(strList))))
print(f'Method 1 output is: {largestNumber1([3,30,34,5,9])}')
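# Added sketch: sorting by pairwise concatenation handles all inputs without enumerating
# permutations. Name largestNumber2 is illustrative.
from functools import cmp_to_key
def largestNumber2(nums):
    strs = sorted(map(str, nums),
                  key=cmp_to_key(lambda a, b: (a + b < b + a) - (a + b > b + a)))
    result = ''.join(strs)
    return '0' if result[0] == '0' else result   # collapse inputs like [0,0] to "0"
# largestNumber2([3,30,34,5,9]) -> "9534330"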
###Output
Method 1 output is: 9534330
###Markdown
Top K Frequent ElementsGiven an integer array nums and an integer k, return the k most frequent elements. You may return the answer in any order.Example 1:Input: nums = [1,1,1,2,2,3], k = 2 | Output: [1,2]Example 2:Input: nums = [1], k = 1 | Output: [1]
###Code
# Method 1
def topKElements(alist, k):
    from collections import Counter
    return [i[0] for i in Counter(alist).most_common(k)]
print(f'Method 1 output is: {topKElements([1,1,1,2,2,3], 2)}')
###Output
Method 1 output is: [1, 2]
###Markdown
Minimum Size Subarray SumGiven an array of positive integers nums and a positive integer target, return the minimal length of a contiguous subarray [numsl, numsl+1, ..., numsr-1, numsr] of which the sum is greater than or equal to target. If there is no such subarray, return 0 instead.Example 1:Input: target = 7, nums = [2,3,1,2,4,3] | Output: 2Explanation: The subarray [4,3] has the minimal length under the problem constraint.Example 2:Input: target = 4, nums = [1,4,4] | Output: 1Example 3:Input: target = 11, nums = [1,1,1,1,1,1,1,1] | Output: 0
###Code
# Method 1 output (16 / 19 test cases passed)
def minSubArrayLen1(target, nums):
from itertools import combinations
test = []
for i in range(len(nums)):
for j in combinations(nums,i+1):
if sum(j) >= target:
test.append(j)
    return len(min(test, key=len)) if test else 0
print(f'Method 1 output is: {minSubArrayLen1(7, [2,3,1,2,4,3])}')
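# Added sketch: the problem asks for contiguous subarrays, so a sliding window gives an O(n)
# answer and covers the "no valid subarray" case. Name minSubArrayLen2 is illustrative.
def minSubArrayLen2(target, nums):
    best = len(nums) + 1
    window_sum, left = 0, 0
    for right, x in enumerate(nums):
        window_sum += x
        while window_sum >= target:            # shrink the window while it still qualifies
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return best if best <= len(nums) else 0
# minSubArrayLen2(7, [2,3,1,2,4,3]) -> 2; minSubArrayLen2(11, [1,1,1,1,1,1,1,1]) -> 0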
###Output
Method 1 output is: 2
###Markdown
Transpose FileGiven a text file file.txt, transpose its content.You may assume that each row has the same number of columns, and each field is separated by the ' ' character.Example:If file.txt has the following content:name agealice 21ryan 30Output the following:name alice ryanage 21 30
###Code
f = open('/Users/aneruthmohanasundaram/Documents/GitHub/Problem-Solving/LeetCode/file.txt','r')
test = [i.replace('\n','') for i in f.readlines()]
cor = [i.split(' ') for i in test]
[(i[0],i[1]) for i in cor]
# d = dict()
# for i in cor:
# if i[0] not in d:
# d[i[0]] = i[1]
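# Added sketch (not in the original scratch, reuses cor from above): once each row is split,
# zip(*rows) yields the columns, so joining each column back with spaces is the whole transpose.
transposed = [' '.join(col) for col in zip(*cor)]
# for the sample file this gives ['name alice ryan', 'age 21 30']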
###Output
_____no_output_____
###Markdown
Yet to see this Contains Duplicate IIIGiven an integer array nums and two integers k and t, return true if there are two distinct indices i and j in the array such that abs(nums[i] - nums[j]) <= t and abs(i - j) <= k.Example 1:Input: nums = [1,2,3,1], k = 3, t = 0 | Output: trueExample 2:Input: nums = [1,0,1,1], k = 1, t = 2 | Output: trueExample 3:Input: nums = [1,5,9,1,5,9], k = 2, t = 3 | Output: false
###Code
# nums = [1,2,3,1];k = 3;t = 0
# Method 1
def dupEle(nums,k,t):
for i in range(len(nums)):
for j in range(i,len(nums)):
if abs(nums[i] - nums[j]) <= t and abs(i - j) <= k:
return True
return False
dupEle([1,5,9,1,5,9],2,3)
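# Added sketch: bucketing values into windows of width t+1 keeps the check roughly O(n)
# instead of the quadratic scan above; only the last k indices stay in the buckets.
# Name containsNearbyAlmostDuplicate is illustrative.
def containsNearbyAlmostDuplicate(nums, k, t):
    if k <= 0 or t < 0:
        return False
    width = t + 1
    buckets = {}
    for i, x in enumerate(nums):
        b = x // width
        if b in buckets:
            return True
        if b - 1 in buckets and abs(x - buckets[b - 1]) <= t:
            return True
        if b + 1 in buckets and abs(x - buckets[b + 1]) <= t:
            return True
        buckets[b] = x
        if i >= k:                              # drop the element that left the index window
            del buckets[nums[i - k] // width]
    return False
# containsNearbyAlmostDuplicate([1,2,3,1], 3, 0) -> True; ([1,5,9,1,5,9], 2, 3) -> False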
###Output
_____no_output_____
###Markdown
Longest Increasing SubsequenceGiven an integer array nums, return the length of the longest strictly increasing subsequence.A subsequence is a sequence that can be derived from an array by deleting some or no elements without changing the order of the remaining elements. For example, [3,6,2,7] is a subsequence of the array [0,3,1,6,2,2,7]. Example 1:Input: nums = [10,9,2,5,3,7,101,18] | Output: 4Explanation: The longest increasing subsequence is [2,3,7,101], therefore the length is 4.Example 2:Input: nums = [0,1,0,3,2,3] | Output: 4Example 3:Input: nums = [7,7,7,7,7,7,7] | Output: 1
###Code
# Method 1
st = []
def find(inp, out=[]):
if len(inp)== 0 :
if len(out) != 0 :
st.append(out)
return
find(inp[1:], out[:])
if len(out)== 0:
find(inp[1:], inp[:1])
elif inp[0] > out[-1] :
out.append(inp[0])
find(inp[1:], out[:])
ls1 = [10,9,2,5,3,7,101,18]
find(ls1)
print(f'Method 1 output is: {len(max(st,key=len))}')
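# Added sketch: patience sorting with bisect gives the length in O(n log n); tails[i] holds
# the smallest possible tail of an increasing subsequence of length i+1. Name is illustrative.
from bisect import bisect_left
def lengthOfLIS(nums):
    tails = []
    for x in nums:
        pos = bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)        # x extends the longest subsequence found so far
        else:
            tails[pos] = x         # x becomes a smaller tail for length pos+1
    return len(tails)
# lengthOfLIS([10,9,2,5,3,7,101,18]) -> 4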
###Output
Method 1 output is: 4
###Markdown
Slowest KeyA newly designed keypad was tested, where a tester pressed a sequence of n keys, one at a time.You are given a string keysPressed of length n, where keysPressed[i] was the ith key pressed in the testing sequence, and a sorted list releaseTimes, where releaseTimes[i] was the time the ith key was released. Both arrays are 0-indexed. The 0th key was pressed at the time 0, and every subsequent key was pressed at the exact time the previous key was released.The tester wants to know the key of the keypress that had the longest duration. The ith keypress had a duration of releaseTimes[i] - releaseTimes[i - 1], and the 0th keypress had a duration of releaseTimes[0].Note that the same key could have been pressed multiple times during the test, and these multiple presses of the same key may not have had the same duration.Return the key of the keypress that had the longest duration. If there are multiple such keypresses, return the lexicographically largest key of the keypresses. Example 1:Input: releaseTimes = [9,29,49,50], keysPressed = "cbcd" | Output: "c"Explanation: The keypresses were as follows:Keypress for 'c' had a duration of 9 (pressed at time 0 and released at time 9).Keypress for 'b' had a duration of 29 - 9 = 20 (pressed at time 9 right after the release of the previous character and released at time 29).Keypress for 'c' had a duration of 49 - 29 = 20 (pressed at time 29 right after the release of the previous character and released at time 49).Keypress for 'd' had a duration of 50 - 49 = 1 (pressed at time 49 right after the release of the previous character and released at time 50).The longest of these was the keypress for 'b' and the second keypress for 'c', both with duration 20.'c' is lexicographically larger than 'b', so the answer is 'c'.Example 2:Input: releaseTimes = [12,23,36,46,62], keysPressed = "spuda" | Output: "a"Explanation: The keypresses were as follows:Keypress for 's' had a duration of 12.Keypress for 'p' had a duration of 23 - 12 = 11.Keypress for 'u' had a duration of 36 - 23 = 13.Keypress for 'd' had a duration of 46 - 36 = 10.Keypress for 'a' had a duration of 62 - 46 = 16.The longest of these was the keypress for 'a' with duration 16.
###Code
# Method 1
def slowestKey1(releaseTimes,keysPressed):
out = [releaseTimes[0]]
for i in range(len(releaseTimes)-1):
out.append(abs(releaseTimes[i] - releaseTimes[i+1]))
maxEle = max(out)
return max([k for j,k in zip(out,keysPressed) if maxEle == j])
print(f'Method 1 output is: {slowestKey1([9,29,49,50],"cbcd")}')
###Output
Method 1 output is: c
###Markdown
Arithmetic Slices II - SubsequenceGiven an integer array nums, return the number of all the arithmetic subsequences of nums.A sequence of numbers is called arithmetic if it consists of at least three elements and if the difference between any two consecutive elements is the same.For example, [1, 3, 5, 7, 9], [7, 7, 7, 7], and [3, -1, -5, -9] are arithmetic sequences.For example, [1, 1, 2, 5, 7] is not an arithmetic sequence.A subsequence of an array is a sequence that can be formed by removing some elements (possibly none) of the array.For example, [2,5,10] is a subsequence of [1,2,1,2,4,1,5,10].The test cases are generated so that the answer fits in 32-bit integer.Example 1:Input: nums = [2,4,6,8,10] | Output: 7Explanation: All arithmetic subsequence slices are:[2,4,6],[4,6,8],[6,8,10],[2,4,6,8],[4,6,8,10],[2,4,6,8,10],[2,6,10]Example 2:Input: nums = [7,7,7,7,7] | Output: 16Explanation: Any subsequence of this array is arithmetic.
###Code
# Method 1 (36 / 101 test cases passed)
def artiSlices(dummy):
from itertools import combinations
test = list(map(list,[j for i in range(len(dummy)) for j in combinations(dummy,i+1) if len(j) > 2]))
def check(alist):
return [j-i for i, j in zip(alist[:-1], alist[1:])]
counter = 0
for i in range(len(test)):
if len(set(check(test[i]))) == 1:
counter += 1
return counter
print(f'Method 1 output is: {artiSlices([1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1])}')
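# Added sketch: the combinations-based attempt above explodes for long inputs. The usual DP
# keeps, for each index, a map "common difference -> number of subsequences of length >= 2
# ending here"; extending one of them by nums[i] yields a valid slice. Name is illustrative.
from collections import defaultdict
def numberOfArithmeticSlices(nums):
    total = 0
    dp = [defaultdict(int) for _ in nums]
    for i in range(len(nums)):
        for j in range(i):
            d = nums[i] - nums[j]
            cnt = dp[j][d]          # subsequences ending at j with difference d
            total += cnt            # each becomes length >= 3 when extended by nums[i]
            dp[i][d] += cnt + 1     # plus the new pair (nums[j], nums[i])
    return total
# numberOfArithmeticSlices([2,4,6,8,10]) -> 7; numberOfArithmeticSlices([7,7,7,7,7]) -> 16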
# heights =
# Output = 10
def largestRectangeHist(heights):
if len(heights) == 1: return heights[0]
if len(set(heights)) == 1: return sum(heights)
output = []
for i in range(len(heights)-1):
if heights[i+1] > heights[i]:
# output.append((heights[i+1] - heights[i]) + heights[i])
dp = abs(heights[i+1] - heights[i])
# print(dp)
val = heights[i+1] - dp + heights[i]
# print(val)
output.append(val)
return val
largestRectangeHist([0,9])
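# Added sketch for the histogram scratch above: the standard monotonic-stack approach.
# Indices of increasing bars sit on the stack; popping a bar fixes the widest rectangle it can
# be the minimum of. A trailing 0 flushes the stack. Name largestRectangleArea is illustrative.
def largestRectangleArea(heights):
    stack, best = [], 0
    for i, h in enumerate(heights + [0]):
        while stack and heights[stack[-1]] >= h:
            height = heights[stack.pop()]
            left = stack[-1] + 1 if stack else 0
            best = max(best, height * (i - left))
        stack.append(i)
    return best
# largestRectangleArea([2,1,5,6,2,3]) -> 10; largestRectangleArea([0,9]) -> 9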
###Output
_____no_output_____
###Markdown
Reverse Prefix of WordGiven a 0-indexed string word and a character ch, reverse the segment of word that starts at index 0 and ends at the index of the first occurrence of ch (inclusive). If the character ch does not exist in word, do nothing. - For example, if word = "abcdefd" and ch = "d", then you should reverse the segment that starts at 0 and ends at 3 (inclusive). The resulting string will be "dcbaefd".Return the resulting string.Example 1:Input: word = "abcdefd", ch = "d" | Output: "dcbaefd"Example 2:Input: word = "xyxzxe", ch = "z" | Output: "zxyxxe"Example 3:Input: word = "abcd", ch = "z" | Output: "abcd"
###Code
# Method 1 (best)
def reversePrefix(word,ch):
if ch not in word: return word
wordList = list(map(str,word))
return ''.join(wordList[:wordList.index(ch)+1][::-1] + wordList[wordList.index(ch)+1:])
print(f'Method 1 output is: {reversePrefix("xyxzxe","z")}')
###Output
Method 1 output is: zxyxxe
###Markdown
Sort Array by Increasing FrequencyGiven an array of integers nums, sort the array in increasing order based on the frequency of the values. If multiple values have the same frequency, sort them in decreasing order.Return the sorted array. Example 1:Input: nums = [1,1,2,2,2,3] | Output: [3,1,1,2,2,2]Explanation: '3' has a frequency of 1, '1' has a frequency of 2, and '2' has a frequency of 3.Example 2:Input: nums = [2,3,1,3,2] | Output: [1,3,3,2,2]Explanation: '2' and '3' both have a frequency of 2, so they are sorted in decreasing order.Example 3:Input: nums = [-1,1,-6,4,5,-6,1,4,1] | Output: [5,-1,4,4,-6,-6,1,1,1]
###Code
# Method 1
def incFreq(nums):
lst = sorted([(key, freq) for key, freq in Counter(nums).items()],key=lambda tpl: (tpl[1], -tpl[0]))
ans = []
for key, freq in lst:
ans += [key] * freq
return ans
print(f'Method 1 output is: {incFreq([1,1,2,2,2,3])}')
###Output
Method 1 output is: [3, 1, 1, 2, 2, 2]
###Markdown
Find Smallest Letter Greater Than TargetGiven a characters array letters that is sorted in non-decreasing order and a character target, return the smallest character in the array that is larger than target.Note that the letters wrap around.For example, if target == 'z' and letters == ['a', 'b'], the answer is 'a'. Example 1:Input: letters = ["c","f","j"], target = "a" | Output: "c"Example 2:Input: letters = ["c","f","j"], target = "c" | Output: "f"Example 3:Input: letters = ["c","f","j"], target = "d" | Output: "f"Example 4:Input: letters = ["c","f","j"], target = "g" | Output: "j"Example 5:Input: letters = ["c","f","j"], target = "j" | Output: "c"
###Code
def smallestGreatest(letters,target):
for i in letters:
if target < i:
return i
return min(letters)
print(f'Method 1 output is: {smallestGreatest(["c","f","j"],"j")}')
###Output
Method 1 output is: c
###Markdown
Maximum Average Subarray IYou are given an integer array nums consisting of n elements, and an integer k.Find a contiguous subarray whose length is equal to k that has the maximum average value and return this value. Any answer with a calculation error less than 10-5 will be accepted.Example 1:Input: nums = [1,12,-5,-6,50,3], k = 4 | Output: 12.75000Explanation: Maximum average is (12 - 5 - 6 + 50) / 4 = 51 / 4 = 12.75Example 2:Input: nums = [5], k = 1 | Output: 5.00000
###Code
# Method 1 (okish answer)
def maxAvgSubArray(nums,k):
if len(nums) == 1: return nums[0]*1.0
return list(set([sum(nums[i:j+k])/k for i in range(len(nums)) for j in range(i,len(nums))]))[k]
print(f'Method 1 output is: {maxAvgSubArray([1,12,-5,-6,50,3],4)}')
# Method 2
def maxAvgSubArray2(nums,k):
best = now = sum(nums[:k])
for i in range(k,len(nums)):
now += nums[i] - nums[i-k]
if now>best:
best = now
return best/k
print(f'Method 2 output is: {maxAvgSubArray2([1,12,-5,-6,50,3],4)}')
###Output
Method 1 output is: 12.75
Method 2 output is: 12.75
###Markdown
Yet to see this Largest Number At Least Twice of OthersYou are given an integer array nums where the largest integer is unique.Determine whether the largest element in the array is at least twice as much as every other number in the array. If it is, return the index of the largest element, or return -1 otherwise.Example 1:Input: nums = [3,6,1,0] | Output: 1Explanation: 6 is the largest integer.For every other number in the array x, 6 is at least twice as big as x.The index of value 6 is 1, so we return 1.Example 2:Input: nums = [1,2,3,4] | Output: -1Explanation: 4 is less than twice the value of 3, so we return -1.Example 3:Input: nums = [1] | Output: 0Explanation: 1 is trivially at least twice the value as any other number because there are no other numbers.
###Code
def largeNum(nums):
    largest = max(nums)
    # the largest element qualifies only if it is at least twice every other element
    for x in nums:
        if x != largest and largest < 2 * x:
            return -1
    return nums.index(largest)
largeNum([3,6,1,0])
###Output
_____no_output_____
###Markdown
Letter Case PermutationGiven a string s, we can transform every letter individually to be lowercase or uppercase to create another string.Return a list of all possible strings we could create. You can return the output in any order.Example 1:Input: s = "a1b2" | Output: ["a1b2","a1B2","A1b2","A1B2"]Example 2:Input: s = "3z4" | Output: ["3z4","3Z4"]Example 3:Input: s = "12345" | Output: ["12345"]Example 4:Input: s = "0" | Output: ["0"]
###Code
# Method 1 (Online)
def letterPermu(s):
result = [[]]
for i in s:
if i.isalpha():
for j in range(len(result)):
result.append(result[j][:])
result[j].append(i.lower())
result[-1].append(i.upper())
else:
for d in result:
d.append(i)
return list(map("".join,result))
print(f'Method 1 output is: {letterPermu("s12dc2")}')
###Output
Method 1 output is: ['s12dc2', 'S12dc2', 's12Dc2', 'S12Dc2', 's12dC2', 'S12dC2', 's12DC2', 'S12DC2']
###Markdown
Yet to see this Find Array Given Subset SumsYou are given an integer n representing the length of an unknown array that you are trying to recover. You are also given an array sums containing the values of all 2n subset sums of the unknown array (in no particular order).Return the array ans of length n representing the unknown array. If multiple answers exist, return any of them.An array sub is a subset of an array arr if sub can be obtained from arr by deleting some (possibly zero or all) elements of arr. The sum of the elements in sub is one possible subset sum of arr. The sum of an empty array is considered to be 0.Note: Test cases are generated such that there will always be at least one correct answer.Example 1:Input: n = 3, sums = [-3,-2,-1,0,0,1,2,3]Output: [1,2,-3]Explanation: [1,2,-3] is able to achieve the given subset sums: - []: sum is 0 - [1]: sum is 1 - [2]: sum is 2 - [1,2]: sum is 3 - [-3]: sum is -3 - [1,-3]: sum is -2 - [2,-3]: sum is -1 - [1,2,-3]: sum is 0Note that any permutation of [1,2,-3] and also any permutation of [-1,-2,3] will also be accepted.Example 2:Input: n = 2, sums = [0,0,0,0] | Output: [0,0]Explanation: The only correct answer is [0,0].Example 3:Input: n = 4, sums = [0,0,5,5,4,-1,4,9,9,-1,4,3,4,8,3,8] | Output: [0,-1,4,5]Explanation: [0,-1,4,5] is able to achieve the given subset sums.
###Code
n = 3;sums = [-3,-2,-1,0,0,1,2,3]
# [j for i in range(0,len(sums)) for j in list(map(list,combinations(sums,i+1))) if sum(j) == n]
[sums[i:i+j] for i in range(len(sums)) for j in range(i,len(sums)) if sums[i] != sums[j] and sum(sums[i:i+j]) == n]
###Output
_____no_output_____
###Markdown
yet to see this Letter Tile PossibilitiesYou have n tiles, where each tile has one letter tiles[i] printed on it.Return the number of possible non-empty sequences of letters you can make using the letters printed on those tiles. Example 1:Input: tiles = "AAB" | Output: 8Explanation: The possible sequences are "A", "B", "AA", "AB", "BA", "AAB", "ABA", "BAA".Example 2:Input: tiles = "AAABBC" | Output: 188Example 3:Input: tiles = "V" | Output: 1
###Code
v = "AAB"
[v[i:i+j] for i in range(len(v)) for j in range(len(v)) if v[i] != v[j]]
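# Added sketch: for short tile strings, collecting every permutation of every length in a set
# already deduplicates repeated letters. Name numTilePossibilities is illustrative.
from itertools import permutations
def numTilePossibilities(tiles):
    return len({p for r in range(1, len(tiles) + 1) for p in permutations(tiles, r)})
# numTilePossibilities("AAB") -> 8; numTilePossibilities("V") -> 1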
###Output
_____no_output_____
###Markdown
Distinct SubsequencesGiven two strings s and t, return the number of distinct subsequences of s which equals t.A string's subsequence is a new string formed from the original string by deleting some (can be none) of the characters without disturbing the remaining characters' relative positions. (i.e., "ACE" is a subsequence of "ABCDE" while "AEC" is not).It is guaranteed the answer fits on a 32-bit signed integer.Example 1:Input: s = "rabbbit", t = "rabbit" | Output: 3Example 2:Input: s = "babgbag", t = "bag" | Output: 5
###Code
# Method 1 (40 / 63 test cases passed.) [Memory Limit Exceeded]
def disSubseq1(s,t):
from itertools import combinations
    return len([j for i in range(len(s)) for j in list(map(''.join,combinations(s,i+1))) if j == t])
print(f'Method 1 output is: {disSubseq1("rabbbit","rabbit")}')
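# Added sketch: the combinations approach runs out of memory on long strings; a rolling 1-D DP
# where dp[j] counts the ways to build t[:j] stays O(len(s) * len(t)). Name is illustrative.
def numDistinct(s, t):
    dp = [1] + [0] * len(t)
    for ch in s:
        for j in range(len(t), 0, -1):   # backwards so each character of s is used at most once
            if t[j - 1] == ch:
                dp[j] += dp[j - 1]
    return dp[len(t)]
# numDistinct("rabbbit", "rabbit") -> 3; numDistinct("babgbag", "bag") -> 5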
###Output
Method 1 output is: 3
###Markdown
Maximum Product of the Length of Two Palindromic SubsequencesGiven a string s, find two disjoint palindromic subsequences of s such that the product of their lengths is maximized. The two subsequences are disjoint if they do not both pick a character at the same index.Return the maximum possible product of the lengths of the two palindromic subsequences.A subsequence is a string that can be derived from another string by deleting some or no characters without changing the order of the remaining characters. A string is palindromic if it reads the same forward and backward. Example 1:Input: s = "leetcodecom" | Output: 9Explanation: An optimal solution is to choose "ete" for the 1st subsequence and "cdc" for the 2nd subsequence.The product of their lengths is: 3 * 3 = 9.Example 2:Input: s = "bb" | Output: 1Explanation: An optimal solution is to choose "b" (the first character) for the 1st subsequence and "b" (the second character) for the 2nd subsequence.The product of their lengths is: 1 * 1 = 1.Example 3:Input: s = "accbcaxxcxx" | Output: 25Explanation: An optimal solution is to choose "accca" for the 1st subsequence and "xxcxx" for the 2nd subsequence.The product of their lengths is: 5 * 5 = 25.
###Code
# Method 1 (partial solved case)
def maxPalinSub(s):
from itertools import combinations
if len(set(s)) == 1: return 1
return len(max(list(set([''.join(j) for i in range(len(s)) for j in combinations(s,i+1) if ''.join(j) == ''.join(j)[::-1] and len(j)])),key=len))**2
print(f'Method 1 output is: {maxPalinSub("leetcode")}')
###Output
Method 1 output is: 9
###Markdown
Find Unique Binary StringGiven an array of strings nums containing n unique binary strings each of length n, return a binary string of length n that does not appear in nums. If there are multiple answers, you may return any of them.Example 1:Input: nums = ["01","10"] | Output: "11"Explanation: "11" does not appear in nums. "00" would also be correct.Example 2:Input: nums = ["00","01"] | Output: "11"Explanation: "11" does not appear in nums. "10" would also be correct.Example 3:Input: nums = ["111","011","001"] | Output: "101"Explanation: "101" does not appear in nums. "000", "010", "100", and "110" would also be correct.
###Code
# Method 1 (Okish answer)
def uniqueBinary(nums):
intVal = list(map(lambda x: int(x,2),nums))
return max([bin(i)[2:] for i in range(max(intVal)) if i not in intVal],key=len)
print(f'Method 1 output is: {uniqueBinary(["111","011","001"])}')
# Method 2
def uniqueBinary1(nums):
n=len(nums[0])
last=pow(2,n)
for i in range(0,last):
x = bin(i)[2:]
if(len(x) < n):
x = "0" * (n-len(x)) + x
if x not in nums:
return x
print(f'Method 2 output is: {uniqueBinary1(["111","011","001"])}')
strs = ["eat","tea","tan","ate","nat","bat"]
anagrams = {}
for word in strs:
sortedWord = "".join(sorted(word))
if sortedWord in anagrams:
anagrams[sortedWord].append(word)
else:
anagrams[sortedWord] = [word]
list(anagrams.values())
# pattern = "abba"; s = "dog cat cat dog"
# def boo(pattern,s):
# c = list(map(lambda x: x[0],s.split(' ')))
# d = list(map(str,pattern))
# b = Counter(c).values()
# n = Counter(d).values()
# flag = False
# for i in range(1,len(b)-1): # for pattern
# for j in range(1,len(n)-1): # for s
# if b[i] == b[i+1]:
# print(True)
# else:
# print(False)
###Output
_____no_output_____
###Markdown
Find Original Array From Doubled ArrayAn integer array original is transformed into a doubled array changed by appending twice the value of every element in original, and then randomly shuffling the resulting array.Given an array changed, return original if changed is a doubled array. If changed is not a doubled array, return an empty array. The elements in original may be returned in any order. Example 1:Input: changed = [1,3,4,2,6,8] | Output: [1,3,4]Explanation: One possible original array could be [1,3,4]:- Twice the value of 1 is 1 * 2 = 2.- Twice the value of 3 is 3 * 2 = 6.- Twice the value of 4 is 4 * 2 = 8.Other original arrays could be [4,3,1] or [3,1,4].Example 2:Input: changed = [6,3,0,1] | Output: []Explanation: changed is not a doubled array.Example 3:Input: changed = [1] | Output: []Explanation: changed is not a doubled array.
###Code
# Method 1 (Solves partially)
def foo(a):
from collections import Counter
dicVal = Counter(a)
result = dict()
dicVal
for i in dicVal.keys():
checkVal = i*2
if checkVal in dicVal.keys() and checkVal not in result:
result[i] = 0
else:
return []
return result
# print(foo([1,3,4,2,6,8]))
# Method 2
def findOriginalArray(changed):
from collections import defaultdict
if len(changed)%2 == 1: return []
changed.sort()
dic = defaultdict(int)
for i in changed: dic[i]+=1
res=[]
for i in changed:
if dic[i]>0:
if dic[i*2]>0:
dic[i]-=1
dic[i*2]-=1
res.append(i)
else:
return []
return res
print(findOriginalArray([1,3,4,2,6,8]))
###Output
[1, 3, 4]
###Markdown
Minimum Number of Work Sessions to Finish the TasksThere are n tasks assigned to you. The task times are represented as an integer array tasks of length n, where the ith task takes tasks[i] hours to finish. A work session is when you work for at most sessionTime consecutive hours and then take a break.You should finish the given tasks in a way that satisfies the following conditions:If you start a task in a work session, you must complete it in the same work session.You can start a new task immediately after finishing the previous one.You may complete the tasks in any order.Given tasks and sessionTime, return the minimum number of work sessions needed to finish all the tasks following the conditions above.The tests are generated such that sessionTime is greater than or equal to the maximum element in tasks[i].Example 1:Input: tasks = [1,2,3], sessionTime = 3 | Output: 2Explanation: You can finish the tasks in two work sessions. - First work session: finish the first and the second tasks in 1 + 2 = 3 hours. - Second work session: finish the third task in 3 hours.Example 2:Input: tasks = [3,1,3,1,1], sessionTime = 8 | Output: 2Explanation: You can finish the tasks in two work sessions. - First work session: finish all the tasks except the last one in 3 + 1 + 3 + 1 = 8 hours. - Second work session: finish the last task in 1 hour.Example 3:Input: tasks = [1,2,3,4,5], sessionTime = 15 | Output: 1Explanation: You can finish all the tasks in one work session.
###Code
# Method 1
def minSession(tasks,sessionTime):
from itertools import combinations
if len(tasks) == 1 and sessionTime > tasks[0]: return 1
dummy = [j for i in range(len(tasks)) for j in list(set(combinations(tasks,i+1))) if sum(j) == sessionTime]
if len(dummy) == 0: return 1
return len(dummy)
customtest = [([1,2,3],3), ([3,1,3,1,1],8), ([1,2,3,4,5],15), ([1],2), ([1,2],5)]
for i in customtest:
task,sessionTime = i[0],i[1]
print(f'Method 1 output for task = {task} and sessionTime = {sessionTime} is: {minSession(task,sessionTime)}')
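# Added sketch: the combination count above does not actually pack tasks into sessions; a small
# backtracking search does, and the inputs are small enough for it. Name minSessions2 is illustrative.
def minSessions2(tasks, sessionTime):
    tasks = sorted(tasks, reverse=True)
    best = len(tasks)
    sessions = []
    def backtrack(i):
        nonlocal best
        if len(sessions) >= best:          # cannot beat the best packing found so far
            return
        if i == len(tasks):
            best = len(sessions)
            return
        for j in range(len(sessions)):     # try to fit the task into an existing session
            if sessions[j] + tasks[i] <= sessionTime:
                sessions[j] += tasks[i]
                backtrack(i + 1)
                sessions[j] -= tasks[i]
        sessions.append(tasks[i])          # or open a new session for it
        backtrack(i + 1)
        sessions.pop()
    backtrack(0)
    return best
# minSessions2([1,2,3], 3) -> 2; minSessions2([3,1,3,1,1], 8) -> 2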
###Output
Method 1 output for task = [1, 2, 3] and sessionTime = 3 is: 2
Method 1 output for task = [3, 1, 3, 1, 1] and sessionTime = 8 is: 2
Method 1 output for task = [1, 2, 3, 4, 5] and sessionTime = 15 is: 1
Method 1 output for task = [1] and sessionTime = 2 is: 1
Method 1 output for task = [1, 2] and sessionTime = 5 is: 1
###Markdown
Count Good TripletsGiven an array of integers arr, and three integers a, b and c. You need to find the number of good triplets.A triplet (arr[i], arr[j], arr[k]) is good if the following conditions are true:0 <= i < j < k < arr.length - |arr[i] - arr[j]| <= a - |arr[j] - arr[k]| <= b - |arr[i] - arr[k]| <= cWhere |x| denotes the absolute value of x.Return the number of good triplets. Example 1:Input: arr = [3,0,1,1,9,7], a = 7, b = 2, c = 3 | Output: 4Explanation: There are 4 good triplets: [(3,0,1), (3,0,1), (3,1,1), (0,1,1)].Example 2:Input: arr = [1,1,2,2,3], a = 0, b = 0, c = 1 | Output: 0Explanation: No triplet satisfies all conditions.
###Code
arr = [1,1,2,2,3];a = 0;b = 0;c = 1;counter = 0
# Method 1
def goodTriplets1(arr,a,b,c):
counter = 0
for i in range(len(arr)-2):
for j in range(i+1,len(arr)):
for k in range(j+1,len(arr)):
if abs(arr[i] - arr[j]) <= a and abs(arr[j] - arr[k]) <= b and abs(arr[i] - arr[k]) <= c:
counter += 1
return counter
print(f'Method 1 output is: {goodTriplets1(arr,a,b,c)}')
# Method 2
def goodTriplets2(arr,a,b,c):
return len([1 for i in range(len(arr)-2) for j in range(i+1,len(arr)) for k in range(j+1,len(arr)) if abs(arr[i] - arr[j]) <= a and abs(arr[j] - arr[k]) <= b and abs(arr[i] - arr[k]) <= c])
print(f'Method 2 output is: {goodTriplets2(arr,a,b,c)}')
###Output
Method 1 output is: 0
Method 2 output is: 0
###Markdown
Number of Pairs of Interchangeable RectanglesYou are given n rectangles represented by a 0-indexed 2D integer array rectangles, where rectangles[i] = [widthi, heighti] denotes the width and height of the ith rectangle.Two rectangles i and j (i < j) are considered interchangeable if they have the same width-to-height ratio. More formally, two rectangles are interchangeable if widthi/heighti == widthj/heightj (using decimal division, not integer division).Return the number of pairs of interchangeable rectangles in rectangles.Example 1:Input: rectangles = [[4,8],[3,6],[10,20],[15,30]] | Output: 6Explanation: The following are the interchangeable pairs of rectangles by index (0-indexed): - Rectangle 0 with rectangle 1: 4/8 == 3/6. - Rectangle 0 with rectangle 2: 4/8 == 10/20. - Rectangle 0 with rectangle 3: 4/8 == 15/30. - Rectangle 1 with rectangle 2: 3/6 == 10/20. - Rectangle 1 with rectangle 3: 3/6 == 15/30. - Rectangle 2 with rectangle 3: 10/20 == 15/30.Example 2:Input: rectangles = [[4,5],[7,8]] | Output: 0Explanation: There are no interchangeable pairs of rectangles.
###Code
# Method 1 (38 / 46 test cases passed)
def rectPair1(rect):
col = len(rect)
check = [rect[i][0]/rect[i][1] for i in range(col)]
return len([1 for i in range(len(check)) for j in range(i+1,len(check)) if check[i] == check[j]])
print(f'Method 1 output is: {rectPair1([[4,8],[3,6],[10,20],[15,30]])}')
# Method 2 (considerable solution, but not the optimal one)
def rectPair2(rectangles):
from collections import defaultdict
ratios = [i[0]/i[1] for i in rectangles]
ans = 0
d = defaultdict(int)
for ratio in ratios:
if str(ratio) in d:
ans += d[str(ratio)]
d[str(ratio)] += 1
return ans
print(f'Method 2 output is: {rectPair2([[4,8],[3,6],[10,20],[15,30]])}')
###Output
Method 1 output is: 6
Method 2 output is: 6
###Markdown
Maximum Difference Between Increasing ElementsGiven a 0-indexed integer array nums of size n, find the maximum difference between nums[i] and nums[j] (i.e., nums[j] - nums[i]), such that 0 <= i < j < n and nums[i] < nums[j].Return the maximum difference. If no such i and j exists, return -1.Example 1:Input: nums = [7,1,5,4] | Output: 4Explanation:The maximum difference occurs with i = 1 and j = 2, nums[j] - nums[i] = 5 - 1 = 4.Note that with i = 1 and j = 0, the difference nums[j] - nums[i] = 7 - 1 = 6, but i > j, so it is not valid.Example 2:Input: nums = [9,4,3,2] | Output: -1Explanation:There is no i and j such that i < j and nums[i] < nums[j].Example 3:Input: nums = [1,5,2,10] | Output: 9Explanation:The maximum difference occurs with i = 0 and j = 3, nums[j] - nums[i] = 10 - 1 = 9.
###Code
# Method 1 (Solves but okish solution)
def maximumDifference1(nums):
out = [nums[j]-nums[i] if nums[i] < nums[j] else -1 for i in range(len(nums)) for j in range(i+1,len(nums))]
return max(out)
print(f'Method 1 output is: {maximumDifference1([7,1,5,4])}')
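# Added sketch: tracking the smallest value seen so far gives the answer in a single pass.
# Name maximumDifference2 is illustrative.
def maximumDifference2(nums):
    best, lowest = -1, nums[0]
    for x in nums[1:]:
        if x > lowest:
            best = max(best, x - lowest)
        lowest = min(lowest, x)
    return best
# maximumDifference2([7,1,5,4]) -> 4; maximumDifference2([9,4,3,2]) -> -1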
###Output
Method 1 output is: 4
###Markdown
Count Square Sum TriplesA square triple (a,b,c) is a triple where a, b, and c are integers and a2 + b2 = c2.Given an integer n, return the number of square triples such that 1 <= a, b, c <= n.Example 1:Input: n = 5 | Output: 2Explanation: The square triples are (3,4,5) and (4,3,5).Example 2:Input: n = 10 | Output: 4Explanation: The square triples are (3,4,5), (4,3,5), (6,8,10), and (8,6,10).
###Code
# Method 1
def countSquare1(n):
alist = [i for i in range(1,n+1)]
counter = 0
for i in range(len(alist)):
for j in range(len(alist)):
for k in range(len(alist)):
if alist[i]**2 + alist[j]**2 == alist[k]**2:
counter += 1
return counter
print(f'Method 1 output is: {countSquare1(5)}')
# Method 2 (41 / 91 test cases passed)
def countSquare2(n):
alist = [i for i in range(1,n+1)]
return len([1 for i in range(len(alist)) for j in range(len(alist)) for k in range(len(alist)) if alist[i]**2 + alist[j]**2 == alist[k]**2])
print(f'Method 2 output is: {countSquare2(5)}')
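# Added sketch: iterating only over (a, b) and checking whether a^2 + b^2 is a perfect square
# drops the complexity to O(n^2). Assumes Python 3.8+ for math.isqrt. Name is illustrative.
import math
def countSquare3(n):
    counter = 0
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            c = math.isqrt(a * a + b * b)
            if c <= n and c * c == a * a + b * b:
                counter += 1
    return counter
# countSquare3(5) -> 2; countSquare3(10) -> 4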
###Output
Method 1 output is: 2
Method 2 output is: 2
###Markdown
Count Nice Pairs in an ArrayYou are given an array nums that consists of non-negative integers. Let us define rev(x) as the reverse of the non-negative integer x. For example, rev(123) = 321, and rev(120) = 21. A pair of indices (i, j) is nice if it satisfies all of the following conditions:0 <= i < j < nums.lengthnums[i] + rev(nums[j]) == nums[j] + rev(nums[i])Return the number of nice pairs of indices. Since that number can be too large, return it modulo 10^9 + 7. Example 1:Input: nums = [42,11,1,97] | Output: 2Explanation: The two pairs are: - (0,3) : 42 + rev(97) = 42 + 79 = 121, 97 + rev(42) = 97 + 24 = 121. - (1,2) : 11 + rev(1) = 11 + 1 = 12, 1 + rev(11) = 1 + 11 = 12.Example 2:Input: nums = [13,10,35,24,76] | Output: 4
###Code
# Method 1 (64 / 84 test cases passed)
def countNicePair(nums):
counter = 0
for i in range(len(nums)):
for j in range(i+1,len(nums)):
if nums[i] + int(str(nums[j])[::-1]) == int(str(nums[i])[::-1]) + nums[j]:
counter += 1
return counter % 1_000_000_007
print(f'Method 1 output is: {countNicePair([42,11,1,97])}')
# Method 2
def countNicePair1(nums):
from collections import defaultdict
ans = 0
freq = defaultdict(int)
for x in nums:
x -= int(str(x)[::-1])
ans += freq[x]
freq[x] += 1
return ans % 1_000_000_007
print(f'Method 2 output is: {countNicePair1([42,11,1,97])}')
###Output
Method 1 output is: 2
Method 2 output is: 2
###Markdown
Number of Pairs of Strings With Concatenation Equal to TargetGiven an array of digit strings nums and a digit string target, return the number of pairs of indices (i, j) (where i != j) such that the concatenation of nums[i] + nums[j] equals target.Example 1:Input: nums = ["777","7","77","77"], target = "7777" | Output: 4Explanation: Valid pairs are: -(0, 1): "777" + "7" -(1, 0): "7" + "777" -(2, 3): "77" + "77" -(3, 2): "77" + "77"Example 2:Input: nums = ["123","4","12","34"], target = "1234" | Output: 2Explanation: Valid pairs are: -(0, 1): "123" + "4" -(2, 3): "12" + "34"Example 3:Input: nums = ["1","1","1"], target = "11" | Output: 6Explanation: Valid pairs are: -(0, 1): "1" + "1" -(1, 0): "1" + "1" -(0, 2): "1" + "1" -(2, 0): "1" + "1" -(1, 2): "1" + "1" -(2, 1): "1" + "1"
###Code
# Method 1 (Best method)
def numOfPairs(nums, target):
return len([1 for i in range(len(nums)) for j in range(len(nums)) if i != j and nums[i]+nums[j] == target])
print(f'Method 1 output is: {numOfPairs(["777","7","77","77"], "7777")}')
###Output
Method 1 output is: 4
###Markdown
Check if Numbers Are Ascending in a SentenceA sentence is a list of tokens separated by a single space with no leading or trailing spaces. Every token is either a positive number consisting of digits 0-9 with no leading zeros, or a word consisting of lowercase English letters.For example, "a puppy has 2 eyes 4 legs" is a sentence with seven tokens: "2" and "4" are numbers and the other tokens such as "puppy" are words.Given a string s representing a sentence, you need to check if all the numbers in s are strictly increasing from left to right (i.e., other than the last number, each number is strictly smaller than the number on its right in s).Return true if so, or false otherwise.Example 1:Input: s = "1 box has 3 blue 4 red 6 green and 12 yellow marbles" | Output: trueExplanation: The numbers in s are: 1, 3, 4, 6, 12.They are strictly increasing from left to right: 1 < 3 < 4 < 6 < 12.Example 2:Input: s = "hello world 5 x 5" | Output: falseExplanation: The numbers in s are: 5, 5. They are not strictly increasing.Example 3:Input: s = "sunset is at 7 51 pm overnight lows will be in the low 50 and 60 s" | Output: falseExplanation: The numbers in s are: 7, 51, 50, 60. They are not strictly increasing.Example 4:Input: s = "4 5 11 26" | Output: trueExplanation: The numbers in s are: 4, 5, 11, 26.They are strictly increasing from left to right: 4 < 5 < 11 < 26.
###Code
# Method 1 (100%)
def areNumbersAscending(s: str) -> bool:
alist = s.split(' ')
numList = [int(alist[i]) for i in range(len(alist)) if alist[i].isdigit()]
return all(i < j for i, j in zip(numList, numList[1:]))
print(f'Method 1 output is: {areNumbersAscending("sunset is at 7 51 pm overnight lows will be in the low 50 and 60 s")}')
###Output
Method 1 output is: False
###Markdown
Find the Middle Index in ArrayGiven a 0-indexed integer array nums, find the leftmost middleIndex (i.e., the smallest amongst all the possible ones).A middleIndex is an index where nums[0] + nums[1] + ... + nums[middleIndex-1] == nums[middleIndex+1] + nums[middleIndex+2] + ... + nums[nums.length-1].If middleIndex == 0, the left side sum is considered to be 0. Similarly, if middleIndex == nums.length - 1, the right side sum is considered to be 0.Return the leftmost middleIndex that satisfies the condition, or -1 if there is no such index. Example 1:Input: nums = [2,3,-1,8,4] | Output: 3Explanation:The sum of the numbers before index 3 is: 2 + 3 + -1 = 4The sum of the numbers after index 3 is: 4 = 4Example 2:Input: nums = [1,-1,4] | Output: 2Explanation:The sum of the numbers before index 2 is: 1 + -1 = 0The sum of the numbers after index 2 is: 0Example 3:Input: nums = [2,5] | Output: -1Explanation:There is no valid middleIndex.Example 4:Input: nums = [1] | Output: 0Explantion:The sum of the numbers before index 0 is: 0The sum of the numbers after index 0 is: 0
###Code
# Method 1
def findMiddleIndex(nums):
for i in range(len(nums)):
if nums[i]+sum(nums[:i]) == sum(nums[i:]):
return i
return -1
print(f'Method 1 output is: {findMiddleIndex([2,3,-1,8,4])}')
###Output
Method 1 output is: 3
###Markdown
Single Number IIIGiven an integer array nums, in which exactly two elements appear only once and all the other elements appear exactly twice. Find the two elements that appear only once. You can return the answer in any order.You must write an algorithm that runs in linear runtime complexity and uses only constant extra space. Example 1:Input: nums = [1,2,1,3,2,5] | Output: [3,5]Explanation: [5, 3] is also a valid answer.Example 2:Input: nums = [-1,0] | Output: [-1,0]Example 3:Input: nums = [0,1] | Output: [1,0]
###Code
# Method 1
def singleNumII(nums):
from collections import Counter
d = Counter(nums)
return [key for key,value in d.items() if value == 1]
print(f'Method 1 output is: {singleNumII([0,3,1,1,1,1,3,6,2,2,7])}')
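# Added sketch: Counter works but needs O(n) extra space, while the statement asks for O(1).
# XOR-ing everything leaves a ^ b; any set bit of that value separates a from b. Name is illustrative.
def singleNumberIII(nums):
    xor_all = 0
    for x in nums:
        xor_all ^= x
    lowbit = xor_all & -xor_all        # a bit in which the two unique numbers differ
    a = b = 0
    for x in nums:
        if x & lowbit:
            a ^= x
        else:
            b ^= x
    return [a, b]
# singleNumberIII([1,2,1,3,2,5]) -> [3, 5] (in some order)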
###Output
Method 1 output is: [0, 6, 7]
###Markdown
Sum of Subsequence WidthsThe width of a sequence is the difference between the maximum and minimum elements in the sequence.Given an array of integers nums, return the sum of the widths of all the non-empty subsequences of nums. Since the answer may be very large, return it modulo 109 + 7.A subsequence is a sequence that can be derived from an array by deleting some or no elements without changing the order of the remaining elements. For example, [3,6,2,7] is a subsequence of the array [0,3,1,6,2,2,7]. Example 1:Input: nums = [2,1,3] | Output: 6Explanation: The subsequences are [1], [2], [3], [2,1], [2,3], [1,3], [2,1,3].The corresponding widths are 0, 0, 0, 1, 1, 2, 2.The sum of these widths is 6.Example 2:Input: nums = [2] | Output: 0
###Code
# Method 1 (Okish solution but need to think how to handle large list)
def subSeqSum(nums):
from itertools import combinations
subSeq = [list(j) for i in range(len((nums))) for j in combinations(nums,i) if len(j) != 0]
subSeq.append(nums)
return sum([0 if len(subSeq[i]) == 1 else abs( min(subSeq[i]) - max(subSeq[i]) ) for i in range(len(subSeq))]) % 1000000007
print(f'Method 1 output is : {subSeqSum([5,69,89,92,31])}')
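# Added sketch: after sorting, nums[i] is the maximum of 2^i subsequences and the minimum of
# 2^(n-1-i) of them, which collapses the exponential enumeration to one pass. Name is illustrative.
def sumSubseqWidths(nums):
    MOD = 1_000_000_007
    nums = sorted(nums)
    total, pow2 = 0, 1
    for i in range(len(nums)):
        total += (nums[i] - nums[-1 - i]) * pow2   # +max contribution, -min contribution
        pow2 = pow2 * 2 % MOD
    return total % MOD
# sumSubseqWidths([2,1,3]) -> 6; sumSubseqWidths([5,69,89,92,31]) -> 1653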
###Output
Method 1 output is : 1653
###Markdown
Vowels of All SubstringsGiven a string word, return the sum of the number of vowels ('a', 'e', 'i', 'o', and 'u') in every substring of word.A substring is a contiguous (non-empty) sequence of characters within a string.Note: Due to the large constraints, the answer may not fit in a signed 32-bit integer. Please be careful during the calculations. Example 1:Input: word = "aba" | Output: 6Explanation: All possible substrings are: "a", "ab", "aba", "b", "ba", and "a". - "b" has 0 vowels in it - "a", "ab", "ba", and "a" have 1 vowel each - "aba" has 2 vowels in itHence, the total sum of vowels = 0 + 1 + 1 + 1 + 1 + 2 = 6. Example 2:Input: word = "abc" | Output: 3Explanation: All possible substrings are: "a", "ab", "abc", "b", "bc", and "c". - "a", "ab", and "abc" have 1 vowel each - "b", "bc", and "c" have 0 vowels eachHence, the total sum of vowels = 1 + 1 + 1 + 0 + 0 + 0 = 3. Example 3:Input: word = "ltcd" | Output: 0Explanation: There are no vowels in any substring of "ltcd".Example 4:Input: word = "noosabasboosa" | Output: 237Explanation: There are a total of 237 vowels in all the substrings.
###Code
# Method 1
def subStringVow(word):
return sum((i+1)*(len(word)-i) for i, val in enumerate(word) if val in 'aeiou')
print(f'Method 1 output is: {subStringVow("abcdefgetgds")}')
###Output
Method 1 output is: 92
###Markdown
Next Greater Element IThe next greater element of some element x in an array is the first greater element that is to the right of x in the same array.You are given two distinct 0-indexed integer arrays nums1 and nums2, where nums1 is a subset of nums2.For each 0 <= i < nums1.length, find the index j such that nums1[i] == nums2[j] and determine the next greater element of nums2[j] in nums2. If there is no next greater element, then the answer for this query is -1.Return an array ans of length nums1.length such that ans[i] is the next greater element as described above. Example 1:Input: nums1 = [4,1,2], nums2 = [1,3,4,2] | Output: [-1,3,-1]Explanation: The next greater element for each value of nums1 is as follows: - 4 is underlined in nums2 = [1,3,4,2]. There is no next greater element, so the answer is -1. - 1 is underlined in nums2 = [1,3,4,2]. The next greater element is 3. - 2 is underlined in nums2 = [1,3,4,2]. There is no next greater element, so the answer is -1.Example 2:Input: nums1 = [2,4], nums2 = [1,2,3,4] | Output: [3,-1]Explanation: The next greater element for each value of nums1 is as follows: - 2 is underlined in nums2 = [1,2,3,4]. The next greater element is 3. - 4 is underlined in nums2 = [1,2,3,4]. There is no next greater element, so the answer is -1.
###Code
def nextGreaterElement(nums1, nums2):
nextgreatest = {}
stack = []
for num in nums2:
while stack and stack[-1] < num:
nextgreatest[stack.pop()] = num
stack.append(num)
for i in range(len(nums1)):
nums1[i] = nextgreatest[nums1[i]] if nums1[i] in nextgreatest else -1
return nums1
print(f'Method 1 output is: {nextGreaterElement([2,4],[1,2,3,4])}')
'''
n1 = [4,1,2]
nums2 = [1,2,3,4]
c = []
# nums2[:nums2.index(max(nums2))]
for i in n1:
if i == max(nums2):
# print(i,-1)
c.append(-1)
else:
# print(i,nums2[i:])
t = sorted(nums2[i:])
print(t)
# print(i,t[0])
if t[0] != i:
print(i,t[1])
c.append(t[0])
else:
print(i,t[0])
c.append(-1)
'''
###Output
Method 1 output is: [3, -1]
###Markdown
Next Greater Element II (yet to revisit this one). Given a circular integer array nums (i.e., the next element of nums[nums.length - 1] is nums[0]), return the next greater number for every element in nums. The next greater number of a number x is the first greater number to its traversing-order next in the array, which means you could search circularly to find its next greater number. If it doesn't exist, return -1 for this number. Example 1: Input: nums = [1,2,1] | Output: [2,-1,2]. Explanation: The first 1's next greater number is 2; the number 2 cannot find a next greater number; the second 1's next greater number needs to be searched circularly, which is also 2. Example 2: Input: nums = [1,2,3,4,3] | Output: [2,3,4,-1,4]
###Code
nums = [1,2,3,4,3]
maxEle = nums[0]
res = []
for i in nums:
if i > maxEle:
maxEle = i
res.append(maxEle)
elif i == maxEle:res.append(-1)
else:res.append(maxEle)
res
###Output
_____no_output_____
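###Markdown
The draft above only tracks a running maximum, so it does not handle the circular wrap-around described in the problem. A stack-based sketch follows (the function name `nextGreaterElementsCircular` is illustrative, not from the original notebook): the array is scanned twice and indices still waiting for a greater value are kept on a stack.
###Code
# Sketch: monotonic stack over two passes of the circular array
def nextGreaterElementsCircular(nums):
    n = len(nums)
    res = [-1] * n
    stack = []                           # indices whose next greater element is still unknown
    for i in range(2 * n):
        num = nums[i % n]
        while stack and nums[stack[-1]] < num:
            res[stack.pop()] = num
        if i < n:
            stack.append(i)
    return res
print(f'Sketch output is: {nextGreaterElementsCircular([1,2,3,4,3])}')
###Output
_____no_output_____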
###Markdown
Two Furthest Houses With Different ColorsThere are n houses evenly lined up on the street, and each house is beautifully painted. You are given a 0-indexed integer array colors of length n, where colors[i] represents the color of the ith house.Return the maximum distance between two houses with different colors.The distance between the ith and jth houses is abs(i - j), where abs(x) is the absolute value of x.Example 1:Input: colors = [1,1,1,6,1,1,1] | Output: 3Explanation: In the above image, color 1 is blue, and color 6 is red. The furthest two houses with different colors are house 0 and house 3. House 0 has color 1, and house 3 has color 6. The distance between them is abs(0 - 3) = 3. Note that houses 3 and 6 can also produce the optimal answer.Example 2:Input: colors = [1,8,3,8,3] | Output: 4Explanation: In the above image, color 1 is blue, color 8 is yellow, and color 3 is green. The furthest two houses with different colors are house 0 and house 4. House 0 has color 1, and house 4 has color 3. The distance between them is abs(0 - 4) = 4.Example 3:Input: colors = [0,1] | Output: 1Explanation: The furthest two houses with different colors are house 0 and house 1. House 0 has color 0, and house 1 has color 1. The distance between them is abs(0 - 1) = 1.
###Code
def boo(alist):
return max([abs(i-j) for i in range(len(alist)) for j in range(i,len(alist)) if i != j and alist[i] != alist[j]])
boo([100,0])
###Output
_____no_output_____
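###Markdown
The comprehension above compares every pair of houses, which is O(n^2). Because an optimal pair always involves either the first or the last house, a linear-time sketch is possible (the function name `maxDistanceLinear` is illustrative, not from the original notebook).
###Code
# Sketch: pair house 0 or house n-1 with the farthest house of a different colour
def maxDistanceLinear(colors):
    n = len(colors)
    best = 0
    for j in range(n - 1, -1, -1):       # farthest house differing from house 0
        if colors[j] != colors[0]:
            best = max(best, j)
            break
    for i in range(n):                   # farthest house differing from the last house
        if colors[i] != colors[-1]:
            best = max(best, n - 1 - i)
            break
    return best
print(f'Sketch output is: {maxDistanceLinear([1,8,3,8,3])}')
###Output
_____no_output_____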
###Markdown
Adding Spaces to a StringYou are given a 0-indexed string s and a 0-indexed integer array spaces that describes the indices in the original string where spaces will be added. Each space should be inserted before the character at the given index.For example, given s = "EnjoyYourCoffee" and spaces = [5, 9], we place spaces before 'Y' and 'C', which are at indices 5 and 9 respectively. Thus, we obtain "Enjoy Your Coffee".Return the modified string after the spaces have been added.Example 1:Input: s = "LeetcodeHelpsMeLearn", spaces = [8,13,15] | Output: "Leetcode Helps Me Learn"Explanation: The indices 8, 13, and 15 correspond to the underlined characters in "LeetcodeHelpsMeLearn" We then place spaces before those characters.Example 2:Input: s = "icodeinpython", spaces = [1,5,7,9] | Output: "i code in py thon"Explanation:The indices 1, 5, 7, and 9 correspond to the underlined characters in "icodeinpython". We then place spaces before those characters.Example 3:Input: s = "spacing", spaces = [0,1,2,3,4,5,6] | Output: " s p a c i n g"Explanation:We are also able to place spaces before the first character of the string.
###Code
def spaceString(s: str, spaces):
j=0
string=""
for i in spaces:
new = s[j:i]
j=i
string = string+" "+new
string=string+" "+s[i:]
return string[1:]
spaceString("LeetcodeHelpsMeLearn",[8,13,15])
###Output
_____no_output_____
###Markdown
Three Consecutive Odds. Given an integer array arr, return true if there are three consecutive odd numbers in the array. Otherwise, return false. Example 1: Input: arr = [2,6,4,1] | Output: false. Explanation: There are no three consecutive odds. Example 2: Input: arr = [1,2,34,3,4,5,7,23,12] | Output: true. Explanation: [5,7,23] are three consecutive odds.
###Code
arr = [1,2,34,3,4,5,7,23,12]
def conscOdds1(arr):
for i in range(len(arr)):
if len(arr[i:i+3]) == 3:
newList = arr[i:i+3]
for j,k,l in zip(newList,newList[1:],newList[2:]):
if j%2 != 0 and k%2 != 0 and l%2 != 0:
return True
return False
# Method 2
def conscOdds2(arr):
if len(arr) == 2: return False
for j,k,l in zip(arr,arr[1:],arr[2:]):
if j%2 != 0 and k%2 != 0 and l%2 != 0: return True
return False
print(f'Method 1 output is: {conscOdds1(arr)}')
print(f'Method 2 output is: {conscOdds2(arr)}')
###Output
Method 1 output is: True
Method 2 output is: True
###Markdown
Find All Possible Recipes from Given SuppliesYou have information about n different recipes. You are given a string array recipes and a 2D string array ingredients. The ith recipe has the name recipes[i], and you can create it if you have all the needed ingredients from ingredients[i]. Ingredients to a recipe may need to be created from other recipes, i.e., ingredients[i] may contain a string that is in recipes.You are also given a string array supplies containing all the ingredients that you initially have, and you have an infinite supply of all of them.Return a list of all the recipes that you can create. You may return the answer in any order.Note that two recipes may contain each other in their ingredients. Example 1:Input: recipes = ["bread"], ingredients = [["yeast","flour"]], supplies = ["yeast","flour","corn"] | Output: ["bread"]Explanation:We can create "bread" since we have the ingredients "yeast" and "flour".Example 2:Input: recipes = ["bread","sandwich"], ingredients = [["yeast","flour"],["bread","meat"]], supplies = ["yeast","flour","meat"] | Output: ["bread","sandwich"]Explanation:We can create "bread" since we have the ingredients "yeast" and "flour".We can create "sandwich" since we have the ingredient "meat" and can create the ingredient "bread".Example 3:Input: recipes = ["bread","sandwich","burger"], ingredients = [["yeast","flour"],["bread","meat"],["sandwich","meat","bread"]], supplies = ["yeast","flour","meat"] | Output: ["bread","sandwich","burger"]Explanation:We can create "bread" since we have the ingredients "yeast" and "flour".We can create "sandwich" since we have the ingredient "meat" and can create the ingredient "bread".We can create "burger" since we have the ingredient "meat" and can create the ingredients "bread" and "sandwich".
###Code
# Okish method (note: a recipe is added if any single ingredient is in supplies, so recipe-to-recipe dependencies are not handled)
recipes = ["bread","sandwich","burger"]
ingredients = [["yeast","flour"],["bread","meat"],["sandwich","meat","bread"]]
supplies = ["yeast","flour","meat"]
# Joining the recipes,ingredients
joinedRec = [(i,j) for i,j in zip(recipes,ingredients)]
resList = [joinedRec[l][0] for l in range(len(joinedRec)) for word in joinedRec[l][1] if word in supplies and joinedRec[l][0]]
list(set(resList))
###Output
_____no_output_____
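###Markdown
As noted in the comment above, the comprehension marks a recipe as makeable when any one of its ingredients is in supplies, so chains such as "burger" needing "sandwich" are missed. A small iterative sketch is shown below (the function name `findAllRecipes` is illustrative, not from the original notebook): keep sweeping over the recipes until no new one can be created.
###Code
# Sketch: repeatedly make every recipe whose ingredients are all available
def findAllRecipes(recipes, ingredients, supplies):
    available = set(supplies)
    made = []
    changed = True
    while changed:
        changed = False
        for rec, ing in zip(recipes, ingredients):
            if rec not in available and all(x in available for x in ing):
                available.add(rec)       # a finished recipe becomes an available ingredient
                made.append(rec)
                changed = True
    return made
print(findAllRecipes(["bread","sandwich","burger"],
                     [["yeast","flour"],["bread","meat"],["sandwich","meat","bread"]],
                     ["yeast","flour","meat"]))
###Output
_____no_output_____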
###Markdown
Check if Every Row and Column Contains All NumbersAn n x n matrix is valid if every row and every column contains all the integers from 1 to n (inclusive).Given an n x n integer matrix matrix, return true if the matrix is valid. Otherwise, return false.Example 1:Input: matrix = [[1,2,3],[3,1,2],[2,3,1]] | Output: trueExplanation: In this case, n = 3, and every row and column contains the numbers 1, 2, and 3.Hence, we return true.Example 2:Input: matrix = [[1,1,1],[1,2,3],[1,2,3]] | Output: falseExplanation: In this case, n = 3, but the first row and the first column do not contain the numbers 2 or 3.Hence, we return false.
###Code
def checkValid(matrix) -> bool:
n=len(matrix)
for i in range(0,len(matrix)):
map=set()
for j in range(0,len(matrix)):
map.add(matrix[i][j])
if len(map)!=n:
return False
for i in range(0,len(matrix)):
map=set()
for j in range(0,len(matrix)):
map.add(matrix[j][i])
if len(map)!=n:
return False
return True
checkValid([[1,1,1],[1,2,3],[1,2,3]])
###Output
_____no_output_____
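###Markdown
`checkValid` above only counts distinct entries per row and column, which is enough when the entries are guaranteed to lie in 1..n, as in the examples. A sketch that checks the values explicitly is shown below for illustration (the function name `checkValidExplicit` is not from the original notebook).
###Code
# Sketch: every row and every column must equal the set {1, ..., n}
def checkValidExplicit(matrix):
    n = len(matrix)
    target = set(range(1, n + 1))
    rows_ok = all(set(row) == target for row in matrix)
    cols_ok = all(set(col) == target for col in zip(*matrix))
    return rows_ok and cols_ok
print(checkValidExplicit([[1,2,3],[3,1,2],[2,3,1]]), checkValidExplicit([[1,1,1],[1,2,3],[1,2,3]]))
###Output
_____no_output_____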
###Markdown
Find the Winner of an Array GameGiven an integer array arr of distinct integers and an integer k.A game will be played between the first two elements of the array (i.e. arr[0] and arr[1]). In each round of the game, we compare arr[0] with arr[1], the larger integer wins and remains at position 0, and the smaller integer moves to the end of the array. The game ends when an integer wins k consecutive rounds.Return the integer which will win the game.It is guaranteed that there will be a winner of the game. Example 1:Input: arr = [2,1,3,5,4,6,7], k = 2 | Output: 5Explanation: Let's see the rounds of the game:Round | arr | winner | win_count 1 | [2,1,3,5,4,6,7] | 2 | 1 2 | [2,3,5,4,6,7,1] | 3 | 1 3 | [3,5,4,6,7,1,2] | 5 | 1 4 | [5,4,6,7,1,2,3] | 5 | 2So we can see that 4 rounds will be played and 5 is the winner because it wins 2 consecutive games.Example 2:Input: arr = [3,2,1], k = 10 | Output: 3Explanation: 3 will win the first 10 rounds consecutively.
###Code
def getWinner(arr, k) -> int:
winEle,idx = arr[0],0
for i in range(1,len(arr)):
if(arr[i] < winEle):idx += 1
else:
winEle,idx = arr[i],1
if(idx == k):
break
return winEle
print(f'Method 1 output is: {getWinner([2,1,3,5,4,6,7],2)}')
###Output
Method 1 output is: 5
###Markdown
Sum of Even Numbers After QueriesYou are given an integer array nums and an array queries where queries[i] = [vali, indexi].For each query i, first, apply nums[indexi] = nums[indexi] + vali, then print the sum of the even values of nums.Return an integer array answer where answer[i] is the answer to the ith query. Example 1:Input: nums = [1,2,3,4], queries = [[1,0],[-3,1],[-4,0],[2,3]] | Output: [8,6,2,4]Explanation: At the beginning, the array is [1,2,3,4].After adding 1 to nums[0], the array is [2,2,3,4], and the sum of even values is 2 + 2 + 4 = 8.After adding -3 to nums[1], the array is [2,-1,3,4], and the sum of even values is 2 + 4 = 6.After adding -4 to nums[0], the array is [-2,-1,3,4], and the sum of even values is -2 + 4 = 2.After adding 2 to nums[3], the array is [-2,-1,3,6], and the sum of even values is -2 + 6 = 4.Example 2:Input: nums = [1], queries = [[4,0]] | Output: [0]
###Code
def sumEvenAfterQueries(nums, queries):
res = []
for i in range(len(queries)):
dp = queries[i]
# print(nums[dp[1]]+dp[0])
nums[dp[1]] += dp[0]
res.append(sum(list(filter(lambda x: x%2==0, nums))))
return res
print(f'Method 1 output is: {sumEvenAfterQueries([1,2,3,4],[[1,0],[-3,1],[-4,0],[2,3]])}')
# palindrome partitioning helper (unrelated to the query problem above)
def partition(s):
if not s: return [[]]
ans = []
for i in range(1, len(s) + 1):
if s[:i] == s[:i][::-1]: # prefix is a palindrome
for suf in partition(s[i:]): # process suffix recursively
ans.append([s[:i]] + suf)
return ans
partition("aab")  # example input; returns [['a', 'a', 'b'], ['aa', 'b']]
###Output
_____no_output_____
###Markdown
Divide a String Into Groups of Size kA string s can be partitioned into groups of size k using the following procedure: - The first group consists of the first k characters of the string, the second group consists of the next k characters of the string, and so on. Each character can be a part of exactly one group. - For the last group, if the string does not have k characters remaining, a character fill is used to complete the group.Note that the partition is done so that after removing the fill character from the last group (if it exists) and concatenating all the groups in order, the resultant string should be s.Given the string s, the size of each group k and the character fill, return a string array denoting the composition of every group s has been divided into, using the above procedure.Example 1:Input: s = "abcdefghi", k = 3, fill = "x" | Output: ["abc","def","ghi"]Explanation: - The first 3 characters "abc" form the first group. - The next 3 characters "def" form the second group. - The last 3 characters "ghi" form the third group.Since all groups can be completely filled by characters from the string, we do not need to use fill.Thus, the groups formed are "abc", "def", and "ghi".Example 2:Input: s = "abcdefghij", k = 3, fill = "x" | Output: ["abc","def","ghi","jxx"]Explanation:Similar to the previous example, we are forming the first three groups "abc", "def", and "ghi".For the last group, we can only use the character 'j' from the string. To complete this group, we add 'x' twice.Thus, the 4 groups formed are "abc", "def", "ghi", and "jxx".
###Code
def divideStringIntoKGroups(s, k, fill):
    res = []  # build the list locally; a mutable default argument would persist between calls
    for i in range(0, len(s), k):
        if len(s[i:i+k]) == k:
            res.append(s[i:i+k])
        else:
            res.append(s[i:i+k].ljust(k, fill))
    return res
print(f'Method 1 output is: {divideStringIntoKGroups("abcdefghij",3,"x")}')
###Output
Method 1 output is: ['abc', 'def', 'ghi', 'jxx']
###Markdown
Positions of Large GroupsIn a string s of lowercase letters, these letters form consecutive groups of the same character.For example, a string like s = "abbxxxxzyy" has the groups "a", "bb", "xxxx", "z", and "yy".A group is identified by an interval [start, end], where start and end denote the start and end indices (inclusive) of the group. In the above example, "xxxx" has the interval [3,6].A group is considered large if it has 3 or more characters.Return the intervals of every large group sorted in increasing order by start index. Example 1:Input: s = "abbxxxxzzy" | Output: [[3,6]]Explanation: "xxxx" is the only large group with start index 3 and end index 6.Example 2:Input: s = "abc" | Output: []Explanation: We have groups "a", "b", and "c", none of which are large groups.Example 3:Input: s = "abcdddeeeeaabbbcd" | Output: [[3,5],[6,9],[12,14]]Explanation: The large groups are "ddd", "eeee", and "bbb".
###Code
def posLargeGroups(s):
ans,idx = [],0
for i in range(len(s)):
if i == len(s) - 1 or s[i] != s[i+1]:
if i-idx+1 >= 3:
ans.append([idx, i])
idx = i+1
return ans
print(f'Method 1 output is: {posLargeGroups("abcdddeeeeaabbbcd")}')
###Output
Method 1 output is: [[3, 5], [6, 9], [12, 14]]
|
notebooks/temp/tiledb-cloud-csv-dataframe-tutorial.ipynb
|
###Markdown
Download data (650MB) https://drive.google.com/file/d/1_wAOUt7qfeXruXzA4jl2axrcc_B1HzQR/view?usp=sharing
###Code
tiledb.from_csv("taxi_jan20_dense_array", "taxi-rides-jan2020.csv",
parse_dates=['tpep_dropoff_datetime', 'tpep_pickup_datetime'])
A = tiledb.open("taxi_jan20_dense_array")
print(A.schema)
print(A.nonempty_domain())
df = A.df[0:9]
df
df = A.query(attrs=['tpep_dropoff_datetime', 'fare_amount']).df[0:4]
df
A.close()
tiledb.from_csv("taxi_jan20_sparse_array", "taxi-rides-jan2020.csv",
sparse=True,
index_col=['tpep_dropoff_datetime', 'fare_amount'],
parse_dates=['tpep_dropoff_datetime', 'tpep_pickup_datetime'])
A = tiledb.open("taxi_jan20_sparse_array")
print(A.schema)
A.nonempty_domain()
df = A.query().df[:]
df
#df = pd.DataFrame(A[np.datetime64('2020-01-01T00:00:00'):np.datetime64('2020-01-01T00:30:00')])
df = A.df[np.datetime64('2020-01-01T00:00:00'):np.datetime64('2020-01-01T00:30:00')]
df
df = A.df[:, 5.0:7.0]
df
df = A.df[np.datetime64('2020-01-01T00:00:00'):np.datetime64('2020-01-01T02:00:00'), 2.00:6.50]
df
A.close()
###Output
_____no_output_____
###Markdown
TODO: GIF of uploading array folder to S3 and then registering in TileDB Cloud?
###Code
# Set your username and password; credentials are not needed if running the notebook on TileDB Cloud
config = tiledb.Config()  # the config object itself is still needed for tiledb.Ctx(config) below
#config["rest.username"] = "yourUserName"
#config["rest.token"] = "zzz"
#config["rest.password"] = "yyy"
tiledb_uri = 'tiledb://Broberg/taxi-data-dense-for-csv-tute'
with tiledb.open(tiledb_uri, ctx=tiledb.Ctx(config)) as arr :
df = arr.df[0:9]
df
tiledb_uri = 'tiledb://Broberg/taxi-data-sparse-for-csv-tute'
with tiledb.open(tiledb_uri, ctx=tiledb.Ctx(config)) as arr :
df = arr.df[np.datetime64('2020-01-01T00:00:00'):np.datetime64('2020-01-01T02:00:00'), 2.00:6.50]
df
###Output
_____no_output_____
|
docs/notebooks/AquaCrop_OSPy_Notebook_2.ipynb
|
###Markdown
AquaCrop-OSPy: Bridging the gap between research and practice in crop-water modelling This series of notebooks provides users with an introduction to AquaCrop-OSPy, an open-source Python implementation of the U.N. Food and Agriculture Organization (FAO) AquaCrop model. AquaCrop-OSPy is accompanied by a series of Jupyter notebooks, which guide users interactively through a range of common applications of the model. Only basic Python experience is required, and the notebooks can easily be extended and adapted by users for their own applications and needs. This notebook series consists of four parts:1. Running an AquaCrop-OSPy model2. Estimation of irrigation water demands3. Optimisation of irrigation management strategies4. Projection of climate change impacts Install and import AquaCrop-OSPy Install and import aquacrop as we did in Notebook 1.
###Code
# !pip install aquacrop==0.2
from aquacrop.classes import *
from aquacrop.core import *
# from google.colab import output
# output.clear()
# only used for local development
# import sys
# _=[sys.path.append(i) for i in ['.', '..']]
# from aquacrop.classes import *
# from aquacrop.core import *
###Output
_____no_output_____
###Markdown
Notebook 2: Estimating irrigation water demands under different irrigation strategies In Notebook 1, we learned how to create an `AquaCropModel` by selecting a weather data file, `SoilClass`, `CropClass` and `InitWCClass` (initial water content). In this notebook, we show how AquaCrop-OSPy can be used to explore impacts of different irrigation management strategies on water use and crop yields. The example workflow below shows how different irrigation management practices can be defined in the model, and resulting impacts on water use productivity explored to support efficient irrigation scheduling and planning decisions.We start by creating a weather DataFrame containing daily measurements of minimum temperature, maximum temperature, precipitation and reference evapotranspiration. In this example we will use the built in file containing weather data from Champion, Nebraska, USA. (**link**).
###Code
path = get_filepath('champion_climate.txt')
wdf = prepare_weather(path)
wdf
###Output
_____no_output_____
###Markdown
We will run a 37-season simulation starting at 1982-05-01 and ending on 2018-10-30.
###Code
sim_start = '1982/05/01'
sim_end = '2018/10/30'
###Output
_____no_output_____
###Markdown
Next we must define a soil, crop and initial soil water content. This is done by creating a `SoilClass`, `CropClass` and `InitWCClass`. In this example we select a sandy loam soil, a Maize crop, with the soil initially at Field Capacity.
###Code
soil= SoilClass('SandyLoam')
crop = CropClass('Maize',PlantingDate='05/01')
initWC = InitWCClass(value=['FC'])
###Output
_____no_output_____
###Markdown
Irrigation management parameters are selected by creating an `IrrMngtClass` object. With this class we can specify a range of different irrigation management strategies. The 6 different strategies can be selected using the `IrrMethod` argument when creating the class. These strategies are as follows:
* `IrrMethod=0`: Rainfed (no irrigation)
* `IrrMethod=1`: Irrigation is triggered if soil water content drops below a specified threshold (or four thresholds representing four major crop growth stages: emergence, canopy growth, max canopy, senescence)
* `IrrMethod=2`: Irrigation is triggered every N days
* `IrrMethod=3`: Predefined irrigation schedule
* `IrrMethod=4`: Net irrigation (maintain a soil-water level by topping up all compartments daily)
* `IrrMethod=5`: Constant depth applied each day

The full list of parameters you can edit is:

Variable Name | Type | Description | Default
--- | --- | --- | ---
IrrMethod | `int` | Irrigation method: 0 = rainfed, 1 = soil moisture targets, 2 = set time interval, 3 = predefined schedule, 4 = net irrigation, 5 = constant depth | 0
SMT | `list[float]` | Soil moisture targets (%TAW) to maintain in each growth stage (only used if irrigation method is equal to 1) | [100,100,100,100]
IrrInterval | `int` | Irrigation interval in days (only used if irrigation method is equal to 2) | 3
Schedule | `pandas.DataFrame` | DataFrame containing dates and depths (only used if irrigation method is equal to 3) | None
NetIrrSMT | `float` | Net irrigation threshold moisture level (% of TAW that will be maintained) (only used if irrigation method is equal to 4) | 80.
depth | `float` | Constant depth to apply on each day (only used if irrigation method is equal to 5) | 0.
WetSurf | `int` | Soil surface wetted by irrigation (%) | 100
AppEff | `int` | Irrigation application efficiency (%) | 100
MaxIrr | `float` | Maximum depth (mm) that can be applied each day | 25
MaxIrrSeason | `float` | Maximum total irrigation (mm) that can be applied in one season | 10_000

For the purposes of this demonstration we will investigate the yields and irrigation applied for a range of constant soil-moisture thresholds, meaning that all 4 soil-moisture thresholds are equal. These irrigation strategies will be compared over a 37-year period. The cell below will create and run an `AquaCropModel` for each irrigation strategy and save the final output.
###Code
# define labels to help after
labels=[]
outputs=[]
for smt in range(0,110,20):
    crop.Name = str(smt) # add helpful label
labels.append(str(smt))
irr_mngt = IrrMngtClass(IrrMethod=1,SMT=[smt]*4) # specify irrigation management
model = AquaCropModel(sim_start,sim_end,wdf,soil,crop,InitWC=initWC,IrrMngt=irr_mngt) # create model
    model.initialize() # initialize model
model.step(till_termination=True) # run model till the end
outputs.append(model.Outputs.Final) # save results
###Output
_____no_output_____
###Markdown
Combine results so that they can be easily visualized.
###Code
import pandas as pd
dflist=outputs
labels[0]='Rainfed'
outlist=[]
for i in range(len(dflist)):
temp = pd.DataFrame(dflist[i][['Yield (tonne/ha)',
'Seasonal irrigation (mm)']])
temp['label']=labels[i]
outlist.append(temp)
all_outputs = pd.concat(outlist,axis=0)
# combine all results
results=pd.concat(outlist)
###Output
_____no_output_____
###Markdown
Use `matplotlib` and `seaborn` to show the range of yields and total irrigation for each strategy over the simulation years.
###Code
# import plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns
# create figure consisting of 2 plots
fig,ax=plt.subplots(2,1,figsize=(10,14))
# create two box plots
sns.boxplot(data=results,x='label',y='Yield (tonne/ha)',ax=ax[0])
sns.boxplot(data=results,x='label',y='Seasonal irrigation (mm)',ax=ax[1])
# labels and font sizes
ax[0].tick_params(labelsize=15)
ax[0].set_xlabel('Soil-moisture threshold (%TAW)',fontsize=18)
ax[0].set_ylabel('Yield (t/ha)',fontsize=18)
ax[1].tick_params(labelsize=15)
ax[1].set_xlabel('Soil-moisture threshold (%TAW)',fontsize=18)
ax[1].set_ylabel('Total Irrigation (ha-mm)',fontsize=18)
plt.legend(fontsize=18)
###Output
_____no_output_____
###Markdown
Appendix A: Other types of irrigation strategy Testing different irrigation strategies is as simple as creating multiple `IrrMngtClass` objects. The **first** strategy we will test is rainfed growth (no irrigation).
###Code
# define irrigation management
rainfed = IrrMngtClass(IrrMethod=0)
###Output
_____no_output_____
###Markdown
The **second** strategy triggers irrigation if the root-zone water content drops below an irrigation threshold. There are 4 thresholds corresponding to four main crop growth stages (emergence, canopy growth, max canopy, canopy senescence). The quantity of water applied is given by `min(depletion,MaxIrr)` where `MaxIrr` can be specified when creating an `IrrMngtClass`.
###Code
# irrigate according to 4 different soil-moisture thresholds
threshold4_irrigate = IrrMngtClass(IrrMethod=1,SMT=[40,60,70,30]) # one threshold per growth stage
###Output
_____no_output_____
###Markdown
The **third** strategy irrigates every `IrrInterval` days where the quantity of water applied is given by `min(depletion,MaxIrr)` where `MaxIrr` can be specified when creating an `IrrMngtClass`.
###Code
# irrigate every 7 days
interval_7 = IrrMngtClass(IrrMethod=2,IrrInterval=7)
###Output
_____no_output_____
###Markdown
The **fourth** strategy irrigates according to a predefined calendar. This calendar is defined as a pandas DataFrame and, in this example, we will create a calendar that irrigates on the first Tuesday of each month.
###Code
import pandas as pd # import pandas library
all_days = pd.date_range(sim_start,sim_end) # list of all dates in simulation period
new_month=True
dates=[]
# iterate through all simulation days
for date in all_days:
#check if new month
if date.is_month_start:
new_month=True
if new_month:
# check if tuesday (dayofweek=1)
if date.dayofweek==1:
#save date
dates.append(date)
new_month=False
###Output
_____no_output_____
###Markdown
Now that we have a list of all the first Tuesdays of the month, we can create the full schedule.
###Code
depths = [25]*len(dates) # depth of irrigation applied
schedule=pd.DataFrame([dates,depths]).T # create pandas DataFrame
schedule.columns=['Date','Depth'] # name columns
schedule
###Output
_____no_output_____
###Markdown
Then pass this schedule into our `IrrMngtClass`.
###Code
irrigate_schedule = IrrMngtClass(IrrMethod=3,Schedule=schedule)
###Output
_____no_output_____
###Markdown
The **fifth** strategy is net irrigation. This keeps the soil-moisture content above a specified level. This method differs from the soil moisture thresholds (second strategy) as each compartment is filled to field capacity, instead of water starting above the first compartment and filtering down. In this example the net irrigation mode will maintain a water content of 70% total available water.
###Code
net_irrigation = IrrMngtClass(IrrMethod=4,NetIrrSMT=70)
###Output
_____no_output_____
###Markdown
Now it's time to compare the strategies over the 37-year period. The cell below will create and run an `AquaCropModel` for each irrigation strategy and save the final output.
###Code
# define labels to help after
labels=['rainfed','four thresholds','interval','schedule','net']
strategies = [rainfed,threshold4_irrigate,interval_7,irrigate_schedule,net_irrigation]
outputs=[]
for i,irr_mngt in enumerate(strategies): # for each irrigation strategy...
    crop.Name = labels[i] # add helpful label
model = AquaCropModel(sim_start,sim_end,wdf,soil,crop,InitWC=initWC,IrrMngt=irr_mngt) # create model
    model.initialize() # initialize model
model.step(till_termination=True) # run model till the end
outputs.append(model.Outputs.Final) # save results
###Output
_____no_output_____
###Markdown
The final strategy to show is a custom irrigation strategy. This is one of the key features of AquaCrop-OSPy, as users can define a complex irrigation strategy that incorporates any external data, code bases or machine learning models. To showcase this feature, we will define a function that will irrigate according to the following logic: 1) if there will be no rain over the next 10 days -> irrigate 10mm; 2) if there will be rain in the next 10 days but the soil is over 70% depleted -> irrigate 10mm; 3) otherwise -> no irrigation.
###Code
# function to return the irrigation depth to apply on next day
def get_depth(model):
t = model.ClockStruct.TimeStepCounter # current timestep
    # get weather data for the next 10 days
    weather10 = model.weather[t+1:min(t+10+1,len(model.weather))]
    # if it will rain in the next 10 days
if sum(weather10[:,2])>0:
# check if soil is over 70% depleted
if t>0 and model.InitCond.Depletion/model.InitCond.TAW > 0.7:
depth=10
else:
depth=0
else:
# no rain for next 10 days
depth=10
return depth
# create model with IrrMethod= Constant depth
crop.Name = 'weather' # add helpful label
model = AquaCropModel(sim_start,sim_end,wdf,soil,crop,InitWC=initWC,
IrrMngt=IrrMngtClass(IrrMethod=5,))
model.initialize()
while not model.ClockStruct.ModelTermination:
# get depth to apply
depth=get_depth(model)
model.ParamStruct.IrrMngt.depth=depth
model.step()
outputs.append(model.Outputs.Final) # save results
labels.append('weather')
###Output
_____no_output_____
###Markdown
Combine results so that they can be easily visualized.
###Code
dflist=outputs
outlist=[]
for i in range(len(dflist)):
temp = pd.DataFrame(dflist[i][['Yield (tonne/ha)','Seasonal irrigation (mm)']])
temp['label']=labels[i]
outlist.append(temp)
all_outputs = pd.concat(outlist,axis=0)
# combine all results
results=pd.concat(outlist)
###Output
_____no_output_____
###Markdown
Use `matplotlib` and `seaborn` to show the range of yields and total irrigation for each strategy over the simulation years.
###Code
# import plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns
# create figure consisting of 2 plots
fig,ax=plt.subplots(2,1,figsize=(10,14))
# create two box plots
sns.boxplot(data=results,x='label',y='Yield (tonne/ha)',ax=ax[0])
sns.boxplot(data=results,x='label',y='Seasonal irrigation (mm)',ax=ax[1])
# labels and font sizes
ax[0].tick_params(labelsize=15)
ax[0].set_xlabel(' ')
ax[0].set_ylabel('Yield (t/ha)',fontsize=18)
ax[1].tick_params(labelsize=15)
ax[1].set_xlabel(' ')
ax[1].set_ylabel('Total Irrigation (ha-mm)',fontsize=18)
plt.legend(fontsize=18)
###Output
_____no_output_____
|
scratchpad/coalice_experiment/notebooks/make_video.ipynb
|
###Markdown
Reconstruction of coal-ice melting Author: [email protected], Aniket Tekawade Data contributor: [email protected], Viktor Nikitin
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os
import h5py
import sys
sys.path.append('../')
from recon4D import DataGetter
from ct_segnet import viewer
from ct_segnet.data_utils.data_io import DataFile
fnames = ['/data02/MyArchive/coalice/melting_086.h5', \
'/data02/MyArchive/coalice/flat_fields_melting_086.h5', \
'/data02/MyArchive/coalice/dark_fields_melting_086.h5']
ntheta = 361 # these many projections per 180 degree spin
recon_params = {"mask_ratio" : None, \
"contrast_s" : 0.01, "vert_slice" : slice(300,302,None)}
recon_path = '/data02/MyArchive/coalice/recons'
hf = h5py.File(fnames[0], 'r')
delta_t = hf['measurement/instrument/detector/exposure_time'][:]
# pixel_size = hf['measurement/instrument/detector/pixel_size'][:]
hf.close()
dget = DataGetter(*fnames, ntheta)
idx_list = [0, 720*5, 720*10, 720*15, 720*20, 720*25]
imgs = []
t_vals = []
center_val = dget.find_center(0)
for idx in idx_list:
img_t = dget.reconstruct_window(idx,center_val, **recon_params)[0]
imgs.append(img_t)
t_vals.append(idx*delta_t)
# save it
# fname_tstep = os.path.join(recon_path, "idx%i.hdf5"%idx)
for ii in range(len(imgs)):
fig, ax = plt.subplots(1,1)
ax.imshow(imgs[ii], cmap = 'gray')
ax.text(30,50,'t = %2.0f secs'%t_vals[ii])
delta_t
###Output
_____no_output_____
|
Modulo1/Ejercicios/.ipynb_checkpoints/Problemas Diversos-checkpoint.ipynb
|
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a program that asks the user for the number of kilometres travelled by a motorcycle and the number of litres of fuel consumed during that trip. Display the fuel consumption per kilometre. Example: Kilometres travelled: 260; Litres of fuel used: 12.5; The consumption per kilometre is 20.8. 2. Write a program that asks for the coefficients of a quadratic equation (a x² + b x + c = 0) and prints the solution. Remember that a quadratic equation may have no solution, a single solution, two solutions, or every number as a solution. Your program must indicate: - If the quadratic equation has a real solution, the program must print it. - If the equation has no real solution, the program must print a message saying "The equation has no real solution".
###Code
while True:
    a = float(input('enter the first coefficient '))
    if a != 0:
        break
###Output
enter the first coefficient  0
enter the first coefficient  0
enter the first coefficient  0
enter the first coefficient
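###Markdown
The cell above only reads the leading coefficient; below is a self-contained sketch for both exercises (the prompts, variable names and the consumption formula taken from the worked example are illustrative, not from the original notebook).
###Code
import math

# Exercise 1: fuel consumption, following the example (260 km and 12.5 L give 20.8)
km = float(input('Kilometres travelled: '))
litres = float(input('Litres of fuel used: '))
print(f'The consumption per kilometre is {km / litres}')

# Exercise 2: quadratic equation a*x**2 + b*x + c = 0, with a != 0
a = float(input('enter the first coefficient '))
b = float(input('enter the second coefficient '))
c = float(input('enter the third coefficient '))
disc = b**2 - 4*a*c
if disc > 0:
    x1 = (-b + math.sqrt(disc)) / (2*a)
    x2 = (-b - math.sqrt(disc)) / (2*a)
    print(f'Two real solutions: {x1} and {x2}')
elif disc == 0:
    print(f'One real solution: {-b / (2*a)}')
else:
    print('The equation has no real solution')
###Output
_____no_output_____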
|
SciPy/SciPy.ipynb
|
###Markdown
SciPy. SciPy is a widely used package for mathematics, science and engineering. It handles interpolation, integration, optimization, image processing, numerical solution of ordinary differential equations, signal processing and more. It operates efficiently on NumPy arrays, so NumPy and SciPy work together to solve problems effectively.
###Code
import numpy as np
A = np.array([[1,2,3],[4,5,6],[7,8,8]])
###Output
_____no_output_____
###Markdown
Linear algebra
###Code
from scipy import linalg
# Compute the determinant of a matrix
linalg.det(A)
###Output
_____no_output_____
###Markdown
* LU matrix decomposition
* Factorization: A = P L U
* Note: P is a permutation matrix, L is lower triangular, and U is upper triangular.
###Code
P, L, U = linalg.lu(A)
P
L
U
np.dot(L,U)
###Output
_____no_output_____
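###Markdown
`np.dot(L, U)` above multiplies only the triangular factors; a small added check (not in the original notebook) verifies the full pivoted factorization A = P L U.
###Code
# verify the pivoted LU factorization: A should equal P @ L @ U up to floating-point error
print(P @ L @ U)
print(np.allclose(A, P @ L @ U))
###Output
_____no_output_____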
###Markdown
Eigenvalues and eigenvectors
###Code
EW, EV = linalg.eig(A)
EW
EV
###Output
_____no_output_____
###Markdown
Solving a linear system
###Code
v = np.array([[2],[3],[5]])
v
s = linalg.solve(A,v)
s
###Output
_____no_output_____
###Markdown
Sparse linear algebra
###Code
from scipy import sparse
A = sparse.lil_matrix((1000, 1000))
A
A[0,:100] = np.random.rand(100)
A[1,100:200] = A[0,:100]
A.setdiag(np.random.rand(1000))
A
###Output
_____no_output_____
###Markdown
Linear algebra with sparse matrices
###Code
from scipy.sparse import linalg
# Convert this matrix to Compressed Sparse Row format.
A.tocsr()
A = A.tocsr()
b = np.random.rand(1000)
linalg.spsolve(A, b)
###Output
_____no_output_____
|
notebooks/1_EDA.ipynb
|
###Markdown
Load dataset
###Code
def load_data(DATA_PATH):
texts = ''
for doc in glob.glob(DATA_PATH+'*'):
with open(doc) as f:
texts+='<DOCNAME>'+''.join(doc.split('/')[-1].split('.')[:-2])+'</DOCNAME>'
data = f.read().strip()
texts+=data
return texts
texts = load_data(DATA_PATH)
###Output
_____no_output_____
###Markdown
Parse tags to IOB notation
###Code
tags = ['THREAT_ACTOR', 'SOFTWARE', 'INDUSTRY', 'ORG', 'TIMESTAMP',
'MALWARE', 'COUNTRY', 'IOC', 'IDENTITY', 'CAMPAIGN', 'TOOL',
'MITRE_ATTACK', 'THEAT_ACTOR', 'ATTACK_PATTERN', 'TECHNIQUE',
'CITY']
tags_small = [x.lower() for x in tags]
class DataParser(HTMLParser):
def __init__(self, IOB=True):
super(DataParser, self).__init__()
self.IOB = IOB
self.cur_tag = 'O'
self.dataset = []
self.cur_doc = ''
def handle_starttag(self, tag, attrs):
if tag not in tags_small and tag!='docname':
self.cur_tag = 'O'
else:
self.cur_tag = tag
def handle_endtag(self, tag):
self.cur_tag = 'O'
def handle_data(self, data):
if self.cur_tag=='docname':
self.cur_doc = data
else:
data_tokens = data
tags = [(self.cur_doc, data_tokens, self.cur_tag)]
self.dataset+=tags
parser = DataParser(IOB = False)
parser.feed(texts)
tagged_dataset = parser.dataset
tagged_dataset = pd.DataFrame(tagged_dataset)
tagged_dataset.columns = ['DocName','text', 'intent']
tagged_dataset['intent'].unique()
###Output
_____no_output_____
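###Markdown
The `IOB` flag in `DataParser` above is stored but never used: the parser keeps whole tagged spans. Below is a minimal sketch (the function name `spans_to_iob` is illustrative, not from the original notebook) of how those spans could be converted to token-level IOB2 labels.
###Code
# Sketch: turn (doc, text, tag) spans into token-level IOB2 labels
def spans_to_iob(dataset):
    iob = []
    for doc, text, tag in dataset:
        for i, tok in enumerate(text.split()):
            if tag == 'O':
                iob.append((doc, tok, 'O'))
            elif i == 0:
                iob.append((doc, tok, 'B-' + tag.upper()))   # first token of the span
            else:
                iob.append((doc, tok, 'I-' + tag.upper()))   # continuation tokens
    return iob
iob_dataset = spans_to_iob(parser.dataset)
iob_dataset[:10]
###Output
_____no_output_____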
###Markdown
EDA Fix error in tagging
###Code
tagged_dataset[tagged_dataset['intent'].isin(tags_small)]['intent'].value_counts()
tagged_dataset.loc[tagged_dataset[tagged_dataset['intent']=='theat_actor'].index,'intent'] = 'threat_actor'
tagged_dataset[tagged_dataset['intent'].isin(tags_small)]['intent'].value_counts()
tagged_dataset.to_csv(os.path.join(DATA_DRAFT_PATH,'entities_per_document.csv'),index=False)
tagged_dataset['DocName'].nunique()
tagged_dataset.head(20)
###Output
_____no_output_____
###Markdown
Calculate valuable entities in each document
###Code
tagged_dataset[tagged_dataset['intent'].isin(tags_small)]['DocName'].value_counts()
###Output
_____no_output_____
###Markdown
Calculate appearance of tag in each document
###Code
def calc_tag(series, tag='malware'):
series = series.value_counts()
try:
return series[tag]
except KeyError:
return 0
tagged_dataset.groupby('DocName')['intent'].agg(calc_tag)
from functools import partial
tag_occs = {}
for tag in tagged_dataset['intent'].unique():
if tag=='O':
continue
calc_tag_part = partial(calc_tag,tag = tag)
tag_occs[tag] = tagged_dataset.groupby('DocName')['intent'].agg(calc_tag_part).values
# Python 3.5+
plt.figure(figsize=(20,20))
labels, data = [*zip(*tag_occs.items())] # 'transpose' items to parallel key, value lists
# or backwards compatible
labels, data = tag_occs.keys(), tag_occs.values()
plt.boxplot(data)
plt.xticks(range(1, len(labels) + 1), labels, rotation=45, fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Plot graph of common entities for 20200911_Talos_-_The_art_and_science_of_detecting_Cobalt_Strike, Mandiant_APT1_Report, Group-IB_Lazarus
###Code
tags_small
important_tags = ['threat_actor','industry','org','malware','country','technique','city']
important_tags = ['malware']
shoebox = {}
fileprefixes = {'Cobalt_Strike':'20200911_Talos_-_The_art_and_science_of_detecting_Cobalt_Strike',\
'APT1':'Mandiant_APT1_Report', 'Lazarus':'Group-IB_Lazarus'}
vertices = []#list(fileprefixes)
for name in fileprefixes:
prefix = fileprefixes[name]
Doc_info = tagged_dataset[tagged_dataset['DocName']==prefix]
Doc_info = Doc_info[Doc_info['text']!=name]
shoebox[name] = Doc_info[Doc_info['intent'].isin(important_tags)]['text'].value_counts()[:5]
vertices+=list(shoebox[name].index)
vertices = set(vertices)
list(fileprefixes)
from igraph import *
# Create graph
g = Graph(directed=True)
# Add vertices
g.add_vertices(len(list(fileprefixes))+len(vertices))
colors=['red','blue','green','magenta','yellow']
id_name_dict = {}
# Add ids and labels to vertices
for i, name in enumerate(list(fileprefixes)):
g.vs[i]["id"]= i
g.vs[i]["label"]= name
g.vs[i]["color"]= 'red'
g.vs[i]["size"]= 35
g.vs[i]["label_size"] = 15
id_name_dict[name] = i
for j, name in enumerate(vertices):
g.vs[i+j+1]["id"]= i+j+1
g.vs[i+j+1]["label"]= name[:10]
g.vs[i+j+1]["color"]= 'white'
g.vs[i+j+1]["label_size"] = 10
g.vs[i+j+1]["size"]= 1
id_name_dict[name] = i+j+1
edges = []
weights = []
for name_1 in list(fileprefixes):
for name_2 in vertices:
if name_1==name_2:
continue
if name_2 in shoebox[name_1].index:
edges.append((id_name_dict[name_1],id_name_dict[name_2]))
weights.append(shoebox[name_1].loc[name_2])
# Add edges
g.add_edges(edges)
# Add weights and edge labels
g.es['weight'] = weights
# g.es['label'] = weights
visual_style = {}
out_name = "graph.png"
# Set bbox and margin
visual_style["bbox"] = (700,700)
visual_style["margin"] = 50
# Set vertex colours
# visual_style["vertex_color"] = 'white'
# Set vertex size
# visual_style["vertex_size"] = 45
# Set vertex lable size
# visual_style["vertex_label_size"] = 22
# Don't curve the edges
visual_style["edge_curved"] = True
# Set the layout
my_layout = g.layout_lgl()
visual_style["layout"] = my_layout
# Plot the graph
plot(g, out_name, **visual_style)
###Output
_____no_output_____
###Markdown
Preprocessing. Some basic checks: are there NaNs or duplicate rows? * NaNs: there are none. * Duplicates: there are some, so we drop them, since they would distort the metrics (biasing them towards overly optimistic values).
###Code
print("Total number of NaN's:",df.isna().sum().sum())
print("Number of duplicated rows:",df.duplicated().sum())
df = df[df.duplicated()==False]
df.reset_index(inplace=True,drop=True)
###Output
_____no_output_____
###Markdown
As expected, we are working with a highly imbalanced dataset: the mean of Class is 0.001667, which means that only 0.17% of the entries correspond to Class 1 (fraud).
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Feature engineering: Time and Amount We check that the Time variable correspond to seconds (the database indicates that it correspond to two days)
###Code
print('Total number of days:',df.Time.max()/60/60/24)
###Output
Total number of days: 1.9999074074074075
###Markdown
We can perform some feature engineering based on the Time variable
###Code
df['TimeScaled'] = df.Time/60/60/24/2
df['TimeSin'] = np.sin(2*np.pi*df.Time/60/60/24)
df['TimeCos'] = np.cos(2*np.pi*df.Time/60/60/24)
df.drop(columns='Time',inplace=True)
###Output
_____no_output_____
###Markdown
Some basic statistics for each variable in the dataframe. It is easy to see that all the V variables have zero mean and a standard deviation of order 1 (and they are sorted by it); they come from a PCA in which the variables were scaled beforehand. There are entries with Amount = 0. What is the meaning of this? Transactions with no money exchanged? Is that really fraud? Is it interesting to detect them? Those are questions we cannot answer here, but they should be investigated in a real-world problem. We can see that in this subgroup of the data there is an over-representation of class 1 (FRAUD).
###Code
print('Probability of each one of the classes in the whole dataset')
for i, prob in enumerate(df.Class.value_counts(normalize=True)):
print('Class {}: {:.2f} %'.format(i,prob*100))
print('Probability of each one of the classes in the entries with Amount = 0')
for i, prob in enumerate(df[df.Amount==0].Class.value_counts(normalize=True)):
print('Class {}: {:.2f} %'.format(i,prob*100))
###Output
Probability of each one of the classes in the entries with Amount = 0
Class 0: 98.62 %
Class 1: 1.38 %
###Markdown
The Amount variable is highly dispersed, so it is better to work with it on a logarithmic scale and then rescale it. This does not matter for decision-tree-based methods. Exercise: why?
###Code
plt.figure(figsize=(10,6))
df['AmountLog'] = np.log10(1.+df.Amount)
plt.subplot(121)
sns.distplot(df.Amount,bins=200)
plt.xlim((0,1000))
plt.subplot(122)
sns.distplot(df.AmountLog)
# df.drop(columns='Amount',inplace=True)
plt.show()
# Box-Cox transform of Amount (lambda estimated by maximum likelihood)
# scipy.stats.boxcox(1+df.Amount, lmbda=None, alpha=0.05) would additionally return a 95% CI for lambda
points, lamb = scipy.stats.boxcox(1+df.Amount, lmbda=None)
plt.figure(figsize=(10,6))
plt.subplot(121)
sns.distplot(points-1, axlabel='BoxCox:'+str(lamb))
plt.subplot(122)
sns.distplot(df.AmountLog)
plt.show()
# keep the Box-Cox transformed amount and drop the raw and log versions
df['AmountBC'] = points
df.drop(columns=['Amount','AmountLog'], inplace=True)
###Output
_____no_output_____
###Markdown
Now, we save a copy of the cleaned dataframe, in order to preserve the preprocessing.
###Code
df.describe()
df.to_csv(os.path.join(DATA_PATH,'df_clean.csv'))
###Output
_____no_output_____
###Markdown
Exploration One dimensional histograms Let us explore the Time variable, can we see any pattern?
###Code
bins = np.linspace(0,1,24)
plt.figure(figsize=(10,6))
plt.subplot(121)
sns.distplot(df.TimeScaled,bins=bins,label='All',color='red')
plt.legend()
plt.subplot(122)
sns.distplot(df[df.Class==0].TimeScaled,bins=bins,kde=False,norm_hist=True,label='Normal')
sns.distplot(df[df.Class==1].TimeScaled,bins=bins,kde=False,norm_hist=True,label='Fraud')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We can explore the histograms for all the variables, since there are around 30 of them.
###Code
for variable in df.columns:
plt.figure(figsize=(6,6))
bins = np.linspace(df[variable].min(),df[variable].max(),50)
sns.distplot(df[df.Class==0][variable],bins=bins,kde=False,norm_hist=True,label='Normal',axlabel=variable)
sns.distplot(df[df.Class==1][variable],bins=bins,kde=False,norm_hist=True,label='Fraud',axlabel=variable)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Pairwise scatterplots A really good way of getting intuition is through pairplots, i.e., scatter plots using two variables. In this way we can check if some variables are useful to disentangle the entries by Class. In this case, since there are 28+2 features, there would be about 30·29/2 = 435 plots to check all the pairwise relations.
###Code
# We first downsample Class 0 (normal) to obtain clearer plots
df_small = pd.merge( df[df.Class==1],df[df.Class==0].sample(n=10000),how='outer')
# We cannot plot all the variables, there are too many
variables_to_show = ['V4','V14','V17','V3']
sns.pairplot(df_small,vars=variables_to_show,
hue='Class',kind='scatter',markers="o",
plot_kws=dict(s=6, edgecolor=None, linewidth=0.01,alpha=0.5))
plt.show()
###Output
_____no_output_____
###Markdown
The same pairwise scatterplot with all the data: we can visualize it easily by giving some transparency to the most populated class and by using smaller markers for it.
###Code
plt.figure(figsize=(5,5))
x_var = 'V16'
y_var = 'V9'
sns.scatterplot(data=df[df.Class==0],x=x_var, y=y_var,s=5,edgecolor=None,alpha=0.3)
sns.scatterplot(data=df[df.Class==1],x=x_var, y=y_var,color='orange',s=10,edgecolor='w')
plt.show()
###Output
_____no_output_____
###Markdown
Correlations It is also easy to inspect the correlations among variables, but it is not very useful in this case: since the V features come from a PCA, they are uncorrelated with each other by construction on the full dataset (although the class-conditional correlations can still differ).
###Code
sns.heatmap(df.corr(),vmin=-1,vmax=1)
sns.heatmap(df[df.Class==0].corr(),vmin=-1,vmax=1)
sns.heatmap(df[df.Class==1].corr(),vmin=-1,vmax=1)
###Output
_____no_output_____
|
Notebooks/Wen_CNN_105.ipynb
|
###Markdown
Wen CNN Notebook 102 simulated the CNN approach of Wen et al. 2019. The model was trained on human GenCode 26 just like Wen. Now, as Wen did, test the human model on chicken.
###Code
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from keras.models import Sequential
from keras.layers import Conv2D,MaxPooling2D
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py')
with open('GenCodeTools.py', 'w') as f:
f.write(r.text)
from GenCodeTools import GenCodeLoader
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter
from SimTools.GenCodeTools import GenCodeLoader
from SimTools.KmerTools import KmerTools
BESTMODELPATH=DATAPATH+"BestModel-Wen" # saved on cloud instance and lost after logout
LASTMODELPATH=DATAPATH+"LastModel-Wen" # saved on Google Drive but requires login
###Output
On Google CoLab, mount cloud-local file, get our code from GitHub.
Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True).
###Markdown
Data Load
###Code
PC_TRAINS=8000
NC_TRAINS=8000
PC_TESTS=2000
NC_TESTS=2000
PC_LENS=(200,4000)
NC_LENS=(200,4000)
PC_FILENAME='Gallus_gallus.GRCg6a.cdna.all.fa.gz'
NC_FILENAME='Gallus_gallus.GRCg6a.ncrna.fa.gz' # chicken
PC_FULLPATH=DATAPATH+PC_FILENAME
NC_FULLPATH=DATAPATH+NC_FILENAME
MAX_K = 3
# With K={1,2,3}, num K-mers is 4^3 + 4^2 + 4^1 = 84.
# Wen specified 17x20 which is impossible.
# The factors of 84 are 1, 2, 3, 4, 6, 7, 12, 14, 21, 28, 42 and 84.
FRQ_CNT=84
ROWS=7
COLS=FRQ_CNT//ROWS
SHAPE2D = (ROWS,COLS,1)
EPOCHS=100 # 1000 # 200
SPLITS=5
FOLDS=5 # make this 5 for serious testing
show_time()
loader=GenCodeLoader()
loader.set_label(1)
loader.set_check_utr(False) # Chicken sequences do not have UTR/ORF annotation on the def line
pcdf=loader.load_file(PC_FULLPATH)
print("PC seqs loaded:",len(pcdf))
loader.set_label(0)
loader.set_check_utr(False)
ncdf=loader.load_file(NC_FULLPATH)
print("NC seqs loaded:",len(ncdf))
show_time()
def dataframe_length_filter(df,low_high):
(low,high)=low_high
# The pandas query language is strange,
# but this is MUCH faster than loop & drop.
return df[ (df['seqlen']>=low) & (df['seqlen']<=high) ]
def dataframe_extract_sequence(df):
return df['sequence'].tolist()
pc_all = dataframe_extract_sequence(
dataframe_length_filter(pcdf,PC_LENS))
nc_all = dataframe_extract_sequence(
dataframe_length_filter(ncdf,NC_LENS))
show_time()
print("PC seqs pass filter:",len(pc_all))
print("NC seqs pass filter:",len(nc_all))
# Garbage collection to reduce RAM footprint
pcdf=None
ncdf=None
###Output
2021-08-04 14:07:32 UTC
PC seqs pass filter: 24899
NC seqs pass filter: 8750
###Markdown
Data Prep
###Code
#pc_train=pc_all[:PC_TRAINS]
#nc_train=nc_all[:NC_TRAINS]
#print("PC train, NC train:",len(pc_train),len(nc_train))
#pc_test=pc_all[PC_TRAINS:PC_TRAINS+PC_TESTS]
#nc_test=nc_all[NC_TRAINS:NC_TRAINS+PC_TESTS]
pc_test=pc_all[:PC_TESTS]
nc_test=nc_all[:NC_TESTS]
print("PC test, NC test:",len(pc_test),len(nc_test))
# Garbage collection
pc_all=None
nc_all=None
def prepare_x_and_y(seqs1,seqs0):
len1=len(seqs1)
len0=len(seqs0)
total=len1+len0
L1=np.ones(len1,dtype=np.int8)
L0=np.zeros(len0,dtype=np.int8)
S1 = np.asarray(seqs1)
S0 = np.asarray(seqs0)
all_labels = np.concatenate((L1,L0))
all_seqs = np.concatenate((S1,S0))
# interleave (uses less RAM than shuffle)
for i in range(0,len0):
all_labels[i*2] = L0[i]
all_seqs[i*2] = S0[i]
all_labels[i*2+1] = L1[i]
all_seqs[i*2+1] = S1[i]
return all_seqs,all_labels # use this to test unshuffled
X,y = shuffle(all_seqs,all_labels) # sklearn.utils.shuffle
return X,y
#Xseq,y=prepare_x_and_y(pc_train,nc_train)
#print(Xseq[:3])
#print(y[:3])
show_time()
def seqs_to_kmer_freqs(seqs,max_K):
tool = KmerTools() # from SimTools
collection = []
for seq in seqs:
counts = tool.make_dict_upto_K(max_K)
# Last param should be True when using Harvester.
counts = tool.update_count_one_K(counts,max_K,seq,True)
# Given counts for K=3, Harvester fills in counts for K=1,2.
counts = tool.harvest_counts_from_K(counts,max_K)
fdict = tool.count_to_frequency(counts,max_K)
freqs = list(fdict.values())
collection.append(freqs)
return np.asarray(collection)
#Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
# Garbage collection
#Xseq = None
show_time()
def reshape(frequency_matrix):
    seq_cnt,frq_cnt=frequency_matrix.shape
# CNN inputs require a last dimension = numbers per pixel.
# For RGB images it is 3.
# For our frequency matrix it is 1.
new_matrix = frequency_matrix.reshape(seq_cnt,ROWS,COLS,1)
return new_matrix
#print("Xfrq")
#print("Xfrq type",type(Xfrq))
#print("Xfrq shape",Xfrq.shape)
#Xfrq2D = reshape(Xfrq)
#print("Xfrq2D shape",Xfrq2D.shape)
###Output
_____no_output_____
###Markdown
Test the neural network
###Code
def show_test_AUC(model,X,y):
ns_probs = [0 for _ in range(len(y))]
bm_probs = model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
def show_test_accuracy(model,X,y):
scores = model.evaluate(X, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
model = load_model(BESTMODELPATH) # keras.load_model()
print("Accuracy on test data.")
print("Prepare...")
show_time()
Xseq,y=prepare_x_and_y(pc_test,nc_test)
print("Extract K-mer features...")
show_time()
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
Xfrq2D = reshape(Xfrq)
print("Plot...")
show_time()
show_test_AUC(model,Xfrq2D,y)
show_test_accuracy(model,Xfrq2D,y)
show_time()
###Output
_____no_output_____
|
isic2016_scripts/train_ISIC_2016.ipynb
|
###Markdown
Train a model on the balanced dataset
###Code
# Run once to install
!pip install image-classifiers==0.2.2
!pip install image-classifiers==1.0.0b1
!pip install imgaug
# Import libs
import os
import time
import cv2
import numpy as np
import matplotlib.pyplot as plt
from keras import optimizers
import keras
import tensorflow as tf
import keras.backend as K
from sklearn.metrics import confusion_matrix, classification_report
from keras.models import load_model
from keras.models import Sequential
from keras.regularizers import l2
from keras.applications.vgg16 import VGG16
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping, ReduceLROnPlateau
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import roc_curve, auc, roc_auc_score
import matplotlib.pyplot as plt
from tqdm import tqdm
from keras.utils import np_utils
from imgaug import augmenters as iaa
import itertools
np.random.seed(42)
# Print version
print("Keras Version", keras.__version__)
print("Tensorflow Version", tf.__version__)
# GPU test
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
print(get_available_gpus())
# Get compute specs
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
# Helpers functions
def create_directory(directory):
'''
Creates a new folder in the specified directory if the folder doesn't exist.
INPUT
directory: Folder to be created, called as "folder/".
OUTPUT
New folder in the current directory.
'''
if not os.path.exists(directory):
os.makedirs(directory)
def plot_hist(img):
img_flat = img.flatten()
print(min(img_flat), max(img_flat))
plt.hist(img_flat, bins=20, color='c')
#plt.title("Data distribution")
plt.xlabel("Pixel values")
plt.grid(True)
plt.ylabel("Frequency")
plt.show()
# Focal loss function
##################################################################################
# Paper: https://arxiv.org/abs/1708.02002
#Focal loss down-weights the well-classified examples. This has
#the net effect of putting more training emphasis on that data that is hard to classify.
#In a practical setting where we have a data imbalance, our majority class will quickly
#become well-classified since we have much more data for it. Thus, in order to ensure that we
#also achieve high accuracy on our minority class, we can use the focal loss to give those minority
#class examples more relative weight during training.
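#Formally, for the predicted probability p_t of the true class, the focal loss is
#    FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)
#so well-classified examples (p_t close to 1) contribute almost nothing to the loss.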
from keras import backend as K
import tensorflow as tf
def focal_loss(gamma=2., alpha=.25):
def focal_loss_fixed(y_true, y_pred):
pt_1 = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))
pt_0 = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))
return -K.mean(alpha * K.pow(1. - pt_1, gamma) * K.log(pt_1)) - K.mean((1 - alpha) * K.pow(pt_0, gamma) * K.log(1. - pt_0))
return focal_loss_fixed
##################################################################################
# Define paths
base_path = os.path.abspath("../")
dataset_path = os.path.join(base_path, "dataset", "isic2016numpy")
model_path = os.path.join(base_path, "models")
print(os.listdir(dataset_path))
# Load data
x_train = np.load("{}/x_upsampled.npy".format(dataset_path))
y_train = np.load("{}/y_upsampled.npy".format(dataset_path))
x_test = np.load("{}/x_test.npy".format(dataset_path))
y_test = np.load("{}/y_test.npy".format(dataset_path))
# Shuffle training dataset
flag = 1
if flag == 1:
# Shuffle data
print("Shuffling data")
s = np.arange(x_train.shape[0])
np.random.shuffle(s)
x_train = x_train[s]
y_train = y_train[s]
else:
print("Not shuffling...")
pass
# Show shape
print("Dataset sample size :", x_train.shape, y_train.shape, x_test.shape, y_test.shape)
# Sanity check on training data
#img = x_train[0]
#plot_hist(img)
#plt.imshow(x_train[0])
# Import libs
import keras
from classification_models.keras import Classifiers
# Define architecture
arch, preprocess_input = Classifiers.get('vgg16')
# Preprocess the dataset
# 1. Use model preprocessing
#x_train = preprocess_input(x_train)
#x_test = preprocess_input(x_test)
# 2. Use standard preprocessing
prepro = False # False when using synthetic data
if prepro == True:
print("Preprocessing training data")
x_train = x_train.astype('float32')
x_train /= 255
else:
print("Not preprocessing training data, already preprocessed in MeGAN generator.")
pass
# Standardize test set
x_test = x_test.astype('float32')
x_test /= 255
print(x_train.shape, x_test.shape)
# Sanity check on preprocessed data
#img = x_test[0]
#plot_hist(img)
#plt.imshow(x_test[0])
# Experiment name
EXP_NAME = "results"
# Create folder for the experiment
create_directory("{}/{}".format(base_path, EXP_NAME))
output_path = os.path.join(base_path, EXP_NAME)
# Callbacks
weights_path = "{}/{}.h5".format(output_path, EXP_NAME)
checkpointer = ModelCheckpoint(filepath=weights_path, verbose=1, monitor='val_loss', save_best_only=True)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, verbose=1, min_lr=1e-8, mode='auto') # new_lr = lr * factor
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, verbose=1, patience=8, mode='auto', restore_best_weights=True)
csv_logger = CSVLogger('{}/{}_training.csv'.format(output_path, EXP_NAME))
# Define class weights for imbalanced data
from sklearn.utils import class_weight
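# With the 'balanced' heuristic, each class weight is n_samples / (n_classes * class_count),
# so the minority class receives a proportionally larger weight.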
class_weights = class_weight.compute_class_weight('balanced', np.unique(np.argmax(y_train, axis=1)), np.argmax(y_train, axis=1))
print(class_weights)
def my_awesome_model():
'''Awesomest model'''
# Get backbone network
base_model = arch(input_shape=(256,256,3), weights='imagenet', include_top=False)
# Add GAP layer
x = keras.layers.GlobalAveragePooling2D()(base_model.output)
# Add FC layer
output = keras.layers.Dense(2, activation='softmax', trainable=True)(x)
# Freeze layers
#for layer in base_model.layers[:]:
#layer.trainable=False
# Build model
model = keras.models.Model(inputs=[base_model.input], outputs=[output])
# Optimizers
adadelta = optimizers.Adadelta(lr=0.001)
# Compile
model.compile(optimizer=adadelta, loss= [focal_loss(alpha=.25, gamma=2)], metrics=['accuracy'])
# Output model configuration
model.summary()
return model
model = None
model = my_awesome_model()
# Train the awesome model
# Configuration
batch_size = 16
epochs = 300
# Calculate the starting time
start_time = time.time()
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
class_weight = class_weights,
callbacks=[csv_logger, early_stopping, reduce_lr, checkpointer], # early_stopping, checkpointer, reduce_lr
shuffle=False)
end_time = time.time()
print("--- Time taken to train : %s hours ---" % ((end_time - start_time)//3600))
# Save model
# If checkpointer is used, dont use this
#model.save(weights_path)
# Plot and save accuracy/loss graphs together
def plot_loss_accu_all(history):
loss = history.history['loss']
val_loss = history.history['val_loss']
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs = range(len(loss))
plt.plot(epochs, acc, 'r')
plt.plot(epochs, val_acc, 'b')
plt.plot(epochs, loss, 'g')
plt.plot(epochs, val_loss, 'y')
plt.title('Accuracy/Loss')
#plt.ylabel('Rate')
#plt.xlabel('Epoch')
plt.legend(['trainacc', 'valacc', 'trainloss', 'valloss'], loc='lower right', fontsize=10)
plt.grid(True)
plt.savefig('{}/{}_acc_loss_graph.jpg'.format(output_path, EXP_NAME), dpi=100)
plt.show()
# Plot and save accuracy/loss graphs individually
def plot_loss_accu(history):
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.plot(epochs, loss, 'g')
plt.plot(epochs, val_loss, 'y')
#plt.title('Training and validation loss')
plt.ylabel('Loss %')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='upper right')
plt.grid(True)
#plt.savefig('{}/{}_loss.jpg'.format(output_path, EXP_NAME), dpi=100)
#plt.savefig('{}/{}_loss.pdf'.format(output_path, EXP_NAME), dpi=300)
plt.show()
loss = history.history['accuracy']
val_loss = history.history['val_accuracy']
epochs = range(len(loss))
plt.plot(epochs, loss, 'r')
plt.plot(epochs, val_loss, 'b')
#plt.title('Training and validation accuracy')
plt.ylabel('Accuracy %')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='lower right')
plt.grid(True)
#plt.savefig('{}/{}_acc.jpg'.format(output_path, EXP_NAME), dpi=100)
#plt.savefig('{}/{}_acc.pdf'.format(output_path, EXP_NAME), dpi=300)
plt.show()
plot_loss_accu(model.history)
print("Done training and logging!")
###Output
_____no_output_____
|
notebooks/Q-Learning Reinforcement Learning.ipynb
|
###Markdown
Source:* [Geeks for Geeks - SARSA Reinforcement Learning](https://www.geeksforgeeks.org/sarsa-reinforcement-learning/)* [Towards Data Science - Reinforcement learning: Temporal-Difference, SARSA, Q-Learning & Expected SARSA in python](https://towardsdatascience.com/reinforcement-learning-temporal-difference-sarsa-q-learning-expected-sarsa-on-python-9fecfda7467e)* [A Name Not Yet Taken AB - SARSA Algorithm in Python](https://www.annytab.com/sarsa-algorithm-in-python/) Importing the required libraries
###Code
import numpy as np
import gym
import time
import math
###Output
_____no_output_____
###Markdown
Building the environment Environments preloaded into gym:* [FrozenLake-v0](https://gym.openai.com/envs/FrozenLake-v0/)* [Taxi-v3](https://gym.openai.com/envs/Taxi-v3/)
###Code
env_name = 'FrozenLake-v0'
env = gym.make(env_name)
###Output
_____no_output_____
###Markdown
Defining utility functions to be used in the learning process Initialising Q
###Code
def init_Q(n_states, n_actions, init_Q_type="ones"):
"""
@param n_states the number of states
@param n_actions the number of actions
@param type random, ones or zeros for the initialization
"""
if init_Q_type == "ones":
return np.ones((n_states, n_actions))
elif init_Q_type == "random":
return np.random.random((n_states, n_actions))
elif init_Q_type == "zeros":
return np.zeros((n_states, n_actions))
###Output
_____no_output_____
###Markdown
Choose an action
###Code
# Numpy generator
rng = np.random.default_rng() # Create a default Generator.
def epsilon_greedy(Q, state, n_actions, epsilon):
"""
@param Q Q values {state, action} -> value
@param epsilon for exploration
@param n_actions number of actions
@param state state at time t
"""
if rng.uniform(0, 1) < epsilon:
action = np.random.randint(0, n_actions)
#action = env.action_space.sample()
else:
action = np.argmax(Q[state, :])
return action
###Output
_____no_output_____
###Markdown
Update the Q-matrix (state-action value function)
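The function below implements the tabular temporal-difference update. Since `action2` is chosen greedily in the training loop, this is the Q-learning rule

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\left[r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)\right],$$

and with `expected=True` the max is replaced by the mean of $Q(s_{t+1}, \cdot)$ over actions, i.e. an expected-SARSA-style target under a uniform policy.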
###Code
# Function to learn the Q-value
def update(state1, action1, reward1, state2, action2, expected=False):
predict = Q[state1, action1]
target = reward1 + gamma * Q[state2, action2]
if expected:
expected_value = np.mean(Q[state2,:])
target = reward1 + gamma * expected_value
Q[state1, action1] = Q[state1, action1] + alpha * (target - predict)
###Output
_____no_output_____
###Markdown
Updating parameters Epsilon $\epsilon$ - Exploration rate
###Code
# Exploration rate
def get_epsilon(episode, init_epsilon, divisor=25):
n_epsilon = init_epsilon/(episode/1000+1)
# n_epsilon = min(1, 1.0 - math.log10((episode + 1) / divisor))
return n_epsilon
###Output
_____no_output_____
###Markdown
Alpha $\alpha$ - Learning rate
###Code
# Learning rate
def get_alpha(episode, init_alpha, divisor=25):
n_alpha = init_alpha/(episode/1000+1)
# n_alpha = min(1.0, 1.0 - math.log10((episode + 1) / divisor))
return n_alpha
###Output
_____no_output_____
###Markdown
Initializing different parameters
###Code
# Defining the different parameters
init_epsilon = 0.2 # trade-off exploration/exploitation - better if decreasing
init_alpha = 0.2 # learning rate, better if decreasing
# Specific to environment
gamma = 0.95 # discount for future rewards (also called decay factor)
n_states = env.observation_space.n
n_actions = env.action_space.n
# Episodes
n_episodes = 1000000
nmax_steps = 100 # maximum steps per episode
# Initializing the Q-matrix
Q = init_Q(n_states, n_actions, init_Q_type="zeros")
###Output
_____no_output_____
###Markdown
Training the learning agent
###Code
# Visualisation
render = True
# Initializing the reward
evolution_reward = []
# Starting the Q-learning training loop
for episode in range(n_episodes):
#print(f"Episode: {episode}")
n_episode_steps = 0
episode_reward = 0
done = False
state1 = env.reset()
while (not done) and (n_episode_steps < nmax_steps):
# Update parameters
epsilon = get_epsilon(episode, init_epsilon)
alpha = get_alpha(episode, init_alpha)
# Choose an action
action1 = epsilon_greedy(Q, state1, n_actions, epsilon)
# Visualizing the training
#if render:
# env.render()
# Getting the next state
state2, reward1, done, info = env.step(action1)
episode_reward += reward1
# Q-Learning
# Choosing the next action
action2 = np.argmax(Q[state2, :])
# Learning the Q-value
update(state1, action1, reward1, state2, action2)
# Updating the respective values
state1 = state2
# /!\ action2 will become action1 in SARSA (we don't do another action) but not for Q-Learning.
# Maybe need to separate the functions
n_episode_steps += 1
# At the end of learning process
if render:
#print(f"This episode took {n_episode_steps} timesteps and reward {episode_reward}")
print('Episode {0}, Score: {1}, Timesteps: {2}, Epsilon: {3}, Alpha: {4}'.format(
episode+1, episode_reward, n_episode_steps, epsilon, alpha))
evolution_reward.append(episode_reward)
###Output
_____no_output_____
###Markdown
For FrozenLake-v0: In the above output, the red mark indicates the current position of the agent in the environment, while the direction given in brackets is the move that the agent will make next. Note that the agent stays at its position if it tries to go out of bounds. Evaluating the performance Mean reward
###Code
# Evaluating the performance
print ("Performace : ", sum(evolution_reward)/n_episodes)
# Visualizing the Q-matrix
print(Q)
import numpy as np
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / float(N)
import numpy as np
import matplotlib.pyplot as plt
y = running_mean(evolution_reward,1000)
x = range(len(y))
plt.plot(x, y)
###Output
_____no_output_____
###Markdown
Evaluation through episode
###Code
# Variables
episodes = 1000
nmax_steps = 200
total_reward = 0
# Loop episodes
for episode in range(episodes):
n_episode_steps = 0
episode_reward = 0
done = False
# Start episode and get initial observation
state = env.reset()
while (not done) and (n_episode_steps < nmax_steps):
# Get an action (0:Left, 1:Down, 2:Right, 3:Up)
action = np.argmax(Q[state,:])
# Perform a step
state, reward, done, info = env.step(action)
# Update score
episode_reward += reward
total_reward += reward
n_episode_steps += 1
print('Episode {0}, Score: {1}, Timesteps: {2}'.format(
episode+1, episode_reward, n_episode_steps))
# Close the environment
env.close()
# Print the score
print('--- Evaluation ---')
print ('Score: {0} / {1}'.format(total_reward, episodes))
print()
###Output
_____no_output_____
|
notebooks/SIS-PMI-QJ0158-4325.ipynb
|
###Markdown
Since Gaia does not provide a G-magnitude error, I use a quick approximation based on the flux error... it would be better to adapt the SIS code to use fluxes instead of magnitudes.
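For reference, the standard first-order error propagation for a magnitude $m = -2.5\log_{10} F$ gives

$$\sigma_m = \frac{2.5}{\ln 10}\,\frac{\sigma_F}{F} \approx \frac{1.086}{F/\sigma_F},$$

whereas the cell below uses the rougher shortcut of dividing the G magnitude itself by `phot_g_mean_flux_over_error`.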
###Code
s['x']=(s.ra-s.ra.mean())*u.deg.to(u.arcsec)
s['y']=(s.dec-s.dec.mean())*u.deg.to(u.arcsec)
s['dx']=(s.pmra-s.pmra.mean())
s['dy']=(s.pmdec-s.pmdec.mean())
s['xe']=s.ra_error*u.mas.to(u.arcsec)
s['ye']=s.dec_error*u.mas.to(u.arcsec)
s['dxe']=s.pmra_error
s['dye']=s.pmdec_error
s['g'] = s.phot_g_mean_mag
s['ge'] = s.phot_g_mean_mag/s.phot_g_mean_flux_over_error
def plot_chains(sampler,warmup=400):
fig, ax = plt.subplots(ndim,3, figsize=(12, 12))
samples = sampler.chain[:, warmup:, :].reshape((-1, ndim))
medians = []
for i in range(ndim):
ax[i,0].plot(sampler.chain[:, :, i].T, '-k', alpha=0.2);
ax[i,0].vlines(warmup,np.min(sampler.chain[:, :, i].T),np.max(sampler.chain[:, :, i].T),'r')
ax[i,1].hist(samples[:,i],bins=100,label=parameter[i]);
ax[i,1].legend()
ax[i,1].vlines(np.median(samples[:,i]),0,10000,lw=1,color='r',label="median")
medians.append(np.median(samples[:,i]))
ax[i,2].hexbin(samples[:,i],samples[:,(i+1)%ndim])#,s=1,alpha=0.1);
return medians
###Output
_____no_output_____
###Markdown
SIS lens inference
###Code
parameter = "xS,yS,gS,bL,xL,yL".split(',')
data = s[['x','y','g','xe','ye','ge']].values
data
np.random.seed(0)
def init(N):
""" to initialise each walkers initial value : sets parameter randomly """
xS = norm.rvs(0,0.2,size=N)
yS = norm.rvs(0,0.2,size=N)
gS = gamma.rvs(10,5,size=N)
xL = norm.rvs(0,0.2,size=N)
yL = norm.rvs(0,0.2,size=N)
bL = beta.rvs(2,3,size=N)
return np.transpose(np.array([xS,yS,gS,bL,xL,yL]))
ndim = 6 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nsteps = 1000 # number of MCMC steps
starting_guesses = init(nwalkers)
np.std([log_prior(guess) for guess in starting_guesses])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[data])
%time x = sampler.run_mcmc(starting_guesses, nsteps)
medians = plot_chains(sampler)
medians
np.random.seed(0)
def init2(N):
""" to initialise each walkers initial value : sets parameter randomly """
xS = norm.rvs(medians[0],0.02,size=N)
yS = norm.rvs(medians[1],0.02,size=N)
gS = norm.rvs(medians[2],1,size=N)
bL = norm.rvs(medians[3],0.1,size=N)
xL = norm.rvs(medians[4],0.02,size=N)
yL = norm.rvs(medians[5],0.01,size=N)
return np.transpose(np.array([xS,yS,gS,bL,xL,yL]))
starting_guesses2 = init2(nwalkers)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[data])
%time x = sampler.run_mcmc(starting_guesses2, nsteps)
plot_chains(sampler)
###Output
_____no_output_____
###Markdown
Proper motion inference
###Code
parameter = "xS,yS,dxS,dyS,gS,bL,xL,yL".split(',')
from lens.sis.inferencePM import *
data_pm = s[['x','y','dx','dy','g','xe','ye','dxe','dye','ge']].values
data_pm
np.random.seed(0)
def initPM(N):
""" to initialise each walkers initial value : sets parameter randomly """
xS = norm.rvs(medians[0],0.02,size=N)
yS = norm.rvs(medians[1],0.02,size=N)
gS = norm.rvs(medians[2],1,size=N)
bL = norm.rvs(medians[3],0.1,size=N)
xL = norm.rvs(medians[4],0.02,size=N)
yL = norm.rvs(medians[5],0.01,size=N)
dxS = norm.rvs(0,0.1,size=N)
dyS = norm.rvs(0,0.1,size=N)
return np.transpose(np.array([xS,yS,dxS,dyS,gS,bL,xL,yL]))
ndim = 8 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nsteps = 2000 # number of MCMC steps
starting_guesses_pm = initPM(nwalkers)
np.std([log_prior_pm(guess) for guess in starting_guesses_pm])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior_pm, args=[data_pm])
%time x = sampler.run_mcmc(starting_guesses_pm, nsteps)
medians = plot_chains(sampler,warmup=1000)
medians
###Output
_____no_output_____
|
Collabrative Filtering/ML0101EN-RecSys-Collaborative-Filtering-movies-py-v1.ipynb
|
###Markdown
Collaborative FilteringEstimated time needed: **25** minutes ObjectivesAfter completing this lab you will be able to:- Create recommendation system based on collaborative filtering Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous can be commonly seen in online stores, movies databases and job finders. In this notebook, we will explore recommendation systems based on Collaborative Filtering and implement simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Collaborative Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). Lets download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2020-10-16 13:38:45-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 22.0MB/s in 7.6s
2020-10-16 13:38:53 (20.1 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
###Output
_____no_output_____
###Markdown
Let's also take a peek at how each of them are organized:
###Code
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
So each movie has a unique ID, a title with its release year along with it (Which may contain unicode characters) and several different genres in the same field. Let's remove the year from the title column and place it into its own one by using the handy [extract](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.htmlpandas.Series.str.extract?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) function that Pandas has. Let's remove the year from the **title** column by using pandas' replace function and store in a new **year** column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parentheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
###Output
_____no_output_____
###Markdown
Let's look at the result!
###Code
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also drop the genres column since we won't need it for this particular recommendation system.
###Code
#Dropping the genres column
movies_df = movies_df.drop('genres', 1)
###Output
_____no_output_____
###Markdown
Here's the final movies dataframe:
###Code
movies_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
###Output
_____no_output_____
###Markdown
Here's how the final ratings Dataframe looks like:
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Collaborative Filtering Now, time to start our work on recommendation systems. The first technique we're going to take a look at is called **Collaborative Filtering**, which is also known as **User-User Filtering**. As hinted by its alternate name, this technique uses other users to recommend items to the input user. It attempts to find users that have similar preferences and opinions as the input and then recommends items that they have liked to the input. There are several methods of finding similar users (Even some making use of Machine Learning), and the one we will be using here is going to be based on the **Pearson Correlation Function**.The process for creating a User Based recommendation system is as follows:- Select a user with the movies the user has watched- Based on his rating to movies, find the top X neighbours - Get the watched movie record of the user for each neighbour.- Calculate a similarity score using some formula- Recommend the items with the highest scoreLet's begin by creating an input user to recommend movies to:Notice: To add more movies, simply increase the amount of elements in the userInput. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input userWith the input complete, let's extract the input movies's ID's from the movies dataframe and add them into it.We can achieve this by first filtering out the rows that contain the input movies' title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
The users who have seen the same moviesNow, with the movie IDs in our input, we can get the subset of users that have watched and reviewed the movies in our input.
###Code
#Filtering out users that have watched movies that the input has watched and storing it
userSubset = ratings_df[ratings_df['movieId'].isin(inputMovies['movieId'].tolist())]
userSubset.head()
###Output
_____no_output_____
###Markdown
We now group up the rows by user ID.
###Code
#Groupby creates several sub dataframes where they all have the same value in the column specified as the parameter
userSubsetGroup = userSubset.groupby(['userId'])
###Output
_____no_output_____
###Markdown
lets look at one of the users, e.g. the one with userID=1130
###Code
userSubsetGroup.get_group(1130)
###Output
_____no_output_____
###Markdown
Let's also sort these groups so the users that share the most movies in common with the input have higher priority. This provides a richer recommendation since we won't go through every single user.
###Code
#Sorting it so users with movie most in common with the input will have priority
userSubsetGroup = sorted(userSubsetGroup, key=lambda x: len(x[1]), reverse=True)
###Output
_____no_output_____
###Markdown
Now let's look at the first few users
###Code
userSubsetGroup[0:3]
###Output
_____no_output_____
###Markdown
Similarity of users to input userNext, we are going to compare all users (not really all !!!) to our specified user and find the one that is most similar. we're going to find out how similar each user is to the input through the **Pearson Correlation Coefficient**. It is used to measure the strength of a linear association between two variables. The formula for finding this coefficient between sets X and Y with N values can be seen in the image below. Why Pearson Correlation?Pearson correlation is invariant to scaling, i.e. multiplying all elements by a nonzero constant or adding any constant to all elements. For example, if you have two vectors X and Y,then, pearson(X, Y) == pearson(X, 2 \* Y + 3). This is a pretty important property in recommendation systems because for example two users might rate two series of items totally different in terms of absolute rates, but they would be similar users (i.e. with similar ideas) with similar rates in various scales .The values given by the formula vary from r = -1 to r = 1, where 1 forms a direct correlation between the two entities (it means a perfect positive correlation) and -1 forms a perfect negative correlation. In our case, a 1 means that the two users have similar tastes while a -1 means the opposite. We will select a subset of users to iterate through. This limit is imposed because we don't want to waste too much time going through every single user.
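For two users' ratings $X$ and $Y$ over the $N$ movies they have both rated, the coefficient computed below is

$$r_{XY} = \frac{\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{N}(y_i-\bar{y})^2}}.$$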
###Code
userSubsetGroup = userSubsetGroup[0:100]
###Output
_____no_output_____
###Markdown
Now, we calculate the Pearson Correlation between input user and subset group, and store it in a dictionary, where the key is the user Id and the value is the coefficient
###Code
#Store the Pearson Correlation in a dictionary, where the key is the user Id and the value is the coefficient
pearsonCorrelationDict = {}
#For every user group in our subset
for name, group in userSubsetGroup:
#Let's start by sorting the input and current user group so the values aren't mixed up later on
group = group.sort_values(by='movieId')
inputMovies = inputMovies.sort_values(by='movieId')
#Get the N for the formula
nRatings = len(group)
#Get the review scores for the movies that they both have in common
temp_df = inputMovies[inputMovies['movieId'].isin(group['movieId'].tolist())]
#And then store them in a temporary buffer variable in a list format to facilitate future calculations
tempRatingList = temp_df['rating'].tolist()
#Let's also put the current user group reviews in a list format
tempGroupList = group['rating'].tolist()
#Now let's calculate the pearson correlation between two users, so called, x and y
Sxx = sum([i**2 for i in tempRatingList]) - pow(sum(tempRatingList),2)/float(nRatings)
Syy = sum([i**2 for i in tempGroupList]) - pow(sum(tempGroupList),2)/float(nRatings)
Sxy = sum( i*j for i, j in zip(tempRatingList, tempGroupList)) - sum(tempRatingList)*sum(tempGroupList)/float(nRatings)
#If the denominator is different than zero, then divide, else, 0 correlation.
if Sxx != 0 and Syy != 0:
pearsonCorrelationDict[name] = Sxy/sqrt(Sxx*Syy)
else:
pearsonCorrelationDict[name] = 0
pearsonCorrelationDict.items()
pearsonDF = pd.DataFrame.from_dict(pearsonCorrelationDict, orient='index')
pearsonDF.columns = ['similarityIndex']
pearsonDF['userId'] = pearsonDF.index
pearsonDF.index = range(len(pearsonDF))
pearsonDF.head()
###Output
_____no_output_____
###Markdown
The top x similar users to input userNow let's get the top 50 users that are most similar to the input.
###Code
topUsers=pearsonDF.sort_values(by='similarityIndex', ascending=False)[0:50]
topUsers.head()
###Output
_____no_output_____
###Markdown
Now, let's start recommending movies to the input user. Rating of selected users to all moviesWe're going to do this by taking the weighted average of the ratings of the movies, using the Pearson Correlation as the weight. But to do this, we first need to get the movies watched by the users in our **pearsonDF** from the ratings dataframe and then store their correlation in a new column called _similarityIndex_. This is achieved below by merging these two tables.
###Code
topUsersRating=topUsers.merge(ratings_df, left_on='userId', right_on='userId', how='inner')
topUsersRating.head()
###Output
_____no_output_____
###Markdown
Now all we need to do is multiply each movie rating by its weight (the similarity index), sum up the weighted ratings per movie, and divide by the sum of the weights. We can easily do this by multiplying two columns, grouping the dataframe by movieId and then dividing two columns. This aggregates the opinions of all the similar users into a single score for each candidate movie for the input user:
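Written out, the score assigned to each candidate movie $m$ is the similarity-weighted average

$$\mathrm{score}(m) = \frac{\sum_{u \in \text{top users}} \mathrm{sim}(u)\, r_{u,m}}{\sum_{u \in \text{top users}} \mathrm{sim}(u)},$$

which is exactly the ratio of the two grouped sums computed in the cells below.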
###Code
#Multiplies the similarity by the user's ratings
topUsersRating['weightedRating'] = topUsersRating['similarityIndex']*topUsersRating['rating']
topUsersRating.head()
#Applies a sum to the topUsers after grouping it up by userId
tempTopUsersRating = topUsersRating.groupby('movieId').sum()[['similarityIndex','weightedRating']]
tempTopUsersRating.columns = ['sum_similarityIndex','sum_weightedRating']
tempTopUsersRating.head()
#Creates an empty dataframe
recommendation_df = pd.DataFrame()
#Now we take the weighted average
recommendation_df['weighted average recommendation score'] = tempTopUsersRating['sum_weightedRating']/tempTopUsersRating['sum_similarityIndex']
recommendation_df['movieId'] = tempTopUsersRating.index
recommendation_df.head()
###Output
_____no_output_____
###Markdown
Now let's sort it and see the top 10 movies that the algorithm recommended!
###Code
recommendation_df = recommendation_df.sort_values(by='weighted average recommendation score', ascending=False)
recommendation_df.head(10)
movies_df.loc[movies_df['movieId'].isin(recommendation_df.head(10)['movieId'].tolist())]
###Output
_____no_output_____
|
Seaborn Study/sources/10 绘图实例(2) Drawing example(2).ipynb
|
###Markdown
10 Drawing example(2) This notebook presents plotting examples for functions from the official seaborn documentation. The contents are: 1. Grouped violinplots with split violins(violinplot)2. Annotated heatmaps(heatmap)3. Hexbin plot with marginal distributions(jointplot)4. Horizontal bar plots(barplot)5. Horizontal boxplot with observations(boxplot)6. Conditional means with observations(stripplot)7. Joint kernel density estimate(jointplot)8. Overlapping densities(ridge plot)9. Faceted logistic regression(lmplot)10. Plotting on a large number of facets(FacetGrid)
###Code
# import packages
# from IPython.core.interactiveshell import InteractiveShell
# InteractiveShell.ast_node_interactivity = "all"
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
###Output
_____no_output_____
###Markdown
1. Grouped violinplots with split violins(violinplot)
###Code
sns.set(style="whitegrid", palette="pastel", color_codes=True)
# Load the example tips dataset
tips = sns.load_dataset("tips")
# Draw a nested violinplot and split the violins for easier comparison (grouped violin plot)
sns.violinplot(x="day", y="total_bill", hue="smoker",
               # split: draw the two hue levels as two halves of each violin, in different colors
               # inner: how to represent the data points inside the violins
split=True, inner="quart",
               # set the colors used for the hue categories
palette={"Yes": "y", "No": "b"},
data=tips)
sns.despine(left=True)
###Output
_____no_output_____
###Markdown
2. Annotated heatmaps(heatmap)
###Code
# Load the example flights dataset and conver to long-form
flights_long = sns.load_dataset("flights")
# pivot the long-form data into a wide month-by-year table
flights = flights_long.pivot("month", "year", "passengers")
# Draw a heatmap with the numeric values in each cell
f, ax = plt.subplots(figsize=(9, 6))
# annot writes the value in each cell, fmt sets the annotation format, linewidths the width of the cell borders
sns.heatmap(flights, annot=True, fmt="d", linewidths=.5, ax=ax);
###Output
_____no_output_____
###Markdown
3. Hexbin plot with marginal distributions(jointplot)
###Code
rs = np.random.RandomState(11)
x = rs.gamma(2, size=1000)
y = -.5 * x + rs.normal(size=1000)
# joint plot with marginal distributions; kind selects the plot type
sns.jointplot(x, y, kind="hex", color="#4CB391");
###Output
C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
4. Horizontal bar plots(barplot)
###Code
sns.set(style="whitegrid")
# Initialize the matplotlib figure and set the figure size
f, ax = plt.subplots(figsize=(6, 15))
# Load the example car crash dataset 获得数据集
crashes = sns.load_dataset("car_crashes").sort_values("total", ascending=False)
# Plot the total crashes 设置后续颜色色调
sns.set_color_codes("pastel")
sns.barplot(x="total", y="abbrev", data=crashes,
label="Total", color="b")
# Plot the crashes where alcohol was involved
# 通过不同色调显示颜色
sns.set_color_codes("muted")
sns.barplot(x="alcohol", y="abbrev", data=crashes,
label="Alcohol-involved", color="b")
# Add a legend and informative axis label
# 设置图例,frameon设置图例边框
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(xlim=(0, 24), ylabel="",
xlabel="Automobile collisions per billion miles")
sns.despine(left=True, bottom=True)
###Output
_____no_output_____
###Markdown
5. Horizontal boxplot with observations(boxplot)
###Code
sns.set(style="ticks")
# Initialize the figure with a logarithmic x axis
f, ax = plt.subplots(figsize=(7, 6))
# use a logarithmic scale on the x axis
ax.set_xscale("log")
# Load the example planets dataset
planets = sns.load_dataset("planets")
# Plot the orbital period with horizontal boxes
# whis controls how outliers are handled; "range" extends the whiskers to the data extremes
sns.boxplot(x="distance", y="method", data=planets,
whis="range", palette="vlag")
# Add in points to show each observation
# swarmplot overlays the individual observations as points
sns.swarmplot(x="distance", y="method", data=planets,
size=2, color=".3", linewidth=0)
# Tweak the visual presentation
ax.xaxis.grid(True)
ax.set(ylabel="")
sns.despine(trim=True, left=True)
###Output
_____no_output_____
###Markdown
6. Conditional means with observations(stripplot)
###Code
sns.set(style="whitegrid")
iris = sns.load_dataset("iris")
# "Melt" the dataset to "long-form" or "tidy" representation 提取species对应数据,以measurement命名
iris = pd.melt(iris, "species", var_name="measurement")
# Initialize the figure
f, ax = plt.subplots()
sns.despine(bottom=True, left=True)
# Show each observation with a scatterplot
# draw a strip plot of the raw observations
sns.stripplot(x="value", y="measurement", hue="species",
# dodge and jitter spread the points apart to avoid overplotting
data=iris, dodge=True, jitter=True,
alpha=.25, zorder=1)
# Show the conditional means
# draw a point plot of the conditional means
sns.pointplot(x="value", y="measurement", hue="species",
data=iris, dodge=.532, join=False, palette="dark",
markers="d", scale=.75, ci=None)
# Improve the legend by reusing the automatically generated handles
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[3:], labels[3:], title="species",
handletextpad=0, columnspacing=1,
loc="lower right", ncol=3, frameon=True);
###Output
_____no_output_____
###Markdown
7. Joint kernel density estimate(jointplot)
###Code
sns.set(style="white")
# Generate a random correlated bivariate dataset
rs = np.random.RandomState(5)
mean = [0, 0]
cov = [(1, .5), (.5, 1)]
x1, x2 = rs.multivariate_normal(mean, cov, 500).T
x1 = pd.Series(x1, name="$X_1$")
x2 = pd.Series(x2, name="$X_2$")
# Show the joint distribution using kernel density estimation
# space sets the gap between the marginal plots and the central plot
g = sns.jointplot(x1, x2, kind="kde", height=7, space=0)
###Output
_____no_output_____
###Markdown
8. Overlapping densities(ridge plot)
###Code
sns.set(style="white", rc={"axes.facecolor": (0, 0, 0, 0)})
# Create the data
rs = np.random.RandomState(1979)
x = rs.randn(500)
g = np.tile(list("ABCDEFGHIJ"), 50)
df = pd.DataFrame(dict(x=x, g=g))
m = df.g.map(ord)
df["x"] += m
# Initialize the FacetGrid object
# create a sequential palette
pal = sns.cubehelix_palette(10, rot=-.25, light=.7)
# row and col define the variables whose subsets are drawn on the different facets of the grid
# aspect sets the aspect ratio of each facet
# height sets the height of each facet
g = sns.FacetGrid(df, row="g", hue="g", aspect=15, height=.5, palette=pal)
# Draw the densities in a few steps
# draw the kernel density estimates
g.map(sns.kdeplot, "x", clip_on=False, shade=True, alpha=1, lw=1.5, bw=.2)
g.map(sns.kdeplot, "x", clip_on=False, color="w", lw=2, bw=.2)
# draw a horizontal reference line
g.map(plt.axhline, y=0, lw=2, clip_on=False)
# Define and use a simple function to label the plot in axes coordinates
def label(x, color, label):
ax = plt.gca()
ax.text(0, .2, label, fontweight="bold", color=color,
ha="left", va="center", transform=ax.transAxes)
g.map(label, "x")
# Set the subplots to overlap
g.fig.subplots_adjust(hspace=-.25)
# Remove axes details that don't play well with overlap
g.set_titles("")
g.set(yticks=[])
g.despine(bottom=True, left=True)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\tight_layout.py:211: UserWarning: Tight layout not applied. tight_layout cannot make axes height small enough to accommodate all axes decorations
warnings.warn('Tight layout not applied. '
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\tight_layout.py:211: UserWarning: Tight layout not applied. tight_layout cannot make axes height small enough to accommodate all axes decorations
warnings.warn('Tight layout not applied. '
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\tight_layout.py:211: UserWarning: Tight layout not applied. tight_layout cannot make axes height small enough to accommodate all axes decorations
warnings.warn('Tight layout not applied. '
###Markdown
9. Faceted logistic regression(lmplot)
###Code
# Load the example titanic dataset
df = sns.load_dataset("titanic")
# Make a custom palette with gendered colors
pal = dict(male="#6495ED", female="#F08080")
# Show the survival probability as a function of age and sex
# logistic=True fits and plots a logistic regression model
g = sns.lmplot(x="age", y="survived", col="sex", hue="sex", data=df,
palette=pal, y_jitter=.02, logistic=True);
g.set(xlim=(0, 80), ylim=(-.05, 1.05))
###Output
_____no_output_____
###Markdown
10. Plotting on a large number of facets(FacetGrid)
###Code
sns.set(style="ticks")
# Create a dataset with many short random walks
rs = np.random.RandomState(4)
pos = rs.randint(-1, 2, (20, 5)).cumsum(axis=1)
pos -= pos[:, 0, np.newaxis]
step = np.tile(range(5), 20)
walk = np.repeat(range(20), 5)
df = pd.DataFrame(np.c_[pos.flat, step, walk],
columns=["position", "step", "walk"])
# Initialize a grid of plots with an Axes for each walk
# col="walk" creates one facet per walk; col_wrap=4 puts four facets per row
grid = sns.FacetGrid(df, col="walk", hue="walk", palette="tab20c",
col_wrap=4, height=1.5)
# Draw a horizontal line to show the starting point
grid.map(plt.axhline, y=0, ls=":", c=".5")
# Draw a line plot to show the trajectory of each random walk
grid.map(plt.plot, "step", "position", marker="o")
# Adjust the tick positions, labels and the x/y limits
grid.set(xticks=np.arange(5), yticks=[-3, 3],
xlim=(-.5, 4.5), ylim=(-3.5, 3.5))
# Adjust the arrangement of the plots
grid.fig.tight_layout(w_pad=1)
###Output
_____no_output_____
|
demystifying-machine-learning.ipynb
|
###Markdown
Demystifying Machine Learning Demo SessionPeter Flach and Niall TwomeyTuesday, 5th of December 2017 Typical Machine Learning PipelineNeed to prepare data into a matrix of observations X and a vector of labels y.  CASAS DatasetThis notebook considers the CASAS dataset. This is a dataset collected in a smart environment. As participants interact with the house, sensors record their interactions. There are a number of different sensor types including motion, door contact, light, temperature, water flow, etc.This notebook goes through a number of common issues in data science and machine learning pipelines when working with real data. Namely, several issues relating to dates, sensor values, etc. This are dealt with consistently using the functionality provided by the pandas library.The objective is to fix all errors (if we can), and then to convert the timeseries data to a form that would be recognisable by a machine learning algorithm. I have attempted to comment my code where possible to explain my thought processes. At several points in this script I could have taken shortcuts, but I also attempted to forgo brevity for clarity.Resources: - CASAS homepage: http://casas.wsu.edu- Pandas library: https://pandas.pydata.org/- SKLearn library: http://scikit-learn.org/ 
###Code
# Set up the libraries that we need to use
from os.path import join
import matplotlib.pyplot as pl
import seaborn as sns
from pprint import pprint
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
from subprocess import call
%matplotlib inline
sns.set_style('darkgrid')
sns.set_context('poster')
# Download the data
url = 'http://casas.wsu.edu/datasets/twor.2009.zip'
zipfile = url.split('/')[-1]
dirname = '.'.join(zipfile.split('.')[:2])
filename = join(dirname, 'data')
print(' url: {}'.format(url))
print(' zipfile: {}'.format(zipfile))
print(' dirname: {}'.format(dirname))
print('filename: {}'.format(filename))
call(('wget', url))
call(('unzip', zipfile))
call(('rm', 'twor.2009.zip'))
# Read the data file
column_headings = ('date', 'time', 'sensor', 'value', 'annotation', 'state')
df = pd.read_csv(
filename,
delim_whitespace=True,
names=column_headings
)
df.head()
for col in column_headings:
df.loc[:, col] = df[col].str.strip()
###Output
_____no_output_____
###Markdown
Small diversion: pandas dataframes
###Code
df.date.head()
df.sensor.unique()
df.annotation.unique()
df.state.unique()
###Output
_____no_output_____
###Markdown
Not everything is what it seems
###Code
df.date.dtype
df.time.dtype
###Output
_____no_output_____
###Markdown
The date and time columns are generic python **objects**. We will want them to be date time objects so that we can work with them naturally. Before so doing we will want to verify that all of the data are proper dates.
###Code
df.date.unique()
###Output
_____no_output_____
###Markdown
The final date is clearly incorrect. We can assume that '22009' is intended to be '2009'
###Code
df.loc[df.date.str.startswith('22009'), 'date'] = '2009-02-03'
df.date.unique()
###Output
_____no_output_____
###Markdown
Create the date time objects and set them as the index of the dataframe.
###Code
df['datetime'] = pd.to_datetime(df[['date', 'time']].apply(lambda row: ' '.join(row), axis=1))
df = df[['datetime', 'sensor', 'value', 'annotation', 'state']]
df.set_index('datetime', inplace=True)
df.head()
df.index.second
###Output
_____no_output_____
###Markdown
Querying the sensors 
###Code
df.sensor.unique()
###Output
_____no_output_____
###Markdown
- M-sensors are binary motion sensors (ON/OFF)- L-sensors are ambient light sensors (ON/OFF)- D-sensors are binary door sensors (OPEN/CLOSED)- I-sensors are binary item presence sensors (PRESENT/ABSENT)- A-sensors are ADC (measuring temperature on hob/oven)M-, L-, I- and D-sensors are binary, whereas A-sensors have continuous values. So let's split them up into analogue and digital dataframes. Split the analogue and digital components from each other
###Code
cdf = df[~df.sensor.str.startswith("A")][['sensor', 'value']]
adf = df[df.sensor.str.startswith("A")][['sensor', 'value']]
###Output
_____no_output_____
###Markdown
Categorical data: We would like to create a matrix with columns corresponding to the categorical sensor names (e.g. M13), which are `1` when the sensor value is `ON`, `-1` when the sensor value is `OFF`, and otherwise remain `0`. First we need to validate the values of the categorical dataframe.
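As a side note, a minimal sketch of that signed encoding is shown below (the rest of this notebook sticks with the simpler 0/1 dummy columns); the OPEN/CLOSE/PRESENT/ABSENT value names are assumptions about this dataset's vocabulary rather than something verified here:
```python
# Map "on-like" values to +1 and "off-like" values to -1; anything else becomes 0.
# The non-ON/OFF value names below are assumed, not checked against the raw data.
sign = cdf.value.map({'ON': 1, 'OPEN': 1, 'PRESENT': 1,
                      'OFF': -1, 'CLOSE': -1, 'ABSENT': -1}).fillna(0)
# One column per sensor, scaled row-wise by the signed value
signed_cols = pd.get_dummies(cdf.sensor).mul(sign.values, axis=0)
signed_cols.head()
```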
###Code
cdf.head()
cdf.value.unique()
###Output
_____no_output_____
###Markdown
Some strange values: - ONF- OF- O- OFFFIt is often unclear how we should deal with errors such as these, so let's just convert the sensor value of all of these to `ON` in this demo.
###Code
for value in ('ONF', 'OF', 'O', 'OFFF'):
cdf.loc[cdf.value == value, 'value'] = 'ON'
cdf.value.unique()
cdf_cols = pd.get_dummies(cdf.sensor)
cdf_cols.head()
cdf_cols['M35'].plot(figsize=(10, 5))
kitchen_columns = ['M{}'.format(ii) for ii in (15, 16, 17, 18, 19, 51)]
start = datetime(2009, 2, 2, 10)
end = datetime(2009, 2, 2, 11)
cdf_cols[(cdf_cols.index > start) & (cdf_cols.index < end)][kitchen_columns].plot(subplots=True, figsize=(10, 10));
start = datetime(2009, 2, 2, 15)
end = datetime(2009, 2, 2, 17)
cdf_cols[(cdf_cols.index > start) & (cdf_cols.index < end)][kitchen_columns].plot(subplots=True, figsize=(10, 10));
###Output
_____no_output_____
###Markdown
Analogue datathe `value` column of the `adf` dataframe is still a set of strings, so let's convert these to floating point numbers
###Code
adf.head()
adf.value.astype(float)
%debug
f_inds = adf.value.str.endswith('F')
adf.loc[f_inds, 'value'] = adf.loc[f_inds, 'value'].str[:-1]
f_inds = adf.value.str.startswith('F')
adf.loc[f_inds, 'value'] = adf.loc[f_inds, 'value'].str[1:]
adf.loc[:, 'value'] = adf.value.astype(float)
adf.value.groupby(adf.sensor).plot(kind='kde', legend=True, figsize=(10, 5))
adf.head()
adf_keys = adf.sensor.unique()
adf_keys
adf_cols = pd.get_dummies(adf.sensor)
for key in adf_keys:
adf_cols[key] *= adf.value
adf_cols = adf_cols[adf_keys]
adf_cols.head()
###Output
_____no_output_____
###Markdown
Regrouping At this stage we have our data prepared as we need. We have arranged the categorical data into a matrix of 0 and 1, and the analogue data has also been similarly translated. What remains is to produce our label matrix. Since we have already introduced most of the methods in the previous sections, this should be quite straightforward.
###Code
annotation_inds = pd.notnull(df.annotation)
anns = df.loc[annotation_inds][['annotation', 'state']]
# Removing duplicated indices
anns = anns.groupby(level=0).first()
anns.head()
###Output
_____no_output_____
###Markdown
Interestingly there are also bugs in the labels!
###Code
for annotation, group in anns.groupby('annotation'):
counts = group.state.value_counts()
if counts.begin == counts.end:
print(' {}: equal counts ({} begins, {} ends)'.format(
annotation,
counts.begin,
counts.end
))
else:
print(' *** WARNING {}: inconsistent annotation counts with {} begins and {} ends'.format(
annotation,
counts.begin,
counts.end
))
def filter_annotations(anns):
left = iter(anns.index[:-1])
right = iter(anns.index[1:])
filtered_annotations = []
for ii, (ll, rr) in enumerate(zip(left, right)):
l = anns.loc[ll]
r = anns.loc[rr]
if l.state == 'begin' and r.state == 'end':
filtered_annotations.append(dict(label=l.annotation, start=ll, end=rr))
return filtered_annotations
annotations = []
for annotation, group in anns.groupby('annotation'):
gi = filter_annotations(group)
if len(gi) > 10:
print('{:>30} - {}'.format(annotation, len(group)))
annotations.extend(gi)
annotations[:10]
X_a = []
X_d = []
y = []
for ann in annotations:
try:
ai = adf_cols[ann['start']: ann['end']]
ci = cdf_cols[ann['start']: ann['end']]
yi = ann['label']
X_a.append(ai)
X_d.append(ci)
y.append(yi)
except KeyError:
pass
print(len(y), len(X_d), len(X_a))
ii = 10
print(y[ii])
print(X_d[ii].sum().to_dict())
print(X_a[ii].sum().to_dict())
X = []
for ii in range(len(y)):
xi = dict()
# Number of sensor activations
xi['nd'] = len(X_d[ii])
xi['na'] = len(X_a[ii])
# Duration of sensor windows
if len(X_d[ii]):
xi['dd'] = (X_d[ii].index[-1] - X_d[ii].index[0]).total_seconds()
if len(X_a[ii]):
xi['da'] = (X_a[ii].index[-1] - X_a[ii].index[0]).total_seconds()
for xx in (X_a[ii], X_d[ii]):
# Value counts of sensors
for kk, vv in xx.sum().to_dict().items():
if np.isfinite(vv) and vv > 0:
xi[kk] = vv
# Average time of day
for kk, vv in xx.index.hour.value_counts().to_dict().items():
kk = 'H_{}'.format(kk)
if kk not in xi:
xi[kk] = 0
xi[kk] += vv
X.append(xi)
for ii in range(10):
print(y[ii], X[ii], end='\n\n')
###Output
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 484.62230100000005, 'da': 354.51503, 'AD1-A': 30.893200000000004, 'H_7': 138, 'D09': 4, 'M02': 2, 'M07': 2, 'M08': 5, 'M13': 8, 'M14': 11, 'M15': 12, 'M16': 35, 'M17': 35, 'M18': 8, 'M19': 3, 'M21': 1, 'M24': 1}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 1322.4642290000002, 'da': 754.7643400000001, 'AD1-A': 22.475260000000002, 'AD1-B': 0.129416, 'AD1-C': 0.903647, 'H_10': 250, 'D08': 6, 'D09': 12, 'D10': 8, 'I03': 1, 'M02': 1, 'M06': 4, 'M07': 7, 'M08': 6, 'M09': 13, 'M10': 2, 'M13': 4, 'M14': 6, 'M15': 14, 'M16': 36, 'M17': 78, 'M18': 19, 'M19': 4, 'M21': 1, 'M24': 1, 'M25': 1, 'M33': 1, 'M51': 12}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 35486.023611000004, 'da': 35222.125929, 'AD1-A': 67.37229, 'AD1-B': 43.09350769999998, 'AD1-C': 1.4234190000000002, 'H_17': 513, 'H_8': 565, 'H_12': 123, 'D07': 2, 'D08': 10, 'D09': 60, 'D10': 10, 'M08': 1, 'M09': 11, 'M13': 10, 'M14': 81, 'M15': 102, 'M16': 215, 'M17': 356, 'M18': 106, 'M19': 19, 'M51': 40}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 2319.053961, 'da': 2266.21102, 'AD1-A': 117.89126999999998, 'AD1-B': 7.2035079, 'H_7': 384, 'D08': 2, 'D09': 10, 'M10': 2, 'M14': 18, 'M15': 30, 'M16': 62, 'M17': 91, 'M18': 29, 'M19': 5, 'M29': 3, 'M30': 4, 'M31': 2, 'M32': 2, 'M33': 3, 'M35': 6, 'M36': 5, 'M37': 5, 'M38': 4, 'M39': 8, 'M40': 8, 'M41': 8, 'M51': 13}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 1238.73502, 'da': 1150.380691, 'AD1-A': 58.98005, 'AD1-B': 0.2373471, 'H_8': 80, 'H_7': 142, 'D09': 8, 'D10': 4, 'M14': 15, 'M15': 37, 'M16': 51, 'M17': 58, 'M18': 18, 'M51': 7}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 799.0063, 'da': 736.9905590000001, 'AD1-A': 19.67272, 'H_10': 177, 'D09': 6, 'D10': 4, 'M15': 4, 'M16': 30, 'M17': 67, 'M18': 29, 'M19': 14, 'M21': 2, 'M24': 2, 'M25': 2, 'M51': 10}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 52.67817, 'M16': 4, 'M17': 8, 'M18': 3, 'M19': 3, 'M21': 1, 'H_7': 19}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 457.53669, 'da': 13.529641000000002, 'AD1-B': 1.6587748000000002, 'H_8': 143, 'D08': 5, 'D09': 2, 'M09': 2, 'M13': 2, 'M14': 6, 'M15': 12, 'M16': 25, 'M17': 59, 'M18': 12, 'M19': 3, 'M21': 1, 'M24': 1, 'M25': 1, 'M51': 6}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 1672.20522, 'da': 1493.9588190000002, 'AD1-B': 8.200319499999999, 'H_12': 197, 'D09': 10, 'D10': 10, 'M09': 17, 'M10': 4, 'M13': 2, 'M14': 13, 'M15': 10, 'M16': 13, 'M17': 65, 'M18': 17, 'M19': 1, 'M21': 2, 'M24': 1, 'M51': 12}
Meal_Preparation {'nd': 452, 'na': 452, 'dd': 231.42034, 'da': 124.98819, 'AD1-B': 7.2119862999999995, 'H_18': 97, 'D15': 2, 'M06': 1, 'M07': 1, 'M08': 2, 'M09': 2, 'M15': 2, 'M16': 2, 'M17': 20, 'M18': 19, 'M19': 1, 'M31': 26, 'M51': 2}
###Markdown
Doing machine learning on this (FINALLY!)
###Code
# Classification models
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
# Preprocessing
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import Pipeline
# Cross validation
from sklearn.model_selection import StratifiedKFold
results = []
model_classes = [
LogisticRegression,
RandomForestClassifier,
SVC,
GaussianNB,
KNeighborsClassifier,
DecisionTreeClassifier,
]
print('Learning models...', end='')
for model_class in model_classes:
folds = StratifiedKFold(5, shuffle=True, random_state=12345)
for fold_i, (train_inds, test_inds) in enumerate(folds.split(X, y)):
print('.', end='')
X_train, y_train = [X[i] for i in train_inds], [y[i] for i in train_inds]
X_test, y_test = [X[i] for i in test_inds], [y[i] for i in test_inds]
model = Pipeline((
('dict_to_vec', DictVectorizer(sparse=False)),
('scaling', StandardScaler()),
('classifier', model_class()),
))
model.fit(X_train, y_train)
results.append(dict(
model=model_class.__name__,
fold=fold_i,
train_acc=model.score(X_train, y_train),
test_acc=model.score(X_test, y_test)
))
print('...done!\n')
res = pd.DataFrame(results)
res
res.groupby('model')[['train_acc', 'test_acc']].mean()
###Output
_____no_output_____
|
notebooks/ch07_multi_classifier_dec.ipynb
|
###Markdown
Chapter 7: Multi-class Classification
###Code
# Install the required libraries
!pip install japanize_matplotlib | tail -n 1
!pip install torchviz | tail -n 1
!pip install torchinfo | tail -n 1
# Import the required libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import japanize_matplotlib
from IPython.display import display
# Import the torch-related libraries
import torch
import torch.nn as nn
import torch.optim as optim
from torchinfo import summary
from torchviz import make_dot
# Change the default font size
plt.rcParams['font.size'] = 14
# Change the default figure size
plt.rcParams['figure.figsize'] = (6,6)
# Turn the grid on by default
plt.rcParams['axes.grid'] = True
# Set the number of digits shown for numpy arrays
np.set_printoptions(suppress=True, precision=4)
###Output
_____no_output_____
###Markdown
7.8 Data preparation Loading the data
###Code
# Prepare the training data
# Import the library
from sklearn.datasets import load_iris
# Load the data
iris = load_iris()
# Get the input data and the labels
x_org, y_org = iris.data, iris.target
# Check the result
print('original data', x_org.shape, y_org.shape)
###Output
_____no_output_____
###Markdown
Narrowing down the data
###Code
# Narrow down the data
# For the input data, keep only sepal length (0) and petal length (2)
x_select = x_org[:,[0,2]]
# Check the result
print('selected data', x_select.shape, y_org.shape)
###Output
_____no_output_____
###Markdown
Splitting into training and validation data
###Code
# Split into training and validation data (shuffling at the same time)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
x_select, y_org, train_size=75, test_size=75,
random_state=123)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
###Output
_____no_output_____
###Markdown
Scatter plot of the training data
###Code
# Split the data by ground-truth class
x_t0 = x_train[y_train == 0]
x_t1 = x_train[y_train == 1]
x_t2 = x_train[y_train == 2]
# Show the scatter plot
plt.scatter(x_t0[:,0], x_t0[:,1], marker='x', c='k', s=50, label='0 (setosa)')
plt.scatter(x_t1[:,0], x_t1[:,1], marker='o', c='b', s=50, label='1 (versicolour)')
plt.scatter(x_t2[:,0], x_t2[:,1], marker='+', c='k', s=50, label='2 (virginica)')
plt.xlabel('sepal_length')
plt.ylabel('petal_length')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
7.9 Model definition
###Code
# Set the parameters for training
# Number of input dimensions
n_input = x_train.shape[1]
# Number of output dimensions
# = number of target classes, 3 in this case
n_output = len(list(set(y_train)))
# Check the result
print(f'n_input: {n_input} n_output: {n_output}')
# Model definition
# Logistic regression model with 2 inputs and 3 outputs
class Net(nn.Module):
def __init__(self, n_input, n_output):
super().__init__()
self.l1 = nn.Linear(n_input, n_output)
# Initialize all parameters to 1
# (to match the conditions used in the book "Mathematics of Deep Learning")
self.l1.weight.data.fill_(1.0)
self.l1.bias.data.fill_(1.0)
def forward(self, x):
x1 = self.l1(x)
return x1
# Create an instance of the model
net = Net(n_input, n_output)
###Output
_____no_output_____
###Markdown
Checking the model
###Code
# Check the parameters inside the model
# l1.weight is a matrix and l1.bias is a vector
for parameter in net.named_parameters():
print(parameter)
# Show the model overview
print(net)
# Show the model summary
summary(net, (2,))
###Output
_____no_output_____
###Markdown
Optimization algorithm and loss function
###Code
# Loss function: cross-entropy
criterion = nn.CrossEntropyLoss()
# Learning rate
lr = 0.01
# Optimizer: (stochastic) gradient descent
optimizer = optim.SGD(net.parameters(), lr=lr)
###Output
_____no_output_____
###Markdown
7.10 Gradient descent Converting the data to tensors
###Code
# Convert the inputs x_train and labels y_train to tensors
inputs = torch.tensor(x_train).float()
labels = torch.tensor(y_train).long()
# Convert the validation data to tensors
inputs_test = torch.tensor(x_test).float()
labels_test = torch.tensor(y_test).long()
###Output
_____no_output_____
###Markdown
Visualizing the computational graph
###Code
# Prediction
outputs = net(inputs)
# Loss calculation
loss = criterion(outputs, labels)
# Visualize the computational graph of the loss
g = make_dot(loss, params=dict(net.named_parameters()))
display(g)
###Output
_____no_output_____
###Markdown
How to obtain the predicted labels
###Code
# Call torch.max
# The second argument is the axis; 1 means aggregation over each row.
print(torch.max(outputs, 1))
# Get the array of predicted label values
torch.max(outputs, 1)[1]
###Output
_____no_output_____
###Markdown
The training loop (iterative computation)
###Code
# Learning rate
lr = 0.01
# Initialization
net = Net(n_input, n_output)
# Loss function: cross-entropy
criterion = nn.CrossEntropyLoss()
# Optimizer: (stochastic) gradient descent
optimizer = optim.SGD(net.parameters(), lr=lr)
# Number of iterations
num_epochs = 10000
# For recording the evaluation results
history = np.zeros((0,5))
# Main training loop
for epoch in range(num_epochs):
# Training phase
# Reset the gradients
optimizer.zero_grad()
# Prediction
outputs = net(inputs)
# Loss calculation
loss = criterion(outputs, labels)
# Gradient calculation
loss.backward()
# Parameter update
optimizer.step()
# Predicted labels
predicted = torch.max(outputs, 1)[1]
# Loss and accuracy on the training data
train_loss = loss.item()
train_acc = (predicted == labels).sum() / len(labels)
# Evaluation phase
# Prediction
outputs_test = net(inputs_test)
# Loss calculation
loss_test = criterion(outputs_test, labels_test)
# Predicted labels
predicted_test = torch.max(outputs_test, 1)[1]
# Loss and accuracy on the validation data
val_loss = loss_test.item()
val_acc = (predicted_test == labels_test).sum() / len(labels_test)
if ((epoch) % 10 == 0):
print (f'Epoch [{epoch}/{num_epochs}], loss: {train_loss:.5f} acc: {train_acc:.5f} val_loss: {val_loss:.5f}, val_acc: {val_acc:.5f}')
item = np.array([epoch, train_loss, train_acc, val_loss, val_acc])
history = np.vstack((history, item))
###Output
_____no_output_____
###Markdown
7.11 Checking the results
###Code
# Check the loss and accuracy
print(f'Initial state: loss: {history[0,3]:.5f} accuracy: {history[0,4]:.5f}' )
print(f'Final state: loss: {history[-1,3]:.5f} accuracy: {history[-1,4]:.5f}' )
# Learning curve (loss)
plt.plot(history[:,0], history[:,1], 'b', label='train')
plt.plot(history[:,0], history[:,3], 'k', label='validation')
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Learning curve (loss)')
plt.legend()
plt.show()
# Learning curve (accuracy)
plt.plot(history[:,0], history[:,2], 'b', label='train')
plt.plot(history[:,0], history[:,4], 'k', label='validation')
plt.xlabel('iterations')
plt.ylabel('accuracy')
plt.title('Learning curve (accuracy)')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Checking the model's inputs and outputs
###Code
# Labels at indices 0, 2 and 3
print(labels[[0,2,3]])
# Extract the corresponding input values
i3 = inputs[[0,2,3],:]
print(i3.data.numpy())
# Get the raw outputs and the result of applying the softmax function to them
softmax = torch.nn.Softmax(dim=1)
o3 = net(i3)
k3 = softmax(o3)
print(o3.data.numpy())
print(k3.data.numpy())
###Output
_____no_output_____
###Markdown
Final weight matrix and bias values
###Code
# Weight matrix
print(net.l1.weight.data)
# Bias
print(net.l1.bias.data)
###Output
_____no_output_____
###Markdown
Drawing the decision boundaries Computing the plotting area
###Code
# Compute the x and y plotting ranges
x_min = x_train[:,0].min()
x_max = x_train[:,0].max()
y_min = x_train[:,1].min()
y_max = x_train[:,1].max()
x_bound = torch.tensor([x_min, x_max])
# Check the result
print(x_bound)
# Define the linear function for the pairwise decision boundaries
def d_bound(x, i, W, B):
# W1/B1 are W/B with the rows rotated ([2,0,1]); W2 = W - W1, so row i holds the
# weight difference between class i and class (i-1) mod 3, whose zero line is the
# decision boundary between those two classes
W1 = W[[2,0,1],:]
W2 = W - W1
w = W2[i,:]
B1 = B[[2,0,1]]
B2 = B - B1
b = B2[i]
# solve w0*x + w1*y + b = 0 for y
v = -1/w[1]*(w[0]*x + b)
return v
# Compute the y values of the decision boundaries
W = net.l1.weight.data
B = net.l1.bias.data
y0_bound = d_bound(x_bound, 0, W, B)
y1_bound = d_bound(x_bound, 1, W, B)
y2_bound = d_bound(x_bound, 2, W, B)
# Check the result
print(y0_bound)
print(y1_bound)
print(y2_bound)
# Scatter plot together with the decision boundaries
# Explicitly set the x and y ranges
plt.axis([x_min, x_max, y_min, y_max])
# Scatter plot
plt.scatter(x_t0[:,0], x_t0[:,1], marker='x', c='k', s=50, label='0 (setosa)')
plt.scatter(x_t1[:,0], x_t1[:,1], marker='o', c='b', s=50, label='1 (versicolour)')
plt.scatter(x_t2[:,0], x_t2[:,1], marker='+', c='k', s=50, label='2 (virginica)')
# Decision boundaries
plt.plot(x_bound, y0_bound, label='2_0')
plt.plot(x_bound, y1_bound, linestyle=':',label='0_1')
plt.plot(x_bound, y2_bound,linestyle='-.',label='1_2')
# Axis labels and legend
plt.xlabel('sepal_length')
plt.ylabel('petal_length')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
7.12 Using all four input variables
###Code
# Split into training and validation data (shuffling at the same time)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
x_org, y_org, train_size=75, test_size=75,
random_state=123)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
# Number of input dimensions
n_input = x_train.shape[1]
print('input data (x)')
print(x_train[:5,:])
print(f'number of input dimensions: {n_input}')
# Convert the inputs x_train and labels y_train to tensors
inputs = torch.tensor(x_train).float()
labels = torch.tensor(y_train).long()
# Convert the validation data to tensors
inputs_test = torch.tensor(x_test).float()
labels_test = torch.tensor(y_test).long()
# Learning rate
lr = 0.01
# Initialization
net = Net(n_input, n_output)
# Loss function: cross-entropy
criterion = nn.CrossEntropyLoss()
# Optimization algorithm: (stochastic) gradient descent
optimizer = optim.SGD(net.parameters(), lr=lr)
# Number of iterations
num_epochs = 10000
# For recording the evaluation results
history = np.zeros((0,5))
for epoch in range(num_epochs):
# Training phase
# Reset the gradients
optimizer.zero_grad()
# Prediction
outputs = net(inputs)
# Loss calculation
loss = criterion(outputs, labels)
# Gradient calculation
loss.backward()
# Parameter update
optimizer.step()
# Predicted labels
predicted = torch.max(outputs, 1)[1]
# Loss and accuracy on the training data
train_loss = loss.item()
train_acc = (predicted == labels).sum() / len(labels)
# Evaluation phase
# Prediction
outputs_test = net(inputs_test)
# Loss calculation
loss_test = criterion(outputs_test, labels_test)
# Predicted labels
predicted_test = torch.max(outputs_test, 1)[1]
# Loss and accuracy on the validation data
val_loss = loss_test.item()
val_acc = (predicted_test == labels_test).sum() / len(labels_test)
if ( epoch % 10 == 0):
print (f'Epoch [{epoch}/{num_epochs}], loss: {train_loss:.5f} acc: {train_acc:.5f} val_loss: {val_loss:.5f}, val_acc: {val_acc:.5f}')
item = np.array([epoch , train_loss, train_acc, val_loss, val_acc])
history = np.vstack((history, item))
# Check the loss and accuracy
print(f'Initial state: loss: {history[0,3]:.5f} accuracy: {history[0,4]:.5f}' )
print(f'Final state: loss: {history[-1,3]:.5f} accuracy: {history[-1,4]:.5f}' )
# Learning curve (loss)
plt.plot(history[:,0], history[:,1], 'b', label='train')
plt.plot(history[:,0], history[:,3], 'k', label='validation')
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Learning curve (loss)')
plt.legend()
plt.show()
# Learning curve (accuracy)
plt.plot(history[:,0], history[:,2], 'b', label='train')
plt.plot(history[:,0], history[:,4], 'k', label='validation')
plt.xlabel('iterations')
plt.ylabel('accuracy')
plt.title('Learning curve (accuracy)')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Column: The NLLLoss loss function
###Code
# Prepare the input variables
# Dummy output data
outputs_np = np.array(range(1, 13)).reshape((4,3))
# Dummy ground-truth data
labels_np = np.array([0, 1, 2, 0])
# Convert to tensors
outputs_dummy = torch.tensor(outputs_np).float()
labels_dummy = torch.tensor(labels_np).long()
# Check the result
print(outputs_dummy.data)
print(labels_dummy.data)
# Call the NLLLoss function
nllloss = nn.NLLLoss()
loss = nllloss(outputs_dummy, labels_dummy)
print(loss.item())
###Output
_____no_output_____
###Markdown
Column: Other implementation patterns for multi-class models Pattern 2: Include the LogSoftmax function in the model class
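As a quick check (an illustration that reuses the dummy tensors from the previous cell), `CrossEntropyLoss` gives the same value as `LogSoftmax` followed by `NLLLoss`, which is why the two patterns below are equivalent to the original model:
```python
# CrossEntropyLoss == LogSoftmax + NLLLoss on the same dummy inputs
ce = nn.CrossEntropyLoss()(outputs_dummy, labels_dummy)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(outputs_dummy), labels_dummy)
print(ce.item(), nll.item())  # the two values match
```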
###Code
# Model definition
# Logistic regression model with 2 inputs and 3 outputs
class Net(nn.Module):
def __init__(self, n_input, n_output):
super().__init__()
self.l1 = nn.Linear(n_input, n_output)
# Define the log-softmax function
self.logsoftmax = nn.LogSoftmax(dim=1)
# Initialize all parameters to 1
# (to match the conditions used in the book "Mathematics of Deep Learning")
self.l1.weight.data.fill_(1.0)
self.l1.bias.data.fill_(1.0)
def forward(self, x):
x1 = self.l1(x)
x2 = self.logsoftmax(x1)
return x2
# 学習率
lr = 0.01
# 初期化
net = Net(n_input, n_output)
# 損失関数: NLLLoss関数
criterion = nn.NLLLoss()
# 最適化関数: 勾配降下法
optimizer = optim.SGD(net.parameters(), lr=lr)
# 予測計算
outputs = net(inputs)
# 損失計算
loss = criterion(outputs, labels)
# 損失の計算グラフ可視化
g = make_dot(loss, params=dict(net.named_parameters()))
display(g)
# 学習率
lr = 0.01
# 初期化
net = Net(n_input, n_output)
# 損失関数: NLLLoss関数
criterion = nn.NLLLoss()
# 最適化関数: 勾配降下法
optimizer = optim.SGD(net.parameters(), lr=lr)
# 繰り返し回数
num_epochs = 10000
# 評価結果記録用
history = np.zeros((0,5))
for epoch in range(num_epochs):
# 訓練フェーズ
#勾配の初期化
optimizer.zero_grad()
# 予測計算
outputs = net(inputs)
# 損失計算
loss = criterion(outputs, labels)
# 勾配計算
loss.backward()
# パラメータ修正
optimizer.step()
#予測ラベル算出
predicted = torch.max(outputs, 1)[1]
# 損失と精度の計算
train_loss = loss.item()
train_acc = (predicted == labels).sum() / len(labels)
#予測フェーズ
# 予測計算
outputs_test = net(inputs_test)
# 損失計算
loss_test = criterion(outputs_test, labels_test)
#予測ラベル算出
predicted_test = torch.max(outputs_test, 1)[1]
# 損失と精度の計算
val_loss = loss_test.item()
val_acc = (predicted_test == labels_test).sum() / len(labels_test)
if ( epoch % 10 == 0):
print (f'Epoch [{epoch}/{num_epochs}], loss: {train_loss:.5f} acc: {train_acc:.5f} val_loss: {val_loss:.5f}, val_acc: {val_acc:.5f}')
item = np.array([epoch , train_loss, train_acc, val_loss, val_acc])
history = np.vstack((history, item))
#損失と精度の確認
print(f'初期状態: 損失: {history[0,3]:.5f} 精度: {history[0,4]:.5f}' )
print(f'最終状態: 損失: {history[-1,3]:.5f} 精度: {history[-1,4]:.5f}' )
# パターン1モデルの出力結果
w = outputs[:5,:].data
print(w.numpy())
# 確率値を得たい場合
print(torch.exp(w).numpy())
###Output
_____no_output_____
###Markdown
Pattern 3: The model class applies a plain softmax
###Code
# Model definition
# Logistic regression model with 2 inputs and 3 outputs
class Net(nn.Module):
def __init__(self, n_input, n_output):
super().__init__()
self.l1 = nn.Linear(n_input, n_output)
# Define the softmax function
self.softmax = nn.Softmax(dim=1)
# Initialize all parameters to 1
# (to match the conditions used in the book "Mathematics of Deep Learning")
self.l1.weight.data.fill_(1.0)
self.l1.bias.data.fill_(1.0)
def forward(self, x):
x1 = self.l1(x)
x2 = self.softmax(x1)
return x2
# 学習率
lr = 0.01
# 初期化
net = Net(n_input, n_output)
# 損失関数: NLLLoss関数
criterion = nn.NLLLoss()
# 最適化関数: 勾配降下法
optimizer = optim.SGD(net.parameters(), lr=lr)
# 繰り返し回数
num_epochs = 10000
# 評価結果記録用
history = np.zeros((0,5))
for epoch in range(num_epochs):
# 訓練フェーズ
#勾配の初期化
optimizer.zero_grad()
# 予測計算
outputs = net(inputs)
# ここで対数関数にかける
outputs2 = torch.log(outputs)
# 損失計算
loss = criterion(outputs2, labels)
# 勾配計算
loss.backward()
# パラメータ修正
optimizer.step()
#予測ラベル算出
predicted = torch.max(outputs, 1)[1]
# 損失と精度の計算
train_loss = loss.item()
train_acc = (predicted == labels).sum() / len(labels)
#予測フェーズ
# 予測計算
outputs_test = net(inputs_test)
# ここで対数関数にかける
outputs2_test = torch.log(outputs_test)
# 損失計算
loss_test = criterion(outputs2_test, labels_test)
#予測ラベル算出
predicted_test = torch.max(outputs_test, 1)[1]
# 対する損失と精度の計算
val_loss = loss_test.item()
val_acc = (predicted_test == labels_test).sum() / len(labels_test)
if ( epoch % 10 == 0):
print (f'Epoch [{epoch}/{num_epochs}], loss: {train_loss:.5f} acc: {train_acc:.5f} val_loss: {val_loss:.5f}, val_acc: {val_acc:.5f}')
item = np.array([epoch , train_loss, train_acc, val_loss, val_acc])
history = np.vstack((history, item))
#損失と精度の確認
print(f'初期状態: 損失: {history[0,3]:.5f} 精度: {history[0,4]:.5f}' )
print(f'最終状態: 損失: {history[-1,3]:.5f} 精度: {history[-1,4]:.5f}' )
# パターン2のモデル出力値
w = outputs[:5,:].data.numpy()
print(w)
###Output
_____no_output_____
|
Course2-Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/Week3/Tensorflow_introduction.ipynb
|
###Markdown
Introduction to TensorFlowWelcome to this week's programming assignment! Up until now, you've always used Numpy to build neural networks, but this week you'll explore a deep learning framework that allows you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. TensorFlow 2.3 has made significant improvements over its predecessor, some of which you'll encounter and implement here!By the end of this assignment, you'll be able to do the following in TensorFlow 2.3:* Use `tf.Variable` to modify the state of a variable* Explain the difference between a variable and a constant* Train a Neural Network on a TensorFlow datasetProgramming frameworks like TensorFlow not only cut down on time spent coding, but can also perform optimizations that speed up the code itself. Table of Contents- [1- Packages](1) - [1.1 - Checking TensorFlow Version](1-1)- [2 - Basic Optimization with GradientTape](2) - [2.1 - Linear Function](2-1) - [Exercise 1 - linear_function](ex-1) - [2.2 - Computing the Sigmoid](2-2) - [Exercise 2 - sigmoid](ex-2) - [2.3 - Using One Hot Encodings](2-3) - [Exercise 3 - one_hot_matrix](ex-3) - [2.4 - Initialize the Parameters](2-4) - [Exercise 4 - initialize_parameters](ex-4)- [3 - Building Your First Neural Network in TensorFlow](3) - [3.1 - Implement Forward Propagation](3-1) - [Exercise 5 - forward_propagation](ex-5) - [3.2 Compute the Cost](3-2) - [Exercise 6 - compute_cost](ex-6) - [3.3 - Train the Model](3-3)- [4 - Bibliography](4) 1 - Packages
###Code
import h5py
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.python.framework.ops import EagerTensor
from tensorflow.python.ops.resource_variable_ops import ResourceVariable
import time
###Output
_____no_output_____
###Markdown
1.1 - Checking TensorFlow Version You will be using v2.3 for this assignment, for maximum speed and efficiency.
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
2 - Basic Optimization with GradientTapeThe beauty of TensorFlow 2 is in its simplicity. Basically, all you need to do is implement forward propagation through a computational graph. TensorFlow will compute the derivatives for you, by moving backwards through the graph recorded with `GradientTape`. All that's left for you to do then is specify the cost function and optimizer you want to use! When writing a TensorFlow program, the main object to get used and transformed is the `tf.Tensor`. These tensors are the TensorFlow equivalent of Numpy arrays, i.e. multidimensional arrays of a given data type that also contain information about the computational graph.Below, you'll use `tf.Variable` to store the state of your variables. Variables can only be created once as its initial value defines the variable shape and type. Additionally, the `dtype` arg in `tf.Variable` can be set to allow data to be converted to that type. But if none is specified, either the datatype will be kept if the initial value is a Tensor, or `convert_to_tensor` will decide. It's generally best for you to specify directly, so nothing breaks! Here you'll call the TensorFlow dataset created on a HDF5 file, which you can use in place of a Numpy array to store your datasets. You can think of this as a TensorFlow data generator! You will use the Hand sign data set, that is composed of images with shape 64x64x3.
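As a quick, ungraded illustration, a minimal sketch of recording a computation on a `GradientTape` and retrieving its gradient might look like this:
```python
# Record a forward pass on the tape, then ask for the derivative afterwards
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = w ** 2              # operations on Variables are recorded automatically
grad = tape.gradient(loss, w)  # d(loss)/dw = 2w = 6.0
print(grad)
```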
###Code
train_dataset = h5py.File('datasets/train_signs.h5', "r")
test_dataset = h5py.File('datasets/test_signs.h5', "r")
x_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_x'])
y_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_y'])
x_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_x'])
y_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_y'])
type(x_train)
###Output
_____no_output_____
###Markdown
Since TensorFlow Datasets are generators, you can't access the contents directly unless you iterate over them in a for loop, or explicitly create a Python iterator using `iter` and consume its elements using `next`. Also, you can inspect the `shape` and `dtype` of each element using the `element_spec` attribute.
###Code
print(x_train.element_spec)
print(next(iter(x_train)))
###Output
tf.Tensor(
[[[227 220 214]
[227 221 215]
[227 222 215]
...
[232 230 224]
[231 229 222]
[230 229 221]]
[[227 221 214]
[227 221 215]
[228 221 215]
...
[232 230 224]
[231 229 222]
[231 229 221]]
[[227 221 214]
[227 221 214]
[227 221 215]
...
[232 230 224]
[231 229 223]
[230 229 221]]
...
[[119 81 51]
[124 85 55]
[127 87 58]
...
[210 211 211]
[211 212 210]
[210 211 210]]
[[119 79 51]
[124 84 55]
[126 85 56]
...
[210 211 210]
[210 211 210]
[209 210 209]]
[[119 81 51]
[123 83 55]
[122 82 54]
...
[209 210 210]
[209 210 209]
[208 209 209]]], shape=(64, 64, 3), dtype=uint8)
###Markdown
The dataset that you'll be using during this assignment is a subset of the sign language digits. It contains six different classes representing the digits from 0 to 5.
###Code
unique_labels = set()
for element in y_train:
unique_labels.add(element.numpy())
print(unique_labels)
###Output
{0, 1, 2, 3, 4, 5}
###Markdown
You can see some of the images in the dataset by running the following cell.
###Code
images_iter = iter(x_train)
labels_iter = iter(y_train)
plt.figure(figsize=(10, 10))
for i in range(25):
ax = plt.subplot(5, 5, i + 1)
plt.imshow(next(images_iter).numpy().astype("uint8"))
plt.title(next(labels_iter).numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
There's one more additional difference between TensorFlow datasets and Numpy arrays: If you need to transform one, you would invoke the `map` method to apply the function passed as an argument to each of the elements.
###Code
def normalize(image):
"""
Transform an image into a tensor of shape (64 * 64 * 3, )
and normalize its components.
Arguments
image - Tensor.
Returns:
result -- Transformed tensor
"""
image = tf.cast(image, tf.float32) / 255.0
image = tf.reshape(image, [-1,])
return image
new_train = x_train.map(normalize)
new_test = x_test.map(normalize)
new_train.element_spec
print(next(iter(new_train)))
###Output
tf.Tensor([0.8901961 0.8627451 0.8392157 ... 0.8156863 0.81960785 0.81960785], shape=(12288,), dtype=float32)
###Markdown
2.1 - Linear FunctionLet's begin this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. Exercise 1 - linear_functionCompute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, this is how to define a constant X with the shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```Note that the difference between `tf.constant` and `tf.Variable` is that you can modify the state of a `tf.Variable` but cannot change the state of a `tf.constant`.You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly
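A small, ungraded sketch of that difference: a `tf.Variable` can be updated in place with `assign`/`assign_add`, while a `tf.constant` cannot.
```python
v = tf.Variable(1.0)
v.assign_add(1.0)         # OK: Variables hold mutable state, v is now 2.0
c = tf.constant(1.0)
# c.assign_add(1.0)       # would raise an error: constants cannot be modified
```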
###Code
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes X to be a random tensor of shape (3,1)
Initializes W to be a random tensor of shape (4,3)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- Y = WX + b
"""
np.random.seed(1)
"""
Note, to ensure that the "random" numbers generated match the expected results,
please create the variables in the order given in the starting code below.
(Do not re-arrange the order).
"""
# (approx. 4 lines)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.Variable(np.random.randn(4,3), name = "W")
b = tf.Variable(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W,X), b)
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return Y
result = linear_function()
print(result)
assert type(result) == EagerTensor, "Use the TensorFlow API"
assert np.allclose(result, [[-2.15657382], [ 2.95891446], [-1.08926781], [-0.84538042]]), "Error"
print("\033[92mAll test passed")
###Output
tf.Tensor(
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]], shape=(4, 1), dtype=float64)
[92mAll test passed
###Markdown
**Expected Output**: ```result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]``` 2.2 - Computing the Sigmoid Amazing! You just implemented a linear function. TensorFlow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`.For this exercise, compute the sigmoid of z. In this exercise, you will: Cast your tensor to type `float32` using `tf.cast`, then compute the sigmoid using `tf.keras.activations.sigmoid`. Exercise 2 - sigmoidImplement the sigmoid function below. You should use the following: - `tf.cast("...", tf.float32)`- `tf.keras.activations.sigmoid("...")`
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
a -- (tf.float32) the sigmoid of z
"""
# tf.keras.activations.sigmoid requires float16, float32, float64, complex64, or complex128.
# (approx. 2 lines)
z = tf.cast(z, tf.float32)
a = tf.keras.activations.sigmoid(z)
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return a
result = sigmoid(-1)
print ("type: " + str(type(result)))
print ("dtype: " + str(result.dtype))
print ("sigmoid(-1) = " + str(result))
print ("sigmoid(0) = " + str(sigmoid(0.0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
def sigmoid_test(target):
result = target(0)
assert(type(result) == EagerTensor)
assert (result.dtype == tf.float32)
assert sigmoid(0) == 0.5, "Error"
assert sigmoid(-1) == 0.26894143, "Error"
assert sigmoid(12) == 0.9999939, "Error"
print("\033[92mAll test passed")
sigmoid_test(sigmoid)
###Output
type: <class 'tensorflow.python.framework.ops.EagerTensor'>
dtype: <dtype: 'float32'>
sigmoid(-1) = tf.Tensor(0.26894143, shape=(), dtype=float32)
sigmoid(0) = tf.Tensor(0.5, shape=(), dtype=float32)
sigmoid(12) = tf.Tensor(0.9999939, shape=(), dtype=float32)
[92mAll test passed
###Markdown
**Expected Output**: typeclass 'tensorflow.python.framework.ops.EagerTensor' dtype"dtype: 'float32' Sigmoid(-1)0.2689414 Sigmoid(0)0.5 Sigmoid(12)0.999994 2.3 - Using One Hot EncodingsMany times in deep learning you will have a $Y$ vector with numbers ranging from $0$ to $C-1$, where $C$ is the number of classes. If $C$ is for example 4, then you might have the following y vector which you will need to convert like this:This is called "one hot" encoding, because in the converted representation, exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In TensorFlow, you can use one line of code: - [tf.one_hot(labels, depth, axis=0)](https://www.tensorflow.org/api_docs/python/tf/one_hot)`axis=0` indicates the new axis is created at dimension 0 Exercise 3 - one_hot_matrixImplement the function below to take one label and the total number of classes $C$, and return the one hot encoding in a column wise matrix. Use `tf.one_hot()` to do this, and `tf.reshape()` to reshape your one hot tensor! - `tf.reshape(tensor, shape)`
###Code
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(label, depth=6):
"""
Computes the one hot encoding for a single label
Arguments:
label -- (int) Categorical labels
depth -- (int) Number of different classes that label can take
Returns:
one_hot -- tf.Tensor A single-column matrix with the one hot encoding.
"""
# (approx. 1 line)
# Note: You need a rank zero tensor for expected output
one_hot = tf.reshape(tf.one_hot(label, depth, axis=0), (depth,))
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return one_hot
def one_hot_matrix_test(target):
label = tf.constant(1)
depth = 4
result = target(label, depth)
print("Test 1:",result)
assert result.shape[0] == depth, "Use the parameter depth"
assert np.allclose(result, [0., 1. ,0., 0.] ), "Wrong output. Use tf.one_hot"
label_2 = [2]
result = target(label_2, depth)
print("Test 2:", result)
assert result.shape[0] == depth, "Use the parameter depth"
assert np.allclose(result, [0., 0. ,1., 0.] ), "Wrong output. Use tf.reshape as instructed"
print("\033[92mAll test passed")
one_hot_matrix_test(one_hot_matrix)
###Output
Test 1: tf.Tensor([0. 1. 0. 0.], shape=(4,), dtype=float32)
Test 2: tf.Tensor([0. 0. 1. 0.], shape=(4,), dtype=float32)
[92mAll test passed
###Markdown
**Expected output**```Test 1: tf.Tensor([0. 1. 0. 0.], shape=(4,), dtype=float32)Test 2: tf.Tensor([0. 0. 1. 0.], shape=(4,), dtype=float32)```
###Code
new_y_test = y_test.map(one_hot_matrix)
new_y_train = y_train.map(one_hot_matrix)
print(next(iter(new_y_test)))
###Output
tf.Tensor([1. 0. 0. 0. 0. 0.], shape=(6,), dtype=float32)
###Markdown
2.4 - Initialize the Parameters Now you'll initialize a vector of numbers with the Glorot initializer. The function you'll be calling is `tf.keras.initializers.GlorotNormal`, which draws samples from a truncated normal distribution centered on 0, with `stddev = sqrt(2 / (fan_in + fan_out))`, where `fan_in` is the number of input units and `fan_out` is the number of output units, both in the weight tensor. To initialize with zeros or ones you could use `tf.zeros()` or `tf.ones()` instead. Exercise 4 - initialize_parametersImplement the function below to take in a shape and to return an array of numbers using the GlorotNormal initializer. - `tf.keras.initializers.GlorotNormal(seed=1)` - `tf.Variable(initializer(shape=())`
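As an ungraded sanity check, you could compare the empirical standard deviation of a Glorot sample against the formula above; the sampled value comes out slightly below the formula because the distribution is truncated:
```python
# Glorot scale for a (25, 12288) weight matrix: sqrt(2 / (25 + 12288)) ~= 0.0127
init = tf.keras.initializers.GlorotNormal(seed=1)
sample = init(shape=(25, 12288))
print(float(tf.math.reduce_std(sample)), np.sqrt(2.0 / (25 + 12288)))
```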
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with TensorFlow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
initializer = tf.keras.initializers.GlorotNormal(seed=1)
#(approx. 6 lines of code)
W1 = tf.Variable(initializer(shape=(25, 12288)))
b1 = tf.Variable(initializer(shape=(25,1)))
W2 = tf.Variable(initializer(shape=(12,25)))
b2 = tf.Variable(initializer(shape=(12,1)))
W3 = tf.Variable(initializer(shape=(6,12)))
b3 = tf.Variable(initializer(shape=(6,1)))
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
def initialize_parameters_test(target):
parameters = target()
values = {"W1": (25, 12288),
"b1": (25, 1),
"W2": (12, 25),
"b2": (12, 1),
"W3": (6, 12),
"b3": (6, 1)}
for key in parameters:
print(f"{key} shape: {tuple(parameters[key].shape)}")
assert type(parameters[key]) == ResourceVariable, "All parameter must be created using tf.Variable"
assert tuple(parameters[key].shape) == values[key], f"{key}: wrong shape"
assert np.abs(np.mean(parameters[key].numpy())) < 0.5, f"{key}: Use the GlorotNormal initializer"
assert np.std(parameters[key].numpy()) > 0 and np.std(parameters[key].numpy()) < 1, f"{key}: Use the GlorotNormal initializer"
print("\033[92mAll test passed")
initialize_parameters_test(initialize_parameters)
###Output
W1 shape: (25, 12288)
b1 shape: (25, 1)
W2 shape: (12, 25)
b2 shape: (12, 1)
W3 shape: (6, 12)
b3 shape: (6, 1)
[92mAll test passed
###Markdown
**Expected output**```W1 shape: (25, 12288)b1 shape: (25, 1)W2 shape: (12, 25)b2 shape: (12, 1)W3 shape: (6, 12)b3 shape: (6, 1)```
###Code
parameters = initialize_parameters()
###Output
_____no_output_____
###Markdown
3 - Building Your First Neural Network in TensorFlowIn this part of the assignment you will build a neural network using TensorFlow. Remember that there are two parts to implementing a TensorFlow model:- Implement forward propagation- Retrieve the gradients and train the modelLet's get into it! 3.1 - Implement Forward Propagation One of TensorFlow's great strengths lies in the fact that you only need to implement the forward propagation function and it will keep track of the operations you did to calculate the back propagation automatically. Exercise 5 - forward_propagationImplement the `forward_propagation` function.**Note** Use only the TF API. - tf.math.add- tf.linalg.matmul- tf.keras.activations.relu
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
#(approx. 5 lines) # Numpy Equivalents:
Z1 = tf.math.add(tf.linalg.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.keras.activations.relu(Z1) # A1 = relu(Z1)
Z2 = tf.math.add(tf.linalg.matmul(W2, A1), b2) # Z2 = np.dot(W2, A1) + b2
A2 = tf.keras.activations.relu(Z2) # A2 = relu(Z2)
Z3 = tf.math.add(tf.linalg.matmul(W3, A2), b3) # Z3 = np.dot(W3, A2) + b3
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return Z3
def forward_propagation_test(target, examples):
minibatches = examples.batch(2)
for minibatch in minibatches:
forward_pass = target(tf.transpose(minibatch), parameters)
print(forward_pass)
assert type(forward_pass) == EagerTensor, "Your output is not a tensor"
assert forward_pass.shape == (6, 2), "Last layer must use W3 and b3"
assert np.allclose(forward_pass,
[[-0.13430887, 0.14086473],
[ 0.21588647, -0.02582335],
[ 0.7059658, 0.6484556 ],
[-1.1260961, -0.9329492 ],
[-0.20181894, -0.3382722 ],
[ 0.9558965, 0.94167566]]), "Output does not match"
break
print("\033[92mAll test passed")
forward_propagation_test(forward_propagation, new_train)
###Output
tf.Tensor(
[[-0.13430887 0.14086473]
[ 0.21588647 -0.02582335]
[ 0.7059658 0.6484556 ]
[-1.1260961 -0.9329492 ]
[-0.20181894 -0.3382722 ]
[ 0.9558965 0.94167566]], shape=(6, 2), dtype=float32)
[92mAll test passed
###Markdown
**Expected output**```tf.Tensor([[-0.13430887 0.14086473] [ 0.21588647 -0.02582335] [ 0.7059658 0.6484556 ] [-1.1260961 -0.9329492 ] [-0.20181894 -0.3382722 ] [ 0.9558965 0.94167566]], shape=(6, 2), dtype=float32)``` 3.2 Compute the CostAll you have to do now is define the loss function that you're going to use. For this case, since we have a classification problem with 6 labels, a categorical cross entropy will work! Exercise 6 - compute_costImplement the cost function below. - It's important to note that the "`y_pred`" and "`y_true`" inputs of [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/categorical_crossentropy) are expected to be of shape (number of examples, num_classes). - `tf.reduce_mean` basically does the summation over the examples.
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(logits, labels):
"""
Computes the cost
Arguments:
logits -- output of forward propagation (output of the last LINEAR unit), of shape (6, num_examples)
labels -- "true" labels vector, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
#(1 line of code)
cost = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(tf.transpose(labels), tf.transpose(logits), from_logits=True))
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return cost
def compute_cost_test(target, Y):
pred = tf.constant([[ 2.4048107, 5.0334096 ],
[-0.7921977, -4.1523376 ],
[ 0.9447198, -0.46802214],
[ 1.158121, 3.9810789 ],
[ 4.768706, 2.3220146 ],
[ 6.1481323, 3.909829 ]])
minibatches = Y.batch(2)
for minibatch in minibatches:
result = target(pred, tf.transpose(minibatch))
break
print(result)
assert(type(result) == EagerTensor), "Use the TensorFlow API"
assert (np.abs(result - (0.25361037 + 0.5566767) / 2.0) < 1e-7), "Test does not match. Did you get the mean of your cost functions?"
print("\033[92mAll test passed")
compute_cost_test(compute_cost, new_y_train )
###Output
tf.Tensor(0.4051435, shape=(), dtype=float32)
[92mAll test passed
###Markdown
**Expected output**```tf.Tensor(0.4051435, shape=(), dtype=float32)``` 3.3 - Train the ModelLet's talk optimizers. You'll specify the type of optimizer in one line, in this case `tf.keras.optimizers.Adam` (though you can use others such as SGD), and then call it within the training loop. Notice the `tape.gradient` function: this allows you to retrieve the operations recorded for automatic differentiation inside the `GradientTape` block. Then, calling the optimizer method `apply_gradients`, will apply the optimizer's update rules to each trainable parameter. At the end of this assignment, you'll find some documentation that explains this more in detail, but for now, a simple explanation will do. ;) Here you should take note of an important extra step that's been added to the batch training process: - `tf.Data.dataset = dataset.prefetch(8)` What this does is prevent a memory bottleneck that can occur when reading from disk. `prefetch()` sets aside some data and keeps it ready for when it's needed. It does this by creating a source dataset from your input data, applying a transformation to preprocess the data, then iterating over the dataset the specified number of elements at a time. This works because the iteration is streaming, so the data doesn't need to fit into the memory.
###Code
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- test set, of shape (output size = 6, number of training examples = 1080)
X_test -- training set, of shape (input size = 12288, number of training examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 10 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
costs = [] # To keep track of the cost
train_acc = []
test_acc = []
# Initialize your parameters
#(1 line)
parameters = initialize_parameters()
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
optimizer = tf.keras.optimizers.Adam(learning_rate)
# The CategoricalAccuracy will track the accuracy for this multiclass problem
test_accuracy = tf.keras.metrics.CategoricalAccuracy()
train_accuracy = tf.keras.metrics.CategoricalAccuracy()
dataset = tf.data.Dataset.zip((X_train, Y_train))
test_dataset = tf.data.Dataset.zip((X_test, Y_test))
# We can get the number of elements of a dataset using the cardinality method
m = dataset.cardinality().numpy()
minibatches = dataset.batch(minibatch_size).prefetch(8)
test_minibatches = test_dataset.batch(minibatch_size).prefetch(8)
#X_train = X_train.batch(minibatch_size, drop_remainder=True).prefetch(8)# <<< extra step
#Y_train = Y_train.batch(minibatch_size, drop_remainder=True).prefetch(8) # loads memory faster
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0.
#We need to reset object to start measuring from 0 the accuracy each epoch
train_accuracy.reset_states()
for (minibatch_X, minibatch_Y) in minibatches:
with tf.GradientTape() as tape:
# 1. predict
Z3 = forward_propagation(tf.transpose(minibatch_X), parameters)
# 2. loss
minibatch_cost = compute_cost(Z3, tf.transpose(minibatch_Y))
                # We accumulate the accuracy over all the batches
train_accuracy.update_state(tf.transpose(Z3), minibatch_Y)
trainable_variables = [W1, b1, W2, b2, W3, b3]
grads = tape.gradient(minibatch_cost, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
epoch_cost += minibatch_cost
# We divide the epoch cost over the number of samples
epoch_cost /= m
# Print the cost every 10 epochs
if print_cost == True and epoch % 10 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
print("Train accuracy:", train_accuracy.result())
# We evaluate the test set every 10 epochs to avoid computational overhead
for (minibatch_X, minibatch_Y) in test_minibatches:
Z3 = forward_propagation(tf.transpose(minibatch_X), parameters)
test_accuracy.update_state(tf.transpose(Z3), minibatch_Y)
print("Test_accuracy:", test_accuracy.result())
costs.append(epoch_cost)
train_acc.append(train_accuracy.result())
test_acc.append(test_accuracy.result())
test_accuracy.reset_states()
return parameters, costs, train_acc, test_acc
parameters, costs, train_acc, test_acc = model(new_train, new_y_train, new_test, new_y_test, num_epochs=100)
###Output
Cost after epoch 0: 0.057612
Train accuracy: tf.Tensor(0.17314816, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.24166666, shape=(), dtype=float32)
Cost after epoch 10: 0.049332
Train accuracy: tf.Tensor(0.35833332, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.3, shape=(), dtype=float32)
Cost after epoch 20: 0.043173
Train accuracy: tf.Tensor(0.49907407, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.43333334, shape=(), dtype=float32)
Cost after epoch 30: 0.037322
Train accuracy: tf.Tensor(0.60462964, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.525, shape=(), dtype=float32)
Cost after epoch 40: 0.033147
Train accuracy: tf.Tensor(0.6490741, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.5416667, shape=(), dtype=float32)
Cost after epoch 50: 0.030203
Train accuracy: tf.Tensor(0.68333334, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.625, shape=(), dtype=float32)
Cost after epoch 60: 0.028050
Train accuracy: tf.Tensor(0.6935185, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.625, shape=(), dtype=float32)
Cost after epoch 70: 0.026298
Train accuracy: tf.Tensor(0.72407407, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.64166665, shape=(), dtype=float32)
Cost after epoch 80: 0.024799
Train accuracy: tf.Tensor(0.7425926, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.68333334, shape=(), dtype=float32)
Cost after epoch 90: 0.023551
Train accuracy: tf.Tensor(0.75277776, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.68333334, shape=(), dtype=float32)
###Markdown
**Expected output**```Cost after epoch 0: 0.057612Train accuracy: tf.Tensor(0.17314816, shape=(), dtype=float32)Test_accuracy: tf.Tensor(0.24166666, shape=(), dtype=float32)Cost after epoch 10: 0.049332Train accuracy: tf.Tensor(0.35833332, shape=(), dtype=float32)Test_accuracy: tf.Tensor(0.3, shape=(), dtype=float32)...```Numbers you get can be different, just check that your loss is going down and your accuracy going up!
###Code
# Plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('epochs (per tens)')
plt.title("Learning rate =" + str(0.0001))
plt.show()
# Plot the train accuracy
plt.plot(np.squeeze(train_acc))
plt.ylabel('Train Accuracy')
plt.xlabel('epochs (per tens)')
plt.title("Learning rate =" + str(0.0001))
# Plot the test accuracy
plt.plot(np.squeeze(test_acc))
plt.ylabel('Test Accuracy')
plt.xlabel('epochs (per tens)')
plt.title("Learning rate =" + str(0.0001))
plt.show()
###Output
_____no_output_____
|
Random_Forest/Random_Forest_oob_r.ipynb
|
###Markdown
Training Part
###Code
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer,classification_report, matthews_corrcoef, accuracy_score, average_precision_score, roc_auc_score
###Output
_____no_output_____
###Markdown
Input data is read in and named as follows
###Code
transactions = pd.read_csv('train.csv')
X_train = transactions.drop(labels='Class', axis=1)
y_train = transactions.loc[:,'Class']
num_folds = 5
# MCC_scorer = make_scorer(matthews_corrcoef)
###Output
_____no_output_____
###Markdown
Tuning parameters
###Code
rf = RandomForestClassifier(n_jobs=-1, random_state=1)
n_estimators = [50, 75, 500] # scikit-learn's default is 100
# ,50, 60, 90, 105, 120, 500, 1000
min_samples_split = [2, 5] # default=2
# , 5, 10, 15, 100
min_samples_leaf = [1, 5] # default = 1
param_grid_rf = {'n_estimators': n_estimators,
'min_samples_split': min_samples_split,
                 'min_samples_leaf': min_samples_leaf,
'oob_score': [True]
}
grid_rf = GridSearchCV(estimator=rf, param_grid=param_grid_rf,
n_jobs=-1, pre_dispatch='2*n_jobs', verbose=1, return_train_score=False)
grid_rf.fit(X_train, y_train)
###Output
Fitting 3 folds for each of 12 candidates, totalling 36 fits
###Markdown
The best cross-validation score and the corresponding parameters
###Code
grid_rf.best_score_
grid_rf.best_params_
###Output
_____no_output_____
###Markdown
Evaluation Part
###Code
evaluation = pd.read_csv('validation.csv')
X_eval = evaluation.drop(labels='Class', axis=1)
y_eval = evaluation.loc[:,'Class']
def Random_Forest_eval(estimator, X_test, y_test):
y_pred = estimator.predict(X_test)
print('Classification Report')
print(classification_report(y_test, y_pred))
if y_test.nunique() <= 2:
try:
y_score = estimator.predict_proba(X_test)[:,1]
except:
y_score = estimator.decision_function(X_test)
print('AUPRC', average_precision_score(y_test, y_score))
print('AUROC', roc_auc_score(y_test, y_score))
Random_Forest_eval(grid_rf, X_eval, y_eval)
###Output
Classification Report
precision recall f1-score support
0 1.00 1.00 1.00 99511
1 0.94 0.77 0.85 173
accuracy 1.00 99684
macro avg 0.97 0.89 0.92 99684
weighted avg 1.00 1.00 1.00 99684
AUPRC 0.8307667573692485
AUROC 0.9597047481258497
|
talk/.ipynb_checkpoints/Vogelkamera-checkpoint.ipynb
|
###Markdown
Bird camera with the Raspberry Pi Rebecca Breu, June 2016 How it all began... ... is it actually being used? First attempt: digital camera* Interval shots with a digital camera* Searching for "interesting" images with Python:
###Code
import glob
import numpy
from matplotlib import image
def rgb2gray(rgb):
    return numpy.dot(rgb[...,:3], [0.299, 0.587, 0.114])  # standard luma weights
oldimg = None
for infile in glob.glob('*.JPG'):
img = rgb2gray(image.imread(infile))
if oldimg is not None:
diff = numpy.linalg.norm(img - oldimg)
# ... do something
oldimg = img
###Output
_____no_output_____
###Markdown
Problems:* The camera battery only lasts a good three hours* Copying and checking the images by hand gets annoying... Second attempt: Raspberry Pi and camera module Calculating the position of the sun with astral
###Code
from astral import Astral
a = Astral()
a.solar_depression = 3
location = a['Berlin']
location.latitude = 50.9534001
location.longitude = 6.9548886
location.elevation = 56
print(location.dawn())
print(location.dusk())
###Output
2015-06-09 05:00:25+02:00
2015-06-09 22:02:27+02:00
###Markdown
PICamera — Python-Modul für die Kamera
###Code
import time
import picamera
with picamera.PiCamera() as camera:
camera.resolution = (1024, 768)
camera.start_preview()
# Camera warm-up time
time.sleep(2)
camera.capture('test.png')
camera.start_recording('my_video.h264', motion_output='motion.data')
camera.wait_recording(60)
camera.stop_recording()
###Output
_____no_output_____
###Markdown
Motion data* The camera chip provides motion data for H.264 encoding* 16x16-pixel blocks are considered* For each block: a 2D vector describing where the block moves, plus a value for how much the old and new block differ Testing :) Analysing the motion data on the fly
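As a rough sketch of how that raw stream could be inspected offline (this is not from the talk; the record layout and the one extra column per row follow picamera's documented motion-data format, and the 1024x768 resolution is taken from the capture example above):

```python
import numpy as np

# One record per 16x16 macroblock: signed x/y motion vector plus an unsigned SAD value
motion_dtype = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])

width, height = 1024, 768
cols = (width + 15) // 16 + 1    # picamera stores one extra column per row
rows = (height + 15) // 16

data = np.fromfile('motion.data', dtype=motion_dtype)
frames = data.reshape(-1, rows, cols)     # one (rows, cols) grid per encoded frame

# Motion magnitude per block for the first frame
magnitude = np.sqrt(frames[0]['x'].astype(np.float64) ** 2 +
                    frames[0]['y'].astype(np.float64) ** 2)
print(magnitude.max(), np.linalg.norm(magnitude))
```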
###Code
from collections import deque

import numpy
import picamera.array

# MOTION_THRESHOLD is the tuning constant used in the talk; its value is not shown on the slide

class MotionAnalyser(picamera.array.PiMotionAnalysis):
FRAMES = 5
def __init__(self, *args, **kwargs):
super(MotionAnalyser, self).__init__(*args, **kwargs)
self.motion = None
self.last_motions = deque([0] * self.FRAMES, maxlen=self.FRAMES)
def analyse(self, m):
data = numpy.sqrt(
numpy.square(m['x'].astype(numpy.float)) +
numpy.square(m['y'].astype(numpy.float))
)
norm = numpy.linalg.norm(data)
self.last_motions.append(norm)
if min(self.last_motions) > MOTION_THRESHOLD:
self.motion = True
with MotionAnalyser(camera) as analyser:
camera.start_recording('/dev/null', format='h264', motion_output=analyser)
while True:
if analyser.motion:
camera.stop_recording()
# ...
break
time.sleep(0.1)
###Output
_____no_output_____
###Markdown
First results after a few warm, dry days: birds come to drink and bathe! But... waves caused by wind and rain = motion :( Third attempt: passive infrared sensor The RPi.GPIO module
###Code
from RPi import GPIO
import time
IR_PIN = 14 # data
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(IR_PIN, GPIO.IN)
try:
while True:
if GPIO.input(IR_PIN):
print('Bewegung!')
time.sleep(1)
except KeyboardInterrupt:
GPIO.cleanup()
###Output
_____no_output_____
###Markdown
The RPi.GPIO module
###Code
GPIO.add_event_detect(IR_PIN, GPIO.RISING)
while True:
if GPIO.event_detected(IR_PIN):
print('Bewegung!')
time.sleep(1)
###Output
_____no_output_____
###Markdown
Interrupts:
###Code
def my_callback(channel):
print('Bewegung!')
GPIO.add_event_detect(IR_PIN, GPIO.RISING, callback=my_callback)
###Output
_____no_output_____
###Markdown
But...* The sensor is not sensitive enough for small birds* The sensor reacts sluggishly :( Are there better sensors? Fourth attempt: motion analysis v2* Idea: ignore the water surface* Birds are detected at least on approach and when they sit on the rim* Possibly fewer shots of birds bathing in the middle of the bath* In return, no image spam in wind and rain* Assumption: the camera is always in the same position, i.e. the water surface can be identified by fixed pixel regions Motion Mask Adapting the motion analysis
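The next cell loads a prepared `motion_mask.png`. As a sketch of how such a mask could be produced with NumPy and matplotlib (the 48x65 macroblock grid assumes the 1024x768 mode used earlier, and the rectangle of blocks marking the water surface is made up; in practice you would trace it from a still image of your own camera):

```python
import numpy as np
from matplotlib import image as mpimg

rows, cols = 48, 65                      # macroblock grid for 1024x768 (assumption)
mask = np.ones((rows, cols), dtype=np.float32)

# Hypothetical rectangle of blocks covering the water surface: weight 0 = ignored
mask[20:35, 15:50] = 0.0

# Saved as a greyscale PNG so it can be read back with matplotlib.image.imread as below
mpimg.imsave('motion_mask.png', mask, cmap='gray', vmin=0, vmax=1)
```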
###Code
import matplotlib.image

motion_mask = matplotlib.image.imread('motion_mask.png')[..., 0]
class MotionAnalyser(picamera.array.PiMotionAnalysis):
# ...
def analyse(self, m):
data = numpy.sqrt(
numpy.square(m['x'].astype(numpy.float)) +
numpy.square(m['y'].astype(numpy.float))
)
data = numpy.multiply(data, motion_mask)
norm = numpy.linalg.norm(data)
self.last_motions.append(norm)
if min(self.last_motions) > MOTION_THRESHOLD:
self.motion = True
###Output
_____no_output_____
|
python/MNIST_Random_Forests.ipynb
|
###Markdown
MNIST Random ForestsMNIST (whole training set, all 10 classes) for classification, using two types of features: pixels and LeNet5 features.See [Torchvision](https://pytorch.org/vision/stable/datasets.htmlmnist)'s documentation on the datasets object module, specifically with respect to MNIST.
###Code
# To split up the data into training and test sets
from sklearn.model_selection import train_test_split
import numpy as np
# For MNIST digits dataset
from torchvision import datasets
def data_loader():
    # Import the MNIST data via torchvision's datasets module
train_data_th = datasets.MNIST(root='./datasets', download=True, train=True)
test_data_th = datasets.MNIST(root='./datasets', download=True, train=False)
train_data, train_targets = train_data_th.data, train_data_th.targets
test_data, test_targets = test_data_th.data, test_data_th.targets
data_train = np.array(train_data[:]).reshape(-1, 784).astype(np.float32)
data_test = np.array(test_data[:]).reshape(-1, 784).astype(np.float32)
data_train = (data_train / 255)
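    # Scale pixels to [0, 1]; the training-set mean computed next is subtracted from both train and test data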
dtrain_mean = data_train.mean(axis=0)
data_train -= dtrain_mean
data_test = (data_test / 255).astype(np.float32)
data_test -= dtrain_mean
train_set_size = 0.8 # 80% of training dataset
# val set size = 1 - train set size
train_data, val_data, y_train, y_val = train_test_split(data_train, train_targets, train_size=train_set_size, random_state=1778, shuffle=True)
return train_data, val_data, data_test, y_train, y_val, test_targets
# For reference, traditionally...
x_train, x_val, x_test, y_train, y_val, y_test = data_loader()
###Output
_____no_output_____
###Markdown
Some information about how we divided our data...``` print(x_train.shape) (48000, 784) print(x_val.shape) (12000, 784) print(x_test.shape) (10000, 784)```As you can see, our training data contains 48,000 samples, our validation data contains 12,000 samples and our testing data contains 10,000 samples. There are 784 features. Training the (default) modelAs an example, let's train the default pre-packaged RandomForestClassifier from `sklearn`.
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# Training the random forest ensemble algorithm
rf = RandomForestClassifier()
rf.fit(x_train, y_train) # Should take about 33 seconds
# How does it perform on the training data set?
predictions = rf.predict(x_train)
targets = y_train
print("Training data error: " + str(1 - accuracy_score(targets, predictions)))
###Output
Training data error: 0.0
###Markdown
We should look at how the default model performed versus the validation data. We can look at the [classification report](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) and the [confusion matrix](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html).
###Code
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay
pred = rf.predict(x_val)
# Validation data accuracy
predictions = rf.predict(x_val)
targets = y_val
print("Validation data error: " + str(1- accuracy_score(targets, predictions)) + '\n')
print("Classification Report")
print(classification_report(y_val, pred))
print("Confusion Matrix")
cm = confusion_matrix(y_val, pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=rf.classes_)
disp.plot()
plt.show()
###Output
Validation data error: 0.033499999999999974
Classification Report
precision recall f1-score support
0 0.97 0.99 0.98 1183
1 0.98 0.98 0.98 1338
2 0.95 0.97 0.96 1229
3 0.97 0.96 0.96 1208
4 0.96 0.96 0.96 1130
5 0.97 0.96 0.97 1060
6 0.98 0.98 0.98 1212
7 0.98 0.96 0.97 1303
8 0.96 0.94 0.95 1172
9 0.94 0.96 0.95 1165
accuracy 0.97 12000
macro avg 0.97 0.97 0.97 12000
weighted avg 0.97 0.97 0.97 12000
Confusion Matrix
###Markdown
Next, let's get some more information about this model (such as its n_estimators) and see how it performs against the test data.
###Code
# What parameters was this random forest created with?
print("parameters = " + str(rf.get_params(deep=True)))
# How does it do against the test dataset?
predictions = rf.predict(x_test)
targets = y_test
print("Test data error: " + str(1 - accuracy_score(targets, predictions)))
print("Test data accuracy: " + str(accuracy_score(targets, predictions)))
###Output
parameters = {'bootstrap': True, 'ccp_alpha': 0.0, 'class_weight': None, 'criterion': 'gini', 'max_depth': None, 'max_features': 'auto', 'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100, 'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
Test data error: 0.03159999999999996
Test data accuracy: 0.9684
###Markdown
Tuning the hyperparametersAs we can see above, the default RandomForestClassifier hyperparameters are already quite good for classifying MNIST digits by pixel features, but we want to see if we can do better. In the next cell, we loop over a list of `n_estimators` values, creating and fitting a model for each, in order to get a decent range of results.
###Code
import time
from sklearn.metrics import accuracy_score
# This contains all the default parameters.
parameters = {
"max_depth": None,
"min_samples_split": 2,
"min_samples_leaf": 1,
"min_impurity_decrease": 0,
"max_features": "auto",
"criterion": "gini" # "entropy"
}
n_estimators = [1, 5, 10, 50, 100, 500, 1000, 1500, 2500]
print("Parameters = %s\n" % str(parameters))
cross_val_metrics = np.zeros((len(n_estimators), 2), dtype=float)
for i, j in enumerate(n_estimators):
start_time = time.time()
rf = RandomForestClassifier(
n_estimators = n_estimators[i],
max_depth = parameters["max_depth"],
min_samples_split = parameters["min_samples_split"],
min_samples_leaf = parameters["min_samples_leaf"],
min_impurity_decrease=parameters['min_impurity_decrease'],
max_features = parameters["max_features"],
criterion = parameters["criterion"],
n_jobs=-1, # This ensures the model is being trained as fast as possible.
# tree_method = 'gpu_hist',
random_state=1778)
rf.fit(x_train, y_train)
# How did our model perform?
print(str(j) + " estimators:")
# Training data accuracy
predictions = rf.predict(x_train)
targets = y_train
cross_val_metrics[i, 0] = accuracy_score(targets, predictions)
# Validation data accuracy
# predictions = rf.predict(x_val)
# targets = y_val
# cross_val_metrics[i, 1] = accuracy_score(targets, predictions)
# Testing data accuracy
predictions = rf.predict(x_test)
targets = y_test
cross_val_metrics[i, 1] = accuracy_score(targets, predictions)
lap = time.time() - start_time
print("\tTime elapsed: %d m %.2f s" % (int(lap / 60) , lap - (int(lap / 60) * 60)))
print("\tTraining data error: " + str(1 - cross_val_metrics[i, 0]))
# print("\tValidation data accuracy: %f\n" % cross_val_metrics[i, 1])
print("\tTesting data error: " + str(1 - cross_val_metrics[i, 1]))
print("\tTesting data accuracy: " + str(cross_val_metrics[i, 1]) + '\n')
###Output
Parameters = {'max_depth': None, 'min_samples_split': 2, 'min_samples_leaf': 1, 'min_impurity_decrease': 0, 'max_features': 'auto', 'criterion': 'gini'}
1 estimators:
Time elapsed: 0 m 0.59 s
Training data error: 0.07152083333333337
Testing data error: 0.19389999999999996
Testing data accuracy: 0.8061
5 estimators:
Time elapsed: 0 m 0.57 s
Training data error: 0.006687499999999957
Testing data error: 0.0786
Testing data accuracy: 0.9214
10 estimators:
Time elapsed: 0 m 0.68 s
Training data error: 0.0009583333333332833
Testing data error: 0.054400000000000004
Testing data accuracy: 0.9456
50 estimators:
Time elapsed: 0 m 2.21 s
Training data error: 0.0
Testing data error: 0.03500000000000003
Testing data accuracy: 0.965
100 estimators:
Time elapsed: 0 m 4.15 s
Training data error: 0.0
Testing data error: 0.032399999999999984
Testing data accuracy: 0.9676
500 estimators:
Time elapsed: 0 m 20.24 s
Training data error: 0.0
Testing data error: 0.02959999999999996
Testing data accuracy: 0.9704
1000 estimators:
Time elapsed: 0 m 40.79 s
Training data error: 0.0
Testing data error: 0.02969999999999995
Testing data accuracy: 0.9703
1500 estimators:
Time elapsed: 0 m 59.26 s
Training data error: 0.0
Testing data error: 0.02959999999999996
Testing data accuracy: 0.9704
2500 estimators:
Time elapsed: 1 m 38.95 s
Training data error: 0.0
Testing data error: 0.029900000000000038
Testing data accuracy: 0.9701
###Markdown
Based on those models, what trend do we see in the training and test error as the number of estimators (trees) gets larger?
###Code
# print(cross_val_metrics)
fig = plt.figure()
ax = plt.gca()
ax.plot(n_estimators, 100 * (1 - cross_val_metrics[:,1]), "b.-", label='Test')
ax.plot(n_estimators, 100 * (1 - cross_val_metrics[:,0]), "g.-", label='Training')
ax.set_xlabel('Number of Trees')
ax.set_title('Error vs Number of Trees')
ax.set_ylabel('Error Percent')
ax.set_xscale('log')
ax.set_yscale('linear')
ax.legend(loc='best')
###Output
_____no_output_____
###Markdown
So, as we can see, the training error quickly reaches zero, while the test error drops with more trees and then levels off at roughly 3%. XGBoost
###Code
from sklearn.metrics import accuracy_score
import time
import torch as torch
# print(torch.cuda.get_device_name(0))
parameters = {
"tree_method": "exact",
'gpu_id': 0,
'n_jobs': -1,
'booster': 'gbtree', # can also "gblinear" or "dart"
'max_depth': 24,
'eval_metric': 'mlogloss', # accuracy_score, #'mlogloss', # 'mlogloss' by default
'gamma': 0,
'learning_rate': 2,
'subsample': 0.9,
'colsample_bynode':0.2
}
import xgboost as xgb
n_estimators = [1, 5, 10, 50, 100, 500, 1000, 1500, 2500]
cross_val_metrics = np.zeros((len(n_estimators), 2), dtype=float)
print("Parameters = %s\n" % str(parameters))
for i, j in enumerate(n_estimators):
start_time = time.time()
rf = xgb.XGBRFClassifier(
n_estimators = n_estimators[i],
tree_method = parameters['tree_method'],
gpu_id = parameters['gpu_id'],
n_jobs = parameters['n_jobs'],
# booster = parameters['booster'],
max_depth = parameters['max_depth'],
eval_metric = parameters['eval_metric'],
gamma = parameters['gamma'],
learning_rate = parameters['learning_rate'],
subsample = parameters['subsample'],
colsample_bynode = parameters['colsample_bynode'],
verbosity = 0,
use_label_encoder = False,
random_state = 1778
)
rf.fit(x_train, y_train)
# How did our model perform?
print(str(j) + " estimators:")
# Training data accuracy
predictions = rf.predict(x_train)
targets = y_train
cross_val_metrics[i, 0] = accuracy_score(targets, predictions)
# Validation data accuracy
# predictions = rf.predict(x_val)
# targets = y_val
# cross_val_metrics[i, 1] = accuracy_score(targets, predictions)
# Test data accuracy
predictions = rf.predict(x_test)
targets = y_test
cross_val_metrics[i, 1] = accuracy_score(targets, predictions)
lap = time.time() - start_time
print("\tTime elapsed: %d m %.2f s" % (int(lap / 60) , lap - (int(lap / 60) * 60)))
print("\tTrain data accuracy: %f" % cross_val_metrics[i, 0])
print("\tTest data accuracy: %f\n" % cross_val_metrics[i, 1])
import matplotlib.pyplot as plt
n_estimators = [1, 5, 10, 50, 100, 500, 1000, 1500]
cross_val_metrics = np.zeros((len(n_estimators), 2), dtype=float)
cross_val_metrics[:, 0] = [0.969833,0.993563,0.995500,0.996542,0.996521,0.996792,0.996646,0.996687]
cross_val_metrics[:, 1] = [0.896200, 0.952300, 0.960200, 0.963800, 0.964800, 0.964900, 0.965400, 0.965600]
print(cross_val_metrics)
fig = plt.figure()
ax = plt.gca()
ax.plot(n_estimators, (1 - cross_val_metrics[:,1]) * 100, "b.-", label='Test Error')
ax.plot(n_estimators, (1 - cross_val_metrics[:,0]) * 100, "g.-", label='Training Error')
ax.set_xlabel('Number of Trees')
ax.set_title('Error vs Number of Trees')
ax.set_ylabel('Error Percent')
ax.set_xscale('log')
ax.set_yscale('linear')
ax.legend(loc='upper right')
###Output
[[0.969833 0.8962 ]
[0.993563 0.9523 ]
[0.9955 0.9602 ]
[0.996542 0.9638 ]
[0.996521 0.9648 ]
[0.996792 0.9649 ]
[0.996646 0.9654 ]
[0.996687 0.9656 ]]
|
Day-1_Numpy.ipynb
|
###Markdown
Numpy N-dimensional ArrayNumPy is a Python library that can be used for scientific and numerical applications and is the tool to use for linear algebra operations.* The main data structure in NumPy is the ndarray, which is a shorthand name for N-dimensional array.* When working with NumPy, data in an ndarray is simply referred to as an array.* It is a fixed-sized array in memory that contains data of the same type, such as integers or floating point values.* The data type supported by an array can be accessed via the dtype attribute on the array.* The dimensions of an array can be accessed via the shape attribute, which returns a tuple describing the length of each dimension.* There are a host of other attributes.* A simple way to create an array from data or simple Python data structures like a list is to use the array() function.* The example below creates a Python list of floating point values, then creates an ndarray from the list and accesses the array's shape and data type.
###Code
# importing Numpy
import numpy as np
# creating np array
a = np.array([1,2.3,25,0.1])
print(a)
#printing the shape of array
print('shape = ',a.shape)
#printing the Dtype of numpy
print(a.dtype)
# creating an empty array of shape 4*4
empty = np.empty([4,4])
print(empty)
# creating a matrix of Zeros
zeros = np.zeros([3,3])
print(zeros)
# creating a matrix with ones
ones = np.ones([4,4])
print(ones)
###Output
[[1. 1. 1. 1.]
[1. 1. 1. 1.]
[1. 1. 1. 1.]
[1. 1. 1. 1.]]
###Markdown
Combining Arrays* NumPy provides many functions to create new arrays from existing arrays.
###Markdown
Vertical Stack: Given two or more existing arrays, you can stack them vertically using the vstack() function. For example, given two one-dimensional arrays, you can create a new two-dimensional array with two rows by vertically stacking them. This is demonstrated in the example below.
###Code
# Vertical stack: combine two 1-D arrays into a 2-D array with two rows
v1 = np.array([9,7,0,0,5])
print(v1)
print(v1.shape)
print('\n')
v2 = np.array([0,6,0,2,2])
print(v2)
print(v2.shape)
print('\n')
vStack = np.vstack((v1,v2))
print(vStack)
###Output
_____no_output_____
###Markdown
Horizontal Stack: Given two or more existing arrays, you can stack them horizontally using the hstack() function. For example, given two one-dimensional arrays, you can create a new one-dimensional array, or one row, with the columns of the first and second arrays concatenated.
###Code
# Horizontal stack: concatenate the columns of two arrays
h1 = np.array([[1,5],[4,5]])
print(h1)
print(h1.shape)
print('\n')
h2 = np.array([[2,8],[8,9]])
print(h2)
print(h2.shape)
print('\n')
hStack = np.hstack((h1,h2))
print(hStack)
print(hStack.shape)

# Converting a Python list to an array
lst = [1,12,4,5,7]
ary = np.array(lst)
print(type(ary))

# Two-dimensional list of lists to an array
data = [[11, 22],[33, 44],[55, 66]]
arry = np.array(data)
print(arry)
print(arry.shape)
print(type(arry))
###Output
_____no_output_____
###Markdown
Array Indexing
###Code
# One-dimensional indexing
one_d = np.array([2,3,56,1,9,5])
print(one_d[2])
print(one_d[-1])
print(one_d[-6])

# Two-dimensional indexing
two_d = np.array([[1,5],
                  [459,52],
                  [443,65]])
print(two_d[0,0])
print(two_d[1,0])
print(two_d[2,1])

# Printing all the elements in a row
print(two_d[0,])
###Output
_____no_output_____
###Markdown
Array Slicing
###Code
# One-dimensional slicing
one_Sl = np.array([6,7,4,9,3])

# To print all the elements
print(one_Sl[:])

# The first item of the array can be sliced by specifying a slice that starts at index 0 and ends at index 1
print(one_Sl[0:1])
print(one_Sl[1:4])
print(one_Sl[3:5])

# Negative slicing can also be done
print(one_Sl[-2:])
print(one_Sl[-3:-1])
###Output
_____no_output_____
###Markdown
Two-Dimensional Slicing
Split Input and Output FeaturesIt is common to split your loaded data into input variables (X) and the output variable (y). Wecan do this by slicing all rows and all columns up to, but before the last column, then separatelyindexing the last column. For the input features, we can select all rows and all columns exceptthe last one by specifying : for in the rows index, and :-1 in the columns index.
###Code
Two_sl = np.array([[1,2,3,4],
[3,6,7,3],
[9,5,2,8]])
x = Two_sl[:,:-1]
y = Two_sl[:,-1]
print(x)
print('\n')
print(y)
# Splitting the Data into Training Data and Test data
split = 2
train_x = x[:split,:]
test_x = x[split:,:]
print(train_x)
print('\n')
print(test_x)
###Output
[[1 2 3]
[3 6 7]]
[[9 5 2]]
###Markdown
Array Reshaping
###Code
# array
Shape =np.array([[1,2,3,4],
[5,6,8,9]])
print('Rows: %d' % Shape.shape[0])
print('Cols : %d' % Shape.shape[1])
# Reshape 1D to 2D Array
one_re = np.array([2,3,5,4,1])
print(one_re)
print(one_re.shape)
print('\n')
one_re = one_re.reshape(one_re.shape[0],1)
print(one_re)
print(one_re.shape)
# Reshape 2D to 3D Array
data = np.array([[11, 22],
[33, 44],
[55, 66]])
print(data)
print(data.shape)
print('\n')
data_3d = data.reshape(data.shape[0],data.shape[1],1)
print(data_3d)
print(data_3d.shape)
###Output
[[11 22]
[33 44]
[55 66]]
(3, 2)
[[[11]
[22]]
[[33]
[44]]
[[55]
[66]]]
(3, 2, 1)
###Markdown
NumPy Array Broadcasting* Arrays with different sizes cannot be added, subtracted, or generally used in arithmetic.* A way to overcome this is to duplicate the smaller array so that it has the same dimensionality and size as the larger array.* This is called array broadcasting and is available in NumPy when performing array arithmetic, which can greatly reduce and simplify your code.
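As a quick aside (not part of the original notebook cells below), here is a minimal sketch of the rule NumPy applies when broadcasting: shapes are compared from the trailing dimension backwards, and each pair of dimensions must either match or contain a 1; otherwise the operation raises a `ValueError`.

```python
import numpy as np

a = np.arange(3).reshape(3, 1)   # shape (3, 1)
b = np.arange(4).reshape(1, 4)   # shape (1, 4)
print((a + b).shape)             # (3, 4): each 1-sized axis is stretched to match

try:
    np.arange(3) + np.arange(4)  # shapes (3,) and (4,) cannot be broadcast together
except ValueError as err:
    print('broadcast error:', err)
```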
###Code
## broadcast scalar to one-dimensional array
m = np.array([1,2,3,4])
n = np.array([5])
o = np.add(m,n)
print(o)
## broadcast scalar to Two-dimensional array
m = np.array([[1,2,4],[3,6,7]])
n = np.array([3])
o = np.add(m,n)
print(o)
## broadcast oneD to twoD array
m = np.array([[1,2,3,4],[4,5,8,6]])
n = np.array([10,2,5,4])
o = np.add(m,n)
print(o)
###Output
[[11 4 8 8]
[14 7 13 10]]
|
materials/_build/jupyter_execute/materials/lectures/02_lecture-intro-more-git.ipynb
|
###Markdown
Lecture 2: Introduction & my goodness more git! Learning objectives:By the end of this lecture, students should be able to:- Create project boards using GitHub and link tasks to issues]- Create GitHub milestones to group related issues- Set-up main branch protection on a GitHub repository- Use branching and pull requests to propose changes to the main branch- Compare and contrast the Git and GitHub flow development workflows Project boardsExample of a physical [Kanban board](https://en.wikipedia.org/wiki/Kanban_board):Source: Example of a digital project board from GitHub:Reading: [About project boards - GitHub Help](https://help.github.com/en/github/managing-your-work-on-github/about-project-boards)```{figure} img/github_kanban.png---width: 800pxname: usethis-devtools---```Source: Why use project boards for collaborative software projects?- **Transparency:** everyone knows what everyone is doing- **Motivation:** emphasis on task completion- **Flexibility:** board columns and tasks are customized to each project Exercise: Getting to know GitHub project boardsWe are going to each create our own project board for our MDS homework. I have set-up a template GitHub repository for you so that you can easily populate it with relevant issues for your homework this block. You will use these issues to create your MDS homework project board. Steps:1. **Import** a copy of [this GitHub repository](https://github.com/UBC-MDS/mds-homework) (need help? see: [*How to import a GitHub repository*](https://help.github.com/en/github/importing-your-projects-to-github/importing-a-repository-with-github-importer))2. Using the GitHub webpage, make a new branch called `create` in your copy of that repository (this will generate the issues for you).3. Click on the Projects tab, and then click "Create a project". Give it a name, and select "Basic kanban" as the template option.4. Use the issues in the repo to set-up a project board for the next two weeks (or more) of your MDS homework. For each issue you add to the project, assign it to yourself and add a label of "group-work" or "individual-work".Additional Resources:- [Assigning issues and pull requests to other GitHub users](https://help.github.com/en/github/managing-your-work-on-github/assigning-issues-and-pull-requests-to-other-github-users)- [Applying labels to issues and pull requests](https://help.github.com/en/github/managing-your-work-on-github/applying-labels-to-issues-and-pull-requests) Relevance to course project:- You will be expected to create a project board for each of your groups projects and update it each milestone (at a minimum)- We expect that each issue should have at least one person assigned to it Milestones- Group related issues together that are needed to hit a given target (e.g., new release version of a software package)- Can assign a due date to a milestone- From the milestone page you can see list of statistics that are relevant to each milestone set in that repositoryReading: [About milestones - GitHub Help](https://help.github.com/en/github/managing-your-work-on-github/about-milestones) Example of the `readr` package milestones:```{figure} img/readr-milestones.png---height: 600pxname: readr-milestones---```Source: https://github.com/tidyverse/readr/milestones Exercise: Getting to know GitHub milestonesWe are going to practice creating milestones and associating issues with them. To do this we will continue working with the same repository that you just created a project board for. Steps:1. 
Click on the Issues tab, and then click on "Milestones". 2. Click "New milestone" and name it "week 1" and set the due date to be this Saturday. Click "Create milestone".3. Go to the Issues tab, and for each issue that should be associated with the week 1 milestone (i.e., things due this Saturday), open that issue and set the milestone for that issue as "week 1".4. Once you are done, go back to the Milestones page to view what the week 1 milestone looks like.5. If you finish early, do this for week 2. Relevance to course project:- You will be expected to create a milestone on each of your project repositories for each course assigned milestone. You must link the relevant issues needed to complete that milestone to it on GitHub. Main branch protectionOnce we have developed the first working version of our software (that will be the end of week 2 for us in this course), we want to consider our main branch as the **deployment** branch.What do we mean by **deployment** branch? Here we mean that other people may be **using and depending** on it, and thus, if we push changes to main they **must not break things**! How do I make sure changes won't break things? There are varying levels of checks and balances that can be put in place to do this. One fundamental practice is **main branch protection**. Here we essentially put a rule in place that no one can push directly to main, all changes to main must be sent via a pull request so that **at least** one entity (e.g., human) can check things over before the change gets applied to the main (i.e., deployment) branch.Readings:- [Configuring protected branches - GitHub help](https://help.github.com/en/github/administering-a-repository/configuring-protected-branches)- [About protected branches](https://docs.github.com/en/github/administering-a-repository/about-protected-branches). How to accept a pull request on a GitHub repository that has main branch protection(note: at the time of video making, the default branch on GitHub as still called the master branch)
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('kOE6b8zpfCY', width=854, height=480)
###Output
_____no_output_____
|
notebooks/examples/spark/README.ipynb
|
###Markdown
Spark README
Apache Spark™ is a general engine for cluster-scale computing. It provides APIs for multiple languages including Python, Scala, and SQL. This notebook shows how to run Spark in both local and yarn-client modes within TAP, as well as using Spark Submit. Several [Spark examples](/tree/examples/spark) are included with TAP and [others](http://spark.apache.org/examples.html) are available on the [Spark website](http://spark.apache.org/). See the [PySpark API documentation](http://spark.apache.org/docs/latest/api/python/) for more information on the APIs below.

Supported Modes
Currently the YARN scheduler is supported on TAP and Spark jobs can be run in three different modes:

Mode | Good for Big Data | Supports Interactive Sessions | Supports Batch Jobs | Runtime | Use With | Best For
---- | --- | --- | --- | --- | --- | ---
**Local mode** | No | Yes | Yes | Both driver and workers run locally | pyspark, spark-shell, spark-submit | Fast small-scale testing in an interactive shell or batch. Best mode to start with if you are new to Spark.
**Yarn Client** | Yes | Yes | Yes | Driver runs locally and workers run in cluster | pyspark, spark-shell, spark-submit | Big data in an interactive shell.
**Yarn Cluster** | Yes | No | Yes | Both driver and workers run in cluster | spark-submit | Big data batch jobs.

More information is available in the [Spark Documentation](http://spark.apache.org/docs/latest/).

Create a SparkContext in local mode
In local mode no cluster resources are used. It is easy to set up and is good for small-scale testing.
###Code
import pyspark
# Create a SparkContext in local mode
sc = pyspark.SparkContext("local")
# Test the context is working by creating an RDD and performing a simple operation
rdd = sc.parallelize(range(10))
print rdd.count()
# Find out more information about your SparkContext
print 'Python Version: ' + sc.pythonVer
print 'Spark Version: ' + sc.version
print 'Spark Master: ' + sc.master
print 'Spark Home: ' + str(sc.sparkHome)
print 'Spark User: ' + str(sc.sparkUser())
print 'Application Name: ' + sc.appName
print 'Application Id: ' + sc.applicationId
# Stop the context when you are done with it. When you stop the SparkContext resources
# are released and no further operations can be performed within that context.
sc.stop()
# Please restart the Kernel to switch to yarn-client mode
# This is only needed if you already ran with local mode in same session
# The Kernel can be restarted via the menus above or with the following code:
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Create a SparkContext in yarn-client modeIn yarn-client mode, a Spark job is launched in the cluster. This is needed to work with big data.
###Code
import pyspark
# create a configuration
conf = pyspark.SparkConf()
# set the master to "yarn-client"
conf.setMaster("yarn-client")
# set other options as desired
conf.set("spark.driver.memory", "512mb")
conf.set("spark.executor.memory", "1g")
# create the context
sc = pyspark.SparkContext(conf=conf)
# Test the context is working by creating an RDD and performing a simple operation
rdd = sc.parallelize(range(10))
print rdd.count()
# Find out more information about your SparkContext
print 'Python Version: ' + sc.pythonVer
print 'Spark Version: ' + sc.version
print 'Spark Master: ' + sc.master
print 'Spark Home: ' + str(sc.sparkHome)
print 'Spark User: ' + str(sc.sparkUser())
print 'Application Name: ' + sc.appName
print 'Application Id: ' + sc.applicationId
# Stop the context when you are done with it. When you stop the SparkContext resources
# are released and no further operations can be performed within that context.
sc.stop()
###Output
_____no_output_____
###Markdown
Using Spark SubmitIt is possible to upload jars via Jupyter and use Spark Submit to run them. Jars can be uploaded by accessing the [Jupyter dashboard](/tree) and clicking the "Upload" button
###Code
# Call spark-submit to run the SparkPi example that ships with Spark.
# We didn't need to upload this jar because it is already loaded on the system.
!spark-submit --class org.apache.spark.examples.SparkPi \
--master local \
/usr/local/spark/lib/spark-examples-*.jar \
10
###Output
_____no_output_____
|
notebooks/FI/FI_MODEL016.ipynb
|
###Markdown
Feature Importance Analysis
###Code
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
import seaborn as sns
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
LightGBM Model
###Code
MODEL_NUMBER = 'M016'
fi_df = pd.read_csv('../../fi/fi_M016_0910_0906_0.9377.csv')
sns.set(style="whitegrid")
sns.set_color_codes("pastel")
plt.figure(figsize=(12, 65))
ax = sns.barplot(x='importance',
y='feature',
data=fi_df.sort_values('importance', ascending=False),
color="b")
ax.set_title(f'Type {MODEL_NUMBER}')
plt.show()
low_importance = fi_df.groupby('feature')[['importance']].max().sort_values('importance').query('importance <= 100').index
[x for x in low_importance]
###Output
_____no_output_____
|
tratamento-dados/tratamento-candidaturas-eleitas.ipynb
|
###Markdown
Processing the collected data on elected candidacies The data on elected candidacies was collected with the notebook [../coleta.ipynb](../coleta.ipynb). We noticed that the collected data needed cleaning before it could be used in the analyses, so in this notebook we describe the required pre-processing.
###Code
import pandas as pd

df_deputadas_1934_2023 = pd.read_csv('../dados/deputadas_1934_2023.csv')
df_deputadas_1934_2023.shape
df_deputadas_1934_2023.head(5)
###Output
_____no_output_____
###Markdown
Selecting the attributes needed for the analysis
###Code
df_deputadas = df_deputadas_1934_2023[['id', 'siglaPartido', 'siglaUf',
'idLegislatura', 'sexo']]
df_deputadas.head(5)
###Output
_____no_output_____
###Markdown
Handling missing values
###Code
df_deputadas.isnull().sum(axis = 0)
df_deputadas['siglaPartido'].fillna('sem partido', inplace=True)
df_deputadas.isnull().sum(axis = 0)
df_deputadas.to_csv('../dados/candidaturas_eleitas.csv', index=False)
###Output
_____no_output_____
|
notebooks/00_Div_validation_plots.ipynb
|
###Markdown
Table of Contents: Dependencies; Functions; Paths; Main (PROFYLE: RNA, CapTCR-Seq, Bind RNA and Cap; ICGC: CapTCR-Seq, Bind RNA and Cap; NPC: RNA, CapTCR-Seq, Bind RNA and Cap); Analysis of diversity measures; linear regression Dependencies
###Code
library(Hmisc)
library(broom)
###Output
Loading required package: lattice
Loading required package: survival
Loading required package: Formula
Loading required package: ggplot2
Attaching package: ‘Hmisc’
The following objects are masked from ‘package:base’:
format.pval, units
###Markdown
Functions
###Code
source("~/OneDrive - UHN/R_src/Immune_diversity.R")
source("~/OneDrive - UHN/R_src/ggplot2_theme.R")
###Output
_____no_output_____
###Markdown
Paths
###Code
manifestpath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Manifests/"
datapath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Data/"
plotpath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Plots/"
mordpath <- "/Users/anabbi/Desktop/Sam/immpedcan/mixcr/PROFYLE/RNAseq/"
h4hpath <- "/Users/anabbi/Desktop/H4H/immpedcan/PROFYLE/mixcr/"
###Output
_____no_output_____
###Markdown
Main PROFYLE RNA
###Code
profyle_rna <- list.files(mordpath, pattern = "CLONES_TRB", recursive = T)
trb_list_profyle_rna <- immunelistfx(profyle_rna, mordpath, "TRB")
trb_list_profyle_rna <- trb_list_profyle_rna[sapply(trb_list_profyle_rna, function(x) length(unlist(x))) > 1] # remove files with one clonotype
length(trb_list_profyle_rna)
trb_list_profyle_rna <- trb_list_profyle_rna[sapply(trb_list_profyle_rna, function(x) var(unlist(x))) > 0] # remove files no variance
length(trb_list_profyle_rna)
div_trb_profyle_rna <- Divstats.fx(trb_list_profyle_rna, "TRB")
div_trb_profyle_rna
save(div_trb_profyle_rna, file = paste0(datapath, "diversity/div_trb_profyle_rna.RData"))
div_trb_profyle_rna <- as.data.frame(div_trb_profyle_rna)
div_trb_profyle_rna$sample_id <- gsub(".*TRB_", "", rownames(div_trb_profyle_rna))
div_trb_profyle_rna$sample_id <- gsub("_R_.*", "", div_trb_profyle_rna$sample_id)
div_trb_profyle_rna$sample_id
###Output
_____no_output_____
###Markdown
CapTCR-Seq
###Code
profyle_cap <- list.files(h4hpath, pattern = "CLONES_TRB", recursive = T)
profyle_cap
trb_list_profyle_cap <- immunelistfx(profyle_cap, h4hpath, "TRB")
length(trb_list_profyle_cap)
trb_list_profyle_cap <- trb_list_profyle_cap[sapply(trb_list_profyle_cap, function(x) length(unlist(x))) > 1] # remove files with one clonotype
length(trb_list_profyle_cap)
trb_list_profyle_cap <- trb_list_profyle_cap[sapply(trb_list_profyle_cap, function(x) var(unlist(x))) > 0] # remove files no variance
length(trb_list_profyle_cap)
div_trb_profyle_cap <- Divstats.fx(trb_list_profyle_cap, "TRB")
div_trb_profyle_cap
save(div_trb_profyle_cap, file = paste0(datapath, "diversity/div_trb_profyle_cap.RData"))
div_trb_profyle_cap <- as.data.frame(div_trb_profyle_cap)
div_trb_profyle_cap$sample_id <- gsub(".*TRB", "", rownames(div_trb_profyle_cap))
div_trb_profyle_cap$sample_id <- gsub(".txt", "", div_trb_profyle_cap$sample_id)
head(div_trb_profyle_cap)
###Output
_____no_output_____
###Markdown
Bind RNA and Cap
###Code
colnames(div_trb_profyle_cap)[1:ncol(div_trb_profyle_cap)-1] <-
paste0(colnames(div_trb_profyle_cap)[1:ncol(div_trb_profyle_cap)-1], "_TCRCap")
colnames(div_trb_profyle_rna)[1:ncol(div_trb_profyle_rna)-1] <-
paste0(colnames(div_trb_profyle_rna)[1:ncol(div_trb_profyle_rna)-1], "_RNAseq")
colnames(div_trb_profyle_rna)
div_trb_profyle_cap_rna <- merge(div_trb_profyle_cap, div_trb_profyle_rna, by = "sample_id")
###Output
_____no_output_____
###Markdown
ICGC CapTCR-Seq
###Code
icgcpath <- "/Users/anabbi/Desktop/H4H/immpedcan/ICGC/"
icgc_cap <- list.files(icgcpath, pattern = "CLONES_TRB", recursive = T)
icgc_cap <- icgc_cap[!grepl("MiSeq", icgc_cap)]
icgc_cap <- icgc_cap[!grepl("ds_batch", icgc_cap)]
icgc_cap
trb_list_icgc_cap <- immunelistfx(icgc_cap, icgcpath, "TRB")
length(trb_list_icgc_cap)
trb_list_icgc_cap <- trb_list_icgc_cap[sapply(trb_list_icgc_cap, function(x) length(unlist(x))) > 1] # remove files with one clonotype
length(trb_list_icgc_cap)
trb_list_icgc_cap <- trb_list_icgc_cap[sapply(trb_list_icgc_cap, function(x) var(unlist(x))) > 0] # remove files no variance
length(trb_list_icgc_cap)
div_trb_icgc_cap <- Divstats.fx(trb_list_icgc_cap, "TRB")
div_trb_icgc_cap
save(div_trb_icgc_cap, file = paste0(datapath, "diversity/div_trb_icgc_cap.RData"))
div_trb_icgc_cap <- as.data.frame(div_trb_icgc_cap)
div_trb_icgc_cap$sample_id <- gsub(".*TRB", "", rownames(div_trb_icgc_cap))
div_trb_icgc_cap$sample_id <- gsub(".txt", "", div_trb_icgc_cap$sample_id)
div_trb_icgc_cap
###Output
_____no_output_____
###Markdown
Bind RNA and Cap
###Code
load(file = paste0(datapath, "diversity/metadata_IC_TRB.RData"))
colnames(div_trb_icgc_cap)[1:ncol(div_trb_icgc_cap)-1] <-
paste0(colnames(div_trb_icgc_cap)[1:ncol(div_trb_icgc_cap)-1], "_TCRCap")
icgc_trb_rna <- metadata_IC_TRB[ metadata_IC_TRB$group == "ICGC",c(25:43,1)]
colnames(icgc_trb_rna)[1:ncol(icgc_trb_rna)-1] <-
paste0(colnames(icgc_trb_rna)[1:ncol(icgc_trb_rna)-1], "_RNAseq")
div_trb_icgc_cap_rna <- merge(div_trb_icgc_cap,
icgc_trb_rna, by = "sample_id")
dim(div_trb_icgc_cap_rna)
###Output
_____no_output_____
###Markdown
NPC RNA
###Code
npc_rna <- list.files(paste0(datapath, "mixcr/NPC/clones/"), pattern = "CLONES_TRB", recursive = T)
trb_list_npc_rna <- immunelistfx(npc_rna, paste0(datapath, "mixcr/NPC/clones/"), "TRB")
trb_list_npc_rna <- trb_list_npc_rna[sapply(trb_list_npc_rna, function(x) length(unlist(x))) > 1] # remove files with one clonotype
length(trb_list_npc_rna)
trb_list_npc_rna <- trb_list_npc_rna[sapply(trb_list_npc_rna, function(x) var(unlist(x))) > 0] # remove files no variance
length(trb_list_npc_rna)
div_trb_npc_rna <- Divstats.fx(trb_list_npc_rna, "TRB")
head(div_trb_npc_rna)
save(div_trb_npc_rna, file = paste0(datapath, "diversity/div_trb_npc_rna.RData"))
div_trb_npc_rna <- as.data.frame(div_trb_npc_rna)
div_trb_npc_rna$sample_id <- gsub(".*TRB", "", rownames(div_trb_npc_rna))
div_trb_npc_rna$sample_id <- gsub(".txt", "", div_trb_npc_rna$sample_id)
div_trb_npc_rna$sample_id
###Output
_____no_output_____
###Markdown
Remove normal tissues
###Code
div_trb_npc_rna <- div_trb_npc_rna[!div_trb_npc_rna$sample_id %in% c("N1", "N4", "N5", "N6", "N7"),]
dim(div_trb_npc_rna)
###Output
_____no_output_____
###Markdown
CapTCR-Seq
###Code
npc_cap <- list.files(paste0(datapath, "mixcr/NPC/TCRcap/"), pattern = "CLONES_TRB", recursive = T)
# downsampled to two million
npc_cap <- npc_cap[ grepl("2000000", npc_cap)]
trb_list_npc_cap <- immunelistfx(npc_cap, paste0(datapath, "mixcr/NPC/TCRcap/"), "TRB")
length(trb_list_npc_cap)
trb_list_npc_cap <- trb_list_npc_cap[sapply(trb_list_npc_cap, function(x) length(unlist(x))) > 1] # remove files with one clonotype
length(trb_list_npc_cap)
trb_list_npc_cap <- trb_list_npc_cap[sapply(trb_list_npc_cap, function(x) var(unlist(x))) > 0] # remove files no variance
div_trb_npc_cap <- Divstats.fx(trb_list_npc_cap, "TRB")
save(div_trb_npc_cap, file = paste0(datapath, "diversity/div_trb_npc_cap.RData"))
div_trb_npc_cap <- as.data.frame(div_trb_npc_cap)
div_trb_npc_cap$sample_id <- gsub(".*NPC_", "", rownames(div_trb_npc_cap))
div_trb_npc_cap$sample_id <- gsub("_2000000.txt", "", div_trb_npc_cap$sample_id)
div_trb_npc_cap$sample_id
###Output
_____no_output_____
###Markdown
remove Ns
###Code
div_trb_npc_cap <- div_trb_npc_cap[ !div_trb_npc_cap$sample_id %in% c("N1", "N4", "N5", "N6"),]
###Output
_____no_output_____
###Markdown
Bind RNA and Cap
###Code
colnames(div_trb_npc_rna)
colnames(div_trb_npc_cap)[1:ncol(div_trb_npc_cap)-1] <-
paste0(colnames(div_trb_npc_cap)[1:ncol(div_trb_npc_cap)-1], "_TCRCap")
colnames(div_trb_npc_rna)[1:ncol(div_trb_npc_rna)-1] <-
paste0(colnames(div_trb_npc_rna)[1:ncol(div_trb_npc_rna)-1], "_RNAseq")
div_trb_npc_cap_rna <- merge(div_trb_npc_cap, div_trb_npc_rna, by = "sample_id")
cap_rna <- rbind(div_trb_profyle_cap_rna,div_trb_icgc_cap_rna, div_trb_npc_cap_rna)
cap_rna$group <- NA
cap_rna$group[ grepl("PRO", cap_rna$sample_id)] <- "Pediatric"
cap_rna$group[ grepl("ICGC", cap_rna$sample_id)] <- "Pediatric"
cap_rna$group[ is.na(cap_rna$group)] <- "Adult"
cap_rna_plot <- ggplot(data = cap_rna, aes(y = estimated_Shannon_RNAseq, x = observed_Shannon_TCRCap,
label = sample_id)) +
geom_point(size = 5, aes(color = group)) +
geom_smooth(method = "lm", se = FALSE) + myplot +
scale_y_continuous(trans = "log10") +
scale_x_continuous(trans = "log10") +
annotation_logticks(sides = "bl") +
#geom_text()+
theme(legend.position = "bottom", legend.title = element_blank()) +
theme(axis.title = element_text(size = 35),
axis.line = element_line(color = "black"),
axis.text.x = element_text(size = 35, color = "black"),
axis.text.y = element_text(size = 35, color = "black")) +
labs(y = "Estimated Shannon diversity\n(RNAseq)", x = "Shannon diversity\n(CapTCR-seq)")
cap_rna_plot + annotate("text", x=40, y=3, label= "R^2 = 0.66", size = 10)
pdf(file = paste0(plotpath,"RNAseq_CapTCR_TRB.pdf"),
width = 10, height = 10,
useDingbats = FALSE)
print(cap_rna_plot + annotate("text", x=40, y=3, label= "R^2 = 0.66", size = 10) )
dev.off()
summary(cap_rna$estimated_Shannon_RNAseq)
lmreg_TRB <- lm(log10(estimated_Shannon_RNAseq) ~ log10(observed_Shannon_TCRCap),
data = cap_rna)
broom::glance(lmreg_TRB)
load(file = paste0(datapath,"PROFYLE_Div_bulk_TRA.RData"))
load(file = paste0(datapath, "PROFYLE_Div_bulk_TRB.RData"))
load(file = paste0(datapath,"PROFYLE_Div_cap_TRA.RData"))
load(file = paste0(datapath,"PROFYLE_Div_cap_TRB.RData"))
PROFYLE_Div_bulk_TRB
PROFYLE_Div_cap_TRB
ordershanTRB <- PROFYLE_Div_cap_TRB$filename[order(PROFYLE_Div_cap_TRB$observed_Shannon)]
ordershanTRA <- PROFYLE_Div_cap_TRA$filename[order(PROFYLE_Div_cap_TRA$observed_Shannon)]
PROFYLE_Div_cap_TRA$filename
PROFYLE_Div_cap_TRA$observed_Shannon
PROFYLE_Div_cap_TRA$observed_Simpson
PROFYLE_Div_cap_TRA[order(PROFYLE_Div_cap_TRA$observed_Shannon),]
clonesTRB <- clonplotfx(paste0(datapath,"PROFYLE/TCRcap/"),"TRB", ordershanTRB,"PROFYLE")
clonesTRA <- clonplotfx(paste0(datapath,"PROFYLE/TCRcap/"),"TRA", ordershanTRB,"PROFYLE")
clonesTRA + scale_x_discrete(labels = ordershanTRA) + labs(title = "clonesTRA")
clonesTRB + scale_x_discrete(labels = ordershanTRB) + labs(title = "clonesTRB")
if(file.exists(paste(plotpath,"PROFYLE_clonesTRA.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"PROFYLE_clonesTRA.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(clonesTRA + labs(title = "clonesTRA"))
dev.off()
}
if(file.exists(paste(plotpath,"PROFYLE_clonesTRB.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"PROFYLE_clonesTRB.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(clonesTRB + labs(title = "clonesTRB"))
dev.off()
}
###Output
_____no_output_____
###Markdown
Analysis of diversity measures
###Code
load(file = paste0(datapath,"NPC_PROFYLE_Div_bulk_cap_TRA.RData"))
load(file = paste0(datapath,"NPC_PROFYLE_Div_bulk_cap_TRB.RData"))
head(Div_TRA)
bulkcap_TRAplot_simp <- ggplot(data = Div_TRA,
aes(x = cap_observed_Simpson,
y = bulk_estimated_Simpson)) +
geom_point(aes(color = group),
size = 5) +
#plot format
theme(plot.title = element_text(size = 22),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "transparent",
colour = NA),
panel.border=element_blank(),
plot.margin = unit(c(0.5,0,0,0),"cm")) +
#axis
theme(axis.text = element_text(size = 24),
axis.text.x = element_text( hjust = 1),
axis.title = element_text(size = 28,
margin = margin(r = 4, unit = "mm")),
axis.line = element_line(color = "black")) +
#legends
theme(legend.position = "none",
legend.key = element_rect(fill = "white",
colour = "white"),
legend.text = element_text(size = 20)) +
guides(colour = guide_legend(override.aes = list(alpha = 1,
size = 8))) +
scale_x_continuous(trans = "log10") +
scale_y_continuous(trans = "log10") +
labs(y = "Inferred Simpson diversity (RNAseq)",
x = "Simpson diversity (TCRcap)",
title = "TRA diversity")
bulkcap_TRAplot_rich <- ggplot(data = Div_TRA,
aes(x = cap_observed_Richness,
y = bulk_estimated_Richness)) +
geom_point(aes(color = group),
size = 5) +
#plot format
theme(plot.title = element_text(size = 22),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "transparent",
colour = NA),
panel.border=element_blank(),
plot.margin = unit(c(0.5,0,0,0),"cm")) +
#axis
theme(axis.text = element_text(size = 24),
axis.text.x = element_text( hjust = 1),
axis.title = element_text(size = 28,
margin = margin(r = 4, unit = "mm")),
axis.line = element_line(color = "black")) +
#legends
theme(legend.position = "none",
legend.key = element_rect(fill = "white",
colour = "white"),
legend.text = element_text(size = 20)) +
guides(colour = guide_legend(override.aes = list(alpha = 1,
size = 8))) +
scale_x_continuous(trans = "log10") +
scale_y_continuous(trans = "log10") +
labs(y = "Inferred richness (RNAseq)",
x = "Richness (TCRcap)",
title = "TRA diversity")
bulkcap_TRAplot_rich
if(file.exists(paste(plotpath,"osimp_esimp_NPC_PROFYLE_TRA.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"osimp_esimp_NPC_PROFYLE_TRA.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(bulkcap_TRAplot_simp + annotation_logticks(sides = "lb"))
dev.off()
}
if(file.exists(paste(plotpath,"oshan_eshan_NPC_PROFYLE_TRA.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"oshan_eshan_NPC_PROFYLE_TRA.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(bulkcap_TRAplot_shan + annotation_logticks(sides = "lb"))
dev.off()
}
if(file.exists(paste(plotpath,"orich_erich_NPC_PROFYLE_TRA.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"orich_erich_NPC_PROFYLE_TRA.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(bulkcap_TRAplot_rich + annotation_logticks(sides = "lb"))
dev.off()
}
bulkcap_TRBplot_rich <- ggplot(data = Div_TRB,
aes(x = (cap_observed_Richness),
y = (bulk_estimated_Richness))) +
geom_point(aes(color = group),
size = 5) +
#plot format
theme(plot.title = element_text(size = 22),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "transparent",
colour = NA),
panel.border=element_blank(),
plot.margin = unit(c(0.5,0,0,0),"cm")) +
#axis
theme(axis.text = element_text(size = 24),
axis.ticks.x = element_blank(),
axis.text.x = element_text( hjust = 1),
axis.title = element_text(size = 28,
margin = margin(r = 4, unit = "mm")),
axis.line = element_line(color = "black")) +
#legends
theme(legend.position = "none",
legend.key = element_rect(fill = "white",
colour = "white"),
legend.text = element_text(size = 20)) +
guides(colour = guide_legend(override.aes = list(alpha = 1,
size = 8))) +
scale_x_continuous(trans = "log10") +
scale_y_continuous(trans = "log10") +
labs(y = "Inferred richness (RNAseq)",
x = "Richness (TCRcap)",
title = "TRB diversity")
bulkcap_TRBplot_simp <- ggplot(data = Div_TRB,
aes(x = (cap_observed_Simpson),
y = (bulk_estimated_Simpson))) +
geom_point(aes(color = group),
size = 5) +
#plot format
theme(plot.title = element_text(size = 22),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "transparent",
colour = NA),
panel.border=element_blank(),
plot.margin = unit(c(0.5,0,0,0),"cm")) +
#axis
theme(axis.text = element_text(size = 24),
axis.text.x = element_text( hjust = 1),
axis.title = element_text(size = 28,
margin = margin(r = 4, unit = "mm")),
axis.line = element_line(color = "black")) +
#legends
theme(legend.position = "none",
legend.key = element_rect(fill = "white",
colour = "white"),
legend.text = element_text(size = 20)) +
guides(colour = guide_legend(override.aes = list(alpha = 1,
size = 8))) +
scale_x_continuous(trans = "log10") +
scale_y_continuous(trans = "log10") +
labs(y = "Inferred Simpson diversity (RNAseq)",
x = "Simpson diversity (TCRcap)",
title = "TRB diversity")
bulkcap_TRBplot_shan <- ggplot(data = Div_TRB,
aes(x = (cap_observed_Shannon),
y = (bulk_estimated_Shannon))) +
geom_point(aes(color = group),
size = 5) +
#plot format
theme(plot.title = element_text(size = 22),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "transparent",
colour = NA),
panel.border=element_blank(),
plot.margin = unit(c(0.5,0,0,0),"cm")) +
#axis
theme(axis.text = element_text(size = 24),
axis.text.x = element_text( hjust = 1),
axis.title = element_text(size = 28,
margin = margin(r = 4, unit = "mm")),
axis.line = element_line(color = "black")) +
#legends
theme(legend.position = "none",
legend.key = element_rect(fill = "white",
colour = "white"),
legend.text = element_text(size = 20)) +
guides(colour = guide_legend(override.aes = list(alpha = 1,
size = 8))) +
scale_x_continuous(trans = "log10") +
scale_y_continuous(trans = "log10") +
labs(y = "Inferred Shannon diversity (RNAseq)",
x = "Shannon diversity (TCRcap)",
title = "TRB diversity")
max(Div_TRB$cap_estimated_Simpson)
bulkcap_TRBplot_shan
if(file.exists(paste(plotpath,"osimp_esimp_NPC_PROFYLE_TRB.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"osimp_esimp_NPC_PROFYLE_TRB.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(bulkcap_TRBplot_simp + annotation_logticks(sides = "lb"))
dev.off()
}
if(file.exists(paste(plotpath,"orich_erich_NPC_PROFYLE_TRB.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"orich_erich_NPC_PROFYLE_TRB.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(bulkcap_TRBplot_rich + annotation_logticks(sides = "lb"))
dev.off()
}
if(file.exists(paste(plotpath,"oshan_eshan_NPC_PROFYLE_TRB.pdf", sep = ""))){
message("file exists!")
}else{
pdf(file = paste(plotpath,"oshan_eshan_NPC_PROFYLE_TRB.pdf", sep = ""),
width = 10,
height = 10,
useDingbats = FALSE)
print(bulkcap_TRBplot_shan + annotation_logticks(sides = "lb"))
dev.off()
}
bulkcap_TRBplot_shan + annotation_logticks(sides = "lb")
###Output
_____no_output_____
###Markdown
linear regression
###Code
lmreg_TRA_rich <- lm(bulk_estimated_Richness ~ cap_observed_Richness, data = Div_TRA)
lmreg_TRA_shan <- lm(bulk_estimated_Shannon ~ cap_observed_Shannon, data = Div_TRA)
lmreg_TRA_simp <- lm(bulk_estimated_Simpson ~ cap_observed_Simpson, data = Div_TRA)
lmreg_TRB_rich <- lm(bulk_estimated_Richness ~ cap_observed_Richness, data = Div_TRB)
lmreg_TRB_shan <- lm(bulk_estimated_Shannon ~ cap_observed_Shannon, data = Div_TRB)
lmreg_TRB_simp <- lm(bulk_estimated_Simpson ~ cap_observed_Simpson, data = Div_TRB)
glance(lmreg_TRA_rich)
glance(lmreg_TRA_shan)
glance(lmreg_TRA_simp)
glance(lmreg_TRB_rich)
glance(lmreg_TRB_shan)
glance(lmreg_TRB_simp)
###Output
_____no_output_____
|
nbs/recommender.ipynb
|
###Markdown
Setup
###Code
# mount drive
from google.colab import drive
drive.mount('/content/gdrive')
cd "/content/gdrive/My Drive/Github/SubjectIndexing"
# general
import time
import itertools
import collections
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# sklearn models
from sklearn.preprocessing import StandardScaler, MultiLabelBinarizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.metrics.pairwise import euclidean_distances, cosine_distances
# tensorflow
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping
# custom library for transformers
from src.utils.embeddings import Book2Vec
###Output
_____no_output_____
###Markdown
Import Data
###Code
# import dataset
df = pd.read_json('./data/dataset_B.json')
metadata = pd.read_json('./data/metadata.json')
X_embeddings = Book2Vec.load_embeddings('./work/embeddings_B_last4layers.pkl')
df['X_embeddings'] = list(X_embeddings)
full_dataset = df.merge(metadata, left_on='id', right_on='id')[['id', 'X', 'y', 'X_embeddings', 'subjects_new']]
dataset = full_dataset
# dataset = full_dataset[full_dataset['y'].isin(['BR', 'B', 'BJ'])]
X_embeddings.shape
# show subclasses count in data
pd.value_counts(full_dataset['y']).sort_index().plot(kind="bar", title='Count of each subclass in data', xlabel='subclass', ylabel='count')
# remove subjects that only appeared once
freq = collections.Counter(itertools.chain(*[x for x in dataset['subjects_new']]))
subjects_filtered = []
for row in dataset['subjects_new']:
row_temp = []
for c in row:
if freq[c] != 1:
row_temp.append(c)
subjects_filtered.append(row_temp)
dataset['subjects_filtered'] = subjects_filtered
dataset[['id', 'X_embeddings', 'y', 'subjects_new', 'subjects_filtered']].head()
# train test split
np.random.seed(0)
msk = np.random.rand(len(dataset)) < 0.8
dataset_train = dataset.copy()[msk]
dataset_test = dataset.copy()[~msk]
labels = sorted(list(set(dataset.y)))
class2label = {}
for i in range(len(labels)):
class2label[labels[i]] = i
label2class = {v:k for k,v in class2label.items()}
y_train_labels = [class2label[l] for l in dataset_train.y]
y_test_labels = [class2label[l] for l in dataset_test.y]
###Output
_____no_output_____
###Markdown
Nearest Neighbors Filter: Classifier
###Code
# filter possible subjects with SVM classifier
svm = make_pipeline(StandardScaler(), SVC(C=10, gamma='scale', random_state=0))
svm.fit(list(dataset_train.X_embeddings), y_train_labels)
TOP = 1
y_pred_top_k = np.argpartition(svm.decision_function(list(dataset_test.X_embeddings)), -TOP)[:,-TOP:]
# filter with top k subclass
accuracy_score_top_k = []
for i in range(len(y_test_labels)):
ct = 1 if (y_test_labels[i] in y_pred_top_k[i]) else 0
accuracy_score_top_k.append(ct)
print("top " + str(TOP) + " acc: " + str(round(np.mean(accuracy_score_top_k), 4)))
###Output
top 1 acc: 0.672
###Markdown
Filter: Distance
###Code
# find nearest neighbors
K = 3
distance_matrix = cosine_distances(list(dataset_test.X_embeddings), list(dataset_train.X_embeddings))
neighbors = np.argpartition(distance_matrix, K)[:,:K]
###Output
_____no_output_____
###Markdown
Apply Filters
###Code
# find suggested subjects
suggested_subjects = []
for i in range(len(neighbors)):
#suggest_temp = set(itertools.chain(*list(dataset_train.iloc[neighbors[i]]['subjects_filtered'])))
suggest_temp = set()
for l in y_pred_top_k[i]:
for n in range(len(neighbors[i])):
suggest = set(list(dataset_train.iloc[neighbors[i]]['subjects_filtered'])[n])
if label2class[l] in suggest:
suggest_temp.update(suggest)
suggest_temp = set(['']) if suggest_temp == set() else suggest_temp
suggested_subjects.append(suggest_temp)
dataset_test['subjects_suggest'] = suggested_subjects
###Output
_____no_output_____
###Markdown
Evaluate Model
###Code
# calculate scores
mlb = MultiLabelBinarizer()
mlb.fit(dataset.subjects_filtered)
subjects_new_matrix = mlb.transform(dataset_test.subjects_filtered)
subjects_suggest_matrix = mlb.transform(dataset_test.subjects_suggest)
subjects_new_matrix.shape
precision_recall_fscore_support(subjects_new_matrix, subjects_suggest_matrix, average='micro', zero_division=0)
precision_recall_fscore_support(subjects_new_matrix, subjects_suggest_matrix, average='macro', zero_division=0)
###Output
_____no_output_____
###Markdown
Save Result
###Code
output = dataset_test.merge(metadata, left_on='id', right_on='id')[['title', 'creator', 'subjects_filtered', 'subjects_suggest']]
output.to_csv('test_result.csv', index=False)
###Output
_____no_output_____
|
Uplift model.ipynb
|
###Markdown
This notebook provides a solution to [Retail Uplift Modelling contest](https://retailhero.ai/c/uplift_modeling/overview). The notebook uses some code from this great [uplift modelling module](https://github.com/maks-sh/scikit-uplift/).
###Code
import pandas as pd
import numpy as np
from datetime import date, datetime
from random import normalvariate
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import log_loss
from catboost import CatBoostClassifier
import lightgbm as lgbm
import matplotlib.pyplot as plt
import seaborn as sns
cat_params = {'learning_rate':0.01, 'max_depth':3, 'task_type':'GPU',
'loss_function':'Logloss', 'eval_metric':'Logloss',
'iterations':20000, 'od_type': "Iter", 'od_wait':200
}
lgbm_params = {'learning_rate':0.01,'max_depth':6,'num_leaves':20, 'min_data_in_leaf':3,
'subsample':0.8, 'colsample_bytree': 0.8, 'reg_alpha':0.01,'max_bin':416,
               'bagging_freq':3,'reg_lambda':0.01, 'n_estimators':600,
'eval_metric':'Logloss', 'application':'binary',
'iterations':20000, 'od_type': 'Iter', 'od_wait':200
}
# thresholds to calculate purchase of expensive products
sum_filter = [100, 250, 500, 1000, 2000]
# train model for transformed class using stratified K-fold cross-validation
# return prediction calculated as average of predictions of k trained models
# use 'catboost' or 'lgbm' to select model type
#
def trans_train_model(model, df_X, df_X_test, num_folds=5, random_state=0, verbose=2, show_features=False):
cat_params['random_state'] = random_state
lgbm_params['random_state'] = random_state
# new target for transformed class
df_X['new_target'] = (df_X['target'] + df_X['treatment_flg'] + 1) % 2
df_y = df_X['new_target']
treatment = df_X['treatment_flg'].to_numpy()
old_target = df_X['target'].to_numpy()
df_X = df_X.drop(['target', 'new_target', 'treatment_flg'], axis=1, errors='ignore')
X = df_X.to_numpy()
y = df_y.to_numpy()
X_test = df_X_test.to_numpy()
folds = StratifiedKFold(n_splits=num_folds, shuffle=True, random_state=random_state)
scores = []
uplift_scores = []
prediction = np.zeros(len(X_test))
feature_importance = np.zeros(len(df_X.columns))
for i, (train_index, valid_index) in enumerate(folds.split(X, y)):
X_train, X_valid = X[train_index], X[valid_index]
y_train, y_valid = y[train_index], y[valid_index]
treat_valid = treatment[valid_index]
        old_target_valid = old_target[valid_index]
if (model == 'catboost'):
f = CatBoostClassifier(**cat_params)
f.fit(X_train, y_train, eval_set=(X_valid, y_valid), use_best_model=True, verbose=False)
elif (model == 'lgbm'):
f = lgbm.LGBMClassifier(**lgbm_params)
f.fit(X_train, y_train, eval_set=(X_valid, y_valid), verbose=False)
else:
return None
y_pred_valid = f.predict_proba(X_valid)[:, 1]
score = log_loss(y_valid, y_pred_valid)
        uplift_score = uplift_at_k(old_target_valid, y_pred_valid, treat_valid)
uplift_scores.append(uplift_score)
if (verbose > 1):
print('OOF Uplift score: {0:.5f}'.format(uplift_score))
scores.append(score)
# predict on test and accumulate the result
y_pred = f.predict_proba(X_test)[:, 1]
prediction += y_pred
feature_importance += f.feature_importances_
# get average prediction & feature importance from all models
prediction /= num_folds
feature_importance /= num_folds
if (verbose > 0):
print('CV mean score: {0:.5f}, std: {1:.5f}'.format(np.mean(scores), np.std(scores)))
print('Uplift score @30%: {0:.5f}, std: {1:.5f}'.format(np.mean(uplift_scores), np.std(uplift_scores)))
if show_features:
feature_imp = pd.DataFrame(sorted(zip(feature_importance, df_X.columns)),
columns=['Value','Feature']).tail(50)
plt.figure(figsize=(20, 25))
sns.set(font_scale=1.5)
sns.barplot(x="Value", y="Feature", data=feature_imp.sort_values(by="Value", ascending=False))
plt.title('{0} features (avg over {1} folds)'.format(model, num_folds))
plt.tight_layout()
plt.savefig('{}_importances-01.png'.format(model))
plt.show()
return prediction
def uplift_at_k(y_true, uplift, treatment, k=0.3):
"""Compute uplift at first k percentage of the total sample.
Args:
y_true (1d array-like): Ground truth (correct) labels.
uplift (1d array-like): Predicted uplift, as returned by a model.
treatment (1d array-like): Treatment labels.
k (float > 0 and <= 1): Percentage of the total sample to compute uplift.
Returns:
float: Uplift at first k percentage of the total sample.
Reference:
Baseline from `RetailHero competition`_.
.. _RetailHero competition:
https://retailhero.ai/c/uplift_modeling/overview
"""
order = np.argsort(-uplift)
treatment_n = int((treatment == 1).sum() * k)
treatment_p = y_true[order][treatment[order] == 1][:treatment_n].mean()
control_n = int((treatment == 0).sum() * k)
control_p = y_true[order][treatment[order] == 0][:control_n].mean()
score_at_k = treatment_p - control_p
return score_at_k
# calculate number of purchases per every hour across all clients
#
def calc_purchase_hours(df_clients, df_purch):
for i in range(24):
df_dayfiltered = df_purch[df_purch['purch_hour'] == i][['client_id', 'transaction_id', 'purch_hour']]
ds_purch_hour = df_dayfiltered.groupby(['client_id', 'transaction_id']).last()
ds_counters = ds_purch_hour.groupby('client_id')['purch_hour'].count()
ds_counters.name = 'purch_hour_{}'.format(i)
df_clients = pd.merge(df_clients, ds_counters, how='left', on='client_id')
return df_clients
# Calculate number and total sum of purchases per day of week
# Later we also calculate ratios of these variables to the total number of purchases and
# total sum of purchases for every client
#
def calc_purchase_days(df_clients, df_purch):
for i in range(7):
df_dayfiltered = df_purch[df_purch['purch_weekday'] == i][['client_id', 'transaction_id',
'purch_weekday', 'trn_sum_from_iss']]
ds_purch_dow = df_dayfiltered.groupby(['client_id', 'transaction_id']).last()
ds_counters = ds_purch_dow.groupby('client_id')['purch_weekday'].count()
# DOW = day of week
ds_counters.name = 'purch_dow_{}'.format(i)
df_clients = pd.merge(df_clients, ds_counters, how='left', on='client_id')
        ds_sums = df_dayfiltered.groupby('client_id')['trn_sum_from_iss'].sum()
        ds_sums.name = 'purch_sum_dow_{}'.format(i)
        df_clients = pd.merge(df_clients, ds_sums, how='left', on='client_id')
return df_clients
# Try to fix age variable in the data. Also, set age_antinorm variable depending on
# type of error in the age data.
# Set real_fix flag to False to calculate age_antinorm, but not fix the age
#
# Use the following heuristics:
# 19XX means year of birth -> easy to convert to age
# 18XX - the same, but it should be 9 instead of 8
# 9XX - the same, first '1' is missed
# -9XX - the same as 19XX, '1' was OCRed as '-'
# etc
#
def fix_age(df_clients, real_fix=True):
# create a copy of age column. Modify the copy for now
df_clients['age2'] = df_clients['age']
age_index = (df_clients['age'] < -900) & (df_clients['age'] > -1000)
df_clients.loc[age_index, 'age2'] = -1 * df_clients.loc[age_index, 'age'] + 1019
df_clients.loc[age_index, 'age_antinorm'] = 1
age_index = (df_clients['age'] > 900) & (df_clients['age'] < 1000)
df_clients.loc[age_index, 'age2'] = 1019 - df_clients.loc[age_index, 'age']
df_clients.loc[age_index, 'age_antinorm'] = 2
age_index = (df_clients['age'] > 1900) & (df_clients['age'] < 2000)
df_clients.loc[age_index, 'age2'] = 2019 - df_clients.loc[age_index, 'age']
df_clients.loc[age_index, 'age_antinorm'] = 3
age_index = (df_clients['age'] > 120) & (df_clients['age'] < 200)
df_clients.loc[age_index, 'age2'] = df_clients.loc[age_index, 'age'] - 100
df_clients.loc[age_index, 'age_antinorm'] = 4
age_index = (df_clients['age'] > 1800) & (df_clients['age'] < 1900)
df_clients.loc[age_index, 'age2'] = df_clients.loc[age_index, 'age'] - 1800
df_clients.loc[age_index, 'age_antinorm'] = 5
# the following types of errors are impossible to recover
# so we set the age to mean of all clients (46), slightly randomizing it (std=16)
age_index = (df_clients['age'] > 120)
df_clients.loc[age_index, 'age2'] = normalvariate(46, 16)
df_clients.loc[age_index, 'age_antinorm'] = 6
age_index = (df_clients['age'] > 0) & (df_clients['age'] < 12)
df_clients.loc[age_index, 'age2'] = normalvariate(46, 16)
df_clients.loc[age_index, 'age_antinorm'] = 7
age_index = (df_clients['age'] == 0)
df_clients.loc[age_index, 'age2'] = normalvariate(46, 16)
df_clients.loc[age_index, 'age_antinorm'] = 8
age_index = (df_clients['age'] < 0)
df_clients.loc[age_index, 'age2'] = normalvariate(46, 16)
df_clients.loc[age_index, 'age_antinorm'] = 9
# use the modified copy
if (real_fix):
df_clients['age'] = df_clients['age2']
df_clients.drop('age2', axis=1, inplace=True)
return df_clients
# Calculate number and amount of purchases before and after the 1st redeem date across all clients
#
def calc_purchases_around_dates(df_clients, df):
df['redeem_ord'] = df['first_redeem_date'].apply(lambda x: date.toordinal(x))
df['purch_ord'] = df['transaction_datetime'].apply(lambda x: date.toordinal(x))
df['ord_diff'] = df['redeem_ord'] - df['purch_ord']
df_before = df[df['ord_diff'] > 0][['client_id', 'transaction_id', 'purchase_sum']]
df_after = df[df['ord_diff'] <= 0][['client_id', 'transaction_id', 'purchase_sum']]
df_before_all = df_before.groupby(['client_id', 'transaction_id']).last()
df_after_all = df_after.groupby(['client_id', 'transaction_id']).last()
ds_before_sum = df_before_all.groupby('client_id')['purchase_sum'].sum()
ds_after_sum = df_after_all.groupby('client_id')['purchase_sum'].sum()
ds_before_counters = df_before_all.groupby('client_id')['purchase_sum'].count()
ds_after_counters = df_after_all.groupby('client_id')['purchase_sum'].count()
ds_before_sum.name = 'before_redeem_sum'
ds_after_sum.name = 'after_redeem_sum'
ds_before_counters.name = 'before_redeem_counter'
ds_after_counters.name = 'after_redeem_counter'
df_clients = pd.merge(df_clients, ds_before_sum, how='left', on='client_id')
df_clients = pd.merge(df_clients, ds_after_sum, how='left', on='client_id')
df_clients = pd.merge(df_clients, ds_before_counters, how='left', on='client_id')
df_clients = pd.merge(df_clients, ds_after_counters, how='left', on='client_id')
return df_clients
# Calculate total sum of alcohol products across all clients
# Calculate total sum of "is_own_trademark" products across all clients
# Calculate number of purchased goods that are more expensive than a set of thresholds
# Calculate delta = num of days between the last and the 1st purchase (+1 to avoid division by 0)
#
def calc_special_purchases(df_clients, df_purch_det):
df_filtered = df_purch_det[df_purch_det['is_alcohol'] == 1][['client_id', 'trn_sum_from_iss']]
ds_alco = df_filtered.groupby('client_id')['trn_sum_from_iss'].sum()
ds_alco.name = 'sum_alco'
df_clients = pd.merge(df_clients, ds_alco, how='left', on='client_id')
df_filtered = df_purch_det[df_purch_det['is_own_trademark'] == 1][['client_id', 'trn_sum_from_iss']]
ds_marked = df_filtered.groupby('client_id')['trn_sum_from_iss'].sum()
ds_marked.name = 'sum_mark'
df_clients = pd.merge(df_clients, ds_marked, how='left', on='client_id')
for threshold in sum_filter:
df_filtered = df_purch_det[df_purch_det['trn_sum_from_iss'] > threshold][['client_id', 'trn_sum_from_iss']]
ds_threshold = df_filtered.groupby('client_id')['trn_sum_from_iss'].count()
ds_threshold.name = 'over_{}'.format(threshold)
df_clients = pd.merge(df_clients, ds_threshold, how='left', on='client_id')
df_purch_det['delta'] = df_purch_det.groupby('client_id')['purch_day'].transform(lambda x: x.max()-x.min()+1)
    df_delta = df_purch_det.groupby('client_id').last()['delta']
df_clients = pd.merge(df_clients, df_delta, how='left', on='client_id')
return df_clients
# Calculate DOW (day of week), hour of 1st issue and 1st redeem timestamps
# Calculate ratio of alcohol and trademarked products to total products (money count)
# Calculate ratio of expensive to total products (transaction count)
# Calculate ratios for DOW, hour, before 1st redeem and after 1st redeem
#
def prepare_dataset(df_set, df_clients, features):
# df_set is a test or train dataframe
df = pd.concat([df_set, df_clients, features], axis=1, sort=True)
# remove extra rows that were created during concat
# those are the rows that were missing from df_set, but were present in df_clients
df = df[~df['target'].isnull()].copy()
# df['first_issue_date_weekday'] = df['first_issue_date'].dt.weekday
# df['first_redeem_date_weekday'] = df['first_redeem_date'].dt.weekday
# df['first_issue_date_hour'] = df['first_issue_date'].dt.hour
# df['first_redeem_date_hour'] = df['first_redeem_date'].dt.hour
# we need more redeem date features
# df['redeem_date_mo'] = df['first_redeem_date'].dt.month
# df['redeem_date_week'] = df['first_redeem_date'].dt.week
# df['redeem_date_doy'] = df['first_redeem_date'].dt.dayofyear
# df['redeem_date_q'] = df['first_redeem_date'].dt.quarter
# df['redeem_date_ms'] = df['first_redeem_date'].dt.is_month_start
# df['redeem_date_me'] = df['first_redeem_date'].dt.is_month_end
# convert dates to numbers
df['first_issue_date'] = df['first_issue_date'].apply(lambda x: date.toordinal(x))
df['first_redeem_date'] = df['first_redeem_date'].apply(lambda x: date.toordinal(x))
df['diff'] = df['first_redeem_date'] - df['first_issue_date']
# convert gender to one-hot encoding
# it is recommended to drop the first column, but it's not that important for boosting,
# and I decided to leave all of them to see them in the feature importance list
df = pd.get_dummies(df, prefix='gender', columns=['gender'])
df['alco_ratio'] = df['sum_alco'] / df['purchase_sum_sum_all']
df['mark_ratio'] = df['sum_mark'] / df['purchase_sum_sum_all']
# all transactions happened before 2019-03-19
cutoff_dt = date.toordinal(date(2019, 3, 19))
df['issue_diff'] = cutoff_dt - df['first_issue_date']
df['redeem_diff'] = cutoff_dt - df['first_redeem_date']
for threshold in sum_filter:
df['over_{}_ratio'.format(threshold)] = df['over_{}'.format(threshold)] / df['total_trans_count']
for i in range(7):
df['purch_dow_ratio_{}'.format(i)] = df['purch_dow_{}'.format(i)] / df['total_trans_count']
df['purch_sum_dow_ratio_{}'.format(i)] = df['purch_sum_dow_{}'.format(i)] / df['purchase_sum_sum_all']
for i in range(24):
df['purch_hour_ratio_{}'.format(i)] = df['purch_hour_{}'.format(i)] / df['total_trans_count']
df['before_redeem_sum_ratio'] = df['before_redeem_sum'] / df['purchase_sum_sum_all']
df['after_redeem_sum_ratio'] = df['after_redeem_sum'] / df['purchase_sum_sum_all']
df['before_redeem_counter_ratio'] = df['before_redeem_counter'] / df['total_trans_count']
df['after_redeem_counter_ratio'] = df['after_redeem_counter'] / df['total_trans_count']
df['avg_spent_perday'] = df['purchase_sum_sum_all'] / df['delta']
df['sum_alco_perday'] = df['sum_alco'] / df['delta']
df['after_redeem_sum_perday'] = df['after_redeem_sum'] / df['delta']
df['epoints_spent_perday'] = df['express_points_spent_sum_all'] / df['delta']
df['rpoints_spent_perday'] = df['regular_points_spent_sum_all'] / df['delta']
df['epoints_recd_perday'] = df['express_points_received_sum_all'] / df['delta']
df['rpoints_recd_perday'] = df['regular_points_received_sum_all'] / df['delta']
df['rpoints_acced_last_month'] = df['regular_points_received_sum_last_month'] - df['regular_points_spent_sum_last_month']
df['epoints_acced_last_month'] = df['express_points_received_sum_last_month'] - df['express_points_spent_sum_last_month']
return df
# Load data, pre-process timestamps
#
df_clients = pd.read_csv('data/clients.csv', index_col='client_id',
parse_dates=['first_issue_date','first_redeem_date'])
#df_clients['age_antinorm'] = 0
#df_clients = fix_age(df_clients)
# fill empty 1st redeem date with future date
df_clients['first_redeem_date'] = df_clients['first_redeem_date'].fillna(datetime(2019, 3, 19, 0, 0))
df_train = pd.read_csv('data/uplift_train.csv', index_col='client_id')
df_test = pd.read_csv('data/uplift_test.csv', index_col='client_id')
df_products = pd.read_csv('data/products.csv', index_col='product_id')
df_purchases = pd.read_csv('data/purchases.csv',parse_dates=['transaction_datetime'])
df_purchases['date'] = df_purchases['transaction_datetime'].dt.date
df_purchases['purch_weekday'] = df_purchases['transaction_datetime'].dt.weekday
df_purchases['purch_hour'] = df_purchases['transaction_datetime'].dt.hour
df_purchases['purch_day'] = df_purchases['transaction_datetime'].apply(lambda x: date.toordinal(x))
# merge products to purchases, and then to clients
df_purchase_detailed = pd.merge(df_purchases, df_products, how='left', on='product_id')
df_purchase_detailed = pd.merge(df_purchase_detailed, df_clients, how='left', on='client_id')
# Calculate features
#
df_clients = calc_purchases_around_dates(df_clients, df_purchase_detailed)
df_clients = calc_purchase_days(df_clients, df_purchases)
df_clients = calc_purchase_hours(df_clients, df_purchases)
df_clients = calc_special_purchases(df_clients, df_purchase_detailed)
df_clients = df_clients.fillna(value=0)
last_cols = ['regular_points_received', 'express_points_received','regular_points_spent',
'express_points_spent', 'purchase_sum','store_id']
all_hist = df_purchases.groupby(['client_id','transaction_id'])[last_cols].last()
last_month = df_purchases[df_purchases['transaction_datetime'] > '2019-02-18'].groupby(['client_id', 'transaction_id'])[last_cols].last()
features = pd.concat([all_hist.groupby('client_id')['purchase_sum'].count(),
last_month.groupby('client_id')['purchase_sum'].count(),
all_hist.groupby('client_id')['purchase_sum'].mean(),
all_hist.groupby('client_id')['purchase_sum'].std(),
all_hist.groupby('client_id')['express_points_spent'].mean(),
all_hist.groupby('client_id')['express_points_spent'].std(),
all_hist.groupby('client_id')['express_points_received'].mean(),
all_hist.groupby('client_id')['express_points_received'].std(),
all_hist.groupby('client_id')['regular_points_spent'].mean(),
all_hist.groupby('client_id')['regular_points_spent'].std(),
all_hist.groupby('client_id')['regular_points_received'].mean(),
all_hist.groupby('client_id')['regular_points_received'].std(),
all_hist.groupby('client_id').sum(),
all_hist.groupby('client_id')[['store_id']].nunique(),
last_month.groupby('client_id').sum(),
last_month.groupby('client_id')[['store_id']].nunique(),
],axis = 1)
features.columns = ['total_trans_count','last_month_trans_count',
'mean_purchase', 'std_purchase', 'mean_epoints_spent', 'std_epoints_spent',
'mean_epoints_recd', 'std_epoints_recd', 'mean_rpoints_spent', 'std_rpoints_spent',
'mean_rpoints_recd', 'std_rpoints_recd'] + \
list(c+"_sum_all" for c in last_cols) + list(c+"_sum_last_month" for c in last_cols)
train_X = prepare_dataset(df_train, df_clients, features)
# need to create target for merge during processing
df_test['target'] = 1
test_X = prepare_dataset(df_test, df_clients, features)
# remove target from test, it's not needed anymore
test_X.drop('target', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Pipeline for the transformed class. For details see [this](https://habr.com/ru/company/ru_mts/blog/485976/).
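In short, the transformed target is $Z = Y \cdot W + (1 - Y)(1 - W)$, i.e. $Z = Y$ for treated clients and $Z = 1 - Y$ for control clients, which is exactly what `(target + treatment_flg + 1) % 2` computes inside `trans_train_model`. Under the (roughly) balanced treatment assignment this approach assumes, the uplift can be estimated as $2\,P(Z=1 \mid X) - 1$, so ranking clients by the predicted $P(Z=1)$ is equivalent to ranking them by uplift.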
###Code
pred = trans_train_model('catboost', train_X, test_X, verbose=2, show_features=True)
df_submission = pd.DataFrame({'client_id':test_X.index.values,'uplift': pred})
df_submission.to_csv('sub_3.csv', index=False)
###Output
_____no_output_____
|
Object tracking and Localization/representing_state_&_Motion/Interacting with a Car Object.ipynb
|
###Markdown
Interacting with a Car Object In this notebook, you've been given some of the starting code for creating and interacting with a car object.Your tasks are to:1. Become familiar with this code. - Know how to create a car object, and how to move and turn that car.2. Constantly visualize. - To make sure your code is working as expected, frequently call `display_world()` to see the result!3. **Make the car move in a 4x4 square path.** - If you understand the move and turn functions, you should be able to tell a car to move in a square path. This task is a **TODO** at the end of this notebook.Feel free to change the values of initial variables and add functions as you see fit!And remember, to run a cell in the notebook, press `Shift+Enter`.
###Code
import numpy as np
import car
%matplotlib inline
###Output
_____no_output_____
###Markdown
Define the initial variables
###Code
# Create a 2D world of 0's
height = 4
width = 6
world = np.zeros((height, width))
# Define the initial car state
initial_position = [0, 0] # [y, x] (top-left corner)
velocity = [0, 1] # [vy, vx] (moving to the right)
###Output
_____no_output_____
###Markdown
Create a car object
###Code
# Create a car object with these initial params
carla = car.Car(initial_position, velocity, world)
print('Carla\'s initial state is: ' + str(carla.state))
###Output
Carla's initial state is: [[0, 0], [0, 1]]
###Markdown
Move and track state
###Code
# Move in the direction of the initial velocity
carla.move()
# Track the change in state
print('Carla\'s state is: ' + str(carla.state))
# Display the world
carla.display_world()
###Output
Carla's state is: [[0, 1], [0, 1]]
###Markdown
TODO: Move in a square pathUsing the `move()` and `turn_left()` functions, make carla traverse a 4x4 square path.The output should look like:
###Code
## TODO: Make carla traverse a 4x4 square path
# Assumes move() advances one cell along the velocity and turn_left() rotates it 90 degrees
for side in range(4):
    for _ in range(3):
        carla.move()
    carla.turn_left()
## Display the result
carla.display_world()
###Output
_____no_output_____
|
Files.ipynb
|
###Markdown
###Code
from sklearn.datasets import load_iris
Data = load_iris()
Data.data
Data.target
print(Data.DESCR)
Data.feature_names
###Output
_____no_output_____
###Markdown
FilesA file is a named location on disk to store related information. Python uses file objects to interact with external files on the computer. These files could be of any format, such as text, binary, Excel, audio, or video files. Please note that external libraries will be required to use some of these file formats. Typically, a file operation in Python takes place in the following order: 1. Open a file 2. Perform a read or write operation 3. Close the file Creating a text file in Jupyter
###Code
%%writefile test.txt
This is a test file
###Output
Writing test.txt
###Markdown
Opening a filePython has a built-in function called open() to open a file. This function returns a file handle that can be used to read or write the file accordingly. Python allows a file to be opened in multiple modes. The following are some of the modes that can be used to open a file: * 'r' - open a file for reading * 'w' - open a file for writing, create the file if it does not exist * 'x' - open the file for exclusive creation * 'a' - open the file in append mode, contents will be appended to the end of the file, created if the file does not exist * 't' - open a file in text format * 'b' - open a file in binary format * '+' - open a file for reading & writing (for updating)
###Code
# open the file in read mode
# f = open("C:/Python33/test.txt) # opens file in the specified full path
f = open('test.txt','r') # opens file in the current directory in read mode
print(f.writable()) # Is the file writable - False
print(f.readable()) # Is the file readable - True
print(f.seekable()) # Is random access allowed - Must be True
print(f.fileno())
# Different ways of opening the files
# f = open('test.txt') # equal to 'r' or 'rt' (read text)
# f = open('test.txt','w') # write in text mode
# f = open('image.bmp','r+b') # read and write in binary mode
# using encoding type
# f = open("test.txt", mode="r", encoding="utf-8")
###Output
_____no_output_____
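###Markdown
As a small example added here (not in the original notebook), the 'x' mode listed above opens a file for exclusive creation: it raises FileExistsError when the file already exists, which makes it handy for avoiding accidental overwrites.
###Code
# 'x' mode: exclusive creation - fails because test.txt was created earlier
try:
    open('test.txt', 'x')
except FileExistsError:
    print('test.txt already exists, not overwriting')
###Output
_____no_output_____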
###Markdown
Closing FileFiles need to be properly closed so that the resources can be released; otherwise they will be tied up unnecessarily.
###Code
# close the file using close function
f.close()
###Output
False
###Markdown
Opening & Closing Files in a Safe Way
###Code
# Using Exception Method
try:
f = open("test.txt",mode="r",encoding="utf-8")
# perform file operations
f.seek(0)
print(f.read(4))
finally:
f.close()
# Using 'with' Method
# Explicit close is not required. File is closed implicitly when the block inside 'with' is exited.
with open('test.txt',encoding='utf-8') as f:
# perform file operations
print(f.read())
###Output
This is a test file
###Markdown
Reading a file
###Code
with open('test.txt') as f:
f.seek(0) # move the file pointer to start of the file
print(f.tell())
print(f.read(4)) # read the first 4 words
print(f.tell()) # print the current file handler position
print(f.read()) # read till end of file
print(f.tell())
print(f.read()) # this will print nothing as it reached end of file
print(f.tell())
###Output
0
This
4
is a test file
19
19
###Markdown
Reading File Line by LineLet's create a simple file using the Jupyter magic %%writefile
###Code
%%writefile test1.txt
Line 1
Line 2
Line 3
with open('test1.txt') as f:
print(f.readline(), end = '') # Prints the 1st Line
print(f.readline()) # Prints the 2nd Line
print(f.readline()) # Prints the 3rd Line
print(f.readline()) # prints blank lines
print(f.readline())
with open('test1.txt') as f:
print(f.readlines())
# using for loop
for line in open('test1.txt'):
print(line, end='') # end parameter to avoid printing blank lines
with open('newfile.txt','w+') as f:
f.writelines('New File - Line 1')
f.writelines('New File - Line 2')
f.seek(0)
print(f.readlines())
try:
f = open('newfile.txt','a')
f.write('this is appended line')
f.close()
f = open('newfile.txt','r')
print(f.read())
finally:
f.close()
###Output
New File - Line 1New File - Line 2this is appended linethis is appended linethis is appended linethis is appended linethis is appended line
|
Db2_11.1_Statistical_Functions.ipynb
|
###Markdown
Db2 Statistical FunctionsUpdated: 2019-10-03 Db2 already has a variety of Statistical functions built in. In Db2 11.1, a number of new functions have been added including: - [*COVARIANCE_SAMP*](covariance) - The COVARIANCE_SAMP function returns the sample covariance of a set of number pairs - [*STDDEV_SAMP*](stddev) - The STDDEV_SAMP column function returns the sample standard deviation (division by [n-1]) of a set of numbers. - [*VARIANCE_SAMP*](variance) or VAR_SAMP - The VARIANCE_SAMP column function returns the sample variance (division by [n-1]) of a set of numbers. - [*CUME_DIST*](cume) - The CUME_DIST column function returns the cumulative distribution of a row that is hypothetically inserted into a group of rows - [*PERCENT_RANK*](rank) - The PERCENT_RANK column function returns the relative percentile rank of a row that is hypothetically inserted into a group of rows. - [*PERCENTILE_DISC*](disc), [*PERCENTILE_CONT*](cont) - Returns the value that corresponds to the specified percentile given a sort specification by using discrete (DISC) or continuous (CONT) distribution - [*MEDIAN*](median) - The MEDIAN column function returns the median value in a set of values - [*WIDTH_BUCKET*](width) - The WIDTH_BUCKET function is used to create equal-width histograms Connect to the SAMPLE database
###Code
%run db2.ipynb
%run connection.ipynb
###Output
_____no_output_____
###Markdown
Sampling FunctionsThe traditional VARIANCE, COVARIANCE, and STDDEV functions have been available in Db2 for a long time. When computing these values, the formulae assume that the entire population has been counted (N). The traditional formula for standard deviation is: $$\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^N(x_{i}-\mu)^{2}}$$ N refers to the size of the population and in many cases, we only have a sample, not the entire population of values. In this case, the formula needs to be adjusted to account for the sampling. $$s=\sqrt{\frac{1}{N-1}\sum_{i=1}^N(x_{i}-\bar{x})^{2}}$$ COVARIANCE_SAMPThe COVARIANCE_SAMP function returns the sample covariance of a set of number pairs.
###Code
%%sql
SELECT COVARIANCE_SAMP(SALARY, BONUS)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
###Output
_____no_output_____
###Markdown
STDDEV_SAMPThe STDDEV_SAMP column function returns the sample standard deviation (division by [n-1]) of a set of numbers.
###Code
%%sql
SELECT STDDEV_SAMP(SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
###Output
_____no_output_____
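###Markdown
As a quick side note added for clarity (not part of the original notebook), the same population-vs-sample distinction can be reproduced in plain Python with numpy's ddof parameter, which switches the denominator between N and N-1 exactly as in the formulas above. The salary values below are made up purely for illustration.
###Code
import numpy as np
salaries = np.array([52750.0, 46500.0, 39250.0, 94250.0])  # illustrative values only
print(np.std(salaries, ddof=0))  # population formula (divides by N), matching STDDEV
print(np.std(salaries, ddof=1))  # sample formula (divides by N-1), matching STDDEV_SAMP
###Output
_____no_output_____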
###Markdown
VARIANCE_SAMPThe VARIANCE_SAMP column function returns the sample variance (division by [n-1]) of a set of numbers.
###Code
%%sql
SELECT VARIANCE_SAMP(SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
###Output
_____no_output_____
###Markdown
MEDIANThe MEDIAN column function returns the median value in a set of values.
###Code
%%sql
SELECT MEDIAN(SALARY) AS MEDIAN, AVG(SALARY) AS AVERAGE
FROM EMPLOYEE
WHERE WORKDEPT = 'E21'
###Output
_____no_output_____
###Markdown
CUME_DISTThe CUME_DIST column function returns the cumulative distribution of a row that is hypothetically inserted into a group of rows.
###Code
%%sql
SELECT CUME_DIST(47000) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
###Output
_____no_output_____
###Markdown
PERCENT_RANKThe PERCENT_RANK column function returns the relative percentile rank of a row that is hypothetically inserted into a group of rows.
###Code
%%sql
SELECT PERCENT_RANK(47000) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
###Output
_____no_output_____
###Markdown
PERCENTILE_DISCThe PERCENTILE_DISC/CONT returns the value that corresponds to the specified percentile given a sort specification by using discrete (DISC) or continuous (CONT) distribution.
###Code
%%sql
SELECT PERCENTILE_DISC(0.75) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'E21'
###Output
_____no_output_____
###Markdown
PERCENTILE_CONTThis is a function that gives you a continuous percentile calculation.
###Code
%%sql
SELECT PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'E21'
###Output
_____no_output_____
###Markdown
WIDTH BUCKET and Histogram ExampleThe WIDTH_BUCKET function is used to create equal-width histograms. Using the EMPLOYEE table, this SQL will assign a bucket to each employee's salary using a range of 35000 to 100000 divided into 13 buckets.
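A quick worked example of the bucket arithmetic (added for clarity): each bucket in this call is (100000 - 35000) / 13 = 5000 wide, so a salary of 52750 falls into bucket FLOOR((52750 - 35000) / 5000) + 1 = 4, while salaries below 35000 map to bucket 0 and salaries of 100000 or more map to the overflow bucket 14.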
###Code
%%sql
SELECT EMPNO, SALARY, WIDTH_BUCKET(SALARY, 35000, 100000, 13)
FROM EMPLOYEE
ORDER BY EMPNO
###Output
_____no_output_____
###Markdown
We can plot this information by adding some more details to the bucket output.
###Code
%%sql -a
WITH BUCKETS(EMPNO, SALARY, BNO) AS
(
SELECT EMPNO, SALARY,
WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET
FROM EMPLOYEE ORDER BY EMPNO
)
SELECT BNO, COUNT(*) AS COUNT FROM BUCKETS
GROUP BY BNO
ORDER BY BNO ASC
###Output
_____no_output_____
###Markdown
And here is a plot of the data to make sense of the histogram.
###Code
%%sql -pb
WITH BUCKETS(EMPNO, SALARY, BNO) AS
(
SELECT EMPNO, SALARY,
WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET
FROM EMPLOYEE ORDER BY EMPNO
)
SELECT BNO, COUNT(*) AS COUNT FROM BUCKETS
GROUP BY BNO
ORDER BY BNO ASC
###Output
_____no_output_____
###Markdown
Close the connection to avoid running out of connection handles to Db2 on Cloud.
###Code
%sql CONNECT RESET
###Output
_____no_output_____
|
Tree/notebook.ipynb
|
###Markdown
Decision Tree and Random Forest From ScratchThis notebook is in an active state of development! Code on GitHub! Understanding how a decision tree worksA decision tree consists of creating different rules by which we make the prediction. As you can see, decision trees usually have sub-trees that serve to fine-tune the prediction of the previous node. This continues until we reach a node that does not split; this last node is known as a leaf (or terminal) node. Besides, decision trees can work both for regression problems and for classification problems. In fact, we will code a decision tree from scratch that can do both. Now you know the basics of this algorithm, but surely you have doubts. How does the algorithm decide which variable to use as the first cutoff? How does it choose the values? Let's see it little by little by programming our own decision tree from scratch in Python. How to find a good split?To find a good split you need to know about standard deviation. Basically, it is a measure of the amount of variation or dispersion of a set of values, which is exactly what we need. Since we want to gain more information after every split, we want splits whose data have lower variation or dispersion. For example: if our left split's standard deviation is very high, it means that there are many different values, and we cannot be sure what our prediction will be. And when we have a low standard deviation, that means that all samples are quite similar and we can, for example, take the mean of their outputs! But how can we compute the standard deviation for two parts of data? Well, let's take a sum. $$\Large Q = H(R_l) + H(R_r) $$ Where $R_l$ is the left part, $R_r$ is the right part and $H(x) = \text{standard deviation of } x$. Now let's see how this formula can help us find a better split, and you will probably get a sense of what's going on under the hood! Let's generate some random data:
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_boston, load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
sns.set_theme()
np.random.seed(0)
a = np.random.uniform(-40, 40, 10)
b = np.random.uniform(50, 70, 80)
x_values = np.concatenate((a, b), axis=0)
a = np.random.uniform(-40, 40, 10)
b = np.random.uniform(50, 70, 80)
y_values = np.concatenate((a, b), axis=0)
fig, ax = plt.subplots(figsize=(14, 7))
sns.scatterplot(x=x_values, y=y_values, ax=ax);
###Output
_____no_output_____
###Markdown
Then let's apply our algorithm (we will loop through each value that we could split on and search for the split with the lowest error)
###Code
def split_error(X, treshold):
left, right = np.where(X <= treshold)[0], np.where(X > treshold)[0]
if len(left) == 0 or len(right) == 0:
return 10000
error = np.std(X[left]) + np.std(X[right])
return error
def best_criteria(X):
best_treshold = None
best_error = None
unique_values = np.unique(X)
for treshold in unique_values:
error = split_error(X, treshold)
if best_error == None or error < best_error:
best_treshold = treshold
best_error = error
return best_treshold, best_error
best_treshold, best_error = best_criteria(x_values)
fig, ax = plt.subplots(figsize=(14, 7))
sns.scatterplot(x=x_values, y=y_values, ax=ax);
plt.axvline(best_treshold, 0, 1);
###Output
_____no_output_____
###Markdown
You can see that it makes a very bad split, even though we did not make any mistake. But we can fix it! The problem here is that the algorithm splits the data without respect to the amount of data in each split! For example, in the previous left split we had only 3 samples, while in the right split we have almost 100! And our goal is to find the split with the lowest standard deviation and the maximum amount of data in it! So, let's penalize errors computed on a small number of samples! $$\Large Q = \frac{|R_l|}{|R_m|}H(R_l) + \frac{|R_r|}{|R_m|} H(R_r) $$ Here we just weight the standard deviation from the formula above by the fraction of the parent node's data that falls into each split! Simple! So, why did we do this? Imagine a split where 990 objects go to the left and 10 go to the right. The standard deviation of the left part is close to zero, while the standard deviation of the right part is huge, but we don't mind having a huge deviation on the right part, as there are only 10 samples out of 1000, and the left part, where the deviation is small, contains 990! Now look how our new algorithm solves this problem with the same data!
###Code
def split_error(X, treshold):
left, right = np.where(X <= treshold)[0], np.where(X > treshold)[0]
if len(left) == 0 or len(right) == 0:
return 10000
error = len(left) / len(X) * np.std(X[left]) + len(right) / len(X) * np.std(X[right])
return error
def best_criteria(X):
best_treshold = None
best_error = None
unique_values = np.unique(X)
for treshold in unique_values:
error = split_error(X, treshold)
if best_error == None or error < best_error:
best_treshold = treshold
best_error = error
return best_treshold, best_error
best_treshold, best_error = best_criteria(x_values)
fig, ax = plt.subplots(figsize=(14, 7))
sns.scatterplot(x=x_values, y=y_values, ax=ax);
plt.axvline(best_treshold, 0, 1);
###Output
_____no_output_____
###Markdown
Implementing base classesSince **DecisionTreeRegressor** and **DecisionTreeClassifier** share a lot of the same functions, we will create a base class and inherit from it instead of coding two huge separate classes.
###Code
class Node:
"""
Class that will be used for building tree.
Linked list basically.
"""
def __init__(self, left=None, right=None, value=None, feature_idx=None, treshold=None):
self.left = left
self.right = right
self.value = value
self.feature_idx = feature_idx
self.treshold = treshold
class Tree:
def __init__(self, max_depth=5, min_samples_split=2, max_features=1):
self.max_depth = max_depth
self.min_samples_split = min_samples_split
self.max_features = max_features
self.tree = None
def fit(self, X, y):
self.tree = self.grow_tree(X, y)
def predict(self, X):
return [self.traverse_tree(x, self.tree) for x in X]
def traverse_tree(self, x, node):
if node.value != None:
return node.value
if x[node.feature_idx] <= node.treshold:
return self.traverse_tree(x, node.left)
return self.traverse_tree(x, node.right)
def split_error(self, X, feauture_idx, treshold):
"""
        Calculate standard deviation after splitting into 2 groups
"""
left_idxs, right_idxs = self.split_node(X, feauture_idx, treshold)
if len(X) == 0 or len(left_idxs) == 0 or len(right_idxs) == 0:
return 10000
return len(left_idxs) / len(X) * self.standart_deviation(X[left_idxs], feauture_idx) + len(right_idxs) / len(X) * self.standart_deviation(X[right_idxs], feauture_idx)
def standart_deviation(self, X, feauture_idx):
"""
        Calculate standard deviation
"""
return np.std(X[:, feauture_idx])
def split_node(self, X, feauture_idx, treshold):
"""
Split into 2 parts
Splitting a dataset means separating a dataset
into two lists of rows. Once we have the two
        groups, we can then use our standard deviation
score above to evaluate the cost of the split.
"""
left_idxs = np.argwhere(X[:, feauture_idx] <= treshold).flatten()
right_idxs = np.argwhere(X[:, feauture_idx] > treshold).flatten()
return left_idxs, right_idxs
def best_criteria(self, X, feature_idxs):
"""
Find best split
        Loop through each feature; for each feature loop
        through each unique value, try each value as a
        threshold, then choose the one with the smallest error
"""
best_feauture_idx = None
best_treshold = None
best_error = None
for feature_idx in feature_idxs:
unique_values = np.unique(X[:, feature_idx])
for treshold in unique_values:
error = self.split_error(X, feature_idx, treshold)
if best_error == None or error < best_error:
best_feauture_idx = feature_idx
best_treshold = treshold
best_error = error
return best_feauture_idx, best_treshold
###Output
_____no_output_____
###Markdown
Decision Tree RegressorThe key point of the regression task for a decision tree is that we want to return the mean target value at the leaves
###Code
class DecisionTreeRegressor(Tree):
def grow_tree(self, X, y, depth=0):
n_samples, n_features = X.shape
if depth > self.max_depth or len(X) < self.min_samples_split:
return Node(value=y.mean())
feature_idxs = np.arange(int(n_features * self.max_features))
best_feauture_idx, best_treshold = self.best_criteria(X, feature_idxs)
left_idxs, right_idxs = self.split_node(X, best_feauture_idx, best_treshold)
if len(left_idxs) == 0 or len(right_idxs) == 0:
return Node(value=y.mean())
else:
left = self.grow_tree(X[left_idxs], y[left_idxs], depth + 1)
right = self.grow_tree(X[right_idxs], y[right_idxs], depth + 1)
return Node(left=left, right=right, feature_idx=best_feauture_idx, treshold=best_treshold)
###Output
_____no_output_____
###Markdown
Single predictionNow let's test our algorithm
###Code
data = load_boston()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.25)
model = DecisionTreeRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('MSE: {}'.format(mean_squared_error(y_test, y_pred)))
###Output
MSE: 44.444521493953076
###Markdown
MSE depending on tree depthLet's see how the error depends on the depth hyperparameter
###Code
df = pd.DataFrame(columns=['Depth', 'MSE'])
for depth in range(1, 12):
model = DecisionTreeRegressor(max_depth=depth)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
df = df.append({'Depth': depth, 'MSE': mean_squared_error(y_test, y_pred)}, ignore_index=True)
fig, ax = plt.subplots(figsize=(14, 7))
sns.lineplot(data=df, x='Depth', y='MSE', ax=ax);
###Output
_____no_output_____
###Markdown
Decision Tree ClassifierThe key point of the classification task for a decision tree is that we want to return the most frequent class at each leaf
###Code
class DecisionTreeClassifier(Tree):
def grow_tree(self, X, y, depth=0):
n_samples, n_features = X.shape
if depth > self.max_depth or len(X) < self.min_samples_split:
counts = np.bincount(y)
return Node(value=np.argmax(counts))
feature_idxs = np.arange(int(n_features * self.max_features))
best_feauture_idx, best_treshold = self.best_criteria(X, feature_idxs)
left_idxs, right_idxs = self.split_node(X, best_feauture_idx, best_treshold)
if len(left_idxs) == 0 or len(right_idxs) == 0:
counts = np.bincount(y)
return Node(value=np.argmax(counts))
else:
left = self.grow_tree(X[left_idxs], y[left_idxs], depth + 1)
right = self.grow_tree(X[right_idxs], y[right_idxs], depth + 1)
return Node(left=left, right=right, feature_idx=best_feauture_idx, treshold=best_treshold)
###Output
_____no_output_____
###Markdown
Single predictionNow let's test our algorithm
###Code
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train);
accuracy_score(y_test, clf.predict(X_test))
###Output
_____no_output_____
###Markdown
Intuition behind Random ForestRandom forest is an ensemble of decision tree algorithms. It is an extension of bootstrap aggregation (bagging) of decision trees and can be used for classification and regression problems. In bagging, a number of decision trees are created where each tree is built from a different bootstrap sample of the training dataset. A bootstrap sample is a sample of the training dataset in which an example may appear more than once, referred to as sampling with replacement. Bagging is an effective ensemble algorithm because each decision tree is fit on a slightly different training dataset and, in turn, has a slightly different performance. A prediction on a regression problem is the average of the predictions across the trees in the ensemble. A prediction on a classification problem is the majority vote for the class label across the trees in the ensemble. Difference from just baggingUnlike bagging, random forest also involves selecting a subset of input features (columns or variables) at each split point in the construction of the trees. Typically, constructing a decision tree involves evaluating the value of each input variable in the data in order to select a split point. By reducing the features to a random subset that may be considered at each split point, it forces each decision tree in the ensemble to be more different.
###Code
def bootstrap_sample(X, y, size):
n_samples = X.shape[0]
idxs = np.random.choice(n_samples, size=int(n_samples * size), replace=True)
return(X[idxs], y[idxs])
class RandomForestRegressor:
def __init__(self, min_samples_split=2, max_depth=100, n_estimators=5, bootstrap=0.9, max_features=1):
self.min_samples_split = min_samples_split
self.max_depth = max_depth
self.models = []
self.n_estimators = n_estimators
self.bootstrap = bootstrap
self.max_features = max_features
def fit(self, X, y):
for _ in range(self.n_estimators):
X_sample, y_sample = bootstrap_sample(X, y, size=self.bootstrap)
model = DecisionTreeRegressor(max_depth=self.max_depth, min_samples_split=self.min_samples_split, max_features=self.max_features)
model.fit(X_sample, y_sample)
self.models.append(model)
def predict(self, X):
n_samples, n_features = X.shape
res = np.zeros(n_samples)
for model in self.models:
res += model.predict(X)
return res / self.n_estimators
###Output
_____no_output_____
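###Markdown
The intuition section above also mentions majority voting for classification, but only the regressor is implemented here. As a hedged sketch (not part of the original notebook), a classifier version can reuse the bootstrap_sample helper and the DecisionTreeClassifier defined earlier and take the most frequent predicted class across the trees; the class below is an assumption of how that could look, not the author's implementation.
###Code
class RandomForestClassifier:
    def __init__(self, min_samples_split=2, max_depth=100, n_estimators=5, bootstrap=0.9, max_features=1):
        self.min_samples_split = min_samples_split
        self.max_depth = max_depth
        self.models = []
        self.n_estimators = n_estimators
        self.bootstrap = bootstrap
        self.max_features = max_features
    def fit(self, X, y):
        for _ in range(self.n_estimators):
            # each tree is fit on a different bootstrap sample of the training data
            X_sample, y_sample = bootstrap_sample(X, y, size=self.bootstrap)
            model = DecisionTreeClassifier(max_depth=self.max_depth,
                                           min_samples_split=self.min_samples_split,
                                           max_features=self.max_features)
            model.fit(X_sample, y_sample)
            self.models.append(model)
    def predict(self, X):
        # shape (n_estimators, n_samples): one row of class predictions per tree
        all_preds = np.array([model.predict(X) for model in self.models])
        # majority vote over the trees for every sample
        return np.array([np.bincount(col).argmax() for col in all_preds.T])
###Output
_____no_output_____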
###Markdown
Single predictionNow let's test our algorithm
###Code
data = load_boston()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.25)
model = RandomForestRegressor(n_estimators=3,
bootstrap=0.8,
max_depth=10,
min_samples_split=3,
max_features=1)
model.fit(X_train, y_train)
MSE = mean_squared_error(y_test, model.predict(X_test))
print('MSE: {}'.format(MSE))
###Output
MSE: 46.111317655779125
###Markdown
MSE depending on number of trees and featuresLet's see how the error depends on the number of trees and the max features hyperparameters
###Code
df = pd.DataFrame(columns=['Number of trees', 'MSE', 'Max features'])
for number_of_trees in range(1, 15):
model = RandomForestRegressor(n_estimators=number_of_trees,
bootstrap=0.9,
max_depth=10,
min_samples_split=3,
max_features=1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
df = df.append({'Number of trees': number_of_trees, 'MSE': mean_squared_error(y_test, y_pred), 'Max features': 1}, ignore_index=True)
model = RandomForestRegressor(n_estimators=number_of_trees,
bootstrap=0.9,
max_depth=10,
min_samples_split=3,
max_features=0.9)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
df = df.append({'Number of trees': number_of_trees, 'MSE': mean_squared_error(y_test, y_pred), 'Max features': 0.9}, ignore_index=True)
fig, ax = plt.subplots(figsize=(14, 7))
sns.lineplot(data=df, x='Number of trees', y='MSE', hue='Max features', ax=ax);
###Output
_____no_output_____
|
Model backlog/Inference/0-tweet-inference-roberta-244-282.ipynb
|
###Markdown
Dependencies
###Code
import json, glob
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
###Output
_____no_output_____
###Markdown
Load data
###Code
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
###Output
Test samples: 3534
###Markdown
Model parameters
###Code
input_base_path = '/kaggle/input/244-robertabase/'
input_base_path_1 = '/kaggle/input/282-tweet-train-10fold-roberta-onecycle/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
# vocab_path = input_base_path + 'vocab.json'
# merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
vocab_path = base_path + 'roberta-base-vocab.json'
merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
model_path_list_1 = glob.glob(input_base_path_1 + '*1.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*2.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*3.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*4.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*5.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*6.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*7.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*8.h5')
model_path_list_1 += glob.glob(input_base_path_1 + '*9.h5')
model_path_list_1.sort()
print('Models to predict:')
print(*model_path_list_1, sep='\n')
###Output
Models to predict:
/kaggle/input/244-robertabase/model_fold_1.h5
/kaggle/input/244-robertabase/model_fold_2.h5
/kaggle/input/244-robertabase/model_fold_3.h5
/kaggle/input/244-robertabase/model_fold_4.h5
/kaggle/input/244-robertabase/model_fold_5.h5
Models to predict:
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_1.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_2.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_3.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_4.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_5.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_6.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_7.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_8.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_9.h5
###Markdown
Tokenizer
###Code
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
###Output
_____no_output_____
###Markdown
Pre process
###Code
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
###Output
_____no_output_____
###Markdown
Model
###Code
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
logits = layers.Dense(2, name="qa_outputs", use_bias=False)(last_hidden_state)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')
end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')
model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
return model
###Output
_____no_output_____
###Markdown
Make predictions
###Code
NUM_TEST_IMAGES = len(test)
test_start_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))
for model_path in model_path_list:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
test_start_preds += test_preds[0] / len(model_path_list)
test_end_preds += test_preds[1] / len(model_path_list)
for model_path in model_path_list_1:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
test_start_preds += test_preds[0] / len(model_path_list_1)
test_end_preds += test_preds[1] / len(model_path_list_1)
###Output
/kaggle/input/244-robertabase/model_fold_1.h5
/kaggle/input/244-robertabase/model_fold_2.h5
/kaggle/input/244-robertabase/model_fold_3.h5
/kaggle/input/244-robertabase/model_fold_4.h5
/kaggle/input/244-robertabase/model_fold_5.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_1.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_2.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_3.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_4.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_5.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_6.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_7.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_8.h5
/kaggle/input/282-tweet-train-10fold-roberta-onecycle/model_fold_9.h5
###Markdown
Post process
###Code
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
# Post-process
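# Keep only tokens that also appear in the original tweet; if that empties the span, fall back to the full text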
test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
test['text_len'] = test['text'].apply(lambda x : len(x))
test['label_len'] = test['selected_text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' ')))
test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1)
display(test.head(10))
display(test.describe())
###Output
_____no_output_____
###Markdown
Test set predictions
###Code
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
###Output
_____no_output_____
|
project2-ml-globalterrorism.ipynb
|
###Markdown
DescriptionFile used: [global_terrorism_database](./Data/globalterrorismdb_0718dist.csv/globalterrorismdb_0718dist.csv) Loading Data
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import seaborn as sb
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler,LabelBinarizer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_validate, cross_val_predict, cross_val_score, KFold
from sklearn.metrics import confusion_matrix
from types import SimpleNamespace
from imblearn.under_sampling import ClusterCentroids
dataFilePath = './Data/globalterrorismdb_0718dist.csv/globalterrorismdb_0718dist.csv'
originalDataset = pd.read_csv(dataFilePath, encoding="ISO-8859-1")
originalDataset
pd.set_option('display.max_rows', 300)
###Output
_____no_output_____
###Markdown
Data Statistics
###Code
originalDataset.shape
originalDataset.info()
pd.options.display
originalDataset.describe(include='all')
originalSetWithoutDups = originalDataset.drop_duplicates()
originalSetWithoutDups.shape
###Output
_____no_output_____
###Markdown
No duplicates, file is valid Information about columns
###Code
originalDataset.dtypes
orgFeatures = originalDataset.loc[:, originalDataset.columns]
categoricFeaturesList = list(orgFeatures.dtypes[orgFeatures.dtypes == object].index)
numericFeaturesList = list(orgFeatures.dtypes[orgFeatures.dtypes != object].index)
for column in originalDataset.columns:
    if originalDataset.dtypes[column].name == 'object':
sb.catplot(x=column, kind='count', data=originalDataset, height=10, aspect=2)
else:
originalDataset.hist(column=column, figsize=[15,10])
originalDataset.mode(axis=0)
###Output
_____no_output_____
###Markdown
Correlation
###Code
fig, ax = plt.subplots(figsize = (20,20))
sb.heatmap(originalDataset.corr(method='pearson'), annot=True, cmap='RdBu', ax=ax)
###Output
_____no_output_____
|
starter_notebook_ln.ipynb
|
###Markdown
Masakhane - Machine Translation for African Languages (Using JoeyNMT) Note before beginning: - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus. - The tl;dr: Go to the **"TODO"** comments which will tell you what to update to get up and running - If you actually want to have a clue what you're doing, read the text and peek at the links - With 100 epochs, it should take around 7 hours to run in Google Colab - Once you've gotten a result for your language, please attach and email your notebook that generated it to [email protected] - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685) Retrieve your data & make a parallel corpusIf you want to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and to convert them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe.
###Code
from google.colab import drive
drive.mount('/content/drive')
# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:
# These will also become the suffix's of all vocab and corpus files used throughout
import os
source_language = "en"
target_language = "ln"
lc = False # If True, lowercase the data.
seed = 42 # Random seed for shuffling.
tag = "baseline" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
os.environ["tag"] = tag
# This will save it to a folder in our gdrive instead!
!mkdir -p "/content/drive/My Drive/masakhane/$src-$tgt-$tag"
os.environ["gdrive_path"] = "/content/drive/My Drive/masakhane/%s-%s-%s" % (source_language, target_language, tag)
!echo $gdrive_path
# Install opus-tools
! pip install opustools-pkg
# Downloading our corpus
! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q
# extract the corpus file
! gunzip JW300_latest_xml_$src-$tgt.xml.gz
# Download the global test set.
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en
# And the specific test set for this language pair.
os.environ["trg"] = target_language
os.environ["src"] = source_language
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en
! mv test.en-$trg.en test.en
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg
! mv test.en-$trg.$trg test.$trg
# Read the test data to filter from train and dev splits.
# Store english portion in set for quick filtering checks.
en_test_sents = set()
filter_test_sents = "test.en-any.en"
j = 0
with open(filter_test_sents) as f:
for line in f:
en_test_sents.add(line.strip())
j += 1
print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))
import pandas as pd
# TMX file to dataframe
source_file = 'jw300.' + source_language
target_file = 'jw300.' + target_language
source = []
target = []
skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.
with open(source_file) as f:
for i, line in enumerate(f):
# Skip sentences that are contained in the test set.
if line.strip() not in en_test_sents:
source.append(line.strip())
else:
skip_lines.append(i)
with open(target_file) as f:
for j, line in enumerate(f):
# Only add to corpus if corresponding source was not skipped.
if j not in skip_lines:
target.append(line.strip())
print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i))
df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])
# if you get TypeError: data argument can't be an iterator is because of your zip version run this below
#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])
df.head(3)
###Output
Loaded data and skipped 6663/601113 lines since contained in test set.
###Markdown
Pre-processing and exportIt is generally a good idea to remove duplicate translations and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned. In addition, we will split our data into dev/test/train and export to the filesystem.
###Code
# drop duplicate translations
df_pp = df.drop_duplicates()
# drop conflicting translations
# (this is optional and something that you might want to comment out
# depending on the size of your corpus)
df_pp.drop_duplicates(subset='source_sentence', inplace=True)
df_pp.drop_duplicates(subset='target_sentence', inplace=True)
# Shuffle the data to remove bias in dev set selection.
df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)
# Install fuzzy wuzzy to remove "almost duplicate" sentences in the
# test and training sets.
! pip install fuzzywuzzy
! pip install python-Levenshtein
import time
from fuzzywuzzy import process
import numpy as np
from os import cpu_count
from functools import partial
from multiprocessing import Pool
# reset the index of the training set after previous filtering
df_pp.reset_index(drop=False, inplace=True)
# Remove samples from the training data set if they "almost overlap" with the
# samples in the test set.
# Filtering function. Adjust pad to narrow down the candidate matches to
# within a certain length of characters of the given sample.
def fuzzfilter(sample, candidates, pad):
candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad]
if len(candidates) > 0:
return process.extractOne(sample, candidates)[1]
else:
return np.nan
start_time = time.time()
### iterating over pandas dataframe rows is not recommended, let's use multiprocessing to apply the function
with Pool(cpu_count()-1) as pool:
scores = pool.map(partial(fuzzfilter, candidates=list(en_test_sents), pad=5), df_pp['source_sentence'])
hours, rem = divmod(time.time() - start_time, 3600)
minutes, seconds = divmod(rem, 60)
print("done in {}h:{}min:{}seconds".format(hours, minutes, seconds))
# Filter out "almost overlapping samples"
df_pp = df_pp.assign(scores=scores)
df_pp = df_pp[df_pp['scores'] < 95]
# This section does the split between train/dev for the parallel corpora then saves them as separate files
# We use 1000 dev test and the given test set.
import csv
# Do the split between dev/train and create parallel corpora
num_dev_patterns = 1000
# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.
if lc: # Julia: making lowercasing optional
df_pp["source_sentence"] = df_pp["source_sentence"].str.lower()
df_pp["target_sentence"] = df_pp["target_sentence"].str.lower()
# Julia: test sets are already generated
dev = df_pp.tail(num_dev_patterns) # Herman: Error in original
stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)
with open("train."+source_language, "w") as src_file, open("train."+target_language, "w") as trg_file:
for index, row in stripped.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
with open("dev."+source_language, "w") as src_file, open("dev."+target_language, "w") as trg_file:
for index, row in dev.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
#stripped[["source_sentence"]].to_csv("train."+source_language, header=False, index=False) # Herman: Added `header=False` everywhere
#stripped[["target_sentence"]].to_csv("train."+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.
#dev[["source_sentence"]].to_csv("dev."+source_language, header=False, index=False)
#dev[["target_sentence"]].to_csv("dev."+target_language, header=False, index=False)
# Doublecheck the format below. There should be no extra quotation marks or weird characters.
! head train.*
! head dev.*
###Output
_____no_output_____
###Markdown
--- Installation of JoeyNMTJoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io)
###Code
# Install JoeyNMT
! git clone https://github.com/joeynmt/joeynmt.git
! cd joeynmt; pip3 install .
###Output
_____no_output_____
###Markdown
Preprocessing the Data into Subword BPE Tokens- One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [ (Sennrich, 2015) ](https://arxiv.org/abs/1508.07909).- It was also shown that by optimizing the number of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)- Below we have the scripts for doing BPE tokenization of our data. We use 4000 tokens as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything. Simply running the below will be suitable.
###Code
# One of the huge boosts in NMT performance was to use a different method of tokenizing.
# Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance
# Do subword NMT
from os import path
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
# Learn BPEs on the training data.
os.environ["data_path"] = path.join("joeynmt", "data", source_language + target_language) # Herman!
! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt
# Apply BPE splits to the development and test data.
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt
# Create directory, move everyone we care about to the correct location
! mkdir -p $data_path
! cp train.* $data_path
! cp test.* $data_path
! cp dev.* $data_path
! cp bpe.codes.4000 $data_path
! ls $data_path
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.4000 "$gdrive_path"
! ls "$gdrive_path"
# Create that vocab using build_vocab
! sudo chmod 777 joeynmt/scripts/build_vocab.py
! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path joeynmt/data/$src$tgt/vocab.txt
# Some output
! echo "BPE Xhosa Sentences"
! tail -n 5 test.bpe.$tgt
! echo "Combined BPE Vocab"
! tail -n 10 joeynmt/data/$src$tgt/vocab.txt # Herman
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.4000 "$gdrive_path"
! ls "$gdrive_path"
###Output
_____no_output_____
###Markdown
Creating the JoeyNMT ConfigJoeyNMT requires a yaml config. We provide a template below. We've also set a number of defaults with it that you may play with!- We use the Transformer architecture - We set our dropout reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))Things worth playing with:- The batch size (also recommended to change for low-resourced languages)- The number of epochs (we've set it at 30 just so it runs in about an hour, for testing purposes)- The decoder options (beam_size, alpha)- Evaluation metrics (BLEU versus chrF)
###Code
# This creates the config file for our JoeyNMT system. It might seem overwhelming so we've provided a couple of useful parameters you'll need to update
# (You can of course play with all the parameters if you'd like!)
name = '%s%s' % (source_language, target_language)
gdrive_path = os.environ["gdrive_path"]
# Create the config
config = """
name: "{name}_transformer"
data:
src: "{source_language}"
trg: "{target_language}"
train: "data/{name}/train.bpe"
dev: "data/{name}/dev.bpe"
test: "data/{name}/test.bpe"
level: "bpe"
lowercase: False
max_sent_length: 100
src_vocab: "data/{name}/vocab.txt"
trg_vocab: "data/{name}/vocab.txt"
testing:
beam_size: 5
alpha: 1.0
training:
#load_model: "{gdrive_path}/models/{name}_transformer/1.ckpt" # if uncommented, load a pre-trained model from this checkpoint
random_seed: 42
optimizer: "adam"
normalization: "tokens"
adam_betas: [0.9, 0.999]
scheduling: "plateau" # TODO: try switching from plateau to Noam scheduling
patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.
learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)
learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)
decrease_factor: 0.7
loss: "crossentropy"
learning_rate: 0.0003
learning_rate_min: 0.00000001
weight_decay: 0.0
label_smoothing: 0.1
batch_size: 4096
batch_type: "token"
eval_batch_size: 3600
eval_batch_type: "token"
batch_multiplier: 1
early_stopping_metric: "ppl"
epochs: 30 # TODO: Decrease for when playing around and checking of working. Around 30 is sufficient to check if its working at all
validation_freq: 1000 # TODO: Set to at least once per epoch.
logging_freq: 100
eval_metric: "bleu"
model_dir: "models/{name}_transformer"
overwrite: False # TODO: Set to True if you want to overwrite possibly existing models.
shuffle: True
use_cuda: True
max_output_length: 100
print_valid_sents: [0, 1, 2, 3]
keep_last_ckpts: 3
model:
initializer: "xavier"
bias_initializer: "zeros"
init_gain: 1.0
embed_initializer: "xavier"
embed_init_gain: 1.0
tied_embeddings: True
tied_softmax: True
encoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
decoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
""".format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language)
with open("joeynmt/configs/transformer_{name}.yaml".format(name=name),'w') as f:
f.write(config)
###Output
_____no_output_____
###Markdown
Train the ModelThis single line of joeynmt runs the training using the config we made above
###Code
# Train the model
# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints!
!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml
# Copy the created models from the notebook storage to google drive for persistant storage
!cp -r joeynmt/models/${src}${tgt}_transformer/* "$gdrive_path/models/${src}${tgt}_transformer/"
# Output our validation accuracy
! cat "$gdrive_path/models/${src}${tgt}_transformer/validations.txt"
# Test our model
! cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${src}${tgt}_transformer/config.yaml"
###Output
_____no_output_____
|
module01_introduction_to_python/01_03_using_functions.ipynb
|
###Markdown
Using Functions Calling functions We often want to do things to our objects that are more complicated than just assigning them to variables.
###Code
len("pneumonoultramicroscopicsilicovolcanoconiosis")
###Output
_____no_output_____
###Markdown
Here we have "called a function". The function `len` takes one input, and has one output. The output is the length of whatever the input was. Programmers also call function inputs "parameters" or, confusingly, "arguments". Here's another example:
###Code
sorted("Python")
###Output
_____no_output_____
###Markdown
Which gives us back a *list* of the letters in Python, sorted alphabetically (more specifically, according to their [Unicode order](https://www.ssec.wisc.edu/~tomw/java/unicode.htmlx0000)). The input goes in brackets after the function name, and the output emerges wherever the function is used. So we can put a function call anywhere we could put a "literal" object or a variable.
###Code
len("Jim") * 8
x = len("Mike")
y = len("Bob")
z = x + y
print(z)
###Output
7
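###Markdown
A quick way to see the "Unicode order" mentioned above is the built-in `ord` function, which returns a character's code point. Every capital letter has a smaller code point than every lowercase letter, which is why `sorted("Python")` put the capital `P` first:
###Code
ord("P"), ord("h")
###Output
_____no_output_____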
###Markdown
Using methods Objects come associated with a bunch of functions designed for working on objects of that type. We access these with a dot, just as we do for data attributes:
###Code
"shout".upper()
###Output
_____no_output_____
###Markdown
These are called methods. If you try to use a method defined for a different type, you get an error:
###Code
x = 5
type(x)
x.upper()
###Output
_____no_output_____
###Markdown
If you try to use a method that doesn't exist, you get an error:
###Code
x.wrong
###Output
_____no_output_____
###Markdown
Methods and properties are both kinds of **attribute**, so both are accessed with the dot operator. Objects can have both properties and methods:
###Code
z = 1 + 5j
z.real
z.conjugate()
z.conjugate
###Output
_____no_output_____
###Markdown
Functions are just a type of object! Now for something that will take a while to understand: don't worry if you don't get this yet, we'll look again at this in much more depth later in the course. If we forget the (), we realise that a *method is just a property which is a function*!
###Code
z.conjugate
type(z.conjugate)
somefunc = z.conjugate
somefunc()
###Output
_____no_output_____
###Markdown
Functions are just a kind of variable, and we can assign new labels to them:
###Code
sorted([1, 5, 3, 4])
magic = sorted
type(magic)
magic(["Technology", "Advanced"])
###Output
_____no_output_____
###Markdown
Getting help on functions and methods The 'help' function, when applied to a function, gives help on it!
###Code
help(sorted)
###Output
Help on built-in function sorted in module builtins:
sorted(iterable, /, *, key=None, reverse=False)
Return a new list containing all items from the iterable in ascending order.
A custom key function can be supplied to customize the sort order, and the
reverse flag can be set to request the result in descending order.
###Markdown
The 'dir' function, when applied to an object, lists all its attributes (properties and methods):
###Code
dir("Hexxo")
###Output
_____no_output_____
###Markdown
Most of these are confusing methods beginning and ending with __, part of the internals of python. Again, just as with error messages, we have to learn to read past the bits that are confusing, to the bit we want:
###Code
"Hexxo".replace("x", "l")
help("FIsh".replace)
###Output
Help on built-in function replace:
replace(old, new, count=-1, /) method of builtins.str instance
Return a copy with all occurrences of substring old replaced by new.
count
Maximum number of occurrences to replace.
-1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are
replaced.
###Markdown
Operators Now that we know that functions are a way of taking a number of inputs and producing an output, we should look again at what happens when we write:
###Code
x = 2 + 3
print(x)
###Output
5
###Markdown
This is just a pretty way of calling an "add" function. Things would be more symmetrical if add were actually written x = +(2, 3) Where '+' is just the name of the adding function. In python, these functions **do** exist, but they're actually **methods** of the first input: they're the mysterious `__` functions we saw earlier (Two underscores.)
###Code
x.__add__(7)
###Output
_____no_output_____
###Markdown
We call these symbols, `+`, `-` etc, "operators". The meaning of an operator varies for different types:
###Code
"Hello" + "Goodbye"
[2, 3, 4] + [5, 6]
###Output
_____no_output_____
###Markdown
Sometimes we get an error when a type doesn't have an operator:
###Code
7 - 2
[2, 3, 4] - [5, 6]
###Output
_____no_output_____
###Markdown
The word "operand" means "thing that an operator operates on"! Or when two types can't work together with an operator:
###Code
[2, 3, 4] + 5
###Output
_____no_output_____
###Markdown
To add a single element to the end of a list with `+`, wrap it in a list of its own:
###Code
[2, 3, 4] + [5]
###Output
_____no_output_____
###Markdown
Just as in Mathematics, operators have a built-in precedence, with brackets used to force an order of operations:
###Code
print(2 + 3 * 4)
print((2 + 3) * 4)
###Output
20
|
ml-modelle/RandomForest_Scores_Postprocessing.ipynb
|
###Markdown
Random Forest - Optimized Model - Scores and Probabilities Import Modules
###Code
# data analysis and wrangling
import pandas as pd
import numpy as np
import math
# own modules
import eda_methods as eda
# visualization
import seaborn as sns
sns.set(style="white")
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
from pandas.plotting import scatter_matrix
# warnings handler
import warnings
warnings.filterwarnings("ignore")
# Machine Learning Libraries
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import fbeta_score, accuracy_score, f1_score, recall_score, precision_score
from sklearn.metrics import average_precision_score, precision_recall_curve, plot_precision_recall_curve, roc_auc_score,roc_curve
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import make_scorer
from sklearn.model_selection import KFold
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.impute import SimpleImputer
#Pipeline
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder, MinMaxScaler
from sklearn.compose import ColumnTransformer
# Imbalanced Learn
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
# random state
random_state = 100
# Variables for plot sizes
matplotlib.rc('font', size=16) # controls default text sizes
matplotlib.rc('axes', titlesize=14) # fontsize of the axes title
matplotlib.rc('axes', labelsize=14) # fontsize of the x and y labels
matplotlib.rc('xtick', labelsize=14) # fontsize of the tick labels
matplotlib.rc('ytick', labelsize=14) # fontsize of the tick labels
matplotlib.rc('legend', fontsize=14) # legend fontsize
matplotlib.rc('figure', titlesize=20)
###Output
_____no_output_____
###Markdown
Import Data
###Code
# new feature dataframe
df_importance = pd.read_csv('data/df_clean_engineered_all.csv')
y = df_importance['churn']
df_importance = df_importance.drop(['churn','plz_3','abo_registrierung_min','nl_registrierung_min','ort'], axis = 1)
df_importance = pd.get_dummies(df_importance, columns = ['kanal', 'objekt_name', 'aboform_name', 'zahlung_rhythmus_name','zahlung_weg_name', 'plz_1', 'plz_2', 'land_iso_code', 'anrede','titel'], drop_first = True)
important_features_combined_dropping = ['zahlung_weg_name_Rechnung',
'zahlung_rhythmus_name_halbjährlich',
'rechnungsmonat',
'received_anzahl_6m',
'openedanzahl_6m',
'objekt_name_ZEIT Digital',
'nl_zeitbrief',
'nl_aktivitaet',
'liefer_beginn_evt',
'cnt_umwandlungsstatus2_dkey',
'clickrate_3m',
'anrede_Frau',
'aboform_name_Geschenkabo',
'unsubscribed_anzahl_1m',
'studentenabo',
'received_anzahl_bestandskunden_6m',
'openrate_produktnews_3m',
'opened_anzahl_bestandskunden_6m',
'objekt_name_DIE ZEIT - CHRIST & WELT',
'nl_zeitshop',
'nl_opt_in_sum',
'nl_opened_1m',
'kanal_andere',
'kanal_B2B',
'clicked_anzahl_6m',
'che_reg',
'MONTH_DELTA_nl_min',
'zon_zp_red',
'zahlung_rhythmus_name_vierteljährlich',
'unsubscribed_anzahl_hamburg_1m',
'unsubscribed_anzahl_6m',
'sum_zon',
'sum_reg',
'shop_kauf',
'plz_2_10',
'plz_1_7',
'plz_1_1',
'openrate_zeitbrief_3m',
'openrate_produktnews_1m',
'openrate_3m',
'openrate_1m',
'nl_unsubscribed_6m',
'nl_fdz_organisch',
'metropole',
'cnt_abo_magazin',
'cnt_abo_diezeit_digital',
'cnt_abo',
'clicked_anzahl_bestandskunden_3m',
'aboform_name_Probeabo',
'aboform_name_Negative Option',
'MONTH_DELTA_abo_min']
df_importance = df_importance[important_features_combined_dropping]
X = df_importance
def train_predict(modelname, y_train, y_test, predictions_train, predictions_test):
'''
    inputs:
       - modelname: name of the model, used as a label in the results
       - y_train: ground-truth labels of the training set
       - y_test: ground-truth labels of the test set
       - predictions_train: model predictions on the training set
       - predictions_test: model predictions on the test set
'''
results = {}
# model name
results['model'] = modelname
# accuracy
results['acc_train'] = accuracy_score(y_train,predictions_train)
results['acc_test'] = accuracy_score(y_test,predictions_test)
# F1-score
results['f1_train'] = f1_score(y_train,predictions_train)
results['f1_test'] = f1_score(y_test,predictions_test)
# Recall
results['recall_train'] = recall_score(y_train,predictions_train)
results['recall_test'] = recall_score(y_test,predictions_test)
# Precision
results['precision_train'] = precision_score(y_train,predictions_train)
results['precision_test'] = precision_score(y_test,predictions_test)
# ROC AUC Score
results['roc_auc_test'] = roc_auc_score(y_test,predictions_test)
# Average Precison Score
results['avg_precision_score'] = average_precision_score(y_test,predictions_test)
# Return the results
return results
def pipeline(X,y,balance=None):
# devide features
categoric_features = list(X.columns[X.dtypes==object])
numeric_features = list(X.columns[X.dtypes != object])
# split train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=random_state,stratify=y)
if balance == 'over':
# define oversampling strategy
print('Oversampling')
oversample = RandomOverSampler(sampling_strategy='minority')
X_train, y_train = oversample.fit_resample(X_train, y_train)
if balance == 'under':
print('Undersampling')
# define undersample strategy
undersample = RandomUnderSampler(sampling_strategy='majority')
X_train, y_train = undersample.fit_resample(X_train, y_train)
models={
# adjust parameters
'randomforest': RandomForestClassifier(n_jobs=-1,n_estimators=380, criterion='entropy',min_samples_split=4, min_samples_leaf=1, max_features='auto', max_depth=35,bootstrap=True,random_state=random_state),
}
# create preprocessors
numeric_transformer = Pipeline(steps=[
('imputer_num', SimpleImputer(strategy='median')),
('scaler', StandardScaler())
])
categorical_transformer = Pipeline(steps=[
('imputer_cat', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categoric_features)
])
model_results = pd.DataFrame(columns=['model','acc_train','acc_test','f1_train','f1_test','recall_train','recall_test','precision_train','precision_test','roc_auc_test','avg_precision_score'])
# process pipeline for every model
for model in models.items():
print(model[0])
pipe = Pipeline(steps=[('preprocessor', preprocessor),
(model[0], model[1])
])
# fit model
pipe.fit(X_train, y_train)
#predict results
#y_train_pred = cross_val_predict(pipe, X_train, y_train, cv=5, n_jobs=-1)
y_train_pred = pipe.predict(X_train)
y_test_pred = pipe.predict(X_test)
y_test_prob = pipe.predict_proba(X_test)[:,1] #only for churn == 1 since 1-prob(churn) = prob(no churn)
ROC_curve = roc_curve(y_test, y_test_prob)
PR_curve = precision_recall_curve(y_test, y_test_prob)
results = train_predict(model[0],y_train, y_test, y_train_pred, y_test_pred)
model_results = pd.concat([model_results, pd.DataFrame(results,index=[0])])
conf_matrix = confusion_matrix(y_test, y_test_pred)
conf_mat_pd = pd.crosstab(np.ravel(y_test), np.ravel(y_test_pred),
colnames=["Predicted"], rownames=["Actual"])
sns.heatmap(conf_mat_pd, annot=True, cmap="Blues",fmt='d')
plt.show()
plt.close()
#print("\nResults on test data:")
#print(classification_report(y_test, y_test_pred))
#print("\nConfusion matrix on test")
#print(confusion_matrix(y_test, y_test_pred))
#print("\n")
return model_results, pipe, y_test_prob, y_test, ROC_curve, PR_curve, conf_matrix
###Output
_____no_output_____
###Markdown
Call the pipeline
###Code
#get back the results of the model
model_results, pipe, y_test_prob, y_test, ROC_curve, PR_curve, conf_matrix = pipeline(X,y,balance='under')
model_results
#model_results.reset_index(drop=True,inplace=True)
###Output
_____no_output_____
###Markdown
Probability Plot
###Code
prob_df = pd.DataFrame(columns=['y_test','y_test_proba'])
prob_df['y_test'] = y_test
prob_df['y_test_proba'] = y_test_prob
prob_df
prob_df = prob_df.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Histogram
###Code
f, ax = plt.subplots(figsize=(11, 6), nrows=1, ncols = 1)
# Get AUC
textstr = f"AUC: {float(model_results.roc_auc_test.values):.3f}"
# Plot false distribution
false_pred = prob_df[prob_df['y_test'] == 0]
sns.distplot(false_pred['y_test_proba'], hist=True, kde=False,
bins=int(50), color = 'red',
hist_kws={'edgecolor':'black', 'alpha': 1.0},label='No Churn')
# Plot true distribution
true_pred = prob_df[prob_df['y_test'] == 1]
sns.distplot(true_pred['y_test_proba'], hist=True, kde=False,
bins=int(50), color = 'green',
hist_kws={'edgecolor':'black', 'alpha': 1.0}, label='Churn')
# These are matplotlib.patch.Patch properties
props = dict(boxstyle='round', facecolor='white', alpha=0.5)
# Place a text box in upper left in axes coords
plt.text(0.05, 0.95, textstr, transform=ax.transAxes, fontsize=14,
verticalalignment = "top", bbox=props)
# Set axis limits and labels
ax.set_title(f"{str(model_results.model.values)} Distribution")
ax.set_xlim(0,1)
ax.set_xlabel("Probability")
ax.legend();
###Output
_____no_output_____
###Markdown
Density Plot
###Code
f, ax = plt.subplots(figsize=(11, 6), nrows=1, ncols = 1)
# Get AUC
textstr = f"AUC: {float(model_results.roc_auc_test.values):.3f}"
# Plot false distribution
false_pred = prob_df[prob_df['y_test'] == 0]
sns.distplot(false_pred['y_test_proba'], hist=False, kde=True,
bins=int(25), color = 'red',
hist_kws={'edgecolor':'black', 'alpha': 1.0},label='No Churn')
# Plot true distribution
true_pred = prob_df[prob_df['y_test'] == 1]
sns.distplot(true_pred['y_test_proba'], hist=False, kde=True,
bins=int(25), color = 'green',
hist_kws={'edgecolor':'black', 'alpha': 1.0}, label='Churn')
# These are matplotlib.patch.Patch properties
props = dict(boxstyle='round', facecolor='white', alpha=0.5)
# Place a text box in upper left in axes coords
plt.text(0.05, 0.95, textstr, transform=ax.transAxes, fontsize=14,
verticalalignment = "top", bbox=props)
# Set axis limits and labels
ax.set_title(f"{str(model_results.model.values)} Distribution")
ax.set_xlim(0,1)
ax.set_xlabel("Probability")
ax.legend();
###Output
_____no_output_____
###Markdown
ROC - AUC Curve
###Code
ROC_curve
f, ax = plt.subplots(figsize=(11, 6), nrows=1, ncols = 1)
plt.plot(ROC_curve[0],ROC_curve[1],'r-',label = 'random forest AUC: %.3f'%float(model_results.roc_auc_test.values))
plt.plot([0,1],[0,1],'k-',label='random choice')
plt.plot([0,0,1,1],[0,1,1,1],'b-',label='optimal model')
plt.legend()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
###Output
_____no_output_____
###Markdown
Precision Recall Curve
###Code
average_precision = average_precision_score(y_test, y_test_prob)
average_precision
PR_curve
f, ax = plt.subplots(figsize=(11, 6), nrows=1, ncols = 1)
plt.plot(PR_curve[0], PR_curve[1],'r-',label = 'random forest AP: %.3f'%average_precision)
plt.legend()
plt.xlabel('Recall')
plt.ylabel('Precision')
###Output
_____no_output_____
|
d2l/pytorch/chapter_attention-mechanisms/nadaraya-waston.ipynb
|
###Markdown
Attention Pooling: Nadaraya-Watson Kernel Regression:label:`sec_nadaraya-watson`In the previous section we introduced the main components of the attention mechanism framework in :numref:`fig_qkv`: the interaction between queries (volitional cues) and keys (nonvolitional cues) forms the attention pooling, which selectively aggregates the values (sensory inputs) to produce the final output. In this section we look at attention pooling in more detail, to get a high-level picture of how attention mechanisms work in practice. Specifically, the Nadaraya-Watson kernel regression model proposed in 1964 is a simple yet complete example for demonstrating machine learning with attention mechanisms.
###Code
import torch
from torch import nn
from d2l import torch as d2l
###Output
_____no_output_____
###Markdown
[**Generating the Dataset**]To keep things simple, consider the following regression problem: given a dataset of input-output pairs $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, how do we learn $f$ to predict the output $\hat{y} = f(x)$ for any new input $x$? Here we generate an artificial dataset according to the following nonlinear function, with a noise term $\epsilon$ added:$$y_i = 2\sin(x_i) + x_i^{0.8} + \epsilon,$$where $\epsilon$ follows a normal distribution with mean $0$ and standard deviation $0.5$. We generate $50$ training examples and $50$ testing examples. To better visualize the attention pattern later, the training inputs are sorted.
###Code
n_train = 50  # Number of training examples
x_train, _ = torch.sort(torch.rand(n_train) * 5)   # Sorted training inputs
def f(x):
    return 2 * torch.sin(x) + x**0.8
y_train = f(x_train) + torch.normal(0.0, 0.5, (n_train,))  # Training outputs
x_test = torch.arange(0, 5, 0.1)  # Testing inputs
y_truth = f(x_test)  # Ground-truth outputs of the testing inputs
n_test = len(x_test)  # Number of testing examples
n_test
###Output
_____no_output_____
###Markdown
The following function plots all the training examples (shown as circles), the ground-truth data-generating function $f$ without the noise term (labeled "Truth"), and the learned prediction function (labeled "Pred").
###Code
def plot_kernel_reg(y_hat):
d2l.plot(x_test, [y_truth, y_hat], 'x', 'y', legend=['Truth', 'Pred'],
xlim=[0, 5], ylim=[-1, 5])
d2l.plt.plot(x_train, y_train, 'o', alpha=0.5);
###Output
_____no_output_____
###Markdown
Average PoolingWe start with the simplest possible estimator for this regression problem: average pooling, which just averages the outputs of all the training examples:$$f(x) = \frac{1}{n}\sum_{i=1}^n y_i,$$:eqlabel:`eq_avg-pooling`As the plot below shows, this estimator is indeed not very smart: the true function $f$ ("Truth") and the predicted function ("Pred") are far apart.
###Code
y_hat = torch.repeat_interleave(y_train.mean(), n_test)
plot_kernel_reg(y_hat)
###Output
_____no_output_____
###Markdown
[**Nonparametric Attention Pooling**]Obviously, average pooling ignores the inputs $x_i$. A better idea was proposed by Nadaraya :cite:`Nadaraya.1964` and Watson :cite:`Watson.1964` to weigh the outputs $y_i$ according to the locations of the inputs:$$f(x) = \sum_{i=1}^n \frac{K(x - x_i)}{\sum_{j=1}^n K(x - x_j)} y_i,$$:eqlabel:`eq_nadaraya-watson`where $K$ is a *kernel*. The estimator in :eqref:`eq_nadaraya-watson` is called *Nadaraya-Watson kernel regression*. We will not dive into the details of kernels here, but inspired by this estimator we can rewrite :eqref:`eq_nadaraya-watson` from the perspective of the attention mechanism framework in :numref:`fig_qkv` as a more general form of *attention pooling*:$$f(x) = \sum_{i=1}^n \alpha(x, x_i) y_i,$$:eqlabel:`eq_attn-pooling`where $x$ is the query and $(x_i, y_i)$ are the key-value pairs. Comparing :eqref:`eq_attn-pooling` with :eqref:`eq_avg-pooling`, attention pooling is a weighted average of the $y_i$. The relation between the query $x$ and the key $x_i$ is modeled as an *attention weight* $\alpha(x, x_i)$, as shown in :eqref:`eq_attn-pooling`, which is assigned to the corresponding value $y_i$. For any query, the attention weights over all the key-value pairs form a valid probability distribution: they are nonnegative and sum up to one. To better understand attention pooling, consider a *Gaussian kernel* defined as$$K(u) = \frac{1}{\sqrt{2\pi}} \exp(-\frac{u^2}{2}).$$Plugging the Gaussian kernel into :eqref:`eq_attn-pooling` and :eqref:`eq_nadaraya-watson` gives$$\begin{aligned} f(x) &=\sum_{i=1}^n \alpha(x, x_i) y_i\\ &= \sum_{i=1}^n \frac{\exp\left(-\frac{1}{2}(x - x_i)^2\right)}{\sum_{j=1}^n \exp\left(-\frac{1}{2}(x - x_j)^2\right)} y_i \\&= \sum_{i=1}^n \mathrm{softmax}\left(-\frac{1}{2}(x - x_i)^2\right) y_i. \end{aligned}$$:eqlabel:`eq_nadaraya-watson-gaussian`In :eqref:`eq_nadaraya-watson-gaussian`, a key $x_i$ that is closer to the given query $x$ receives a larger attention weight for its corresponding value $y_i$, i.e., it "gets more attention". Notably, Nadaraya-Watson kernel regression is a nonparametric model; thus :eqref:`eq_nadaraya-watson-gaussian` is an example of *nonparametric attention pooling*. Next, we plot the prediction based on this nonparametric attention pooling model. You will see that the new prediction line is smooth and closer to the ground truth than that of average pooling.
###Code
# Shape of X_repeat: (n_test, n_train), where each row contains the same
# testing input (i.e., the same query)
X_repeat = x_test.repeat_interleave(n_train).reshape((-1, n_train))
# x_train contains the keys. Shape of attention_weights: (n_test, n_train),
# where each row contains the attention weights to be assigned among the values (y_train) given each query
attention_weights = nn.functional.softmax(-(X_repeat - x_train)**2 / 2, dim=1)
# Each element of y_hat is a weighted average of the values, where the weights are the attention weights
y_hat = torch.matmul(attention_weights, y_train)
plot_kernel_reg(y_hat)
###Output
_____no_output_____
###Markdown
Now let us take a look at the attention weights. Here the testing inputs are the queries while the training inputs are the keys. Since both inputs are sorted, we can see that the closer a query-key pair is, the higher the [**attention weight**] of the attention pooling.
###Code
d2l.show_heatmaps(attention_weights.unsqueeze(0).unsqueeze(0),
xlabel='Sorted training inputs',
ylabel='Sorted testing inputs')
###Output
_____no_output_____
###Markdown
[**Parametric Attention Pooling**]Nonparametric Nadaraya-Watson kernel regression enjoys the *consistency* benefit: given enough data, the model converges to the optimal solution. Nonetheless, we can easily integrate learnable parameters into attention pooling. For example, slightly different from :eqref:`eq_nadaraya-watson-gaussian`, in the following the distance between the query $x$ and the key $x_i$ is multiplied by a learnable parameter $w$:$$\begin{aligned}f(x) &= \sum_{i=1}^n \alpha(x, x_i) y_i \\&= \sum_{i=1}^n \frac{\exp\left(-\frac{1}{2}((x - x_i)w)^2\right)}{\sum_{j=1}^n \exp\left(-\frac{1}{2}((x - x_j)w)^2\right)} y_i \\&= \sum_{i=1}^n \mathrm{softmax}\left(-\frac{1}{2}((x - x_i)w)^2\right) y_i.\end{aligned}$$:eqlabel:`eq_nadaraya-watson-gaussian-para`In the rest of this section we learn the parameter of the attention pooling by training the model in :eqref:`eq_nadaraya-watson-gaussian-para`. Batch Matrix Multiplication:label:`subsec_batch_dot`To compute attention for minibatches more efficiently, we can use the batch matrix multiplication utilities provided by deep learning frameworks. Suppose the first minibatch contains $n$ matrices $\mathbf{X}_1,\ldots, \mathbf{X}_n$ of shape $a\times b$, and the second minibatch contains $n$ matrices $\mathbf{Y}_1, \ldots, \mathbf{Y}_n$ of shape $b\times c$. Their batch matrix multiplication results in $n$ matrices $\mathbf{X}_1\mathbf{Y}_1, \ldots, \mathbf{X}_n\mathbf{Y}_n$ of shape $a\times c$. Therefore, [**given two tensors of shape $(n,a,b)$ and $(n,b,c)$, the shape of their batch matrix multiplication output is $(n,a,c)$**].
###Code
X = torch.ones((2, 1, 4))
Y = torch.ones((2, 4, 6))
torch.bmm(X, Y).shape
###Output
_____no_output_____
###Markdown
In the context of attention mechanisms, we can [**use minibatch matrix multiplication to compute weighted averages of values in a minibatch**].
###Code
weights = torch.ones((2, 10)) * 0.1
values = torch.arange(20.0).reshape((2, 10))
torch.bmm(weights.unsqueeze(1), values.unsqueeze(-1))
###Output
_____no_output_____
###Markdown
Defining the ModelBased on the [**parametric attention pooling**] in :eqref:`eq_nadaraya-watson-gaussian-para`, and using minibatch matrix multiplication, we define the parametric version of Nadaraya-Watson kernel regression as:
###Code
class NWKernelRegression(nn.Module):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.w = nn.Parameter(torch.rand((1,), requires_grad=True))
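        # w is a learnable scalar that scales the query-key distance; it acts like an
        # inverse bandwidth of the Gaussian kernel, so a larger w means a narrower
        # kernel and sharper attention weights.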
def forward(self, queries, keys, values):
        # Shape of queries and attention_weights: (no. of queries, no. of key-value pairs)
queries = queries.repeat_interleave(keys.shape[1]).reshape((-1, keys.shape[1]))
self.attention_weights = nn.functional.softmax(
-((queries - keys) * self.w)**2 / 2, dim=1)
        # Shape of values: (no. of queries, no. of key-value pairs)
return torch.bmm(self.attention_weights.unsqueeze(1),
values.unsqueeze(-1)).reshape(-1)
###Output
_____no_output_____
###Markdown
TrainingNext, we [**transform the training dataset into keys and values**] for training the attention model. In the parametric attention pooling model, each training input takes the key-value pairs of all training examples except itself to predict its own output.
###Code
# Shape of X_tile: (n_train, n_train), where each row contains the same training inputs
X_tile = x_train.repeat((n_train, 1))
# Shape of Y_tile: (n_train, n_train), where each row contains the same training outputs
Y_tile = y_train.repeat((n_train, 1))
# Shape of keys: ('n_train', 'n_train' - 1); the identity mask excludes each example's own key
keys = X_tile[(1 - torch.eye(n_train)).type(torch.bool)].reshape((n_train, -1))
# Shape of values: ('n_train', 'n_train' - 1)
values = Y_tile[(1 - torch.eye(n_train)).type(torch.bool)].reshape((n_train, -1))
###Output
_____no_output_____
###Markdown
When [**training the parametric attention pooling model**], we use the squared loss and stochastic gradient descent.
###Code
net = NWKernelRegression()
loss = nn.MSELoss(reduction='none')
trainer = torch.optim.SGD(net.parameters(), lr=0.5)
animator = d2l.Animator(xlabel='epoch', ylabel='loss', xlim=[1, 5])
for epoch in range(5):
trainer.zero_grad()
l = loss(net(x_train, keys, values), y_train)
l.sum().backward()
trainer.step()
print(f'epoch {epoch + 1}, loss {float(l.sum()):.6f}')
animator.add(epoch + 1, float(l.sum()))
###Output
_____no_output_____
###Markdown
As shown below, after training the parametric attention pooling model we find that, in trying to fit the noisy training data, the [**plotted prediction**] line is less smooth than that of the earlier nonparametric model.
###Code
# Shape of keys: (n_test, n_train), where each row contains the same training inputs (i.e., the same keys)
keys = x_train.repeat((n_test, 1))
# Shape of values: (n_test, n_train)
values = y_train.repeat((n_test, 1))
y_hat = net(x_test, keys, values).unsqueeze(1).detach()
plot_kernel_reg(y_hat)
###Output
_____no_output_____
###Markdown
Why is the new model less smooth? Let us look at the plot of its output: compared with the nonparametric attention pooling model, after a learnable parameter is added to the parametric model, [**the curve becomes less smooth in regions with large attention weights**].
###Code
d2l.show_heatmaps(net.attention_weights.unsqueeze(0).unsqueeze(0),
xlabel='Sorted training inputs',
ylabel='Sorted testing inputs')
###Output
_____no_output_____
|
box_office/notebooks/causal_actors.ipynb
|
###Markdown
Estimating Causal Effects of Actors on Movie RevenueAn example in Model Based Machine Learning.
###Code
# Imports
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import pyro
assert(pyro.__version__ == '0.4.1')
from pyro.distributions import Bernoulli
from pyro.distributions import Delta
from pyro.distributions import Normal
from pyro.distributions import Uniform
from pyro.distributions import LogNormal
from pyro.infer import SVI
from pyro.infer import Trace_ELBO
from pyro.optim import Adam
import torch.distributions.constraints as constraints
pyro.set_rng_seed(101)
# Data loader
from box_office.box_office import data_loader
###Output
_____no_output_____
###Markdown
The ProblemLet's say the producers of a movie approach you with a set of actors they'd like to cast in their movie and want you to find out two things: 1) How much box office revenue will this movie make with this cast of actors? 2) How much of that revenue will each actor be responsible for? The "Data Science" SolutionYou scrape IMDB, get a list of movies that have grossed 1 million or higher (that's the least a decent movie could do), and retain actors who have appeared in at least 10 movies. Then you arrange it in a data frame where every movie is a row, actors are columns, and the presence or absence of an actor in that movie is denoted as 1 or 0. That looks something like this:
###Code
# Load data from the dataframe
data_loc = "box_office/data/ohe_movies.csv"
x_train_tensors, y_train_tensors, actors, og_movies = data_loader.load_tensor_data(data_loc)
og_movies.head()
###Output
_____no_output_____
###Markdown
Now we can fit a Linear Regression to this dataset, or any linear model for that matter, even a Neural Network. We'll go with Linear Regression for now because regression estimates directly represent a proportional relation with the outcome variable, i.e. we can treat them as causal effects. This seems like a good idea until we inspect what happens to the regression coefficients. Fitting a linear model hides confounders that affect both actors and revenues. For example, genre is a confounder. Take Action and Comedy: Action movies on average make more than Comedy movies, and each tends to cast a different set of actors. When unobserved, the genre produces statistical dependence between whether an actor is in the movie and its revenue. So the causal estimates for every actor are biased. Judi Dench played M in every James Bond movie from 1995 to 2012. Cobie Smulders plays Maria Hill in every Avengers movie. These are two well-known but not massively popular actors who appear in high-grossing movies. The regression estimates for them are biased, and will lead to an overestimate of revenue for a new movie casting them. This is because our DAG looks like this:  A Causal Solution We argue that there are certain common factors that go into picking a cast for a movie and the revenue that the movie generates. In Causality Theory these are called *Confounders*. So we propose the following data generation process, where certain unknown confounders *Z* influence the set of actors *A* and the revenue *R*. This is represented as a Bayesian Network. We will now stick with this generative model and figure out a way to unbias our causal regression estimates. If we somehow find a way to estimate Z, then we can include it in our Regression Model and obtain unbiased causal estimates as regression coefficients. Here is the sampling function for each variable in the model.
###Code
def f_z(params):
"""Samples from P(Z)"""
z_mean0 = params['z_mean0']
z_std0 = params['z_std0']
z = pyro.sample("z", Normal(loc = z_mean0, scale = z_std0))
return z
def f_x(z, params):
"""
Samples from P(X|Z)
P(X|Z) is a Bernoulli with E(X|Z) = logistic(Z * W),
where W is a parameter (matrix). In training the W is
hyperparameters of the W distribution are estimated such
that in P(X|Z), the elements of the vector of X are
conditionally independent of one another given Z.
"""
def sample_W():
"""
Sample the W matrix
W is a parameter of P(X|Z) that is sampled from a Normal
with location and scale hyperparameters w_mean0 and w_std0
"""
w_mean0 = params['w_mean0']
w_std0 = params['w_std0']
W = pyro.sample("W", Normal(loc = w_mean0, scale = w_std0))
return W
W = sample_W()
linear_exp = torch.matmul(z, W)
# sample x using the Bernoulli likelihood
x = pyro.sample("x", Bernoulli(logits = linear_exp))
return x
def f_y(x, z, params):
"""
Samples from P(Y|X, Z)
    Y is sampled from a LogNormal whose location is an
    affine combination of X and Z. Bayesian linear
    regression is used to estimate the parameters of
    this affine transformation function, with priors over
    the weights, bias and noise scale taken from the
    params dictionary.
"""
predictors = torch.cat((x, z), 1)
w = pyro.sample('weight', Normal(params['weight_mean0'], params['weight_std0']))
b = pyro.sample('bias', Normal(params['bias_mean0'], params['bias_std0']))
y_hat = (w * predictors).sum(dim=1) + b
# variance of distribution centered around y
sigma = pyro.sample('sigma', Normal(params['sigma_mean0'], params['sigma_std0']))
with pyro.iarange('data', len(predictors)):
pyro.sample('y', LogNormal(y_hat, sigma))
return y_hat
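
# Putting the three functions together, the generative process implemented above is:
#   z_i ~ Normal(z_mean0, z_std0)                            (latent confounder)
#   x_i | z_i ~ Bernoulli(logits = z_i @ W)                   (which actors are cast)
#   y_i | x_i, z_i ~ LogNormal(w . [x_i, z_i] + b, sigma)     (box office revenue)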
###Output
_____no_output_____
###Markdown
And this is our complete generative causal model.
###Code
def model(params):
"""The full generative causal model"""
z = f_z(params)
x = f_x(z, params)
y = f_y(x, z, params)
return {'z': z, 'x': x, 'y': y}
###Output
_____no_output_____
###Markdown
How to infer Z?If we could somehow measure all confounders that affect the choice of cast and the revenue generated by that cast, then we could condition on them and obtain unbiased estimates. This is the Ignorability assumption: the outcome is independent of treatment assignment (the choice of actors), so the average difference in outcomes between two groups of actors can only be attributable to the treatment (the actors). The problem here is that it's impossible to check if we've measured all confounders. So Yixin Wang and David M. Blei proposed an algorithm, "The Deconfounder", to sidestep the search for confounders, because it's impossible to exhaust them. They fit a latent variable model over the causes, use it to infer latent variables for each individual movie, and then use this inferred variable as a "substitute confounder", getting back to treating this as a regression problem with the inferred variables as extra data. So we use a probabilistic PCA model over actors to infer the latent variables that explain the distribution of actors.
###Code
def step_1_guide(params):
"""
Guide function for fitting P(Z) and P(X|Z) from data
"""
# Infer z hyperparams
qz_mean = pyro.param("qz_mean", params['z_mean0'])
qz_stddv = pyro.param("qz_stddv", params['z_std0'],
constraint=constraints.positive)
z = pyro.sample("z", Normal(loc = qz_mean, scale = qz_stddv))
# Infer w params
qw_mean = pyro.param("qw_mean", params["w_mean0"])
qw_stddv = pyro.param("qw_stddv", params["w_std0"],
constraint=constraints.positive)
w = pyro.sample("w", Normal(loc = qw_mean, scale = qw_stddv))
###Output
_____no_output_____
###Markdown
We use Pyro's _"guide"_ functionality to infer $P(Z)$ and $P(X|Z)$ using Stochastic Variational Inference, a scalable algorithm for approximating posterior distributions. For this, we define the above guide function. The primary goal, however, is to estimate the causal effects of the actors, which are the linear regression coefficients for each actor. For this we will write another guide function: one that optimizes the linear regression parameters.
###Code
def step_2_guide(params):
# Z and W are just sampled using param values optimized in previous step
z = pyro.sample("z", Normal(loc = params['qz_mean'], scale = params['qz_stddv']))
w = pyro.sample("w", Normal(loc = params['qw_mean'], scale = params['qw_stddv']))
# Infer regression params
# parameters of (w : weight)
w_loc = pyro.param('w_loc', params['weight_mean0'])
w_scale = pyro.param('w_scale', params['weight_std0'])
# parameters of (b : bias)
b_loc = pyro.param('b_loc', params['bias_mean0'])
b_scale = pyro.param('b_scale', params['bias_std0'])
# parameters of (sigma)
sigma_loc = pyro.param('sigma_loc', params['sigma_mean0'])
sigma_scale = pyro.param('sigma_scale', params['sigma_std0'])
# sample (w, b, sigma)
w = pyro.sample('weight', Normal(w_loc, w_scale))
b = pyro.sample('bias', Normal(b_loc, b_scale))
sigma = pyro.sample('sigma', Normal(sigma_loc, sigma_scale))
###Output
_____no_output_____
###Markdown
The primary difference between what Wang and Blei have done and what we do here is that we have implemented our beliefs as a single generative model: the factor model (probabilistic PCA) and the regression have been combined into one model over the DAG. It's important to understand that Wang et al. separate the estimation of 𝑍 from the estimation of the regression parameters. This is done because 𝑍, by construction, renders all causes (actors) independent of each other. Including the outcome (revenue) while learning the parameters of 𝑍 would make the revenue conditionally independent of the actors, which violates our primary assumption that actors are a cause of movie revenue. So they estimate 𝑍 first, then hardcode it into their regression problem. We handle this by running a two-step training process in Pyro, with two different guide functions over the same DAG, each optimizing different parameters conditional on certain variables. The first learns the posterior of 𝑍 and 𝑊 (a parameter of P(X|Z)) conditional on 𝑋. The second learns the regression parameters conditional on what we know about 𝑋, what we just learnt about 𝑊, and what we know about 𝑌. Once they are defined, we only need to train this generative model.
###Code
def training_step_1(x_data, params):
adam_params = {"lr": 0.0005}
optimizer = Adam(adam_params)
conditioned_on_x = pyro.condition(model, data = {"x" : x_data})
svi = SVI(conditioned_on_x, step_1_guide, optimizer, loss=Trace_ELBO())
print("\n Training Z marginal and W parameter marginal...")
n_steps = 2000
pyro.set_rng_seed(101)
# do gradient steps
pyro.get_param_store().clear()
for step in range(n_steps):
loss = svi.step(params)
if step % 100 == 0:
print("[iteration %04d] loss: %.4f" % (step + 1, loss/len(x_data)))
# grab the learned variational parameters
updated_params = {k: v for k, v in params.items()}
for name, value in pyro.get_param_store().items():
print("Updating value of hypermeter{}".format(name))
updated_params[name] = value.detach()
return updated_params
def training_step_2(x_data, y_data, params):
print("Training Bayesian regression parameters...")
pyro.set_rng_seed(101)
num_iterations = 1500
pyro.clear_param_store()
# Create a regression model
optim = Adam({"lr": 0.003})
conditioned_on_x_and_y = pyro.condition(model, data = {
"x": x_data,
"y" : y_data
})
svi = SVI(conditioned_on_x_and_y, step_2_guide, optim, loss=Trace_ELBO(), num_samples=1000)
for step in range(num_iterations):
loss = svi.step(params)
if step % 100 == 0:
print("[iteration %04d] loss: %.4f" % (step + 1, loss/len(x_data)))
updated_params = {k: v for k, v in params.items()}
for name, value in pyro.get_param_store().items():
print("Updating value of hypermeter: {}".format(name))
updated_params[name] = value.detach()
print("Training complete.")
return updated_params
def train_model():
num_datapoints, data_dim = x_train_tensors.shape
latent_dim = 30 # can be changed
params0 = {
'z_mean0': torch.zeros([num_datapoints, latent_dim]),
'z_std0' : torch.ones([num_datapoints, latent_dim]),
'w_mean0' : torch.zeros([latent_dim, data_dim]),
'w_std0' : torch.ones([latent_dim, data_dim]),
'weight_mean0': torch.zeros(data_dim + latent_dim),
'weight_std0': torch.ones(data_dim + latent_dim),
'bias_mean0': torch.tensor(0.),
'bias_std0': torch.tensor(1.),
'sigma_mean0' : torch.tensor(1.),
'sigma_std0' : torch.tensor(0.05)
} # These are our priors
params1 = training_step_1(x_train_tensors, params0)
params2 = training_step_2(x_train_tensors, y_train_tensors, params1)
return params1, params2
###Output
_____no_output_____
###Markdown
And now, we train the model to infer latent variable distributions and Bayesian Regression coefficients.
###Code
p1, p2 = train_model()
###Output
Training Z marginal and W parameter marginal...
[iteration 0001] loss: 304.3461
[iteration 0101] loss: 294.8547
[iteration 0201] loss: 290.4372
[iteration 0301] loss: 281.5974
[iteration 0401] loss: 274.5142
[iteration 0501] loss: 273.1243
[iteration 0601] loss: 261.1506
[iteration 0701] loss: 260.4133
[iteration 0801] loss: 250.2013
[iteration 0901] loss: 248.1334
[iteration 1001] loss: 250.6025
[iteration 1101] loss: 246.0507
[iteration 1201] loss: 240.1931
[iteration 1301] loss: 232.8412
[iteration 1401] loss: 229.0232
[iteration 1501] loss: 215.9541
[iteration 1601] loss: 209.8044
[iteration 1701] loss: 201.3092
[iteration 1801] loss: 187.6613
[iteration 1901] loss: 183.0304
Updating value of hyperparameter qz_mean
Updating value of hyperparameter qz_stddv
Updating value of hyperparameter qw_mean
Updating value of hyperparameter qw_stddv
Training Bayesian regression parameters...
[iteration 0001] loss: 258.9689
###Markdown
Causal effect of actors with and without confounding. Because we have implemented all our assumptions as one generative model, finding causal estimates of actors is as simple as calling the condition and do queries from Pyro. Causal effect of actors without adjusting for confounding: $E[Y|X=1] - E[Y|X=0]$. Causal effect of actors with confounding adjusted for (the do-operator): $E[Y|do(X=1)] - E[Y|do(X=0)]$
###Code
def condCausal(their_tensors, absent_tensors, movie_inds):
their_cond = pyro.condition(model, data = {"x" : their_tensors})
absent_cond = pyro.condition(model, data = {"x" : absent_tensors})
their_y = []
for _ in range(1000):
their_y.append(torch.sum(their_cond(p2)['y'][movie_inds]).item())
absent_y = []
for _ in range(1000):
absent_y.append(torch.sum(absent_cond(p2)['y'][movie_inds]).item())
their_mean = np.mean(their_y)
absent_mean = np.mean(absent_y)
causal_effect_noconf = their_mean - absent_mean
return causal_effect_noconf
def doCausal(their_tensors, absent_tensors, movie_inds):
# With confounding adjusted via the do-operator
their_do = pyro.do(model, data = {"x" : their_tensors})
absent_do = pyro.do(model, data = {"x" : absent_tensors})
their_do_y = []
for _ in range(1000):
their_do_y.append(torch.sum(their_do(p2)['y'][movie_inds]).item())
absent_do_y = []
for _ in range(1000):
absent_do_y.append(torch.sum(absent_do(p2)['y'][movie_inds]).item())
their_do_mean = np.mean(their_do_y)
absent_do_mean = np.mean(absent_do_y)
causal_effect_conf = their_do_mean - absent_do_mean
return causal_effect_conf
def causal_effects(actor):
# Get all movies where that actor is present
# Make him/her absent, and then get conditional expectation
actor_tensor = pd.DataFrame(x_train_tensors.numpy(), columns=actors[1:])
# All movies where actor is present
movie_inds = actor_tensor.index[actor_tensor[actor] == 1.0]
absent_movies = actor_tensor.copy()
absent_movies[actor] = 0
their_tensors = x_train_tensors
absent_tensors = torch.tensor(absent_movies.to_numpy(dtype = 'float32'))
cond_effect_mean = condCausal(their_tensors, absent_tensors, movie_inds)
do_effect_mean = doCausal(their_tensors, absent_tensors, movie_inds)
# print(their_tensors.shape, absent_tensors.shape)
diff_mean = cond_effect_mean - do_effect_mean
if diff_mean > 0:
status = "Overvalued"
else: status = "Undervalued"
print("Causal conditional effect: ", cond_effect_mean)
print("Causal Do effect: ", do_effect_mean)
print("Diff: ", diff_mean)
print("Status: ", status)
###Output
_____no_output_____
###Markdown
We can now call these queries on the actors for whom we'd like to see biased and unbiased estimates. Cobie Smulders and Judi Dench are the examples we set out to prove our point with, and our generative model does obtain debiased estimates of their causal effect on revenue.
###Code
causal_effects("Cobie Smulders")
causal_effects("Judi Dench")
###Output
Causal conditional effect: 77.29178205871585
Causal Do effect: 76.0234787902832
Diff: 1.2683032684326463
Status: Overvalued
|
Week_3/Quiz_1/W3_Q1.ipynb
|
###Markdown
Machine Learning Foundation Specialization University of Washington - Seattle Week 3 Quiz 1 Classification
###Code
## Created by Quincy Gu
## Created on 03/25/2020 01:43
## [email protected]
## Mayo Clinic College of Medicine and Sciences
###Output
_____no_output_____
|
Decision Tree update.ipynb
|
###Markdown
Assignment: Applying Decision Trees to the Amazon Fine Food Reviews Analysis. Note: the decision tree algorithm does not support missing values.
Amazon Fine Food Reviews Analysis. Data Source: https://www.kaggle.com/snap/amazon-fine-food-reviews
The Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon.
Number of reviews: 568,454. Number of users: 256,059. Number of products: 74,258. Timespan: Oct 1999 - Oct 2012. Number of Attributes/Columns in data: 10.
Attribute Information:
1. Id
2. ProductId - unique identifier for the product
3. UserId - unique identifier for the user
4. ProfileName
5. HelpfulnessNumerator - number of users who found the review helpful
6. HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not
7. Score - rating between 1 and 5
8. Time - timestamp for the review
9. Summary - brief summary of the review
10. Text - text of the review
1. Objective: Given a review, determine whether the review is positive (rating of 4 or 5) or negative (rating of 1 or 2). Use BoW, TF-IDF, Avg-Word2Vec and TF-IDF-Word2Vec to vectorise the reviews, apply the Decision Tree algorithm to the Amazon Fine Food Reviews, find the optimal depth using cross validation, and get the feature importances for the positive and negative classes.
###Code
# loading required libraries
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib
import sqlite3
import string
import gensim
import scipy
import nltk
import time
import seaborn as sns
from scipy import stats
from matplotlib import pyplot as plt
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, roc_auc_score, auc
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_recall_fscore_support as prf1
from sklearn.model_selection import KFold
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
1.1 Connecting SQL file
###Code
#Loading the data
con = sqlite3.connect('./final.sqlite')
data = pd.read_sql_query("""
SELECT *
FROM Reviews
""", con)
print(data.shape)
data.head()
###Output
(364171, 12)
###Markdown
1.2 Data Preprocessing
###Code
data.Score.value_counts()
# Data preprocessing was already done and stored in final.sqlite; that file is loaded above, so there is no need to repeat the preprocessing here.
###Output
_____no_output_____
###Markdown
1.3 Sorting the data
###Code
# Sorting the data according to the time-stamp
sorted_data = data.sort_values('Time', axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last')
sorted_data.head()
###Output
_____no_output_____
###Markdown
1.4 Mapping
###Code
def partition(x):
if x == 'positive':
return 1
return 0
#Preparing the filtered data
actualScore = sorted_data['Score']
positiveNegative = actualScore.map(partition)
sorted_data['Score'] = positiveNegative
sorted_data.head()
###Output
_____no_output_____
###Markdown
1.5 Taking First 150k rows
###Code
# Take the first 150,000 rows from the time-sorted dataframe
my_final = sorted_data[:150000]
print(my_final.shape)
my_final.head()
###Output
(150000, 12)
###Markdown
1.6 Splitting the data into train and test sets (70:30)
###Code
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_validate
x=my_final['CleanedText'].values
y=my_final['Score']
#Splitting data into train and test sets
x_train,x_test,y_train,y_test =train_test_split(x,y,test_size =0.3,random_state = 42)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
###Output
(105000,)
(45000,)
(105000,)
(45000,)
###Markdown
Techniques For Vectorization. Why do we have to convert text to vectors? By converting text to vectors we can use the full power of linear algebra, for example to look for a plane that separates the two classes. BoW and TF-IDF representations have very high dimensionality, so training the Decision Tree on them takes more time to compute. 2. BoW
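Before applying these vectorizers to the review text, here is a tiny toy illustration (made-up sentences, not the review data) of what BoW and TF-IDF produce: each text becomes a vector with one dimension per vocabulary word, which is why the dimensionality grows so quickly on a real corpus.

```python
# Toy illustration of BoW and TF-IDF (made-up sentences, not the review data).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

toy_corpus = ["good taste great product", "bad taste waste of money"]

bow = CountVectorizer()
print(bow.fit_transform(toy_corpus).toarray())    # raw word counts per sentence
print(bow.get_feature_names())                    # one column per vocabulary word

tfidf = TfidfVectorizer()
print(tfidf.fit_transform(toy_corpus).toarray())  # idf-weighted, L2-normalised rows
```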
###Code
#Bow
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
final_counts_Bow_tr= count_vect.fit_transform(x_train)# computing Bow
print("the type of count vectorizer ",type(final_counts_Bow_tr))
print("the shape of out text BOW vectorizer ",final_counts_Bow_tr.get_shape())
print("the number of unique words ", final_counts_Bow_tr.get_shape()[1])
final_counts_Bow_test= count_vect.transform(x_test)# computing Bow
print("the type of count vectorizer ",type(final_counts_Bow_test))
print("the shape of out text BOW vectorizer ",final_counts_Bow_test.get_shape())
###Output
the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>
the shape of out text BOW vectorizer (105000, 38300)
the number of unique words 38300
the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>
the shape of out text BOW vectorizer (45000, 38300)
###Markdown
2.1 Normalizing Data
###Code
# Data-preprocessing: Normalizing Data
from sklearn import preprocessing
standardized_data_train = preprocessing.normalize(final_counts_Bow_tr)
print(standardized_data_train.shape)
standardized_data_test = preprocessing.normalize(final_counts_Bow_test)
print(standardized_data_test.shape)
###Output
(105000, 38300)
(45000, 38300)
###Markdown
2.2 Replacing NaN values with 0's and applying the Decision Tree Algorithm
###Code
# Replacing nan values with 0's.
standardized_data_train = np.nan_to_num(standardized_data_train)
standardized_data_test = np.nan_to_num(standardized_data_test)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
param_grid = {'max_depth': [2,3,4,5,6,7,8,9,10,11,12]}
model = GridSearchCV(DecisionTreeClassifier(min_samples_leaf=5,criterion = 'gini',random_state = 100,class_weight ='balanced'), param_grid,scoring ='f1',cv=3 , n_jobs = -1,pre_dispatch=2)
model.fit(standardized_data_train, y_train)
print(model.best_score_, model.best_params_)
print("Model with best parameters :\n",model.best_estimator_)
print("Accuracy of the model : ",model.score(standardized_data_test, y_test))
a = model.best_params_
optimal_max_depth = a.get('max_depth')
results = model.cv_results_
results['mean_test_score']
max_depth=2,3,4,5,6,7,8,9,10,11,12
plt.plot(max_depth,results['mean_test_score'],marker='o')
plt.xlabel('max_depth')
plt.ylabel('f1score')
plt.title("F1score vs max_depth")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Heatmap for Plotting CV Scores
###Code
pvt =pd.pivot_table(pd.DataFrame(model.cv_results_),values='mean_test_score',index='param_max_depth')
import seaborn as sns
ax = sns.heatmap(pvt,annot=True,fmt="f")
# DecisionTreeClassifier with Optimal value of depth
clf = DecisionTreeClassifier(max_depth=optimal_max_depth,class_weight ='balanced')
clf.fit(standardized_data_train,y_train)
y_pred = clf.predict(standardized_data_test)
###Output
_____no_output_____
###Markdown
2.3 Confusion Matrix
###Code
cm_bow=confusion_matrix(y_test,y_pred)
print("Confusion Matrix:")
sns.heatmap(cm_bow, annot=True, fmt='d')
plt.show()
#finding out true negatives, false positives, false negatives and true positives
tn, fp, fn, tp = cm_bow.ravel()
print(" true negatives are {} \n false positives are {} \n false negatives are {}\n true positives are {} \n ".format(tn,fp,fn,tp))
###Output
true negatives are 4685
false positives are 1429
false negatives are 11524
true positives are 27362
###Markdown
2.4 Calculating Accuracy,Error on test data,Precision,Recall,Classification Report
###Code
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import classification_report
from sklearn.metrics import roc_auc_score
# evaluating accuracy
acc_bow = accuracy_score(y_test, y_pred) * 100
print('\nThe Test Accuracy of the Decision tree for maxdepth = %.3f is %f%%' % (optimal_max_depth, acc_bow))
# Error on test data
test_error_bow = 100-acc_bow
print("\nTest Error Decision tree for maxdepth is %f%%" % (test_error_bow))
# evaluating precision
precision_score = precision_score(y_test, y_pred)
print('\nThe Test Precision Decision tree for maxdepth = %.3f is %f' % (optimal_max_depth, precision_score))
# evaluating recall
recall_score = recall_score(y_test, y_pred)
print('\nThe Test Recall of the Decision tree for maxdepth = %.3f is %f' % (optimal_max_depth, recall_score))
# evaluating Classification report
classification_report = classification_report(y_test, y_pred)
print('\nThe Test classification report of the Decision tree for maxdepth \n\n ',(classification_report))
###Output
The Test Accuracy of the Decision tree for maxdepth = 11.000 is 71.215556%
Test Error Decision tree for maxdepth is 28.784444%
The Test Precision Decision tree for maxdepth = 11.000 is 0.950366
The Test Recall of the Decision tree for maxdepth = 11.000 is 0.703647
The Test classification report of the Decision tree for maxdepth
precision recall f1-score support
0 0.29 0.77 0.42 6114
1 0.95 0.70 0.81 38886
micro avg 0.71 0.71 0.71 45000
macro avg 0.62 0.73 0.61 45000
weighted avg 0.86 0.71 0.76 45000
###Markdown
2.5 Plotting roc_auc curve
###Code
y_pred_proba = clf.predict_proba(standardized_data_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="bow, AUC="+str(auc))
plt.plot([0,1],[0,1],'r--')
plt.title('ROC curve: Decision Tree')
plt.legend(loc='lower right')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
_____no_output_____
###Markdown
2.6 Top 25 words
###Code
words = count_vect.get_feature_names()
likelihood_df = pd.DataFrame(clf.feature_importances_.transpose(),columns=[ 'Score'],index=words)
top_25 = likelihood_df.sort_values(by='Score',ascending=False).iloc[:25]
top_25.reset_index(inplace=True)
top_words = top_25['index']
print(top_words)
from wordcloud import WordCloud
list_of_words_str = ' '.join(top_words)
wc = WordCloud(background_color="white", max_words=len(top_words),
width=900, height=600, collocations=False)
wc.generate(list_of_words_str)
print ("\n\nWord Cloud for Important features")
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
Word Cloud for Important features
###Markdown
2.7 Visualizing Decision tree By graph
###Code
from IPython.display import Image
from sklearn.tree import export_graphviz
from io import StringIO
from sklearn import tree
import pydotplus
target = ['negative','positive'] # class_names for export_graphviz must be in ascending order of the class labels (0 = negative, 1 = positive)
dot_data = StringIO()
export_graphviz(clf,max_depth=3,out_file=dot_data,filled=True,class_names=target,feature_names=count_vect.get_feature_names(),rounded=True,special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
# Show graph
Image(graph.create_png())
# Create PNG
graph.write_png("bag of words.png")
###Output
_____no_output_____
###Markdown
3. TF-IDF
###Code
#tf-idf
from sklearn.feature_extraction.text import TfidfVectorizer
tf_idf_vect = TfidfVectorizer()
final_counts_tfidf_tr= tf_idf_vect.fit_transform(x_train)
print("the type of count vectorizer ",type(final_counts_tfidf_tr))
print("the shape of out text tfidf vectorizer ",final_counts_tfidf_tr.get_shape())
print("the number of unique words ", final_counts_tfidf_tr.get_shape()[1])
final_counts_tfidf_test= tf_idf_vect.transform(x_test)
print("the type of count vectorizer ",type(final_counts_tfidf_test))
print("the shape of out text tfidf vectorizer ",final_counts_tfidf_test.get_shape())
print("the number of unique words ", final_counts_tfidf_test.get_shape()[1])
###Output
the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>
the shape of out text tfidf vectorizer (105000, 38300)
the number of unique words 38300
the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>
the shape of out text tfidf vectorizer (45000, 38300)
the number of unique words 38300
###Markdown
3.1 Normalizing Data
###Code
# Data-preprocessing: Normalizing Data
from sklearn import preprocessing
standardized_data_train = preprocessing.normalize(final_counts_tfidf_tr)
print(standardized_data_train.shape)
standardized_data_test = preprocessing.normalize(final_counts_tfidf_test)
print(standardized_data_test.shape)
###Output
(105000, 38300)
(45000, 38300)
###Markdown
3.2 Replacing nan values with 0's.
###Code
standardized_data_train = np.nan_to_num(standardized_data_train)
standardized_data_test = np.nan_to_num(standardized_data_test)
###Output
_____no_output_____
###Markdown
3.3 Applying Decision Tree Algorithm
###Code
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
param_grid = {'max_depth': [3,4,5,6,7,8,9,10,11]}
model = GridSearchCV(DecisionTreeClassifier(min_samples_leaf=5,criterion = 'gini',random_state = 100,class_weight ='balanced'), param_grid,scoring ='f1',cv=3 , n_jobs = -1,pre_dispatch=2)
model.fit(standardized_data_train, y_train)
print(model.best_score_, model.best_params_)
print("Model with best parameters :\n",model.best_estimator_)
print("Accuracy of the model : ",model.score(standardized_data_test, y_test))
a = model.best_params_
optimal_max_depth = a.get('max_depth')
results = model.cv_results_
results['mean_test_score']
max_depth=3,4,5,6,7,8,9,10,11
plt.plot(max_depth,results['mean_test_score'],marker='o')
plt.xlabel('max_depth')
plt.ylabel('f1score')
plt.title("F1score vs max_depth")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Heatmap for Plotting CV Scores
###Code
pvt =pd.pivot_table(pd.DataFrame(model.cv_results_),values='mean_test_score',index='param_max_depth')
import seaborn as sns
ax = sns.heatmap(pvt,annot=True,fmt="f")
# DecisionTreeClassifier with Optimal value of depth
clf = DecisionTreeClassifier(max_depth=optimal_max_depth,class_weight ='balanced')
clf.fit(standardized_data_train,y_train)
y_pred = clf.predict(standardized_data_test)
###Output
_____no_output_____
###Markdown
3.4 Confusion Matrix
###Code
cm_tfidf=confusion_matrix(y_test,y_pred)
print("Confusion Matrix:")
sns.heatmap(cm_tfidf, annot=True, fmt='d')
plt.show()
#finding out true negatives, false positives, false negatives and true positives
tn, fp, fn, tp = cm_tfidf.ravel()
print(" true negatives are {} \n false positives are {} \n false negatives are {}\n true positives are {} \n ".format(tn,fp,fn,tp))
###Output
true negatives are 4824
false positives are 1290
false negatives are 12195
true positives are 26691
###Markdown
3.5 Calculating Accuracy,Error on test data,Precision,Recall,Classification Report
###Code
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import classification_report
# evaluating accuracy
acc_tfidf = accuracy_score(y_test, y_pred) * 100
print('\nThe Test Accuracy of the Decision tree for maxdepth = %.3f is %f%%' % (optimal_max_depth, acc_tfidf))
# Error on test data
test_error_tfidf = 100-acc_tfidf
print("\nTest Error of the Decision tree for maxdepth %f%%" % (test_error_tfidf))
# evaluating precision
precision_score = precision_score(y_test, y_pred)
print('\nThe Test Precision of the Decision tree for maxdepth is = %.3f is %f' % (optimal_max_depth, precision_score))
# evaluating recall
recall_score = recall_score(y_test, y_pred)
print('\nThe Test Recall of the Decision tree for maxdepth is = %.3f is %f' % (optimal_max_depth, recall_score))
# evaluating Classification report
classification_report = classification_report(y_test, y_pred)
print('\nThe Test classification report of the Decision tree for maxdepth is \n\n ',(classification_report))
###Output
The Test Accuracy of the Decision tree for maxdepth = 11.000 is 70.033333%
Test Error of the Decision tree for maxdepth 29.966667%
The Test Precision of the Decision tree for maxdepth is = 11.000 is 0.953897
The Test Recall of the Decision tree for maxdepth is = 11.000 is 0.686391
The Test classification report of the Decision tree for maxdepth is
precision recall f1-score support
0 0.28 0.79 0.42 6114
1 0.95 0.69 0.80 38886
micro avg 0.70 0.70 0.70 45000
macro avg 0.62 0.74 0.61 45000
weighted avg 0.86 0.70 0.75 45000
###Markdown
3.6 Plotting roc_auc curve
###Code
y_pred_proba = clf.predict_proba(standardized_data_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="tfidf, AUC="+str(auc))
plt.plot([0,1],[0,1],'r--')
plt.title('ROC curve: Decision Tree')
plt.legend(loc='lower right')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
_____no_output_____
###Markdown
3.7 Top 25 words
###Code
words = tf_idf_vect.get_feature_names()
likelihood_df = pd.DataFrame(clf.feature_importances_.transpose(),columns=[ 'Score'],index=words)
top_25 = likelihood_df.sort_values(by='Score',ascending=False).iloc[:25]
top_25.reset_index(inplace=True)
top_words = top_25['index']
print(top_words)
from wordcloud import WordCloud
list_of_words_str = ' '.join(top_words)
wc = WordCloud(background_color="white", max_words=len(top_words),
width=900, height=600, collocations=False)
wc.generate(list_of_words_str)
print ("\n\nWord Cloud for Important features")
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
Word Cloud for Important features
###Markdown
3.8 Visualizing Decision tree By graph
###Code
from IPython.display import Image
from sklearn.tree import export_graphviz
from io import StringIO
from sklearn import tree
import pydotplus
target = ['negative','positive'] # class_names must be in ascending order of the class labels (0 = negative, 1 = positive)
dot_data = StringIO()
export_graphviz(clf,max_depth=3,out_file=dot_data,filled=True,class_names=target,feature_names=tf_idf_vect.get_feature_names(),rounded=True,special_characters=True)
# Draw graph
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
#show graph
Image(graph.create_png())
# Create PNG
graph.write_png("tfidf.png")
###Output
_____no_output_____
###Markdown
4. WORD2VEC
###Code
from gensim.models import Word2Vec
# List of sentence in X_train text
sent_of_train=[]
for sent in x_train:
sent_of_train.append(sent.split())
# List of sentence in X_est text
sent_of_test=[]
for sent in x_test:
sent_of_test.append(sent.split())
# Train your own Word2Vec model using your own train text corpus
# min_count = 5 considers only words that occurred at least 5 times
w2v_model=Word2Vec(sent_of_train,min_count=5,size=50, workers=4)
w2v_words = list(w2v_model.wv.vocab)
print("number of words that occured minimum 5 times ",len(w2v_words))
###Output
number of words that occurred at least 5 times 12829
###Markdown
5. Avg Word2Vec
###Code
# compute average word2vec for each review for X_train .
train_vectors = [];
for sent in sent_of_train:
sent_vec = np.zeros(50)
cnt_words =0;
for word in sent: #
if word in w2v_words:
vec = w2v_model.wv[word]
sent_vec += vec
cnt_words += 1
if cnt_words != 0:
sent_vec /= cnt_words
train_vectors.append(sent_vec)
# compute average word2vec for each review for X_test .
test_vectors = [];
for sent in sent_of_test:
sent_vec = np.zeros(50)
cnt_words =0;
for word in sent: #
if word in w2v_words:
vec = w2v_model.wv[word]
sent_vec += vec
cnt_words += 1
if cnt_words != 0:
sent_vec /= cnt_words
test_vectors.append(sent_vec)
###Output
_____no_output_____
###Markdown
5.1 Replacing nan values with 0's.
###Code
# Replacing nan values with 0's.
train_vectors = np.nan_to_num(train_vectors)
test_vectors = np.nan_to_num(test_vectors)
###Output
_____no_output_____
###Markdown
5.2 Standardizing Data
###Code
# Data-preprocessing: Standardizing the data
from sklearn.preprocessing import StandardScaler
standardized_data_train = StandardScaler().fit_transform(train_vectors)
print(standardized_data_train.shape)
standardized_data_test = StandardScaler().fit_transform(test_vectors)
print(standardized_data_test.shape)
###Output
(105000, 50)
(45000, 50)
###Markdown
5.3 Applying Decision Tree Algorithm
###Code
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
param_grid = {'max_depth': [3,4,5,6,7,8,9,10,11]}
model = GridSearchCV(DecisionTreeClassifier(min_samples_leaf=5,criterion = 'gini',random_state = 100,class_weight ='balanced'), param_grid,scoring ='f1',cv=3 , n_jobs = -1,pre_dispatch=2)
model.fit(standardized_data_train, y_train)
print(model.best_score_, model.best_params_)
print("Model with best parameters :\n",model.best_estimator_)
print("Accuracy of the model : ",model.score(standardized_data_test, y_test))
a = model.best_params_
optimal_max_depth = a.get('max_depth')
results = model.cv_results_
results['mean_test_score']
max_depth=3,4,5,6,7,8,9,10,11
plt.plot(max_depth,results['mean_test_score'],marker='o')
plt.xlabel('max_depth')
plt.ylabel('f1score')
plt.title("F1score vs max_depth")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Heatmap for plotting CV Scores
###Code
pvt =pd.pivot_table(pd.DataFrame(model.cv_results_),values='mean_test_score',index='param_max_depth')
import seaborn as sns
ax = sns.heatmap(pvt,annot=True,fmt="f")
# DecisionTreeClassifier with Optimal value of depth
clf = DecisionTreeClassifier(max_depth=optimal_max_depth,class_weight ='balanced')
clf.fit(standardized_data_train,y_train)
y_pred = clf.predict(standardized_data_test)
###Output
_____no_output_____
###Markdown
5.4 Confusion Matrix
###Code
cm_avgw2v=confusion_matrix(y_test,y_pred)
print("Confusion Matrix:")
sns.heatmap(cm_avgw2v, annot=True, fmt='d')
plt.show()
#finding out true negatives, false positives, false negatives and true positives
tn, fp, fn, tp = cm_avgw2v.ravel()
print(" true negatives are {} \n false positives are {} \n false negatives are {}\n true positives are {} \n ".format(tn,fp,fn,tp))
###Output
true negatives are 4384
false positives are 1730
false negatives are 8512
true positives are 30374
###Markdown
5.5 Calculating Accuracy,Error on test data,Precision,Recall,Classification Report
###Code
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import classification_report
# evaluating accuracy
acc_avgw2v = accuracy_score(y_test, y_pred) * 100
print('\nThe Test Accuracy of the Decision tree for maxdepth is = %.3f is %f%%' % (optimal_max_depth, acc_avgw2v))
# Error on test data
test_error_avgw2v = 100-acc_avgw2v
print("\nTest Error of the Decision tree for maxdepth is %f%%" % (test_error_avgw2v))
# evaluating precision
precision_score = precision_score(y_test, y_pred)
print('\nThe Test Precision of the Decision tree for maxdepth is = %.3f is %f' % (optimal_max_depth, precision_score))
# evaluating recall
recall_score = recall_score(y_test, y_pred)
print('\nThe Test Recall of the Decision tree for maxdepth is = %.3f is %f' % (optimal_max_depth, recall_score))
# evaluating Classification report
classification_report = classification_report(y_test, y_pred)
print('\nThe Test classification report of the Decision tree for maxdepth is \n\n ',(classification_report))
###Output
The Test Accuracy of the Decision tree for maxdepth is = 11.000 is 77.240000%
Test Error of the Decision tree for maxdepth is 22.760000%
The Test Precision of the Decision tree for maxdepth is = 11.000 is 0.946113
The Test Recall of the Decision tree for maxdepth is = 11.000 is 0.781104
The Test classification report of the Decision tree for maxdepth is
precision recall f1-score support
0 0.34 0.72 0.46 6114
1 0.95 0.78 0.86 38886
micro avg 0.77 0.77 0.77 45000
macro avg 0.64 0.75 0.66 45000
weighted avg 0.86 0.77 0.80 45000
###Markdown
5.6 Plotting roc_auc curve
###Code
y_pred_proba = clf.predict_proba(standardized_data_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="Avgword2vec, AUC="+str(auc))
plt.plot([0,1],[0,1],'r--')
plt.title('ROC curve: Decision Tree')
plt.legend(loc='lower right')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
_____no_output_____
###Markdown
5.7 Visualizing Decision tree By graph
###Code
from IPython.display import Image
from sklearn.tree import export_graphviz
from io import StringIO
from sklearn import tree
import pydotplus
target = ['negative','positive'] # class_names must be in ascending order of the class labels (0 = negative, 1 = positive)
dot_data = StringIO()
export_graphviz(clf,max_depth=3,out_file=dot_data,filled=True,class_names=target,rounded=True,special_characters=True)
# Draw graph
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
#show graph
Image(graph.create_png())
# Create PNG
graph.write_png("Avgword2vec.png")
###Output
_____no_output_____
###Markdown
6. TFIDF-Word2Vec
###Code
#tf-idf weighted w2v
from sklearn.feature_extraction.text import TfidfVectorizer
tfidfw2v_vect = TfidfVectorizer()
final_counts_tfidfw2v_train= tfidfw2v_vect.fit_transform(x_train)
print(type(final_counts_tfidfw2v_train))
print(final_counts_tfidfw2v_train.shape)
final_counts_tfidfw2v_test= tfidfw2v_vect.transform(x_test)
print(type(final_counts_tfidfw2v_test))
print(final_counts_tfidfw2v_test.shape)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidfw2v_vect.get_feature_names(), list(tfidfw2v_vect.idf_)))
# TF-IDF weighted Word2Vec
tfidf_feat = tfidfw2v_vect.get_feature_names() # tfidf words/col-names
# final_tf_idf is the sparse matrix with row= sentence, col=word and cell_val = tfidf
tfidf_sent_vectors = []; # the tfidf-w2v for each sentence/review is stored in this list
row=0;
for sent in sent_of_train: # for each review/sentence
sent_vec = np.zeros(50) # word vectors have length 50
weight_sum =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words:
vec = w2v_model.wv[word]
# tf_idf = tf_idf_matrix[row, tfidf_feat.index(word)]
# to reduce the computation we use:
# dictionary[word] = idf value of the word in the whole corpus
# sent.count(word)/len(sent) = tf value of the word in this review
tf_idf = dictionary[word]*(sent.count(word)/len(sent))
sent_vec += (vec * tf_idf)
weight_sum += tf_idf
if weight_sum != 0:
sent_vec /= weight_sum
tfidf_sent_vectors.append(sent_vec)
row += 1
#Test case
tfidf_sent_vectors1 = []; # the tfidf-w2v for each sentence/review is stored in this list
row=0;
for sent in sent_of_test: # for each review/sentence
sent_vec = np.zeros(50) # word vectors have length 50
weight_sum =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words:
vec = w2v_model.wv[word]
# tf_idf = tf_idf_matrix[row, tfidf_feat.index(word)]
# to reduce the computation we use:
# dictionary[word] = idf value of the word in the whole corpus
# sent.count(word)/len(sent) = tf value of the word in this review
tf_idf = dictionary[word]*(sent.count(word)/len(sent))
sent_vec += (vec * tf_idf)
weight_sum += tf_idf
if weight_sum != 0:
sent_vec /= weight_sum
tfidf_sent_vectors1.append(sent_vec)
row += 1
print(len(tfidf_sent_vectors))
print(len(tfidf_sent_vectors1))
###Output
105000
45000
###Markdown
6.1 Replacing nan values with 0's.
###Code
# Replacing nan values with 0's.
tfidf_sent_vectors = np.nan_to_num(tfidf_sent_vectors)
tfidf_sent_vectors1 = np.nan_to_num(tfidf_sent_vectors1)
###Output
_____no_output_____
###Markdown
6.2 Standardizing the data
###Code
# Data-preprocessing: Standardizing the data
from sklearn.preprocessing import StandardScaler
standardized_data_train = StandardScaler().fit_transform(tfidf_sent_vectors)
print(standardized_data_train.shape)
standardized_data_test = StandardScaler().fit_transform(tfidf_sent_vectors1)
print(standardized_data_test.shape)
###Output
(105000, 50)
(45000, 50)
###Markdown
6.3 Applying Decision Tree Algorithm
###Code
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
param_grid = {'max_depth': [3,4,5,6,7,8,9,10,11]}
model = GridSearchCV(DecisionTreeClassifier(min_samples_leaf=5,criterion = 'gini',random_state = 100,class_weight ='balanced'), param_grid,scoring = 'f1', cv=10 , n_jobs = -1,pre_dispatch=2)
model.fit(standardized_data_train, y_train)
print(model.best_score_, model.best_params_)
print("Model with best parameters :\n",model.best_estimator_)
print("Accuracy of the model : ",model.score(standardized_data_test, y_test))
a = model.best_params_
optimal_max_depth = a.get('max_depth')
results = model.cv_results_
results['mean_test_score']
max_depth=3,4,5,6,7,8,9,10,11
plt.plot(max_depth,results['mean_test_score'],marker='o')
plt.xlabel('max_depth')
plt.ylabel('f1score')
plt.title("F1score vs max_depth")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Heatmap for plotting CV Scores
###Code
pvt =pd.pivot_table(pd.DataFrame(model.cv_results_),values='mean_test_score',index='param_max_depth')
import seaborn as sns
ax = sns.heatmap(pvt,annot=True,fmt="f")
# DecisionTreeClassifier with Optimal value of depth
clf = DecisionTreeClassifier(max_depth=optimal_max_depth,class_weight ='balanced')
clf.fit(standardized_data_train,y_train)
y_pred = clf.predict(standardized_data_test) # predictions from the TF-IDF-W2V model, used by the metrics below
###Output
_____no_output_____
###Markdown
6.4 Confusion Matrix
###Code
cm_tfidfw2v=confusion_matrix(y_test,y_pred)
print("Confusion Matrix:")
sns.heatmap(cm_tfidfw2v, annot=True, fmt='d')
plt.show()
#finding out true negatives, false positives, false negatives and true positives
tn, fp, fn, tp = cm_tfidfw2v.ravel()
print(" true negatives are {} \n false positives are {} \n false negatives are {}\n true positives are {} \n ".format(tn,fp,fn,tp))
###Output
true negatives are 4384
false positives are 1730
false negatives are 8512
true positives are 30374
###Markdown
6.5 Calculating Accuracy,Error on test data,Precision,Recall,Classification Report
###Code
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import classification_report
# evaluating accuracy
acc_tfidfw2v = accuracy_score(y_test, y_pred) * 100
print('\nThe Test Accuracy of the Decision tree for maxdepth is = %.3f is %f%%' % (optimal_max_depth, acc_tfidfw2v))
# Error on test data
test_error_tfidfw2v = 100-acc_tfidfw2v
print("\nTest Error of the Decision tree for maxdepth is %f%%" % (test_error_tfidfw2v))
# evaluating precision
precision_score = precision_score(y_test, y_pred)
print('\nThe Test Precision of the Decision tree for maxdepth is = %.3f is %f' % (optimal_max_depth, precision_score))
# evaluating recall
recall_score = recall_score(y_test, y_pred)
print('\nThe Test Recall of the Decision tree for maxdepth is = %.3f is %f' % (optimal_max_depth, recall_score))
# evaluating Classification report
classification_report = classification_report(y_test, y_pred)
print('\nThe Test classification report of the Decision tree for maxdepth is \n\n ',(classification_report))
###Output
The Test Accuracy of the Decision tree for maxdepth is = 11.000 is 77.240000%
Test Error of the Decision tree for maxdepth is 22.760000%
The Test Precision of the Decision tree for maxdepth is = 11.000 is 0.946113
The Test Recall of the Decision tree for maxdepth is = 11.000 is 0.781104
The Test classification report of the Decision tree for maxdepth is
precision recall f1-score support
0 0.34 0.72 0.46 6114
1 0.95 0.78 0.86 38886
micro avg 0.77 0.77 0.77 45000
macro avg 0.64 0.75 0.66 45000
weighted avg 0.86 0.77 0.80 45000
###Markdown
6.6 Visualizing Decision tree By graph
###Code
from IPython.display import Image
from sklearn.tree import export_graphviz
from io import StringIO
from sklearn import tree
import pydotplus
target = ['negative','positive'] # class_names must be in ascending order of the class labels (0 = negative, 1 = positive)
dot_data = StringIO()
export_graphviz(clf,max_depth=3,out_file=dot_data,filled=True,class_names=target,rounded=True,special_characters=True)
# Draw graph
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
# Show graph
Image(graph.create_png())
# Create PNG
graph.write_png("tfidfword2vec.png")
###Output
_____no_output_____
|
deeplearning/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb
|
###Markdown
Deep Learning & Art: Neural Style TransferWelcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576). **In this assignment, you will:**- Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values!
###Code
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Problem StatementNeural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).Let's see how you can do this. 2 - Transfer LearningNeural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds.
###Code
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
###Output
{'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>, 'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>, 'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>, 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>, 'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>, 'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>, 'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>, 'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>, 'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>, 'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>, 'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>, 'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>}
###Markdown
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this:
```python
model["input"].assign(image)
```
This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows:
```python
sess.run(model["conv4_2"])
```
3 - Neural Style Transfer We will build the NST algorithm in three steps:
- Build the content cost function $J_{content}(C,G)$
- Build the style cost function $J_{style}(S,G)$
- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$.
3.1 - Computing the content cost. In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
###Code
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
###Output
_____no_output_____
###Markdown
The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds. **3.1.1 - How do you ensure the generated image G matches the content of the image C?** As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes. We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network, neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.) So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be a $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)**Exercise:** Compute the "content cost" using TensorFlow. **Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll a_C and a_G as explained in the picture above - If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).
3. Compute the content cost: - If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract).
###Code
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
"""
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.transpose(tf.reshape(a_C, [n_H * n_W, n_C]))
a_G_unrolled = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
# compute the cost with tensorflow (≈1 line)
J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled))) / (4*n_H*n_W*n_C)
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
###Output
J_content = 6.76559
###Markdown
**Expected Output**: **J_content** 6.76559 **What you should remember**:- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. 3.2 - Computing the style costFor our running example, we will use the following style image:
###Code
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
###Output
_____no_output_____
###Markdown
This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*. Let's see how you can now define a "style" cost function $J_{style}(S,G)$. 3.2.1 - Style matrix. The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: if they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context. In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix by its transpose: the result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$. One important property of the Gram matrix is that the diagonal elements such as $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture. By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image. **Exercise**: Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: the Gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
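As a quick numerical sanity check of this definition (a small numpy illustration, separate from the graded TensorFlow exercise that follows), the Gram matrix is just the table of dot products between the unrolled activation vectors of the filters:

```python
# Small numpy illustration of the Gram matrix (separate from the graded exercise).
import numpy as np

n_C, n_H, n_W = 3, 2, 2
A = np.random.randn(n_C, n_H * n_W)   # each row = one filter's unrolled activations

G = A @ A.T                           # Gram matrix, shape (n_C, n_C)
print(np.isclose(G[0, 1], np.dot(A[0], A[1])))  # G_ij is the dot product of filters i and j
print(np.allclose(G, G.T))                      # and the Gram matrix is symmetric
```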
###Code
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
###Output
GA = [[ 6.42230511 -4.42912197 -2.09668207]
[ -4.42912197 19.46583748 19.56387138]
[ -2.09668207 19.56387138 20.6864624 ]]
###Markdown
**Expected Output**: **GA** [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] 3.2.2 - Style cost After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network. **Exercise**: Compute the style cost for a single layer. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from the hidden layer activations a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above. - You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.3. Compute the Style matrix of the images S and G. (Use the function you had previously written.) 4. Compute the Style cost: - You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful.
###Code
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
"""
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))
a_G = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
J_style_layer = tf.reduce_sum(tf.square(tf.subtract(GS, GG))) / (4 * (n_C**2) * (n_H*n_W)**2)
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
###Output
J_style_layer = 9.19028
###Markdown
**Expected Output**: **J_style_layer** 9.19028 3.2.3 Style WeightsSo far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:
###Code
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
###Output
_____no_output_____
###Markdown
You can combine the style costs for different layers as follows:$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. We've implemented a compute_style_cost(...) function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing. <!-- 2. Loop over (layer_name, coeff) from STYLE_LAYERS: a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"] b. Get the style of the style image from the current layer by running the session on the tensor "out" c. Get a tensor representing the style of the generated image from the current layer. It is just "out". d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.!-->
###Code
def compute_style_cost(model, STYLE_LAYERS):
"""
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
"""
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
###Output
_____no_output_____
###Markdown
**Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.<!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers!-->**What you should remember**:- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$. 3.3 - Defining the total cost to optimize Finally, let's create a cost function that minimizes both the style and the content cost. The formula is: $$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$**Exercise**: Implement the total cost function which includes both the content cost and the style cost.
###Code
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
"""
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
"""
### START CODE HERE ### (≈1 line)
J = alpha*J_content+beta*J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
###Output
J = 35.34667875478276
###Markdown
**Expected Output**: **J** 35.34667875478276 **What you should remember**:- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer!Here's what the program will have to do:1. Create an Interactive Session2. Load the content image 3. Load the style image4. Randomly initialize the image to be generated 5. Load the VGG-19 model 6. Build the TensorFlow graph: - Run the content image through the VGG-19 model and compute the content cost - Run the style image through the VGG-19 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step. Let's go through the individual steps in detail. You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code. Let's start the interactive session.
###Code
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
###Output
_____no_output_____
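###Markdown
As a small illustration (a sketch added for this write-up, not part of the original assignment) of why the interactive session is convenient: once it is installed as the default session, you can call `.eval()` on a tensor directly instead of passing it through `sess.run(...)`.
###Code
# With the InteractiveSession above installed as the default session,
# a tensor can be evaluated directly with .eval() -- no explicit sess.run() call is needed.
c = tf.constant(2.0) * tf.constant(3.0)
print(c.eval())   # equivalent to print(sess.run(c))
###Output
_____no_output_____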
###Markdown
Let's load, reshape, and normalize our "content" image (the Louvre museum picture):
###Code
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
###Output
_____no_output_____
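###Markdown
`reshape_and_normalize_image` comes from `nst_utils.py` and is not shown in this notebook. As a hedged sketch (the exact constants live in `nst_utils.py` and may differ), such a helper typically adds a batch dimension and subtracts the channel means the VGG network was trained with:
###Code
# Sketch of a typical reshape-and-normalize helper (assumed constants; see nst_utils.py for the real one).
import numpy as np
VGG_MEANS_SKETCH = np.array([123.68, 116.779, 103.939]).reshape((1, 1, 1, 3))  # commonly used ImageNet channel means
def reshape_and_normalize_image_sketch(image):
    image = np.reshape(image, ((1,) + image.shape))  # (n_H, n_W, 3) -> (1, n_H, n_W, 3)
    return image - VGG_MEANS_SKETCH                  # center the image around the VGG training means
###Output
_____no_output_____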
###Markdown
Let's load, reshape and normalize our "style" image (Claude Monet's painting):
###Code
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
###Output
_____no_output_____
###Markdown
Now, we initialize the "generated" image as a noisy image created from the content_image. Initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image helps the content of the "generated" image match the content of the "content" image more rapidly. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
###Code
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
###Output
_____no_output_____
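###Markdown
For reference, a hedged sketch of what a noise-initialization helper like `generate_noise_image` typically does (the actual implementation and constants are in `nst_utils.py`): sample uniform noise and blend it with the content image, so the starting point is mostly noise but still correlated with the content.
###Code
# Sketch of a noise-image generator (assumed noise range and ratio; see nst_utils.py for the actual helper).
import numpy as np
def generate_noise_image_sketch(content_image, noise_ratio=0.6):
    # content_image has shape (1, n_H, n_W, 3) after reshape_and_normalize_image
    noise_image = np.random.uniform(-20, 20, content_image.shape).astype('float32')
    # Weighted blend: mostly noise, slightly correlated with the content image
    return noise_image * noise_ratio + content_image * (1 - noise_ratio)
###Output
_____no_output_____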
###Markdown
Next, as explained in part (2), let's load the VGG-19 model.
###Code
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
###Output
_____no_output_____
###Markdown
To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:1. Assign the content image to be the input to the VGG model.2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".3. Set a_G to be the tensor giving the hidden layer activation for the same layer. 4. Compute the content cost using a_C and a_G.
###Code
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
###Output
_____no_output_____
###Markdown
**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
###Code
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
###Output
_____no_output_____
###Markdown
**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`.
###Code
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
###Code
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
###Output
_____no_output_____
###Markdown
**Exercise**: Implement the model_nn() function, which initializes the variables of the TensorFlow graph, assigns the input image (initial generated image) as the input of the VGG-19 model, and runs the train_step for a large number of steps.
###Code
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iteration.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
###Output
_____no_output_____
###Markdown
Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you should start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
###Code
model_nn(sess, generated_image)
###Output
Iteration 0 :
total cost = 5.05035e+09
content cost = 7877.67
style cost = 1.26257e+08
|
06_Kaggle_HomeCredit/code/16_LigthGBM_MaybeNewFeatures.ipynb
|
###Markdown
Home Credit Default RiskCan you predict how capable each applicant is of repaying a loan? Many people struggle to get loans due to **insufficient or non-existent credit histories**. And, unfortunately, this population is often taken advantage of by untrustworthy lenders.Home Credit strives to broaden financial inclusion for the **unbanked population by providing a positive and safe borrowing experience**. In order to make sure this underserved population has a positive loan experience, Home Credit makes use of a variety of alternative data--including telco and transactional information--to predict their clients' repayment abilities.While Home Credit is currently using various statistical and machine learning methods to make these predictions, they're challenging Kagglers to help them unlock the full potential of their data. Doing so will ensure that clients capable of repayment are not rejected and that loans are given with a principal, maturity, and repayment calendar that will empower their clients to be successful. **Submissions are evaluated on area under the ROC curve between the predicted probability and the observed target.** Dataset
###Code
# #Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import statsmodels
import pandas_profiling
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import os
import sys
import time
import random
import requests
import datetime
import missingno as msno
import math
import sys
import gc
import os
# #sklearn
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestRegressor
# #sklearn - preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
# #sklearn - metrics
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
from sklearn.metrics import roc_auc_score
# #XGBoost & LightGBM
import xgboost as xgb
import lightgbm as lgb
# #Missing value imputation
from fancyimpute import KNN, MICE
# #Hyperparameter Optimization
from hyperopt.pyll.base import scope
from hyperopt.pyll.stochastic import sample
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
# #MongoDB for Model Parameter Storage
from pymongo import MongoClient
pd.options.display.max_columns = 150
###Output
_____no_output_____
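###Markdown
Since submissions are scored by the area under the ROC curve on predicted probabilities, here is a minimal example of the metric we will be optimizing (toy values only, not competition data):
###Code
# Toy example of the competition metric: ROC AUC between predicted probabilities and binary targets.
toy_y_true = [0, 0, 1, 1, 0, 1]
toy_y_prob = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(roc_auc_score(toy_y_true, toy_y_prob))
###Output
_____no_output_____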
###Markdown
Data Dictionary
###Code
!ls -l ../data/
###Output
total 2621364
-rw-r--r-- 1 karti 197609 26567651 May 17 18:06 application_test.csv
-rw-r--r-- 1 karti 197609 166133370 May 17 18:06 application_train.csv
-rw-r--r-- 1 karti 197609 170016717 May 17 18:08 bureau.csv
-rw-r--r-- 1 karti 197609 375592889 May 17 18:08 bureau_balance.csv
-rw-r--r-- 1 karti 197609 424582605 May 17 18:10 credit_card_balance.csv
-rw-r--r-- 1 karti 197609 37436 May 30 00:41 HomeCredit_columns_description.csv
-rw-r--r-- 1 karti 197609 723118349 May 17 18:13 installments_payments.csv
-rw-r--r-- 1 karti 197609 392703158 May 17 18:14 POS_CASH_balance.csv
-rw-r--r-- 1 karti 197609 404973293 May 17 18:15 previous_application.csv
-rw-r--r-- 1 karti 197609 536202 May 17 18:06 sample_submission.csv
###Markdown
- application_{train|test}.csvThis is the main table, broken into two files for Train (**with TARGET**) and Test (without TARGET).Static data for all applications. **One row represents one loan in our data sample.**Observations:* Each row is unique------ bureau.csvAll clients' previous credits provided by other financial institutions that were reported to Credit Bureau (for clients who have a loan in our sample).For every loan in our sample, there are as many rows as the number of credits the client had in Credit Bureau before the application date.- bureau_balance.csvMonthly balances of previous credits in Credit Bureau.This table has one row for each month of history of every previous credit reported to Credit Bureau – i.e. the table has (loans in sample × number of relative previous credits × number of months where we have some history observable for the previous credits) rows.- POS_CASH_balance.csvMonthly balance snapshots of previous POS (point of sales) and cash loans that the applicant had with Home Credit.This table has one row for each month of history of every previous credit in Home Credit (consumer credit and cash loans) related to loans in our sample – i.e. the table has (loans in sample × number of relative previous credits × number of months in which we have some history observable for the previous credits) rows.- credit_card_balance.csvMonthly balance snapshots of previous credit cards that the applicant has with Home Credit.This table has one row for each month of history of every previous credit in Home Credit (consumer credit and cash loans) related to loans in our sample – i.e. the table has (loans in sample × number of relative previous credit cards × number of months where we have some history observable for the previous credit card) rows.------ previous_application.csvAll previous applications for Home Credit loans of clients who have loans in our sample.There is one row for each previous application related to loans in our data sample.------ installments_payments.csvRepayment history for the previously disbursed credits in Home Credit related to the loans in our sample.There is a) one row for every payment that was made plus b) one row for each missed payment.One row is equivalent to one payment of one installment OR one installment corresponding to one payment of one previous Home Credit credit related to loans in our sample.- HomeCredit_columns_description.csvThis file contains descriptions for the columns in the various data files.  Data Pre-processing
###Code
df_application_train_original = pd.read_csv("../data/application_train.csv")
df_application_test_original = pd.read_csv("../data/application_test.csv")
df_bureau_original = pd.read_csv("../data/bureau.csv")
df_bureau_balance_original = pd.read_csv("../data/bureau_balance.csv")
df_credit_card_balance_original = pd.read_csv("../data/credit_card_balance.csv")
df_installments_payments_original = pd.read_csv("../data/installments_payments.csv")
df_pos_cash_balance_original = pd.read_csv("../data/POS_CASH_balance.csv")
df_previous_application_original = pd.read_csv("../data/previous_application.csv")
df_application_train = pd.read_csv("../data/application_train.csv")
df_application_test = pd.read_csv("../data/application_test.csv")
df_bureau = pd.read_csv("../data/bureau.csv")
df_bureau_balance = pd.read_csv("../data/bureau_balance.csv")
df_credit_card_balance = pd.read_csv("../data/credit_card_balance.csv")
df_installments_payments = pd.read_csv("../data/installments_payments.csv")
df_pos_cash_balance = pd.read_csv("../data/POS_CASH_balance.csv")
df_previous_application = pd.read_csv("../data/previous_application.csv")
print("df_application_train: ", df_application_train.shape)
print("df_application_test: ", df_application_test.shape)
print("df_bureau: ", df_bureau.shape)
print("df_bureau_balance: ", df_bureau_balance.shape)
print("df_credit_card_balance: ", df_credit_card_balance.shape)
print("df_installments_payments: ", df_installments_payments.shape)
print("df_pos_cash_balance: ", df_pos_cash_balance.shape)
print("df_previous_application: ", df_previous_application.shape)
gc.collect()
###Output
_____no_output_____
###Markdown
Feature: df_installments_payments
###Code
df_installments_payments['K_PREV_INSTALLMENT_PAYMENT_COUNT'] = df_installments_payments.groupby('SK_ID_CURR')['SK_ID_PREV'].transform('count')
# #Note: I haven't added the K_NUM_INSTALMENT_NUMBER_SUM feature since the K_NUM_INSTALMENT_NUMBER_SUM_TO_COUNT_RATIO
# #performed better i.e. had a higher value on the feature importance score
df_installments_payments['K_NUM_INSTALMENT_NUMBER_SUM'] = df_installments_payments.groupby('SK_ID_CURR')['NUM_INSTALMENT_NUMBER'].transform(np.sum)
df_installments_payments['K_NUM_INSTALMENT_NUMBER_SUM_TO_COUNT_RATIO'] = df_installments_payments['K_NUM_INSTALMENT_NUMBER_SUM']/df_installments_payments['K_PREV_INSTALLMENT_PAYMENT_COUNT']
df_installments_payments['TEMP_DAYS_INSTALMENT'] = df_installments_payments.groupby('SK_ID_CURR')['DAYS_INSTALMENT'].transform(np.sum)
df_installments_payments['TEMP_DAYS_ENTRY_PAYMENT'] = df_installments_payments.groupby('SK_ID_CURR')['DAYS_ENTRY_PAYMENT'].transform(np.sum)
df_installments_payments['K_INST_DAYS_DIFF'] = df_installments_payments['TEMP_DAYS_INSTALMENT'] - df_installments_payments['TEMP_DAYS_ENTRY_PAYMENT']
df_installments_payments['K_INST_DAYS_DIFF_TO_COUNT_RATIO'] = df_installments_payments['K_INST_DAYS_DIFF']/df_installments_payments['K_PREV_INSTALLMENT_PAYMENT_COUNT']
# df_installments_payments['K_DAYS_ENTRY_PAYMENT_MEAN'] = df_installments_payments.groupby('SK_ID_CURR')['DAYS_ENTRY_PAYMENT'].transform(np.mean)
df_installments_payments['K_DAYS_ENTRY_PAYMENT_MAX'] = df_installments_payments.groupby('SK_ID_CURR')['DAYS_ENTRY_PAYMENT'].transform(np.max)
# df_installments_payments['K_DAYS_ENTRY_PAYMENT_MIN'] = df_installments_payments.groupby('SK_ID_CURR')['DAYS_ENTRY_PAYMENT'].transform(np.min)
df_installments_payments['K_DAYS_ENTRY_PAYMENT_VAR'] = df_installments_payments.groupby('SK_ID_CURR')['DAYS_ENTRY_PAYMENT'].transform(np.var)
df_installments_payments['TEMP_AMT_INSTALMENT'] = df_installments_payments.groupby('SK_ID_CURR')['AMT_INSTALMENT'].transform(np.sum)
df_installments_payments['TEMP_AMT_PAYMENT'] = df_installments_payments.groupby('SK_ID_CURR')['AMT_PAYMENT'].transform(np.sum)
df_installments_payments['K_INST_AMT_DIFF'] = df_installments_payments['TEMP_AMT_INSTALMENT'] - df_installments_payments['TEMP_AMT_PAYMENT']
# #Note: I haven't added the K_INST_AMT_DIFF_TO_COUNT_RATIO feature since the K_INST_AMT_DIFF
# #performed better individually than with the count_ratio i.e. had a higher value on the feature importance score
# df_installments_payments['K_INST_AMT_DIFF_TO_COUNT_RATIO'] = df_installments_payments['K_INST_AMT_DIFF']/df_installments_payments['K_PREV_INSTALLMENT_PAYMENT_COUNT']
df_installments_payments['K_AMT_PAYMENT_MEAN'] = df_installments_payments.groupby('SK_ID_CURR')['AMT_PAYMENT'].transform(np.mean)
df_installments_payments['K_AMT_PAYMENT_MAX'] = df_installments_payments.groupby('SK_ID_CURR')['AMT_PAYMENT'].transform(np.max)
df_installments_payments['K_AMT_PAYMENT_MIN'] = df_installments_payments.groupby('SK_ID_CURR')['AMT_PAYMENT'].transform(np.min)
df_installments_payments['K_AMT_PAYMENT_VAR'] = df_installments_payments.groupby('SK_ID_CURR')['AMT_PAYMENT'].transform(np.var)
# #Drop Duplicates
df_installments_payments = df_installments_payments[['SK_ID_CURR', 'K_PREV_INSTALLMENT_PAYMENT_COUNT',
'K_INST_DAYS_DIFF', 'K_INST_AMT_DIFF',
'K_NUM_INSTALMENT_NUMBER_SUM_TO_COUNT_RATIO', 'K_INST_DAYS_DIFF_TO_COUNT_RATIO',
'K_DAYS_ENTRY_PAYMENT_MAX',
'K_DAYS_ENTRY_PAYMENT_VAR',
'K_AMT_PAYMENT_MEAN', 'K_AMT_PAYMENT_MAX', 'K_AMT_PAYMENT_MIN', 'K_AMT_PAYMENT_VAR'
]].drop_duplicates()
# #CHECKPOINT
print("df_installments_payments", df_installments_payments.shape)
print(len(set(df_installments_payments["SK_ID_CURR"]).intersection(set(df_application_train["SK_ID_CURR"]))))
print(len(set(df_installments_payments["SK_ID_CURR"]).intersection(set(df_application_test["SK_ID_CURR"]))))
print("Sum: ", 291643 + 47944)
gc.collect()
###Output
_____no_output_____
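###Markdown
The feature blocks in this notebook lean heavily on the `groupby(...).transform(...)` pattern: the per-customer aggregate is broadcast back onto every row, and a final `drop_duplicates()` collapses the table to one row per `SK_ID_CURR` that can later be merged onto the applications. A toy illustration of the mechanics (hypothetical values, not competition data):
###Code
# Toy illustration of the aggregate-then-deduplicate pattern used for the K_* features above.
toy = pd.DataFrame({'SK_ID_CURR': [1, 1, 2], 'AMT_PAYMENT': [100.0, 300.0, 50.0]})
# transform() keeps the original row count and repeats the group aggregate on every row of the group...
toy['K_AMT_PAYMENT_MEAN'] = toy.groupby('SK_ID_CURR')['AMT_PAYMENT'].transform('mean')
# ...so dropping duplicates on the key yields one row per customer, ready to merge onto application_train.
toy_features = toy[['SK_ID_CURR', 'K_AMT_PAYMENT_MEAN']].drop_duplicates()
print(toy_features)
###Output
_____no_output_____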
###Markdown
Feature: df_credit_card_balance
###Code
df_credit_card_balance['K_PREV_CREDIT_CARD_BALANCE_COUNT'] = df_credit_card_balance.groupby('SK_ID_CURR')['SK_ID_PREV'].transform('count')
# #All the four features below did not seem to have much weight in the feature importance of the model
# df_credit_card_balance['K_MONTHS_BALANCE_MAX'] = df_credit_card_balance.groupby('SK_ID_CURR')['MONTHS_BALANCE'].transform(np.max)
# df_credit_card_balance['K_MONTHS_BALANCE_MIN'] = df_credit_card_balance.groupby('SK_ID_CURR')['MONTHS_BALANCE'].transform(np.min)
# df_credit_card_balance['K_MONTHS_BALANCE_SUM'] = df_credit_card_balance.groupby('SK_ID_CURR')['MONTHS_BALANCE'].transform(np.sum)
# df_credit_card_balance['K_MONTHS_BALANCE_SUM_TO_COUNT_RATIO'] = df_credit_card_balance['K_MONTHS_BALANCE_SUM']/df_credit_card_balance['K_PREV_CREDIT_CARD_BALANCE_COUNT']
df_credit_card_balance['TEMP_AMT_BALANCE'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_BALANCE'].transform(lambda x:x+1)
df_credit_card_balance['TEMP_AMT_CREDIT_LIMIT_ACTUAL'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_CREDIT_LIMIT_ACTUAL'].transform(lambda x:x+1)
df_credit_card_balance['TEMP_UTILIZATION'] = df_credit_card_balance['TEMP_AMT_BALANCE']/df_credit_card_balance['TEMP_AMT_CREDIT_LIMIT_ACTUAL']
df_credit_card_balance['K_CREDIT_UTILIZATION_MEAN'] = df_credit_card_balance.groupby('SK_ID_CURR')['TEMP_UTILIZATION'].transform(np.mean)
df_credit_card_balance['K_CREDIT_UTILIZATION_MIN'] = df_credit_card_balance.groupby('SK_ID_CURR')['TEMP_UTILIZATION'].transform(np.min)
df_credit_card_balance['K_CREDIT_UTILIZATION_MAX'] = df_credit_card_balance.groupby('SK_ID_CURR')['TEMP_UTILIZATION'].transform(np.max)
df_credit_card_balance['K_CREDIT_UTILIZATION_VAR'] = df_credit_card_balance.groupby('SK_ID_CURR')['TEMP_UTILIZATION'].transform(np.var)
# #Validation: SK_ID_CURR = 105755
# #AMT_DRAWINGS_CURRENT = AMT_DRAWINGS_ATM_CURRENT + AMT_DRAWINGS_OTHER_CURRENT + AMT_DRAWINGS_POS_CURRENT
df_credit_card_balance['K_AMT_DRAWINGS_CURRENT_MEAN'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_DRAWINGS_CURRENT'].transform(np.mean)
# df_credit_card_balance['K_AMT_DRAWINGS_CURRENT_MIN'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_DRAWINGS_CURRENT'].transform(np.min)
df_credit_card_balance['K_AMT_DRAWINGS_CURRENT_MAX'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_DRAWINGS_CURRENT'].transform(np.max)
# df_credit_card_balance['K_AMT_DRAWINGS_CURRENT_SUM'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_DRAWINGS_CURRENT'].transform(np.sum)
df_credit_card_balance['TEMP_AMT_PAYMENT_TOTAL_CURRENT'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_PAYMENT_TOTAL_CURRENT'].transform(lambda x:x+1)
df_credit_card_balance['TEMP_AMT_TOTAL_RECEIVABLE'] = df_credit_card_balance.groupby('SK_ID_CURR')['AMT_TOTAL_RECEIVABLE'].transform(lambda x:x+1)
df_credit_card_balance['TEMP_AMT_PAYMENT_OVER_RECEIVABLE'] = df_credit_card_balance['TEMP_AMT_PAYMENT_TOTAL_CURRENT']/df_credit_card_balance['TEMP_AMT_TOTAL_RECEIVABLE']
df_credit_card_balance['K_AMT_PAYMENT_OVER_RECEIVABLE_MEAN'] = df_credit_card_balance.groupby('SK_ID_CURR')['TEMP_AMT_PAYMENT_OVER_RECEIVABLE'].transform(np.mean)
df_credit_card_balance['K_AMT_PAYMENT_OVER_RECEIVABLE_MIN'] = df_credit_card_balance.groupby('SK_ID_CURR')['TEMP_AMT_PAYMENT_OVER_RECEIVABLE'].transform(np.min)
df_credit_card_balance['K_AMT_PAYMENT_OVER_RECEIVABLE_MAX'] = df_credit_card_balance.groupby('SK_ID_CURR')['TEMP_AMT_PAYMENT_OVER_RECEIVABLE'].transform(np.max)
# #CNT_DRAWINGS_CURRENT = CNT_DRAWINGS_ATM_CURRENT + CNT_DRAWINGS_OTHER_CURRENT + CNT_DRAWINGS_POS_CURRENT
df_credit_card_balance['K_CNT_DRAWINGS_CURRENT_MEAN'] = df_credit_card_balance.groupby('SK_ID_CURR')['CNT_DRAWINGS_CURRENT'].transform(np.mean)
df_credit_card_balance['K_CNT_DRAWINGS_CURRENT_MIN'] = df_credit_card_balance.groupby('SK_ID_CURR')['CNT_DRAWINGS_CURRENT'].transform(np.min)
df_credit_card_balance['K_CNT_DRAWINGS_CURRENT_MAX'] = df_credit_card_balance.groupby('SK_ID_CURR')['CNT_DRAWINGS_CURRENT'].transform(np.max)
# df_credit_card_balance['K_CNT_DRAWINGS_CURRENT_SUM'] = df_credit_card_balance.groupby('SK_ID_CURR')['CNT_DRAWINGS_CURRENT'].transform(np.sum)
# #Feature - CNT_INSTALMENT_MATURE_CUM
df_credit_card_balance['K_CNT_INSTALMENT_MATURE_CUM_MAX'] = df_credit_card_balance.groupby('SK_ID_CURR')['CNT_INSTALMENT_MATURE_CUM'].transform(np.max)
df_credit_card_balance['K_CNT_INSTALMENT_MATURE_CUM_MAX_TO_COUNT_RATIO'] = df_credit_card_balance['K_CNT_INSTALMENT_MATURE_CUM_MAX']/df_credit_card_balance['K_PREV_CREDIT_CARD_BALANCE_COUNT']
# #Feature - SK_DPD
# df_credit_card_balance['K_SK_DPD_MAX'] = df_credit_card_balance.groupby('SK_ID_CURR')['SK_DPD'].transform(np.max)
# df_credit_card_balance['K_SK_DPD_MEAN'] = df_credit_card_balance.groupby('SK_ID_CURR')['SK_DPD'].transform(np.mean)
# df_credit_card_balance['K_SK_DPD_SUM'] = df_credit_card_balance.groupby('SK_ID_CURR')['SK_DPD'].transform(np.sum)
# df_credit_card_balance['K_SK_DPD_VAR'] = df_credit_card_balance.groupby('SK_ID_CURR')['SK_DPD'].transform(np.var)
# #Drop Duplicates
df_credit_card_balance = df_credit_card_balance[['SK_ID_CURR', 'K_PREV_CREDIT_CARD_BALANCE_COUNT',
'K_CREDIT_UTILIZATION_MEAN', 'K_CREDIT_UTILIZATION_MIN', 'K_CREDIT_UTILIZATION_MAX', 'K_CREDIT_UTILIZATION_VAR',
'K_AMT_DRAWINGS_CURRENT_MEAN', 'K_AMT_DRAWINGS_CURRENT_MAX',
'K_AMT_PAYMENT_OVER_RECEIVABLE_MEAN', 'K_AMT_PAYMENT_OVER_RECEIVABLE_MIN', 'K_AMT_PAYMENT_OVER_RECEIVABLE_MAX',
'K_CNT_DRAWINGS_CURRENT_MEAN', 'K_CNT_DRAWINGS_CURRENT_MIN', 'K_CNT_DRAWINGS_CURRENT_MAX',
'K_CNT_INSTALMENT_MATURE_CUM_MAX_TO_COUNT_RATIO']].drop_duplicates()
# #CHECKPOINT
print("df_credit_card_balance", df_credit_card_balance.shape)
print(len(set(df_credit_card_balance["SK_ID_CURR"]).intersection(set(df_application_train["SK_ID_CURR"]))))
print(len(set(df_credit_card_balance["SK_ID_CURR"]).intersection(set(df_application_test["SK_ID_CURR"]))))
print("Sum: ", 86905 + 16653)
gc.collect()
###Output
_____no_output_____
###Markdown
Feature: df_pos_cash_balance
###Code
df_pos_cash_balance['K_PREV_POS_CASH_BALANCE_COUNT'] = df_pos_cash_balance.groupby('SK_ID_CURR')['SK_ID_PREV'].transform('count')
df_pos_cash_balance['K_MONTHS_BALANCE_POS_CASH_MEAN'] = df_pos_cash_balance.groupby('SK_ID_CURR')['MONTHS_BALANCE'].transform(np.mean)
df_pos_cash_balance['K_MONTHS_BALANCE_POS_CASH_MAX'] = df_pos_cash_balance.groupby('SK_ID_CURR')['MONTHS_BALANCE'].transform(np.max)
df_pos_cash_balance['K_MONTHS_BALANCE_POS_CASH_MIN'] = df_pos_cash_balance.groupby('SK_ID_CURR')['MONTHS_BALANCE'].transform(np.min)
df_pos_cash_balance['K_CNT_INSTALMENT_MEAN'] = df_pos_cash_balance.groupby('SK_ID_CURR')['CNT_INSTALMENT'].transform(np.mean)
df_pos_cash_balance['K_CNT_INSTALMENT_MAX'] = df_pos_cash_balance.groupby('SK_ID_CURR')['CNT_INSTALMENT'].transform(np.max)
df_pos_cash_balance['K_CNT_INSTALMENT_MIN'] = df_pos_cash_balance.groupby('SK_ID_CURR')['CNT_INSTALMENT'].transform(np.min)
df_pos_cash_balance['K_CNT_INSTALMENT_FUTURE_MEAN'] = df_pos_cash_balance.groupby('SK_ID_CURR')['CNT_INSTALMENT_FUTURE'].transform(np.mean)
df_pos_cash_balance['K_CNT_INSTALMENT_FUTURE_MAX'] = df_pos_cash_balance.groupby('SK_ID_CURR')['CNT_INSTALMENT_FUTURE'].transform(np.max)
# df_pos_cash_balance['K_CNT_INSTALMENT_FUTURE_MIN'] = df_pos_cash_balance.groupby('SK_ID_CURR')['CNT_INSTALMENT_FUTURE'].transform(np.min)
# #Drop Duplicates
df_pos_cash_balance = df_pos_cash_balance[['SK_ID_CURR', 'K_PREV_POS_CASH_BALANCE_COUNT',
'K_MONTHS_BALANCE_POS_CASH_MEAN','K_MONTHS_BALANCE_POS_CASH_MAX', 'K_MONTHS_BALANCE_POS_CASH_MIN',
'K_CNT_INSTALMENT_MEAN', 'K_CNT_INSTALMENT_MAX', 'K_CNT_INSTALMENT_MIN',
'K_CNT_INSTALMENT_FUTURE_MEAN', 'K_CNT_INSTALMENT_FUTURE_MAX']].drop_duplicates()
###Output
_____no_output_____
###Markdown
Feature: df_previous_application
###Code
# #Missing values have been masked with the value 365243.0
df_previous_application['DAYS_LAST_DUE'].replace(365243.0, np.nan, inplace=True)
df_previous_application['DAYS_TERMINATION'].replace(365243.0, np.nan, inplace=True)
df_previous_application['DAYS_FIRST_DRAWING'].replace(365243.0, np.nan, inplace=True)
df_previous_application['DAYS_FIRST_DUE'].replace(365243.0, np.nan, inplace=True)
df_previous_application['DAYS_LAST_DUE_1ST_VERSION'].replace(365243.0, np.nan, inplace=True)
df_previous_application['K_PREV_PREVIOUS_APPLICATION_COUNT'] = df_previous_application.groupby('SK_ID_CURR')['SK_ID_PREV'].transform('count')
df_previous_application['K_AMT_ANNUITY_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.mean)
df_previous_application['K_AMT_ANNUITY_MAX'] = df_previous_application.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.max)
df_previous_application['K_AMT_ANNUITY_MIN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.min)
df_previous_application['K_AMT_ANNUITY_VAR'] = df_previous_application.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.var)
# df_previous_application['K_AMT_ANNUITY_SUM'] = df_previous_application.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.sum)
# df_previous_application['K_AMT_ANNUITY_RANGE'] = df_previous_application['K_AMT_ANNUITY_MAX'] - df_previous_application['K_AMT_ANNUITY_MIN']
# #Features reduced the accuracy of my model
# df_previous_application['K_AMT_APPLICATION_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_APPLICATION'].transform(np.mean)
# df_previous_application['K_AMT_APPLICATION_MAX'] = df_previous_application.groupby('SK_ID_CURR')['AMT_APPLICATION'].transform(np.max)
# df_previous_application['K_AMT_APPLICATION_MIN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_APPLICATION'].transform(np.min)
# df_previous_application['K_AMT_APPLICATION_VAR'] = df_previous_application.groupby('SK_ID_CURR')['AMT_APPLICATION'].transform(np.var)
# df_previous_application['K_AMT_APPLICATION_SUM'] = df_previous_application.groupby('SK_ID_CURR')['AMT_APPLICATION'].transform(np.sum)
# df_previous_application['K_AMT_CREDIT_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_CREDIT'].transform(np.mean)
# df_previous_application['K_AMT_CREDIT_MAX'] = df_previous_application.groupby('SK_ID_CURR')['AMT_CREDIT'].transform(np.max)
# df_previous_application['K_AMT_CREDIT_MIN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_CREDIT'].transform(np.min)
# df_previous_application['K_AMT_CREDIT_VAR'] = df_previous_application.groupby('SK_ID_CURR')['AMT_CREDIT'].transform(np.var)
# df_previous_application['K_AMT_CREDIT_SUM'] = df_previous_application.groupby('SK_ID_CURR')['AMT_CREDIT'].transform(np.sum)
df_previous_application['TEMP_CREDIT_ALLOCATED'] = df_previous_application['AMT_CREDIT']/df_previous_application['AMT_APPLICATION']
df_previous_application['K_CREDIT_ALLOCATED_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['TEMP_CREDIT_ALLOCATED'].transform(np.mean)
df_previous_application['K_CREDIT_ALLOCATED_MAX'] = df_previous_application.groupby('SK_ID_CURR')['TEMP_CREDIT_ALLOCATED'].transform(np.max)
df_previous_application['K_CREDIT_ALLOCATED_MIN'] = df_previous_application.groupby('SK_ID_CURR')['TEMP_CREDIT_ALLOCATED'].transform(np.min)
df_previous_application['TEMP_NAME_CONTRACT_STATUS_APPROVED'] = (df_previous_application['NAME_CONTRACT_STATUS'] == 'Approved').astype(int)
df_previous_application['K_NAME_CONTRACT_STATUS_APPROVED'] = df_previous_application.groupby('SK_ID_CURR')['TEMP_NAME_CONTRACT_STATUS_APPROVED'].transform(np.sum)
# df_previous_application['TEMP_NAME_CONTRACT_STATUS_REFUSED'] = (df_previous_application['NAME_CONTRACT_STATUS'] == 'REFUSED').astype(int)
# df_previous_application['K_NAME_CONTRACT_STATUS_REFUSED'] = df_previous_application.groupby('SK_ID_CURR')['TEMP_NAME_CONTRACT_STATUS_REFUSED'].transform(np.sum)
df_previous_application['K_AMT_DOWN_PAYMENT_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_DOWN_PAYMENT'].transform(np.mean)
df_previous_application['K_AMT_DOWN_PAYMENT_MAX'] = df_previous_application.groupby('SK_ID_CURR')['AMT_DOWN_PAYMENT'].transform(np.max)
df_previous_application['K_AMT_DOWN_PAYMENT_MIN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_DOWN_PAYMENT'].transform(np.min)
df_previous_application['K_AMT_GOODS_PRICE_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_GOODS_PRICE'].transform(np.mean)
df_previous_application['K_AMT_GOODS_PRICE_MAX'] = df_previous_application.groupby('SK_ID_CURR')['AMT_GOODS_PRICE'].transform(np.max)
df_previous_application['K_AMT_GOODS_PRICE_MIN'] = df_previous_application.groupby('SK_ID_CURR')['AMT_GOODS_PRICE'].transform(np.min)
df_previous_application['K_DAYS_DECISION_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_DECISION'].transform(np.mean)
df_previous_application['K_DAYS_DECISION_MAX'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_DECISION'].transform(np.max)
df_previous_application['K_DAYS_DECISION_MIN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_DECISION'].transform(np.min)
df_previous_application['K_CNT_PAYMENT_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['CNT_PAYMENT'].transform(np.mean)
df_previous_application['K_CNT_PAYMENT_MAX'] = df_previous_application.groupby('SK_ID_CURR')['CNT_PAYMENT'].transform(np.max)
df_previous_application['K_CNT_PAYMENT_MIN'] = df_previous_application.groupby('SK_ID_CURR')['CNT_PAYMENT'].transform(np.min)
df_previous_application['K_DAYS_FIRST_DRAWING_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_FIRST_DRAWING'].transform(np.mean)
# df_previous_application['K_DAYS_FIRST_DRAWING_MAX'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_FIRST_DRAWING'].transform(np.max)
# df_previous_application['K_DAYS_FIRST_DRAWING_MIN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_FIRST_DRAWING'].transform(np.min)
df_previous_application['K_DAYS_FIRST_DUE_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_FIRST_DUE'].transform(np.mean)
df_previous_application['K_DAYS_FIRST_DUE_MAX'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_FIRST_DUE'].transform(np.max)
df_previous_application['K_DAYS_FIRST_DUE_MIN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_FIRST_DUE'].transform(np.min)
df_previous_application['K_DAYS_LAST_DUE_1ST_VERSION_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_LAST_DUE_1ST_VERSION'].transform(np.mean)
df_previous_application['K_DAYS_LAST_DUE_1ST_VERSION_MAX'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_LAST_DUE_1ST_VERSION'].transform(np.max)
df_previous_application['K_DAYS_LAST_DUE_1ST_VERSION_MIN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_LAST_DUE_1ST_VERSION'].transform(np.min)
df_previous_application['K_DAYS_LAST_DUE_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_LAST_DUE'].transform(np.mean)
df_previous_application['K_DAYS_LAST_DUE_MAX'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_LAST_DUE'].transform(np.max)
df_previous_application['K_DAYS_LAST_DUE_MIN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_LAST_DUE'].transform(np.min)
df_previous_application['K_DAYS_TERMINATION_MEAN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_TERMINATION'].transform(np.mean)
df_previous_application['K_DAYS_TERMINATION_MAX'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_TERMINATION'].transform(np.max)
df_previous_application['K_DAYS_TERMINATION_MIN'] = df_previous_application.groupby('SK_ID_CURR')['DAYS_TERMINATION'].transform(np.min)
# #Drop Duplicates
df_previous_application = df_previous_application[['SK_ID_CURR', 'K_PREV_PREVIOUS_APPLICATION_COUNT',
'K_NAME_CONTRACT_STATUS_APPROVED',
'K_AMT_ANNUITY_VAR',
'K_AMT_ANNUITY_MEAN', 'K_AMT_ANNUITY_MAX', 'K_AMT_ANNUITY_MIN',
'K_CREDIT_ALLOCATED_MEAN', 'K_CREDIT_ALLOCATED_MAX', 'K_CREDIT_ALLOCATED_MIN',
'K_AMT_DOWN_PAYMENT_MEAN', 'K_AMT_DOWN_PAYMENT_MAX', 'K_AMT_DOWN_PAYMENT_MIN',
'K_AMT_GOODS_PRICE_MEAN', 'K_AMT_GOODS_PRICE_MAX', 'K_AMT_GOODS_PRICE_MIN',
'K_DAYS_DECISION_MEAN', 'K_DAYS_DECISION_MAX', 'K_DAYS_DECISION_MIN',
'K_CNT_PAYMENT_MEAN', 'K_CNT_PAYMENT_MAX', 'K_CNT_PAYMENT_MIN',
'K_DAYS_FIRST_DRAWING_MEAN',
'K_DAYS_FIRST_DUE_MEAN', 'K_DAYS_FIRST_DUE_MAX', 'K_DAYS_FIRST_DUE_MIN',
'K_DAYS_LAST_DUE_1ST_VERSION_MEAN', 'K_DAYS_LAST_DUE_1ST_VERSION_MAX', 'K_DAYS_LAST_DUE_1ST_VERSION_MIN',
'K_DAYS_LAST_DUE_MEAN', 'K_DAYS_LAST_DUE_MAX', 'K_DAYS_LAST_DUE_MIN',
'K_DAYS_TERMINATION_MEAN', 'K_DAYS_TERMINATION_MAX', 'K_DAYS_TERMINATION_MIN']].drop_duplicates()
df_previous_application_original[df_previous_application_original['SK_ID_CURR'] == 100006]
prev_app_df = df_previous_application_original.copy()
agg_funs = {'SK_ID_CURR': 'count', 'AMT_CREDIT': 'sum'}
prev_apps = prev_app_df.groupby('SK_ID_CURR').agg(agg_funs)
# prev_apps.columns = ['PREV APP COUNT', 'TOTAL PREV LOAN AMT']
# merged_df = app_data.merge(prev_apps, left_on='SK_ID_CURR', right_index=True, how='left')
prev_apps
gc.collect()
# df_bureau = pd.read_csv("../data/bureau.csv")
# df_bureau_balance = pd.read_csv("../data/bureau_balance.csv")
###Output
_____no_output_____
###Markdown
Feature: df_bureau_balance
###Code
# df_bureau_balance['K_BUREAU_BALANCE_COUNT'] = df_bureau_balance.groupby('SK_ID_BUREAU')['SK_ID_BUREAU'].transform('count')
df_bureau_balance['K_BUREAU_BALANCE_MONTHS_BALANCE_MEAN'] = df_bureau_balance.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].transform(np.mean)
df_bureau_balance['K_BUREAU_BALANCE_MONTHS_BALANCE_MAX'] = df_bureau_balance.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].transform(np.max)
df_bureau_balance['K_BUREAU_BALANCE_MONTHS_BALANCE_MIN'] = df_bureau_balance.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].transform(np.min)
# df_bureau_balance[['K_BB_STATUS_0', 'K_BB_STATUS_1', 'K_BB_STATUS_2', 'K_BB_STATUS_3',
# 'K_BB_STATUS_4', 'K_BB_STATUS_5', 'K_BB_STATUS_C', 'K_BB_STATUS_X']] = pd.get_dummies(df_bureau_balance['STATUS'], prefix="K_BB_STATUS_")
# #Drop Duplicates
df_bureau_balance = df_bureau_balance[['SK_ID_BUREAU','K_BUREAU_BALANCE_MONTHS_BALANCE_MEAN',
'K_BUREAU_BALANCE_MONTHS_BALANCE_MAX',
'K_BUREAU_BALANCE_MONTHS_BALANCE_MIN']].drop_duplicates()
gc.collect()
###Output
_____no_output_____
###Markdown
Feature: df_bureau
###Code
len(df_bureau["SK_ID_BUREAU"].unique())
df_bureau.shape
df_bureau = pd.merge(df_bureau, df_bureau_balance, on="SK_ID_BUREAU", how="left", suffixes=('_bureau', '_bureau_balance'))
df_bureau.shape
# #Feature - SK_ID_BUREAU represents each loan application.
# #Grouping by SK_ID_CURR signifies the number of previous loans per applicant.
df_bureau['K_BUREAU_COUNT'] = df_bureau.groupby('SK_ID_CURR')['SK_ID_BUREAU'].transform('count')
# # #Feature - CREDIT_ACTIVE
# #Frequency Encoding
# temp_bureau_credit_active = df_bureau.groupby(['SK_ID_CURR','CREDIT_ACTIVE']).size()/df_bureau.groupby(['SK_ID_CURR']).size()
# temp_bureau_credit_active = temp_bureau_credit_active.to_frame().reset_index().rename(columns= {0: 'TEMP_BUREAU_CREDIT_ACTIVE_FREQENCODE'})
# temp_bureau_credit_active = temp_bureau_credit_active.pivot(index='SK_ID_CURR', columns='CREDIT_ACTIVE', values='TEMP_BUREAU_CREDIT_ACTIVE_FREQENCODE')
# temp_bureau_credit_active.reset_index(inplace = True)
# temp_bureau_credit_active.columns = ['SK_ID_CURR', 'K_CREDIT_ACTIVE_ACTIVE', 'K_CREDIT_ACTIVE_BADDEBT', 'K_CREDIT_ACTIVE_CLOSED', 'K_CREDIT_ACTIVE_SOLD']
# df_bureau = pd.merge(df_bureau, temp_bureau_credit_active, on=["SK_ID_CURR"], how="left", suffixes=('_bureau', '_credit_active_percentage'))
# del temp_bureau_credit_active
# #SUBSET DATA - CREDIT_ACTIVE == ACTIVE
# df_bureau_active = df_bureau[df_bureau['CREDIT_ACTIVE']=='Active']
# df_bureau_active['K_BUREAU_ACTIVE_DAYS_CREDIT_MEAN'] = df_bureau_active.groupby('SK_ID_CURR')['DAYS_CREDIT'].transform(np.mean)
# df_bureau_active['K_BUREAU_ACTIVE_DAYS_CREDIT_MAX'] = df_bureau_active.groupby('SK_ID_CURR')['DAYS_CREDIT'].transform(np.max)
# df_bureau = pd.merge(df_bureau, df_bureau_active, on=["SK_ID_CURR"], how="left", suffixes=('_bureau', '_bureau_active'))
# # #Feature - DAYS_CREDIT
df_bureau['K_BUREAU_DAYS_CREDIT_MEAN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT'].transform(np.mean)
df_bureau['K_BUREAU_DAYS_CREDIT_MAX'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT'].transform(np.max)
df_bureau['K_BUREAU_DAYS_CREDIT_MIN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT'].transform(np.min)
# df_bureau['K_BUREAU_DAYS_CREDIT_VAR'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT'].transform(np.var)
# # #Feature - DAYS_CREDIT_UPDATE
# df_bureau['K_BUREAU_DAYS_CREDIT_UPDATE_MEAN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT_UPDATE'].transform(np.mean)
# df_bureau['K_BUREAU_DAYS_CREDIT_UPDATE_MAX'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT_UPDATE'].transform(np.max)
# df_bureau['K_BUREAU_DAYS_CREDIT_UPDATE_MIN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT_UPDATE'].transform(np.min)
# df_bureau['K_BUREAU_DAYS_CREDIT_UPDATE_VAR'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT_UPDATE'].transform(np.var)
gc.collect()
# # #Successive difference between credit application per customer - Mean, Min, Max
temp_bureau_days_credit = df_bureau.copy()
temp_bureau_days_credit.sort_values(['SK_ID_CURR', 'DAYS_CREDIT'], inplace=True)
temp_bureau_days_credit['temp_successive_diff'] = temp_bureau_days_credit.groupby('SK_ID_CURR')['DAYS_CREDIT'].transform(lambda ele: ele.diff())
temp_bureau_days_credit['K_BUREAU_DAYS_CREDIT_SORTED_SUCCESSIVE_DIFF_MEAN'] = temp_bureau_days_credit.groupby('SK_ID_CURR')['temp_successive_diff'].transform(np.mean)
df_bureau = pd.merge(df_bureau, temp_bureau_days_credit[['SK_ID_CURR','K_BUREAU_DAYS_CREDIT_SORTED_SUCCESSIVE_DIFF_MEAN']].drop_duplicates(),
on="SK_ID_CURR", how="left", suffixes=('_bureau', '_days_credit_sorted_successive_diff'))
# del temp_bureau_days_credit
# df_bureau['K_BUREAU_CREDIT_DAY_OVERDUE_MEAN'] = df_bureau.groupby('SK_ID_CURR')['CREDIT_DAY_OVERDUE'].transform(np.mean)
# df_bureau['K_BUREAU_CREDIT_DAY_OVERDUE_MAX'] = df_bureau.groupby('SK_ID_CURR')['CREDIT_DAY_OVERDUE'].transform(np.max)
# df_bureau['K_BUREAU_CREDIT_DAY_OVERDUE_MIN'] = df_bureau.groupby('SK_ID_CURR')['CREDIT_DAY_OVERDUE'].transform(np.min)
df_bureau['K_BUREAU_DAYS_CREDIT_ENDDATE_MEAN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT_ENDDATE'].transform(np.mean)
df_bureau['K_BUREAU_DAYS_CREDIT_ENDDATE_MAX'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT_ENDDATE'].transform(np.max)
df_bureau['K_BUREAU_DAYS_CREDIT_ENDDATE_MIN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_CREDIT_ENDDATE'].transform(np.min)
df_bureau['K_BUREAU_DAYS_ENDDATE_FACT_MEAN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_ENDDATE_FACT'].transform(np.mean)
df_bureau['K_BUREAU_DAYS_ENDDATE_FACT_MAX'] = df_bureau.groupby('SK_ID_CURR')['DAYS_ENDDATE_FACT'].transform(np.max)
df_bureau['K_BUREAU_DAYS_ENDDATE_FACT_MIN'] = df_bureau.groupby('SK_ID_CURR')['DAYS_ENDDATE_FACT'].transform(np.min)
df_bureau['K_BUREAU_AMT_CREDIT_MAX_OVERDUE_MEAN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_MAX_OVERDUE'].transform(np.mean)
df_bureau['K_BUREAU_AMT_CREDIT_MAX_OVERDUE_MAX'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_MAX_OVERDUE'].transform(np.max)
df_bureau['K_BUREAU_AMT_CREDIT_MAX_OVERDUE_MIN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_MAX_OVERDUE'].transform(np.min)
# df_bureau['K_BUREAU_CNT_CREDIT_PROLONG_MAX'] = df_bureau.groupby('SK_ID_CURR')['CNT_CREDIT_PROLONG'].transform(np.max)
# df_bureau['K_BUREAU_CNT_CREDIT_PROLONG_SUM'] = df_bureau.groupby('SK_ID_CURR')['CNT_CREDIT_PROLONG'].transform(np.sum)
# #To-Do: Calculate a utilization metric for some of the features below?
df_bureau['K_BUREAU_AMT_CREDIT_SUM_MEAN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM'].transform(np.mean)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_MAX'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM'].transform(np.max)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_MIN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM'].transform(np.min)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_DEBT_MEAN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_DEBT'].transform(np.mean)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_DEBT_MAX'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_DEBT'].transform(np.max)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_DEBT_MIN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_DEBT'].transform(np.min)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_LIMIT_MEAN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_LIMIT'].transform(np.mean)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_LIMIT_MAX'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_LIMIT'].transform(np.max)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_LIMIT_MIN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_LIMIT'].transform(np.min)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_OVERDUE_MEAN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_OVERDUE'].transform(np.mean)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_OVERDUE_MAX'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_OVERDUE'].transform(np.max)
df_bureau['K_BUREAU_AMT_CREDIT_SUM_OVERDUE_MIN'] = df_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_SUM_OVERDUE'].transform(np.min)
df_bureau['K_BUREAU_AMT_ANNUITY_MEAN'] = df_bureau.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.mean)
df_bureau['K_BUREAU_AMT_ANNUITY_MAX'] = df_bureau.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.max)
df_bureau['K_BUREAU_AMT_ANNUITY_MIN'] = df_bureau.groupby('SK_ID_CURR')['AMT_ANNUITY'].transform(np.min)
# #Added from df_bureau_balance
df_bureau['K_BUREAU_BALANCE_MONTHS_BALANCE_MEAN'] = df_bureau.groupby('SK_ID_CURR')['K_BUREAU_BALANCE_MONTHS_BALANCE_MEAN'].transform(np.mean)
df_bureau['K_BUREAU_BALANCE_MONTHS_BALANCE_MAX'] = df_bureau.groupby('SK_ID_CURR')['K_BUREAU_BALANCE_MONTHS_BALANCE_MAX'].transform(np.max)
df_bureau['K_BUREAU_BALANCE_MONTHS_BALANCE_MIN'] = df_bureau.groupby('SK_ID_CURR')['K_BUREAU_BALANCE_MONTHS_BALANCE_MIN'].transform(np.min)
#Drop Duplicates
df_bureau = df_bureau[['SK_ID_CURR', 'K_BUREAU_COUNT',
'K_BUREAU_DAYS_CREDIT_MEAN', 'K_BUREAU_DAYS_CREDIT_MAX', 'K_BUREAU_DAYS_CREDIT_MIN',
'K_BUREAU_DAYS_CREDIT_SORTED_SUCCESSIVE_DIFF_MEAN',
'K_BUREAU_DAYS_CREDIT_ENDDATE_MEAN', 'K_BUREAU_DAYS_CREDIT_ENDDATE_MAX', 'K_BUREAU_DAYS_CREDIT_ENDDATE_MIN',
'K_BUREAU_DAYS_ENDDATE_FACT_MEAN', 'K_BUREAU_DAYS_ENDDATE_FACT_MAX', 'K_BUREAU_DAYS_ENDDATE_FACT_MIN',
'K_BUREAU_AMT_CREDIT_MAX_OVERDUE_MEAN', 'K_BUREAU_AMT_CREDIT_MAX_OVERDUE_MAX', 'K_BUREAU_AMT_CREDIT_MAX_OVERDUE_MIN',
'K_BUREAU_AMT_CREDIT_SUM_MEAN', 'K_BUREAU_AMT_CREDIT_SUM_MAX', 'K_BUREAU_AMT_CREDIT_SUM_MIN',
'K_BUREAU_AMT_CREDIT_SUM_DEBT_MEAN', 'K_BUREAU_AMT_CREDIT_SUM_DEBT_MAX', 'K_BUREAU_AMT_CREDIT_SUM_DEBT_MIN',
'K_BUREAU_AMT_CREDIT_SUM_LIMIT_MEAN', 'K_BUREAU_AMT_CREDIT_SUM_LIMIT_MAX', 'K_BUREAU_AMT_CREDIT_SUM_LIMIT_MIN',
'K_BUREAU_AMT_CREDIT_SUM_OVERDUE_MEAN', 'K_BUREAU_AMT_CREDIT_SUM_OVERDUE_MAX', 'K_BUREAU_AMT_CREDIT_SUM_OVERDUE_MIN',
'K_BUREAU_AMT_ANNUITY_MEAN', 'K_BUREAU_AMT_ANNUITY_MAX', 'K_BUREAU_AMT_ANNUITY_MIN',
'K_BUREAU_BALANCE_MONTHS_BALANCE_MEAN', 'K_BUREAU_BALANCE_MONTHS_BALANCE_MAX', 'K_BUREAU_BALANCE_MONTHS_BALANCE_MIN']].drop_duplicates()
df_bureau.shape
# #CHECKPOINT
print("df_bureau_original", df_bureau_original.shape)
print(len(set(df_bureau_original["SK_ID_CURR"]).intersection(set(df_application_train["SK_ID_CURR"]))))
print(len(set(df_bureau_original["SK_ID_CURR"]).intersection(set(df_application_test["SK_ID_CURR"]))))
print("Sum: ", 263491 + 42320)
# #CHECKPOINT
print("df_bureau", df_bureau.shape)
print(len(set(df_bureau["SK_ID_CURR"]).intersection(set(df_application_train["SK_ID_CURR"]))))
print(len(set(df_bureau["SK_ID_CURR"]).intersection(set(df_application_test["SK_ID_CURR"]))))
print("Sum: ", 263491 + 42320)
gc.collect()
###Output
_____no_output_____
###Markdown
Feature MAIN TABLE: df_application_train
###Code
# #Missing values have been masked with the value 365243
df_application_train['DAYS_EMPLOYED'].replace(365243, np.nan, inplace= True)
df_application_test['DAYS_EMPLOYED'].replace(365243, np.nan, inplace= True)
# #Feature - Divide existing features
df_application_train['K_APP_CREDIT_TO_INCOME_RATIO'] = df_application_train['AMT_CREDIT']/df_application_train['AMT_INCOME_TOTAL']
df_application_train['K_APP_ANNUITY_TO_INCOME_RATIO'] = df_application_train['AMT_ANNUITY']/df_application_train['AMT_INCOME_TOTAL']
df_application_train['K_APP_CREDIT_TO_ANNUITY_RATIO'] = df_application_train['AMT_CREDIT']/df_application_train['AMT_ANNUITY']
# df_application_train['K_APP_INCOME_PER_PERSON_RATIO'] = df_application_train['AMT_INCOME_TOTAL'] / df_application_train['CNT_FAM_MEMBERS']
df_application_train['K_APP_CREDITTOINCOME_TO_DAYSEMPLOYED_RATIO'] = df_application_train['K_APP_CREDIT_TO_INCOME_RATIO'] /df_application_train['DAYS_EMPLOYED']
df_application_train['K_APP_ANNUITYTOINCOME_TO_DAYSEMPLOYED_RATIO'] = df_application_train['K_APP_ANNUITY_TO_INCOME_RATIO'] /df_application_train['DAYS_EMPLOYED']
df_application_train['K_APP_ANNUITYTOCREDIT_TO_DAYSEMPLOYED_RATIO'] = df_application_train['K_APP_CREDIT_TO_ANNUITY_RATIO'] /df_application_train['DAYS_EMPLOYED']
df_application_train['K_APP_CREDIT_TO_GOODSPRICE_RATIO'] = df_application_train['AMT_CREDIT']/df_application_train['AMT_GOODS_PRICE']
df_application_train['K_APP_GOODSPRICE_TO_INCOME_RATIO'] = df_application_train['AMT_GOODS_PRICE']/df_application_train['AMT_INCOME_TOTAL']
df_application_train['K_APP_GOODSPRICE_TO_ANNUITY_RATIO'] = df_application_train['AMT_GOODS_PRICE']/df_application_train['AMT_ANNUITY']
# #Feature - Income, Education, Family Status
df_application_train['K_APP_INCOME_EDUCATION'] = df_application_train['NAME_INCOME_TYPE'] + df_application_train['NAME_EDUCATION_TYPE']
df_application_train['K_APP_INCOME_EDUCATION_FAMILY'] = df_application_train['NAME_INCOME_TYPE'] + df_application_train['NAME_EDUCATION_TYPE'] + df_application_train['NAME_FAMILY_STATUS']
# #Family
df_application_train['K_APP_EMPLOYED_TO_DAYSBIRTH_RATIO'] = df_application_train['DAYS_EMPLOYED']/df_application_train['DAYS_BIRTH']
df_application_train['K_APP_INCOME_PER_FAMILY'] = df_application_train['AMT_INCOME_TOTAL']/df_application_train['CNT_FAM_MEMBERS']
# df_application_train['K_APP_CHILDREN_TO_FAMILY_RATIO'] = df_application_train['CNT_CHILDREN']/df_application_train['CNT_FAM_MEMBERS']
# df_application_train['K_APP_FLAGS_SUM'] = df_application_train.loc[:, df_application_train.columns.str.contains('FLAG_DOCUMENT')].sum(axis=1)
df_application_train['K_APP_EXT_SOURCES_MEAN'] = df_application_train[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3']].mean(axis=1)
# #Feature - Divide existing features
df_application_test['K_APP_CREDIT_TO_INCOME_RATIO'] = df_application_test['AMT_CREDIT']/df_application_test['AMT_INCOME_TOTAL']
df_application_test['K_APP_ANNUITY_TO_INCOME_RATIO'] = df_application_test['AMT_ANNUITY']/df_application_test['AMT_INCOME_TOTAL']
df_application_test['K_APP_CREDIT_TO_ANNUITY_RATIO'] = df_application_test['AMT_CREDIT']/df_application_test['AMT_ANNUITY']
# df_application_test['K_APP_INCOME_PER_PERSON_RATIO'] = df_application_test['AMT_INCOME_TOTAL'] / df_application_test['CNT_FAM_MEMBERS']
df_application_test['K_APP_CREDITTOINCOME_TO_DAYSEMPLOYED_RATIO'] = df_application_test['K_APP_CREDIT_TO_INCOME_RATIO'] /df_application_test['DAYS_EMPLOYED']
df_application_test['K_APP_ANNUITYTOINCOME_TO_DAYSEMPLOYED_RATIO'] = df_application_test['K_APP_ANNUITY_TO_INCOME_RATIO'] /df_application_test['DAYS_EMPLOYED']
df_application_test['K_APP_ANNUITYTOCREDIT_TO_DAYSEMPLOYED_RATIO'] = df_application_test['K_APP_CREDIT_TO_ANNUITY_RATIO'] /df_application_test['DAYS_EMPLOYED']
df_application_test['K_APP_CREDIT_TO_GOODSPRICE_RATIO'] = df_application_test['AMT_CREDIT']/df_application_test['AMT_GOODS_PRICE']
df_application_test['K_APP_GOODSPRICE_TO_INCOME_RATIO'] = df_application_test['AMT_GOODS_PRICE']/df_application_test['AMT_INCOME_TOTAL']
df_application_test['K_APP_GOODSPRICE_TO_ANNUITY_RATIO'] = df_application_test['AMT_GOODS_PRICE']/df_application_test['AMT_ANNUITY']
# #Feature - Income, Education, Family Status
df_application_test['K_APP_INCOME_EDUCATION'] = df_application_test['NAME_INCOME_TYPE'] + df_application_test['NAME_EDUCATION_TYPE']
df_application_test['K_APP_INCOME_EDUCATION_FAMILY'] = df_application_test['NAME_INCOME_TYPE'] + df_application_test['NAME_EDUCATION_TYPE'] + df_application_test['NAME_FAMILY_STATUS']
# #Family
df_application_test['K_APP_EMPLOYED_TO_DAYSBIRTH_RATIO'] = df_application_test['DAYS_EMPLOYED']/df_application_test['DAYS_BIRTH']
df_application_test['K_APP_INCOME_PER_FAMILY'] = df_application_test['AMT_INCOME_TOTAL']/df_application_test['CNT_FAM_MEMBERS']
# df_application_test['K_APP_CHILDREN_TO_FAMILY_RATIO'] = df_application_test['CNT_CHILDREN']/df_application_test['CNT_FAM_MEMBERS']
# df_application_test['K_APP_FLAGS_SUM'] = df_application_test.loc[:, df_application_test.columns.str.contains('FLAG_DOCUMENT')].sum(axis=1)
df_application_test['K_APP_EXT_SOURCES_MEAN'] = df_application_test[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3']].mean(axis=1)
df_application_train.drop(['FLAG_MOBIL', 'APARTMENTS_AVG'], axis=1, inplace=True)
df_application_test.drop(['FLAG_MOBIL', 'APARTMENTS_AVG'], axis=1, inplace=True)
gc.collect()
###Output
_____no_output_____
###Markdown
Combine Datasets Encode categorical columns
###Code
arr_categorical_columns = df_application_train.select_dtypes(['object']).columns
for var_col in arr_categorical_columns:
df_application_train[var_col] = df_application_train[var_col].astype('category').cat.codes
gc.collect()
arr_categorical_columns = df_application_test.select_dtypes(['object']).columns
for var_col in arr_categorical_columns:
df_application_test[var_col] = df_application_test[var_col].astype('category').cat.codes
gc.collect()
# arr_categorical_columns = df_credit_card_balance.select_dtypes(['object']).columns
# for var_col in arr_categorical_columns:
# df_credit_card_balance[var_col] = df_credit_card_balance[var_col].astype('category').cat.codes
# # One-hot encoding for categorical columns with get_dummies
# def one_hot_encoder(df, nan_as_category = True):
# original_columns = list(df.columns)
# categorical_columns = df.select_dtypes(['object']).columns
# df = pd.get_dummies(df, columns= categorical_columns, dummy_na= nan_as_category)
# new_columns = [c for c in df.columns if c not in original_columns]
# return df, new_columns
###Output
_____no_output_____
###Markdown
Combine Datasets df_installments_payments
###Code
df_installments_payments_train = df_installments_payments[df_installments_payments["SK_ID_CURR"].isin(df_application_train["SK_ID_CURR"])]
df_installments_payments_test = df_installments_payments[df_installments_payments["SK_ID_CURR"].isin(df_application_test["SK_ID_CURR"])]
df_application_train = pd.merge(df_application_train, df_installments_payments_train, on="SK_ID_CURR", how="outer", suffixes=('_application', '_installments_payments'))
df_application_test = pd.merge(df_application_test, df_installments_payments_test, on="SK_ID_CURR", how="outer", suffixes=('_application', '_installments_payments'))
###Output
_____no_output_____
###Markdown
df_credit_card_balance
###Code
df_credit_card_balance_train = df_credit_card_balance[df_credit_card_balance["SK_ID_CURR"].isin(df_application_train["SK_ID_CURR"])]
df_credit_card_balance_test = df_credit_card_balance[df_credit_card_balance["SK_ID_CURR"].isin(df_application_test["SK_ID_CURR"])]
df_application_train = pd.merge(df_application_train, df_credit_card_balance_train, on="SK_ID_CURR", how="outer", suffixes=('_application', '_credit_card_balance'))
df_application_test = pd.merge(df_application_test, df_credit_card_balance_test, on="SK_ID_CURR", how="outer", suffixes=('_application', '_credit_card_balance'))
###Output
_____no_output_____
###Markdown
df_pos_cash_balance
###Code
df_pos_cash_balance_train = df_pos_cash_balance[df_pos_cash_balance["SK_ID_CURR"].isin(df_application_train["SK_ID_CURR"])]
df_pos_cash_balance_test = df_pos_cash_balance[df_pos_cash_balance["SK_ID_CURR"].isin(df_application_test["SK_ID_CURR"])]
df_application_train = pd.merge(df_application_train, df_pos_cash_balance_train, on="SK_ID_CURR", how="outer", suffixes=('_application', '_pos_cash_balance'))
df_application_test = pd.merge(df_application_test, df_pos_cash_balance_test, on="SK_ID_CURR", how="outer", suffixes=('_application', '_pos_cash_balance'))
###Output
_____no_output_____
###Markdown
df_previous_application
###Code
df_previous_application_train = df_previous_application[df_previous_application["SK_ID_CURR"].isin(df_application_train["SK_ID_CURR"])]
df_previous_application_test = df_previous_application[df_previous_application["SK_ID_CURR"].isin(df_application_test["SK_ID_CURR"])]
df_application_train = pd.merge(df_application_train, df_previous_application_train, on="SK_ID_CURR", how="outer", suffixes=('_application', '_previous_application'))
df_application_test = pd.merge(df_application_test, df_previous_application_test, on="SK_ID_CURR", how="outer", suffixes=('_application', '_previous_application'))
###Output
_____no_output_____
###Markdown
df_bureau_balance and df_bureau
###Code
df_bureau_train = df_bureau[df_bureau["SK_ID_CURR"].isin(df_application_train["SK_ID_CURR"])]
df_bureau_test = df_bureau[df_bureau["SK_ID_CURR"].isin(df_application_test["SK_ID_CURR"])]
df_application_train = pd.merge(df_application_train, df_bureau_train, on="SK_ID_CURR", how="outer", suffixes=('_application', '_bureau'))
df_application_test = pd.merge(df_application_test, df_bureau_test, on="SK_ID_CURR", how="outer", suffixes=('_application', '_bureau'))
gc.collect()
###Output
_____no_output_____
###Markdown
Model Building Train-Validation Split
###Code
input_columns = df_application_train.columns
input_columns = input_columns[input_columns != 'TARGET']
target_column = 'TARGET'
X = df_application_train[input_columns]
y = df_application_train[target_column]
gc.collect()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, train_size=0.7)
lgb_params = {
'boosting_type': 'gbdt',
'colsample_bytree': 0.7, # #Same as feature_fraction
'learning_rate': 0.1604387053222455,
'max_depth' : 4,
'metric' : 'auc',
'min_child_weight': 9,
'num_boost_round': 5000, ##??
'nthread': -1,
'objective': 'binary',
'lambda_l1': 3.160842634951819,
'lambda_l2': 4.438488456929287,
'reg_gamma':0.6236454630290655,
'seed': 42,
'subsample': 0.8, # #Same as bagging_fraction
'verbose': 1,
}
# 'max_bin': 100}
# 'num_leaves': 40,
#task?
#num_boost_round
#can I run this on my gpu?
lgb_params = {
'boosting_type': 'gbdt',
'colsample_bytree': 0.7, # #Same as feature_fraction
'learning_rate': 0.0204387053222455,
'max_depth' : 4,
'metric' : 'auc',
'min_child_weight': 9,
'num_boost_round': 5000, ##??
'nthread': -1,
'objective': 'binary',
'lambda_l1': 3.160842634951819,
'lambda_l2': 4.438488456929287,
'reg_gamma':0.6236454630290655,
'seed': 42,
'subsample': 0.8, # #Same as bagging_fraction
'verbose': 1,
}
# 'max_bin': 100}
# 'num_leaves': 40,
#task?
#num_boost_round
#can I run this on my gpu?
# dtrain_lgb = lgb.Dataset(X_train, label=y_train)
# dtest_lgb = lgb.Dataset(X_test, label=y_test)
dtrain_lgb = lgb.Dataset(X, label=y)
cv_result_lgb = lgb.cv(lgb_params,
dtrain_lgb,
num_boost_round=5000,
nfold=5,
stratified=True,
early_stopping_rounds=200,
verbose_eval=100,
show_stdv=True)
num_boost_rounds_lgb = len(cv_result_lgb['auc-mean'])
print(max(cv_result_lgb['auc-mean']))
print('num_boost_rounds_lgb=' + str(num_boost_rounds_lgb))
# #Final Model
gc.collect()
# model = lgb.train(lgbm_params, lgb.Dataset(X_train,label=y_train), 270, lgb.Dataset(X_test,label=y_test), verbose_eval= 50)
# model = lgb.train(lgbm_params, lgb.Dataset(X,y), 270, [lgb.Dataset(X_train,label=y_train), lgb.Dataset(X_test,label=y_test)],verbose_eval= 50)
model_lgb = lgb.train(lgb_params, lgb.Dataset(X, label=y), num_boost_round=num_boost_rounds_lgb, verbose_eval= 50)
###Output
_____no_output_____
###Markdown
LB 0.792
[3700] cv_agg's auc: 0.790667 + 0.0028882
[3800] cv_agg's auc: 0.790669 + 0.00290076
[3900] cv_agg's auc: 0.790662 + 0.0029096
[4000] cv_agg's auc: 0.790644 + 0.00293775
0.7906856024277595
num_boost_rounds_lgb=3867
--------------------------------------------
[3400] cv_agg's auc: 0.791002 + 0.00262433
[3500] cv_agg's auc: 0.791015 + 0.00263916
[3600] cv_agg's auc: 0.790988 + 0.00262805
0.791034849384312
num_boost_rounds_lgb=3448
---------------------------------------------
LightGBM
LB: 0.783
[50] valid_0's auc: 0.782805 valid_1's auc: 0.782941
[100] valid_0's auc: 0.80091 valid_1's auc: 0.799953
[150] valid_0's auc: 0.812043 valid_1's auc: 0.810725
[200] valid_0's auc: 0.82048 valid_1's auc: 0.818973
[250] valid_0's auc: 0.827426 valid_1's auc: 0.825887
----------------------------------------------
XGBOOST
LB: 0.779
[0] train-auc:0.663203 valid-auc:0.662528
[100] train-auc:0.785183 valid-auc:0.78342
[200] train-auc:0.798683 valid-auc:0.796623
[269] train-auc:0.804624 valid-auc:0.802767
LB: 0.785
[0] train-auc:0.677391 valid-auc:0.677353
[100] train-auc:0.800638 valid-auc:0.799899
[200] train-auc:0.819781 valid-auc:0.818601
[269] train-auc:0.828946 valid-auc:0.828207
###Code
df_predict = model_lgb.predict(df_application_test, num_iteration=model_lgb.best_iteration)
submission = pd.DataFrame()
submission["SK_ID_CURR"] = df_application_test["SK_ID_CURR"]
submission["TARGET"] = df_predict
submission.to_csv("../submissions/model_2_lightgbm_v41.csv", index=False)
# #Should be 48744, 2
submission.shape
# #Feature Importance
importance = dict(zip(X_train.columns, model_lgb.feature_importance()))
sorted(((value,key) for (key,value) in importance.items()), reverse=True)
# #Total Number of Features --- 227
len(sorted(((value,key) for (key,value) in importance.items()), reverse=True))
# #Number of Features > 50 --- 21
len(sorted(((value,key) for (key,value) in importance.items() if value > 50), reverse=True))
# #Number of Features > 60 --- 21
# (sorted(((key) for (key,value) in importance.items() if value > 50), reverse=True))
# #Custom Interrupt Point for Run All below condition
assert False, "Intentional stop: keeps 'Run All' from executing the cells below (comment out to continue)"
cv_metrics = """
[100] cv_agg's auc: 0.781284 + 0.0026998
[200] cv_agg's auc: 0.786374 + 0.00245158
[300] cv_agg's auc: 0.787628 + 0.00275346
[400] cv_agg's auc: 0.787835 + 0.00279943
num_boost_rounds_lgb=390
"""
lb_metrics = 0.790
mongo_json = {
"model" : "lightgbm",
"model_params" : lgb_params,
"feature_importance" : dict([(key, int(val)) for key, val in importance.items()]),
"cv_metrics": cv_metrics,
"lb_metrics" : lb_metrics
}
###Output
_____no_output_____
###Markdown
MongoDB - My Personal ModelDB
###Code
client = MongoClient('localhost', 27017)
db = client.kaggle_homecredit
collection = db.model2_lightgbm
collection.insert_one(mongo_json)
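# A small retrieval sketch (assumes the collection created above): list stored
# runs ranked by leaderboard score so past experiments can be compared quickly.
for run in collection.find({}, {"model": 1, "lb_metrics": 1}).sort("lb_metrics", -1):
    print(run.get("model"), run.get("lb_metrics"))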
df_application_train.size
###Output
_____no_output_____
|
.ipynb_checkpoints/recipe V3-checkpoint.ipynb
|
###Markdown
Smile Detection
My intention for this notebook is to use it as a starting point for future computer vision projects. The purpose isn't to throw 10 lines of code at the problem and get some good results, blindly feeding images to a convnet and getting 90% accuracy. What I want to do is really understand the data, understand the model, and avoid making mistakes. If you are looking for an interesting read, I suggest [this link](https://karpathy.github.io/2019/04/25/recipe/). Get to know the data
The goal will be to classify images to detect whether the person is smiling or not (binary classification). The data comes from LFWcrop and is available [here](http://conradsanderson.id.au/lfwcrop/). Let's take a look at it.
###Code
from keras.preprocessing import image
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
import keras
from keras.models import Sequential, Model
from keras.layers import Input, Flatten, Dense, Dropout, Convolution2D, Conv2D, MaxPooling2D, Lambda, GlobalMaxPooling2D, GlobalAveragePooling2D, BatchNormalization, Activation, AveragePooling2D, Concatenate
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.utils import np_utils
import os
import pandas as pd  # needed for pd.read_csv below
smile_names = os.listdir('./SMILE_Dataset/train/smile')
non_smile_names = os.listdir('./SMILE_Dataset/train/no_smile')
labels = pd.read_csv('./SMILE_Dataset/annotations.csv', header=None, names=['fname','label'])
?Image.open
from PIL import Image
x = np.array([np.array(Image.open('./SMILE_Dataset/all/'+fname)) for fname in labels['fname']])
x2 = np.array([image.img_to_array(image.load_img('./SMILE_Dataset/all/'+fname, target_size=(64, 64))) for fname in labels['fname']])
#x2 = image.img_to_array(image.load_img(img_src_name, target_size=(64, 64)))
X_train, X_val, y_train, y_val = train_test_split(x2, labels, test_size=0.10, random_state = 10)
#X_train = X_train.reshape(360, 193, 162, 1)
#train_features = np.reshape(train_features, (320, 2 * 2 * 512))
#X_train.shape
datagen = ImageDataGenerator(
rescale=1./255,
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=False, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(X_train)
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(labels['label'][0:360])
Y_train = integer_encoded
#onehot_encoder = OneHotEncoder(sparse=False)
#integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
#Y_train = onehot_encoder.fit_transform(integer_encoded)
history = model.fit_generator(datagen.flow(X_train,Y_train, batch_size=20),
epochs = 5, validation_data = (X_train,Y_train),
verbose = 2, steps_per_epoch=X_train.shape[0]
, callbacks=[learning_rate_reduction])
for i in range(10):
train = dummy[0+i*10:20+i*10]
from sklearn.model_selection import StratifiedKFold
def load_data():
    # load your data using this function (placeholder: return images X, labels y, header info)
    raise NotImplementedError

def create_model():
    # create your model using this function
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    # Feed to a densely connected layer for prediction
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy',
                  optimizer=optimizers.RMSprop(lr=1e-4),
                  metrics=['acc'])
    return model

def train_and_evaluate_model(model, data_train, labels_train, data_test, labels_test):
    # fit and evaluate here
    model.fit(data_train, labels_train, validation_data=(data_test, labels_test))

n_folds = 10
data, labels, header_info = load_data()
skf = StratifiedKFold(n_splits=n_folds, shuffle=True)
for i, (train, test) in enumerate(skf.split(data, labels)):
    print("Running Fold", i + 1, "/", n_folds)
    model = None  # Clearing the NN.
    model = create_model()
    train_and_evaluate_model(model, data[train], labels[train], data[test], labels[test])
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.10, random_state = 10)
200/10
import time
for i in range(10):
img = image.load_img('images/training/non_smile/' + non_smile_names[i], target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x /= 255.
plt.imshow(x[0])
plt.axis('off')
plt.title(non_smile_names[i])
plt.show()
###Output
_____no_output_____
###Markdown
One thing I notice here is that the faces are cropped from the background, and what seems to define a smile is whether we see teeth. That means someone who is surprised, with their mouth open in an 'O' shape, would probably be classified as smiling. Furthermore, here we need very localized feature maps, as we don't need filters for the context of the image. The next step is to build a very simplistic toy example to see if I can look at my network and its wrong predictions and understand where they might be coming from. In addition, this will give us a full training + evaluation skeleton (a seeding sketch is included at the top of the next code cell).
- Fix a random seed
- Keep it simple
###Code
#overfit one batch.
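# The bullet list above says "fix a random seed"; a minimal seeding sketch for
# the libraries used here. The seed value 42 is an arbitrary illustration choice;
# seeding the backend is left commented since the exact call depends on the
# installed TensorFlow version.
import random
import numpy as np
random.seed(42)      # python RNG
np.random.seed(42)   # numpy RNG (weight init, shuffling)
# import tensorflow as tf; tf.set_random_seed(42)   # TF 1.x
# import tensorflow as tf; tf.random.set_seed(42)   # TF 2.x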
training_dir = 'SMILE_Dataset/train/'
testing_dir = 'SMILE_Dataset/test/'
#smile_names = os.listdir('./SMILE_Dataset/train/smile')
#non_smile_names = os.listdir('./SMILE_Dataset/train/no_smile')
print('Training: there are {} smiling images and {} non-smiling images.'.format(len(os.listdir('SMILE_Dataset/train/smile')),len(os.listdir('SMILE_Dataset/train/no_smile'))))
print('Testing: there are {} smiling images and {} non-smiling images.'.format(len(os.listdir('SMILE_Dataset/test/smile')),len(os.listdir('SMILE_Dataset/test/no_smile'))))
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
# Feed to a densely connected layer for prediction
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
training_dir,
target_size=(64,64),
batch_size=20,
class_mode='binary')
test_generator = test_datagen.flow_from_directory(
testing_dir,
target_size=(64,64),
batch_size=20,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=test_generator,
validation_steps=50)
# Good practice to save the model after training!
model.save('smile-detection.h5')
###Output
_____no_output_____
###Markdown
Wow, we reached ~96%! This is even better than using transfer learning. Let's quickly try with image augmentation.
###Code
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=10,
width_shift_range=0.1,
shear_range=0.1,
zoom_range=0.1,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
training_dir,
target_size=(64,64),
batch_size=20,
class_mode='binary')
test_generator = test_datagen.flow_from_directory(
testing_dir,
target_size=(64,64),
batch_size=20,
class_mode='binary')
# online archi
model = Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Feed to a densely connected layer for prediction
model.add(layers.Flatten())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
# Feed to a densely connected layer for prediction
model.add(layers.Flatten())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001)
history = model.fit_generator(datagen.flow(X_train,Y_train, batch_size=batch_size),
epochs = epochs, validation_data = (X_val,Y_val),
verbose = 2, steps_per_epoch=X_train.shape[0] // batch_size
, callbacks=[learning_rate_reduction])
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=20,
epochs=50,
validation_data=test_generator,
validation_steps=50,
callbacks=[learning_rate_reduction])
model.save('smile-detection-97.h5')
###Output
_____no_output_____
###Markdown
Training was slow. A recent advancement in convnets is the use of depthwise separable convolutions, which speed up training. This idea is the basis of the Xception architecture (a short SeparableConv2D sketch is included at the top of the next cell).
###Code
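# The note above mentions depthwise separable convolutions (the building block of
# Xception). A minimal sketch of the same small architecture rewritten with
# SeparableConv2D; "sep_model" is an illustrative name and is not trained below.
sep_model = models.Sequential()
sep_model.add(layers.SeparableConv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
sep_model.add(layers.MaxPooling2D((2, 2)))
sep_model.add(layers.SeparableConv2D(64, (3, 3), activation='relu'))
sep_model.add(layers.MaxPooling2D((2, 2)))
sep_model.add(layers.Flatten())
sep_model.add(layers.Dense(64, activation='relu'))
sep_model.add(layers.Dense(1, activation='sigmoid'))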
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=25,
epochs=20,
validation_data=test_generator,
validation_steps=50)
model.save('smile-detection-91.h5')
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
###Output
_____no_output_____
|
src/02 McCabe/02 McCabe.ipynb
|
###Markdown
Function problem
###Code
import numpy as np               # used by eq2/McCabeThiele below
import matplotlib.pyplot as plt  # used by McCabeThiele below

def eq_og(xa,relative_volatility):
'''
DESCRIPTION
Returns equilibrium data from an input
of liquid composition (xa) and relative volatility
INPUTS:
xa : Liquid Composition
relative_volatility : Relative Volatility
OUTPUTS:
ya : Vapour Composition
'''
ya=(relative_volatility*xa)/(1+(relative_volatility-1)*xa)
# ya found using Dalton and Raoults laws
return ya
def eq(xa,relative_volatility,nm):
'''
DESCRIPTION
Returns equilibrium data from an input
of liquid composition (xa) and relative volatility
accounting for the Murphree Efficiency of the
system
INPUTS:
xa : Liquid Composition
relative_volatility : Relative Volatility
nm : Murphree Efficiency
OUTPUTS:
ya : Vapour Composition
'''
ya=(relative_volatility*xa)/(1+(relative_volatility-1)*xa)
# ya found using Dalton and Raoults laws
ya=((ya-xa)*nm)+xa # using definition of murphree efficiency
return ya
def eq2(ya,relative_volatility,nm):
'''
DESCRIPTION
Returns equilibrium data from an input
of liquid composition (ya) and relative volatility
accounting for the Murphree Efficiency of the
system. This function is the inverse of eq(...) above.
INPUTS:
ya : Vapour Composition
relative_volatility : Relative Volatility
nm : Murphree Efficiency
OUTPUTS:
xa : Liquid Composition
'''
# inverse of eq() takes the form of a quadratic
a=((relative_volatility*nm)-nm-relative_volatility+1)
b=((ya*relative_volatility)-ya+nm-1-(relative_volatility*nm))
c=ya
xa=(-b-np.sqrt((b**2)-(4*a*c)))/(2*a) # solving quadratic using
# quadratic formula
return xa
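# Quick numerical sanity check (illustrative values): eq2 should invert eq for
# a given relative volatility and Murphree efficiency.
_alpha, _nm, _x = 2.4, 0.75, 0.4
assert abs(eq2(eq(_x, _alpha, _nm), _alpha, _nm) - _x) < 1e-8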
def stepping_ESOL(x1,y1,relative_volatility,R,xd):
'''
DESCRIPTION:
Performs a single step over the ESOL
operating line.
INPUTS:
x1 : Initial liquid composition on ESOL
y1 : Initial vapour composition on ESOL
relative_volatility : Relative Volatility
R : Reflux Ratio
xd : Distillate Composition
OUTPUTS:
x1 : Initial liquid composition
x2 : Liquid composition after stepping
y1 : Initial vapour composition
y2 : Vapour composition after stepping
'''
x2=eq2(y1,relative_volatility,nm) #getting new liquid comp
y2=(((R*x2)/(R+1))+(xd/(R+1))) #ESOL equation
return x1,x2,y1,y2
def stepping_SSOL(x1,y1,relative_volatility,ESOL_q_x,ESOL_q_y,xb):
'''
DESCRIPTION:
Performs a single step over the SSOL
operating line.
INPUTS:
x1 : Initial liquid composition on ESOL
y1 : Initial vapour composition on ESOL
relative_volatility : Relative Volatility
ESOL_q_x : Point at which ESOL intersects q-line (x)
ESOL_q_y : Point at which ESOL intersects q-line (y)
xb : Bottoms composition
OUTPUTS:
x1 : Initial liquid composition
x2 : Liquid composition after stepping
y1 : Initial vapour composition
y2 : Vapour composition after stepping
'''
x2=eq2(y1,relative_volatility,nm) # getting new liquid comp
m=((xb-ESOL_q_y)/(xb-ESOL_q_x)) # gradient of SSOL
c=ESOL_q_y-(m*ESOL_q_x) # intercept of SSOL
y2=(m*x2)+c # SSOL equation in form 'y=mx+c'
return x1,x2,y1,y2
def McCabeThiele(PaVap,PbVap,R_factor,xf,xd,xb,q,nm):
'''
DESCRIPTION:
Performs the McCabe-Thiele construction in order to calculate
optimum number of stages, and optimum feed stage. Also taking into
account the Murphree Efficiency of the system.
INPUTS:
PaVap :Vapour pressure of component a (more volatile)
PbVap :Vapour pressure of component b (less volatile)
R_factor :Amount Rmin is scaled by to obtain the actual reflux ratio
xf :Feed composition
xd :Distillate composition
xb :Bottoms composition
q :Liquid fraction of feed
nm :Murphree Efficiency
OUTPUTS:
A McCabe-Thiele plot, displaying optimum number of equilibrium stages,
optimum feed stage, actual reflux ratio, actual bottoms composition.
'''
# Ensuring errors don't occur regarding dividing by 0
if q==1:
q-=0.00000001
if q==0:
q+=0.00000001
relative_volatility=PaVap/PbVap #obtaining relative volatility from definition
xa=np.linspace(0,1,100) #creating x-axis
ya_og=eq_og(xa[:],relative_volatility) #getting original equilibrium data
ya_eq=eq(xa[:],relative_volatility,nm) #getting modified equilibrium data
# taking into account the Murphree Efficiency
x_line=xa[:] #creating data-points for y=x line
y_line=xa[:]
# finding where the q-line intersects the equilibrium curve
# takes the form of a quadratic equation
al=relative_volatility
a=((al*q)/(q-1))-al+(al*nm)-(q/(q-1))+1-nm
b=(q/(q-1))-1+nm+((al*xf)/(1-q))-(xf/(1-q))-(al*nm)
c=xf/(1-q)
if q>1:
q_eqX=(-b+np.sqrt((b**2)-(4*a*c)))/(2*a)
else:
q_eqX=(-b-np.sqrt((b**2)-(4*a*c)))/(2*a)
# where the q-line intersects the equilibrium curve (x-axis)
q_eqy=eq(q_eqX,relative_volatility,nm)
# where the q-line intersects the equilibrium curve (y-axis)
theta_min=xd*(1-((xd-q_eqy)/(xd-q_eqX))) # ESOL y-intercept to obtain Rmin
R_min=(xd/theta_min)-1 # finding Rmin
R=R_factor*R_min # multiplying by R_factor to obtain R
theta=(xd/(R+1)) # finding new ESOL y-intercept
ESOL_q_x=((theta-(xf/(1-q)))/((q/(q-1))-((xd-theta)/xd)))
# Where the new ESOL intercepts the q-line (x-axis)
ESOL_q_y=(ESOL_q_x*((xd-theta)/xd))+theta
# Where the new ESOL intercepts the q-line (y-axis)
plt.figure(num=None, figsize=(6, 6), dpi=None, facecolor=None, edgecolor=None)
plt.axis([0,1,0,1]) #creating axes between 0-1
plt.plot([xd,xd],[0,xd],color='k',linestyle='--') # distillate comp line
plt.plot([xb,xb],[0,xb],color='g',linestyle='--') # bottoms comp line
plt.plot([xf,xf],[0,xf],color='b',linestyle='--') # feed comp line
#plt.plot([xd,0],[xd,theta_min],color='r',linestyle='--') # ESOL at Rmin
'''UN-COMMENT TO SEE ESOL FOR Rmin ^^^'''
plt.plot([xd,ESOL_q_x],[xd,ESOL_q_y],color='k') # ESOL at R
plt.plot([xb,ESOL_q_x],[xb,ESOL_q_y],color='k') # SSOL
x1,x2,y1,y2=stepping_ESOL(xd,xd,relative_volatility,R,xd)
step_count=1 # current number of equilibrium stages
plt.plot([x1,x2],[y1,y1],color='r') # Step_1
plt.plot([x2,x2],[y1,y2],color='r') # Step_2
plt.text(x2-0.045,y1+0.045,step_count)
while x2>ESOL_q_x: # up until the q-line, step down
x1,x2,y1,y2=stepping_ESOL(x2,y2,relative_volatility,R,xd)
plt.plot([x1,x2],[y1,y1],color='r')
plt.plot([x2,x2],[y1,y2],color='r')
step_count+=1 # incrementing equilibrium stage count
plt.text(x2-0.045,y1+0.045,step_count) # label the stage
feed_stage=step_count # obtaining optimum feed stage
x1,x2,y1,y2=stepping_SSOL(x1,y1,relative_volatility\
,ESOL_q_x,ESOL_q_y,xb)
plt.plot([x1,x2],[y1,y1],color='r')
plt.plot([x2,x2],[y1,y2],color='r')
step_count+=1
while x2>xb: # while the composition is greater than desired
x1,x2,y1,y2=stepping_SSOL(x2,y2,relative_volatility\
,ESOL_q_x,ESOL_q_y,xb)
# continue stepping...
plt.plot([x1,x2],[y1,y1],color='r')
plt.plot([x2,x2],[y1,y2],color='r')
plt.text(x2-0.045,y1+0.045,step_count) # label stage
step_count+=1 #increment equilibrium stage counter
plt.plot([x2,x2],[y1,0],color='k')
xb_actual=x2
stages=step_count-1
plt.plot(x_line,y_line,color='k') # x=y line
plt.plot(xa,ya_og,color='k') # equilibrium curve
plt.plot(xa,ya_eq,color='g',linestyle='--') # equilibrium curve (with efficiency)
plt.plot([xf,ESOL_q_x],[xf,ESOL_q_y],color='k') #q- line
#plt.plot([ESOL_q_x,q_eqX],[ESOL_q_y,q_eqy],color='r',linestyle='--') #q- line
'''UN-COMMENT TO SEE FULL q LINE ^^^'''
plt.xlabel('xa') #Just a few labels and information...
plt.ylabel('ya')
plt.grid(True) # wack the grid on for bonus points
plt.show()
return
###Output
_____no_output_____
###Markdown
Problem data
###Code
PaVap=179.2 # Vapour pressure of a
PbVap=74.3 # Vapour pressure of b
xd=0.975 # Distillate composition
xb=0.025 # Bottoms composition
xf=0.5 # Feed composition
q=0.5 # q
R_factor=1.3 # Reflux ratio = R_min* R_factor
nm=0.75 # Murphree Efficiency
McCabeThiele(PaVap,PbVap,R_factor,xf,xd,xb,q,nm)
###Output
_____no_output_____
|
python_bootcamp/Untitled Folder/Bayes Classifier.ipynb
|
###Markdown
Notebook Imports
###Code
from os import walk
from os.path import join
import pandas as pd
import matplotlib.pyplot as plt
import nltk
from nltk.stem import PorterStemmer
from nltk.stem import SnowballStemmer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
%matplotlib inline
###Output
_____no_output_____
###Markdown
Constants
###Code
EXAMPLE_FILE = "../data/SpamData/01_Processing/practice_email.txt"
SPAM_1_PATH = "../data/SpamData/01_Processing/spam_assassin_corpus/spam_1"
SPAM_2_PATH = "../data/SpamData/01_Processing/spam_assassin_corpus/spam_2"
EASY_NONSPAM_1_PATH = "../data/SpamData/01_Processing/spam_assassin_corpus/easy_ham_1"
EASY_NONSPAM_2_PATH = "../data/SpamData/01_Processing/spam_assassin_corpus/easy_ham_2"
SPAM_CAT = 1
HAM_CAT = 0
DATA_JSON_FILE = "../data/SpamData/01_Processing/email-text-data.json"
###Output
_____no_output_____
###Markdown
Reading Files
###Code
stream = open(EXAMPLE_FILE, encoding="latin-1")
is_body = False
lines = []
for line in stream:
if is_body:
lines.append(line)
elif line == "\n":
is_body = True
stream.close()
email_body = "\n".join(lines)
print(email_body)
import sys
sys.getfilesystemencoding()
###Output
_____no_output_____
###Markdown
Generator Functions
###Code
def generate_square(N):
for my_number in range(N):
yield my_number ** 2
for i in generate_square(5):
print(i, end=" ->")
###Output
0 ->1 ->4 ->9 ->16 ->
###Markdown
Email body extraction
###Code
def email_body_generator(path):
for root, dirnames, filename in walk(path):
for file_name in filename:
filepath = join(root, file_name)
stream = open(filepath, encoding="latin-1")
is_body = False
lines = []
for line in stream:
if is_body:
lines.append(line)
elif line == "\n":
is_body = True
stream.close()
email_body = "\n".join(lines)
yield file_name, email_body
def df_from_directory(path, classification):
rows = []
row_names = []
for file_name, email_body in email_body_generator(path):
rows.append({"MESSAGE": email_body, "CATEGORY": classification})
row_names.append(file_name)
return pd.DataFrame(rows, index=row_names)
spam_emails = df_from_directory(SPAM_1_PATH, 1)
spam_emails = spam_emails.append(df_from_directory(SPAM_2_PATH,1))
spam_emails.head()
spam_emails.shape
ham_emails = df_from_directory(EASY_NONSPAM_1_PATH, HAM_CAT)
ham_emails = ham_emails.append(df_from_directory(EASY_NONSPAM_2_PATH, HAM_CAT))
ham_emails.shape
data = pd.concat([spam_emails, ham_emails])
print("Shape of entire dataframe is ", data.shape)
data.head()
data.tail()
###Output
_____no_output_____
###Markdown
Data Cleaning: Checking for Missing Values
###Code
data
data["MESSAGE"]
data["MESSAGE"].isnull().value_counts()
data["MESSAGE"].isnull().values.any()
# Check if there are empty emails (string lengh zero)
(data.MESSAGE.str.len() == 0).any()
(data.MESSAGE.str.len() == 0).value_counts()
###Output
_____no_output_____
###Markdown
Locate empty emails
###Code
data[data.MESSAGE.str.len() == 0].index
###Output
_____no_output_____
###Markdown
Remove system file entries from dataframe
###Code
data.drop(["cmds"], inplace=True)
data.shape
###Output
_____no_output_____
###Markdown
Add Document IDs to track Emails in Dataset
###Code
documents_ids = range(0,len(data.index))
data["DOC_ID"] = documents_ids
data["FILE_NAME"] = data.index
data.set_index("DOC_ID", inplace=True)
data
###Output
_____no_output_____
###Markdown
Save to File using Pandas
###Code
data.to_json(DATA_JSON_FILE)
###Output
_____no_output_____
###Markdown
Number of Spam Messages Visualised (Pie Chart)
###Code
data.CATEGORY.value_counts()
amount_of_spam = data.CATEGORY.value_counts()[1]
amount_of_ham = data.CATEGORY.value_counts()[0]
category_name = ["Spam","Legit Mail"]
sizes = [amount_of_spam, amount_of_ham]
custom_colours = ["#ff7675","#74b9ff"]
plt.figure(figsize=(2,2), dpi=227)
plt.pie(sizes, labels=category_name, textprops={"fontsize":6},
startangle=90,
autopct="%1.1f%%",
colors=custom_colours, explode=[0,0.1])
plt.show()
category_name = ["Spam","Legit Mail"]
sizes = [amount_of_spam, amount_of_ham]
custom_colours = ["#ff7675","#74b9ff"]
plt.figure(figsize=(2,2), dpi=227)
plt.pie(sizes, labels=category_name, textprops={"fontsize":6},
startangle=90,
autopct="%1.f%%",
colors=custom_colours,
pctdistance=0.8)
# Draw circle
centre_circle = plt.Circle((0,0), radius=0.6, fc="white")
plt.gca().add_artist(centre_circle)
plt.show()
category_name = ["Spam","Legit Mail","Updates","Promotions"]
sizes= [25,43,19,22]
custom_colours = ["#ff7675","#74b9ff","#55efc4","#ffeaa7"]
offset = [0.05,0.05,0.05,0.05]
plt.figure(figsize=(2,2), dpi=227)
plt.pie(sizes, labels=category_name, textprops={"fontsize":6},
startangle=90,
autopct="%1.f%%",
colors=custom_colours,
pctdistance=0.8, explode=offset)
# Draw circle
centre_circle = plt.Circle((0,0), radius=0.6, fc="white")
plt.gca().add_artist(centre_circle)
plt.show()
###Output
_____no_output_____
###Markdown
Natural Language Processing Text Pre-Processing
###Code
msg = "All work and no play makes Jack a dull boy"
msg.lower()
###Output
_____no_output_____
###Markdown
Download the NLTK Resources (Tokenizer & Stopwords)
###Code
nltk.download("punkt")
nltk.download("stopwords")
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] /home/sjvasconcello/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
###Markdown
Tokenising
###Code
msg = "All work and no play makes Jack a dull boy"
word_tokenize(msg.lower())
###Output
_____no_output_____
###Markdown
Removing Stop Words
###Code
stop_words = set(stopwords.words("english"))
type(stop_words)
if "this" in stop_words: print("Found it!")
if "hello" not in stop_words: print("Nope!")
msg = "All work and no play makes Jack a dull boy. To be or not to be. \
Nobody expects the Spanish Inquisition!"
words = word_tokenize(msg.lower())
stemmer = SnowballStemmer("english")
filtered_words = []
for word in words:
if word not in stop_words:
stemmed_word = stemmer.stem(word)
filtered_words.append(stemmed_word)
print(filtered_words)
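# A consolidation sketch of the steps above into one helper (the name
# "clean_message" is illustrative): lowercase, tokenize, drop stop words and
# punctuation, then stem.
def clean_message(message,
                  stemmer=SnowballStemmer("english"),
                  stop_words=set(stopwords.words("english"))):
    words = word_tokenize(message.lower())
    return [stemmer.stem(w) for w in words
            if w not in stop_words and w.isalpha()]

clean_message("All work and no play makes Jack a dull boy.")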
###Output
['work', 'play', 'make', 'jack', 'dull', 'boy', '.', '.', 'nobodi', 'expect', 'spanish', 'inquisit', '!']
|
week05/seminar05.ipynb
|
###Markdown
MultiheadAttention in detail
###Code
import torch
from torch import nn
###Output
_____no_output_____
###Markdown
einops
###Code
import einops
x = torch.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
x
###Output
_____no_output_____
###Markdown
Let's swap the second and third dimensions and then merge the first and second dimensions
###Code
z = x.transpose(1, 2)
assert torch.allclose(torch.cat([z_ for z_ in z]), z.flatten(0, 1))
# [2, 3, 4, 5] -> [2, 4, 3, 5] -> [8, 3, 5]
x.transpose(1, 2).flatten(0, 1).shape
torch.allclose(
einops.rearrange(x, 'first second third fourth -> (first third) second fourth'),
x.transpose(1, 2).flatten(0, 1)
)
###Output
_____no_output_____
###Markdown
Which is more readable? :)
###Code
class MultiheadAttention(nn.Module):
def __init__(self, input_dim: int, num_heads: int, dropout: float):
super(MultiheadAttention, self).__init__()
self.input_dim = input_dim
self.num_heads = num_heads
self.head_dim = input_dim // num_heads
assert self.head_dim * num_heads == self.input_dim
self.scaling = self.head_dim ** -0.5
# Gather Q, K, V projections into one big projection
self.projection = nn.Linear(input_dim, input_dim * 3, bias=False)
self.out_projection = nn.Linear(input_dim, input_dim, bias=False)
self.dropout = nn.Dropout(dropout)
@staticmethod
def get_key_padding_mask(lengths: torch.Tensor) -> torch.Tensor:
"""
Args:
lengths (torch.Tensor): lengths of the sequences in the batch, of shape `(batch,)`.
Returns: mask to exclude keys that are pads, of shape `(batch, src_len)`,
where padding elements are indicated by 1s.
"""
max_length = torch.max(lengths).item()
mask = (
torch.arange(max_length, device=lengths.device)
.ge(lengths.view(-1, 1))
.contiguous()
.bool()
)
return mask
def _check_input_shape(self, input: torch.Tensor, mask: torch.BoolTensor):
if input.dim() != 3:
raise ValueError('Input should have 3 dimensions')
if input.size(-1) != self.input_dim:
raise ValueError('Expected order of dimensions is [T, B, C]')
if mask.dtype != torch.bool:
raise ValueError('Expected type of mask is torch.bool')
def forward(self, input: torch.Tensor, key_padding_mask: torch.BoolTensor) -> torch.Tensor:
self._check_input_shape(input, key_padding_mask)
input_len, batch_size, _ = input.size()
query, key, value = self.projection(input).chunk(3, dim=-1)
assert query.size() == (input_len, batch_size, self.input_dim)
# Gather batches with heads
query = einops.rearrange(
query, 'T batch (head dim) -> (batch head) T dim', head=self.num_heads
)
key = einops.rearrange(
key, 'T batch (head dim) -> (batch head) dim T', head=self.num_heads
)
value = einops.rearrange(
value, 'T batch (head dim) -> (batch head) T dim', head=self.num_heads
)
attn_weights = torch.bmm(query, key)
attn_weights.mul_(self.scaling)
assert attn_weights.size() == (batch_size * self.num_heads, input_len, input_len)
# Masking padding scores
attn_weights = attn_weights.view(batch_size, self.num_heads, input_len, input_len)
attn_weights = attn_weights.masked_fill(
key_padding_mask.unsqueeze(1).unsqueeze(2),
float('-inf'),
)
attn_weights = attn_weights.view(batch_size * self.num_heads, input_len, input_len)
attn_probs = torch.softmax(attn_weights, dim=-1)
attn_probs = self.dropout(attn_probs)
attn = torch.bmm(attn_probs, value)
assert attn.size() == (batch_size * self.num_heads, input_len, self.head_dim)
attn = einops.rearrange(
attn, '(batch head) T dim -> T batch (head dim)', head=self.num_heads
)
attn = self.out_projection(attn)
attn = self.dropout(attn)
return attn
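# A usage sketch with illustrative sizes: input layout is [T, B, C] as checked
# in _check_input_shape, and the key padding mask comes from sequence lengths.
mha = MultiheadAttention(input_dim=16, num_heads=4, dropout=0.1)
lengths = torch.tensor([5, 3])                               # two sequences in the batch
pad_mask = MultiheadAttention.get_key_padding_mask(lengths)  # (batch, src_len)
out = mha(torch.randn(5, 2, 16), pad_mask)                   # (T, B, C)
assert out.shape == (5, 2, 16)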
###Output
_____no_output_____
###Markdown
Transformers in PyTorch
###Code
nn.Transformer
nn.TransformerDecoder
nn.TransformerDecoderLayer
nn.TransformerEncoder
nn.TransformerEncoderLayer
nn.MultiheadAttention
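# A minimal composition sketch (sizes are illustrative): a 2-layer
# TransformerEncoder over a sequence-first [T, B, C] batch.
enc_layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, dim_feedforward=32)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
encoded = encoder(torch.randn(10, 2, 16))  # same [T, B, C] layout as above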
###Output
_____no_output_____
|
IsingModel_2D.ipynb
|
###Markdown
###Code
#
# 2D Ising model "IsingModel_2D"
#
from random import random, randrange
import numpy as np
import matplotlib.pyplot as plt
from google.colab import files
def E_int(array, Nx,Ny): #interaction term
temp=0
for i in range(0,Nx):
for j in range(0,Ny):
a=i+1
if i == Nx-1:
a=0
temp=temp+array[i,j]*array[a,j]
for i in range(0,Nx):
for j in range(0,Ny):
c=j+1
if j== Ny-1 :
c=0
temp=temp+array[i,j]*array[i,c]
return temp
def s_ini(s, Nx,Ny): #initial state (random)
for k in range(int(Nx*Ny/2)):
i=randrange(Nx)
j=randrange(Ny)
s[i,j]=-1*s[i,j]
return s
ET_plot=[]
MT_plot=[]
CT_plot=[]
KBT_list=[1,5] # Temperature
N_step =1000 # Number of steps
Nave_step=int(N_step/10)
Nx= 10 # size of lattice (x direction)
Ny= 10 # size of lattice (y direction)
J = 1.0 # Interaction energy
B = 0.0 # External field
for KBT in KBT_list:
s= np.ones([Nx,Ny], int) # initial state
s=s_ini(s, Nx,Ny)
plt.imshow(s) # Initial state display
plt.title("Initial state")
plt.xlabel("Position")
plt.ylabel("Position")
plt.show()
E = -J* E_int(s, Nx,Ny) -B*np.sum(s) # Enerty calculation of the initial state
E2 = E*E
E_plot = []
E_plot.append(E/(Nx*Ny))
E2_plot = []
E2_plot.append((E/(Nx*Ny))*(E/(Nx*Ny)))
M = np.sum(s)
M_plot = []
Mave_plot=[]
M_plot.append(M/(Nx*Ny))
Mave_plot.append(M/(Nx*Ny))
for k in range(N_step):
i=randrange(Nx) # [i,j]:trial point
j=randrange(Ny)
s_trial=s.copy()
s_trial[i,j]= -1*s[i,j]
a=i+1
b=i-1
c=j+1
d=j-1
if i==Nx-1 :
a=0
if i==0 :
b=Nx-1
if j==Ny-1 :
c=0
if j==0 :
d=Ny-1
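# Local energy change for flipping s[i,j], using the periodic neighbours
# (a, b) in x and (c, d) in y: delta_E = 2*J*s_old*(sum of neighbours) + 2*B*s_old,
# written below in terms of the trial (flipped) spin.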
delta_E=2*s_trial[i,j]*-1*J*(s[a,j]+s[b,j]+s[i,c]+s[i,d])-B*(s_trial[i,j]-s[i,j])
E_trial =E+ delta_E
if E_trial < E : #Metropolis method
s = s_trial
E = E_trial
else :
if random() < np.exp(-(delta_E)/KBT):
s = s_trial
E = E_trial
E_plot.append(E/(Nx*Ny))
E2_plot.append((E/(Nx*Ny))**2)
M = np.sum(s)
M_plot.append(M/(Nx*Ny))
ET_plot.append(np.sum(E_plot[-Nave_step:])/Nave_step)
MT_plot.append(np.sum(M_plot[-Nave_step:])/Nave_step)
CT_plot.append((np.sum(E2_plot[-Nave_step:])/Nave_step-((np.sum(E_plot[-Nave_step:]))/Nave_step)**2)/KBT**2)
plt.imshow(s) # Final state display
plt.title("Final State")
plt.xlabel("Position")
plt.ylabel("Position")
plt.show()
plt.plot(np.linspace(1,N_step+1, N_step+1) ,E_plot)
plt.title("Energy Change")
plt.ylabel("Totla energy per spin")
plt.xlabel("Step")
plt.show()
plt.plot(KBT_list,ET_plot)
plt.ylabel("Totla energy per spin")
plt.xlabel("kBT")
plt.show()
plt.plot(KBT_list,MT_plot)
plt.ylabel("Total magnetic moment per spin")
plt.xlabel("kBT")
plt.show()
plt.plot(KBT_list,CT_plot)
plt.ylabel("Heat capacity per spin")
plt.xlabel("kBT")
plt.show()
###Output
_____no_output_____
|
Cleanse_Data.ipynb
|
###Markdown
Machine Learning for Engineers: [CleanseData](https://www.apmonitor.com/pds/index.php/Main/CleanseData)- [Data Cleansing](https://www.apmonitor.com/pds/index.php/Main/CleanseData) - Source Blocks: 7 - Description: Measurements from sensors or from human input can contain bad data that negatively affects machine learning. This tutorial demonstrates how to identify and remove bad data.- [Course Overview](https://apmonitor.com/pds)- [Course Schedule](https://apmonitor.com/pds/index.php/Main/CourseSchedule)
###Code
import numpy as np
z = np.array([[ 1, 2],
[ np.nan, 3],
[ 4, np.nan],
[ 5, 6]])
iz = np.any(np.isnan(z), axis=1)
print(~iz)
z = z[~iz]
print(z)
import numpy as np
import pandas as pd
z = pd.DataFrame({'x':[1,np.nan,4,5],'y':[2,3,np.nan,6]})
print(z)
result = z.dropna()                                    # drop rows containing any NaN
result = z.fillna(z.mean())                            # impute NaN with the column mean
result = z[z['y']<5.5]                                 # keep rows where y is below an upper limit
result = z[(z['y']<5.5) & (z['y']>=1.0)]               # keep rows where y falls inside a range
result = z[z['x'].notnull()].fillna(0).reset_index(drop=True)  # drop rows missing x, fill remaining NaN with 0
###Output
_____no_output_____
|
machine_learning/reinforcement_learning/generalized_stochastic_policy_iteration/tabular/eligibility_trace/np_eligibility_trace/off_policy_stochastic_eligibility_trace_watkins_q_lambda.ipynb
|
###Markdown
Eligibility Trace: Off-policy, Watkins' Q(lambda), Stochastic
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create environment
###Code
def create_environment_states():
"""Creates environment states.
Returns:
num_states: int, number of states.
num_terminal_states: int, number of terminal states.
num_non_terminal_states: int, number of non terminal states.
"""
num_states = 16
num_terminal_states = 2
num_non_terminal_states = num_states - num_terminal_states
return num_states, num_terminal_states, num_non_terminal_states
def create_environment_actions(num_non_terminal_states):
"""Creates environment actions.
Args:
num_non_terminal_states: int, number of non terminal states.
Returns:
max_num_actions: int, max number of actions possible.
num_actions_per_non_terminal_state: array[int], number of actions per
non terminal state.
"""
max_num_actions = 4
num_actions_per_non_terminal_state = np.repeat(
a=max_num_actions, repeats=num_non_terminal_states)
return max_num_actions, num_actions_per_non_terminal_state
def create_environment_successor_counts(num_states, max_num_actions):
"""Creates environment successor counts.
Args:
num_states: int, number of states.
max_num_actions: int, max number of actions possible.
Returns:
num_sp: array[int], number of successor
states s' that can be reached from state s by taking action a.
"""
num_sp = np.repeat(
a=1, repeats=num_states * max_num_actions)
num_sp = np.reshape(
a=num_sp,
newshape=(num_states, max_num_actions))
return num_sp
def create_environment_successor_arrays(
num_non_terminal_states, max_num_actions):
"""Creates environment successor arrays.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
Returns:
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
"""
sp_idx = np.array(
object=[1, 0, 14, 4,
2, 1, 0, 5,
2, 2, 1, 6,
4, 14, 3, 7,
5, 0, 3, 8,
6, 1, 4, 9,
6, 2, 5, 10,
8, 3, 7, 11,
9, 4, 7, 12,
10, 5, 8, 13,
10, 6, 9, 15,
12, 7, 11, 11,
13, 8, 11, 12,
15, 9, 12, 13],
dtype=np.int64)
p = np.repeat(
a=1.0, repeats=num_non_terminal_states * max_num_actions * 1)
r = np.repeat(
a=-1.0, repeats=num_non_terminal_states * max_num_actions * 1)
sp_idx = np.reshape(
a=sp_idx,
newshape=(num_non_terminal_states, max_num_actions, 1))
p = np.reshape(
a=p,
newshape=(num_non_terminal_states, max_num_actions, 1))
r = np.reshape(
a=r,
newshape=(num_non_terminal_states, max_num_actions, 1))
return sp_idx, p, r
def create_environment():
"""Creates environment.
Returns:
num_states: int, number of states.
num_terminal_states: int, number of terminal states.
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
num_actions_per_non_terminal_state: array[int], number of actions per
non terminal state.
num_sp: array[int], number of successor
states s' that can be reached from state s by taking action a.
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
"""
(num_states,
num_terminal_states,
num_non_terminal_states) = create_environment_states()
(max_num_actions,
num_actions_per_non_terminal_state) = create_environment_actions(
num_non_terminal_states)
num_sp = create_environment_successor_counts(
num_states, max_num_actions)
(sp_idx,
p,
r) = create_environment_successor_arrays(
num_non_terminal_states, max_num_actions)
return (num_states,
num_terminal_states,
num_non_terminal_states,
max_num_actions,
num_actions_per_non_terminal_state,
num_sp,
sp_idx,
p,
r)
###Output
_____no_output_____
###Markdown
Set hyperparameters
###Code
def set_hyperparameters():
"""Sets hyperparameters.
Returns:
num_episodes: int, number of episodes to train over.
maximum_episode_length: int, max number of timesteps for an episode.
alpha: float, alpha > 0, learning rate.
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
gamma: float, 0 <= gamma <= 1, amount to discount future reward.
trace_decay_lambda: float, trace decay parameter lambda.
trace_update_type: int, trace update type, 0 = accumulating,
1 = replacing.
"""
num_episodes = 10000
maximum_episode_length = 200
alpha = 0.1
epsilon = 0.1
gamma = 1.0
trace_decay_lambda = 0.9
trace_update_type = 0
return (num_episodes,
maximum_episode_length,
alpha,
epsilon,
gamma,
trace_decay_lambda,
trace_update_type)
###Output
_____no_output_____
###Markdown
Create value function and policy arrays
###Code
def create_value_function_arrays(num_states, max_num_actions):
"""Creates value function arrays.
Args:
num_states: int, number of states.
max_num_actions: int, max number of actions possible.
Returns:
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
"""
return np.zeros(shape=[num_states, max_num_actions], dtype=np.float64)
def create_policy_arrays(num_non_terminal_states, max_num_actions):
"""Creates policy arrays.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
Returns:
policy: array[float], learned stochastic policy of which
action a to take in state s.
"""
policy = np.repeat(
a=1.0 / max_num_actions,
repeats=num_non_terminal_states * max_num_actions)
policy = np.reshape(
a=policy,
newshape=(num_non_terminal_states, max_num_actions))
return policy
###Output
_____no_output_____
###Markdown
Create eligibility traces
###Code
def create_eligibility_traces(num_states, max_num_actions):
"""Creates eligibility trace arrays.
Args:
num_states: int, number of states.
max_num_actions: int, max number of actions possible.
Returns:
eligibility_trace: array[float], keeps track of the eligibility trace
for each state-action pair Q(s, a).
"""
return np.zeros(shape=[num_states, max_num_actions], dtype=np.float64)
###Output
_____no_output_____
###Markdown
Create algorithm
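For reference, this is the update implemented by the functions below, in standard eligibility-trace notation ($\gamma$ discount, $\lambda$ trace decay, $\alpha$ step size):

$$\delta_t = R_{t+1} + \gamma\,Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)$$

$$e(S_t, A_t) \leftarrow e(S_t, A_t) + 1 \;\text{(accumulating)} \qquad\text{or}\qquad e(S_t, A_t) \leftarrow 1 \;\text{(replacing)}$$

$$Q(s, a) \leftarrow Q(s, a) + \alpha\,\delta_t\,e(s, a) \quad\text{for all } (s, a)$$

$$e(s, a) \leftarrow \begin{cases} \gamma\lambda\,e(s, a) & \text{if } A_{t+1} \text{ is greedy w.r.t. } Q(S_{t+1}, \cdot) \\ 0 & \text{otherwise (Watkins' trace cut)} \end{cases}$$

At a terminal transition the target reduces to $R_{t+1}$. Note that canonical Watkins' Q($\lambda$) bootstraps from $\max_a Q(S_{t+1}, a)$; the code below uses the sampled next action's value, which coincides with the greedy value whenever that action is greedy.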
###Code
# Set random seed so that everything is reproducible
np.random.seed(seed=0)
def initialize_epsiode(
num_non_terminal_states,
max_num_actions,
q,
epsilon,
policy,
eligibility_trace):
"""Initializes epsiode with initial state and initial action.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
policy: array[float], learned stochastic policy of which
action a to take in state s.
eligibility_trace: array[float], keeps track of the eligibility trace
for each state-action pair Q(s, a).
Returns:
init_s_idx: int, initial state index from set of non terminal states.
init_a_idx: int, initial action index from set of actions of state
init_s_idx.
policy: array[float], learned stochastic policy of which
action a to take in state s.
eligibility_trace: array[float], keeps track of the eligibility trace
for each state-action pair Q(s, a).
"""
# Reset eligibility traces for new episode
eligibility_trace = np.zeros_like(a=eligibility_trace)
# Randomly choose an initial state from all non-terminal states
init_s_idx = np.random.randint(
low=0, high=num_non_terminal_states, dtype=np.int64)
# Choose policy for chosen state by epsilon-greedy choosing from the
# state-action-value function
policy = epsilon_greedy_policy_from_state_action_function(
max_num_actions, q, epsilon, init_s_idx, policy)
# Get initial action
init_a_idx = np.random.choice(
a=max_num_actions, p=policy[init_s_idx, :])
return init_s_idx, init_a_idx, policy, eligibility_trace
def epsilon_greedy_policy_from_state_action_function(
max_num_actions, q, epsilon, s_idx, policy):
"""Create epsilon-greedy policy from state-action value function.
Args:
max_num_actions: int, max number of actions possible.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
s_idx: int, current state index.
policy: array[float], learned stochastic policy of which action a to
take in state s.
Returns:
policy: array[float], learned stochastic policy of which action a to
take in state s.
"""
# Save max state-action value and find the number of actions that have the
# same max state-action value
max_action_value = np.max(a=q[s_idx, :])
max_action_count = np.count_nonzero(a=q[s_idx, :] == max_action_value)
# Apportion policy probability across ties equally for state-action pairs
# that have the same value and zero otherwise
if max_action_count == max_num_actions:
max_policy_prob_per_action = 1.0 / max_action_count
remain_prob_per_action = 0.0
else:
max_policy_prob_per_action = (1.0 - epsilon) / max_action_count
remain_prob_per_action = epsilon / (max_num_actions - max_action_count)
policy[s_idx, :] = np.where(
q[s_idx, :] == max_action_value,
max_policy_prob_per_action,
remain_prob_per_action)
return policy
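# Worked example (illustrative numbers): with max_num_actions = 4 and
# epsilon = 0.1, a single greedy action gets probability (1 - 0.1) = 0.9 and
# each of the other three gets 0.1 / 3 ~= 0.033; with a two-way tie the row
# becomes [0.45, 0.45, 0.05, 0.05].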
def loop_through_episode(
num_non_terminal_states,
max_num_actions,
num_sp,
sp_idx,
p,
r,
q,
policy,
eligibility_trace,
alpha,
epsilon,
gamma,
trace_decay_lambda,
trace_update_type,
maximum_episode_length,
s_idx,
a_idx):
"""Loops through episode to iteratively update policy.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
num_sp: array[int], number of successor states s' that can be reached
from state s by taking action a.
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
eligibility_trace: array[float], keeps track of the eligibility trace
for each state-action pair Q(s, a).
alpha: float, alpha > 0, learning rate.
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
gamma: float, 0 <= gamma <= 1, amount to discount future reward.
trace_decay_lambda: float, trace decay parameter lambda.
trace_update_type: int, trace update type, 0 = accumulating,
1 = replacing.
maximum_episode_length: int, max number of timesteps for an episode.
s_idx: int, initial state index from set of non terminal states.
a_idx: int, initial action index from set of actions of state s_idx.
Returns:
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
"""
# Loop through episode steps until termination
for t in range(0, maximum_episode_length):
# Get reward
successor_state_transition_index = np.random.choice(
a=num_sp[s_idx, a_idx],
p=p[s_idx, a_idx, :])
# Get reward from state and action
reward = r[s_idx, a_idx, successor_state_transition_index]
# Get next state
next_s_idx = sp_idx[s_idx, a_idx, successor_state_transition_index]
# Check to see if we actioned into a terminal state
if next_s_idx >= num_non_terminal_states:
# Calculate TD error delta
delta = reward - q[s_idx, a_idx]
# Update eligibility traces and state action value function with
# TD error
eligibility_trace, q = update_eligibility_trace_and_q(
s_idx,
a_idx,
0,
0,
delta,
num_non_terminal_states,
max_num_actions,
alpha,
gamma,
trace_decay_lambda,
trace_update_type,
q,
eligibility_trace)
break # episode terminated since we ended up in a terminal state
else:
# Choose policy for chosen state by epsilon-greedy choosing from the
# state-action-value function
policy = epsilon_greedy_policy_from_state_action_function(
max_num_actions, q, epsilon, next_s_idx, policy)
# Get next action
next_a_idx = np.random.choice(
a=max_num_actions, p=policy[next_s_idx, :])
# Get next action, max action of next state
max_action_value = np.max(a=q[next_s_idx, :])
q_max_tie_stack = np.extract(
condition=q[next_s_idx, :] == max_action_value,
arr=np.arange(max_num_actions))
max_action_count = q_max_tie_stack.size
if q[next_s_idx, next_a_idx] == max_action_value:
max_a_idx = next_a_idx
else:
max_a_idx = q_max_tie_stack[np.random.randint(max_action_count)]
# Calculate TD error delta
delta = reward + gamma * q[next_s_idx, next_a_idx] - q[s_idx, a_idx]
# Update eligibility traces and state action value function with
# TD error
eligibility_trace, q = update_eligibility_trace_and_q(
s_idx,
a_idx,
next_a_idx,
max_a_idx,
delta,
num_non_terminal_states,
max_num_actions,
alpha,
gamma,
trace_decay_lambda,
trace_update_type,
q,
eligibility_trace)
# Update state and action to next state and action
s_idx = next_s_idx
a_idx = next_a_idx
return q, policy
# This function updates the eligibility trace and state-action-value function
def update_eligibility_trace_and_q(
s_idx,
a_idx,
next_a_idx,
max_a_idx,
delta,
num_non_terminal_states,
max_num_actions,
alpha,
gamma,
trace_decay_lambda,
trace_update_type,
q,
eligibility_trace):
"""Updates the eligibility trace and state-action-value function.
Args:
s_idx: int, initial state index from set of non terminal states.
a_idx: int, initial action index from set of actions of state s_idx.
next_a_idx: int, next action index from set of actions of state
next_s_idx.
max_a_idx: int, action index from set of actions of state s_idx with
max Q value.
delta: float, difference between estimated and target Q.
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
alpha: float, alpha > 0, learning rate.
gamma: float, 0 <= gamma <= 1, amount to discount future reward.
trace_decay_lambda: float, trace decay parameter lambda.
trace_update_type: int, trace update type, 0 = accumulating,
1 = replacing.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
eligibility_trace: array[float], keeps track of the eligibility trace
for each state-action pair Q(s, a).
Returns:
eligibility_trace: array[float], keeps track of the eligibility trace
for each state-action pair Q(s, a).
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
"""
# Update eligibility trace
if trace_update_type == 1: # replacing
eligibility_trace[s_idx, a_idx] = 1.0
else: # accumulating or unknown
eligibility_trace[s_idx, a_idx] += 1.0
# Update state-action-value function
q += alpha * delta * eligibility_trace
if next_a_idx == max_a_idx:
eligibility_trace *= gamma * trace_decay_lambda
else:
eligibility_trace = np.zeros_like(a=eligibility_trace)
return eligibility_trace, q
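# Illustrative sketch (values assumed only for this example: gamma = 0.9 and
# trace_decay_lambda = 0.9) of how a single state-action trace grows over two
# consecutive greedy visits under each trace_update_type.
# e_acc, e_rep = 0.0, 0.0
# for _ in range(2):
#     e_acc = e_acc * 0.9 * 0.9 + 1.0  # accumulating (type 0)
#     e_rep = 1.0                      # replacing (type 1)
# print(e_acc, e_rep)  # 1.81 vs. 1.0; Watkins' cut above resets all traces to zero
#                      # whenever the behavior action is not the greedy one.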
def off_policy_stochastic_eligibility_trace_watkins_q_lambda(
num_non_terminal_states,
max_num_actions,
num_sp,
sp_idx,
p,
r,
q,
policy,
eligibility_trace,
alpha,
epsilon,
gamma,
trace_decay_lambda,
trace_update_type,
maximum_episode_length,
num_episodes):
"""Loops through episodes to iteratively update policy.
Args:
num_non_terminal_states: int, number of non terminal states.
max_num_actions: int, max number of actions possible.
num_sp: array[int], number of successor states s' that can be reached
from state s by taking action a.
sp_idx: array[int], state indices of new state s' of taking action a
from state s.
p: array[float], transition probability to go from state s to s' by
taking action a.
r: array[float], reward from new state s' from state s by taking
action a.
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
        eligibility_trace: array[float], keeps track of the eligibility
trace for each state-action pair Q(s, a).
alpha: float, alpha > 0, learning rate.
epsilon: float, 0 <= epsilon <= 1, exploitation-exploration trade-off,
higher means more exploration.
gamma: float, 0 <= gamma <= 1, amount to discount future reward.
trace_decay_lambda: float, trace decay parameter lambda.
trace_update_type: int, trace update type, 0 = accumulating,
1 = replacing.
maximum_episode_length: int, max number of timesteps for an episode.
num_episodes: int, number of episodes to train over.
Returns:
q: array[float], keeps track of the estimated value of each
state-action pair Q(s, a).
policy: array[float], learned stochastic policy of which
action a to take in state s.
"""
for episode in range(0, num_episodes):
# Initialize episode to get initial state and action
(init_s_idx,
init_a_idx,
policy,
eligibility_trace) = initialize_epsiode(
num_non_terminal_states,
max_num_actions,
q,
epsilon,
policy,
eligibility_trace)
# Loop through episode and update the policy
q, policy = loop_through_episode(
num_non_terminal_states,
max_num_actions,
num_sp,
sp_idx,
p,
r,
q,
policy,
eligibility_trace,
alpha,
epsilon,
gamma,
trace_decay_lambda,
trace_update_type,
maximum_episode_length,
init_s_idx,
init_a_idx)
return q, policy
###Output
_____no_output_____
###Markdown
Run algorithm
###Code
def run_algorithm():
"""Runs the algorithm."""
(num_states,
_,
num_non_terminal_states,
max_num_actions,
_,
num_sp,
sp_idx,
p,
r) = create_environment()
trace_decay_lambda = 0.9
trace_update_type = 0
(num_episodes,
maximum_episode_length,
alpha,
epsilon,
gamma,
trace_decay_lambda,
trace_update_type) = set_hyperparameters()
q = create_value_function_arrays(num_states, max_num_actions)
policy = create_policy_arrays(num_non_terminal_states, max_num_actions)
eligibility_trace = create_eligibility_traces(num_states, max_num_actions)
# Print initial arrays
print("\nInitial state-action value function")
print(q)
print("\nInitial policy")
print(policy)
    # Run off-policy temporal difference Watkins Q(lambda) with eligibility traces
q, policy = off_policy_stochastic_eligibility_trace_watkins_q_lambda(
num_non_terminal_states,
max_num_actions,
num_sp,
sp_idx,
p,
r,
q,
policy,
eligibility_trace,
alpha,
epsilon,
gamma,
trace_decay_lambda,
trace_update_type,
maximum_episode_length,
num_episodes)
# Print final results
print("\nFinal state-action value function")
print(q)
print("\nFinal policy")
print(policy)
run_algorithm()
###Output
Initial state-action value function
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Initial policy
[[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]]
Final state-action value function
[[-3.1360728 -2.12288203 -1. -3.22233939]
[-4.3075457 -3.22016626 -2.38116233 -4.39090571]
[-4.31469754 -4.66682275 -3.32363145 -3.56065581]
[-3.2640893 -1. -2.18899491 -3.11476863]
[-4.49545845 -2.61273679 -2.16927882 -4.25185148]
[-3.58370481 -3.70493181 -3.61509688 -3.25146987]
[-3.21829815 -4.18590431 -4.41356403 -2.14276454]
[-4.24969137 -2.23141464 -3.18133959 -4.19870164]
[-3.44921693 -3.55485719 -3.64387985 -3.77939849]
[-2.69921417 -4.2237111 -4.43406368 -2.33521349]
[-2.09355273 -3.54864227 -3.46577544 -1. ]
[-3.54157772 -3.27340551 -4.22163736 -4.39210421]
[-2.24512616 -4.30309607 -4.17079222 -3.29313934]
[-1. -3.37837737 -3.11098177 -2.22917459]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]
Final policy
[[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.9 0.03333333 0.03333333]
[0.03333333 0.03333333 0.9 0.03333333]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.03333333 0.9 0.03333333 0.03333333]
[0.9 0.03333333 0.03333333 0.03333333]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.03333333 0.03333333 0.03333333 0.9 ]
[0.03333333 0.9 0.03333333 0.03333333]
[0.9 0.03333333 0.03333333 0.03333333]
[0.9 0.03333333 0.03333333 0.03333333]]
|
notebooks/models/0.2.7-tb-train.ipynb
|
###Markdown
Prepare Notebook Delete Outdated Log Files
###Code
!rm "log.txt"
###Output
rm: cannot remove 'log.txt': No such file or directory
###Markdown
Install Dependencies
###Code
!pip install pytorch-metric-learning
!pip install faiss-gpu
!pip install PyPDF2
!pip install FPDF
!pip install efficientnet_pytorch
!pip install umap-learn
!pip install gpustat
#!apt install texlive-fonts-recommended texlive-fonts-extra cm-super dvipng
###Output
Collecting pytorch-metric-learning
Downloading pytorch_metric_learning-1.1.0-py3-none-any.whl (106 kB)
[?25l
[K |███ | 10 kB 28.1 MB/s eta 0:00:01
[K |██████▏ | 20 kB 35.3 MB/s eta 0:00:01
[K |█████████▏ | 30 kB 21.1 MB/s eta 0:00:01
[K |████████████▎ | 40 kB 17.2 MB/s eta 0:00:01
[K |███████████████▍ | 51 kB 7.8 MB/s eta 0:00:01
[K |██████████████████▍ | 61 kB 8.3 MB/s eta 0:00:01
[K |█████████████████████▌ | 71 kB 8.0 MB/s eta 0:00:01
[K |████████████████████████▋ | 81 kB 8.9 MB/s eta 0:00:01
[K |███████████████████████████▋ | 92 kB 9.5 MB/s eta 0:00:01
[K |██████████████████████████████▊ | 102 kB 7.5 MB/s eta 0:00:01
[K |████████████████████████████████| 106 kB 7.5 MB/s
[?25hRequirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from pytorch-metric-learning) (1.10.0+cu111)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from pytorch-metric-learning) (4.62.3)
Requirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from pytorch-metric-learning) (0.11.1+cu111)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from pytorch-metric-learning) (1.19.5)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from pytorch-metric-learning) (1.0.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.6.0->pytorch-metric-learning) (3.10.0.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->pytorch-metric-learning) (3.0.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->pytorch-metric-learning) (1.1.0)
Requirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->pytorch-metric-learning) (1.4.1)
Requirement already satisfied: pillow!=8.3.0,>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision->pytorch-metric-learning) (7.1.2)
Installing collected packages: pytorch-metric-learning
Successfully installed pytorch-metric-learning-1.1.0
Collecting faiss-gpu
Downloading faiss_gpu-1.7.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (85.5 MB)
[K |████████████████████████████████| 85.5 MB 131 kB/s
[?25hInstalling collected packages: faiss-gpu
Successfully installed faiss-gpu-1.7.2
Collecting PyPDF2
Downloading PyPDF2-1.26.0.tar.gz (77 kB)
[K |████████████████████████████████| 77 kB 5.0 MB/s
[?25hBuilding wheels for collected packages: PyPDF2
Building wheel for PyPDF2 (setup.py) ... [?25l[?25hdone
Created wheel for PyPDF2: filename=PyPDF2-1.26.0-py3-none-any.whl size=61102 sha256=47f56d9510a8f41299944b33fa62e771048b79496ba52e50de49937e6d28a7e5
Stored in directory: /root/.cache/pip/wheels/80/1a/24/648467ade3a77ed20f35cfd2badd32134e96dd25ca811e64b3
Successfully built PyPDF2
Installing collected packages: PyPDF2
Successfully installed PyPDF2-1.26.0
Collecting FPDF
Downloading fpdf-1.7.2.tar.gz (39 kB)
Building wheels for collected packages: FPDF
Building wheel for FPDF (setup.py) ... [?25l[?25hdone
Created wheel for FPDF: filename=fpdf-1.7.2-py2.py3-none-any.whl size=40725 sha256=59f0d0af88068acdaceabfa7ee7f46ee15a9cbbcc2ee39b25e6f47108fd98247
Stored in directory: /root/.cache/pip/wheels/d7/ca/c8/86467e7957bbbcbdf4cf4870fc7dc95e9a16404b2e3c3a98c3
Successfully built FPDF
Installing collected packages: FPDF
Successfully installed FPDF-1.7.2
Collecting efficientnet_pytorch
Downloading efficientnet_pytorch-0.7.1.tar.gz (21 kB)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from efficientnet_pytorch) (1.10.0+cu111)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->efficientnet_pytorch) (3.10.0.2)
Building wheels for collected packages: efficientnet-pytorch
Building wheel for efficientnet-pytorch (setup.py) ... [?25l[?25hdone
Created wheel for efficientnet-pytorch: filename=efficientnet_pytorch-0.7.1-py3-none-any.whl size=16446 sha256=76bea09f9d5c6b3cb26ab1c8d18f5f8ebfc875db6816c8b9ca30a1c2cf28b339
Stored in directory: /root/.cache/pip/wheels/0e/cc/b2/49e74588263573ff778da58cc99b9c6349b496636a7e165be6
Successfully built efficientnet-pytorch
Installing collected packages: efficientnet-pytorch
Successfully installed efficientnet-pytorch-0.7.1
Collecting umap-learn
Downloading umap-learn-0.5.2.tar.gz (86 kB)
[K |████████████████████████████████| 86 kB 5.0 MB/s
[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from umap-learn) (1.19.5)
Requirement already satisfied: scikit-learn>=0.22 in /usr/local/lib/python3.7/dist-packages (from umap-learn) (1.0.2)
Requirement already satisfied: scipy>=1.0 in /usr/local/lib/python3.7/dist-packages (from umap-learn) (1.4.1)
Requirement already satisfied: numba>=0.49 in /usr/local/lib/python3.7/dist-packages (from umap-learn) (0.51.2)
Collecting pynndescent>=0.5
Downloading pynndescent-0.5.5.tar.gz (1.1 MB)
[K |████████████████████████████████| 1.1 MB 65.6 MB/s
[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from umap-learn) (4.62.3)
Requirement already satisfied: llvmlite<0.35,>=0.34.0.dev0 in /usr/local/lib/python3.7/dist-packages (from numba>=0.49->umap-learn) (0.34.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba>=0.49->umap-learn) (57.4.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from pynndescent>=0.5->umap-learn) (1.1.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.22->umap-learn) (3.0.0)
Building wheels for collected packages: umap-learn, pynndescent
Building wheel for umap-learn (setup.py) ... [?25l[?25hdone
Created wheel for umap-learn: filename=umap_learn-0.5.2-py3-none-any.whl size=82708 sha256=a1ff4bf7bc04a6502673fd6a077e9ddaa89ea75511dd7eb0fa32fdade69cd5aa
Stored in directory: /root/.cache/pip/wheels/84/1b/c6/aaf68a748122632967cef4dffef68224eb16798b6793257d82
Building wheel for pynndescent (setup.py) ... [?25l[?25hdone
Created wheel for pynndescent: filename=pynndescent-0.5.5-py3-none-any.whl size=52603 sha256=7419813f7f45d099113eb3f2f51f8492223c70986bd88b287e12e280ead1ab3d
Stored in directory: /root/.cache/pip/wheels/af/e9/33/04db1436df0757c42fda8ea6796d7a8586e23c85fac355f476
Successfully built umap-learn pynndescent
Installing collected packages: pynndescent, umap-learn
Successfully installed pynndescent-0.5.5 umap-learn-0.5.2
Collecting gpustat
Downloading gpustat-0.6.0.tar.gz (78 kB)
[K |████████████████████████████████| 78 kB 5.0 MB/s
[?25hRequirement already satisfied: six>=1.7 in /usr/local/lib/python3.7/dist-packages (from gpustat) (1.15.0)
Requirement already satisfied: nvidia-ml-py3>=7.352.0 in /usr/local/lib/python3.7/dist-packages (from gpustat) (7.352.0)
Requirement already satisfied: psutil in /usr/local/lib/python3.7/dist-packages (from gpustat) (5.4.8)
Collecting blessings>=1.6
Downloading blessings-1.7-py3-none-any.whl (18 kB)
Building wheels for collected packages: gpustat
Building wheel for gpustat (setup.py) ... [?25l[?25hdone
Created wheel for gpustat: filename=gpustat-0.6.0-py3-none-any.whl size=12617 sha256=8f758b54c7682ddb5324c2500911c2bbb7ad0fab53f3b0cf1fe62136e80ecc07
Stored in directory: /root/.cache/pip/wheels/e6/67/af/f1ad15974b8fd95f59a63dbf854483ebe5c7a46a93930798b8
Successfully built gpustat
Installing collected packages: blessings, gpustat
Successfully installed blessings-1.7 gpustat-0.6.0
###Markdown
Import Dependencies
###Code
!nvidia-smi
from efficientnet_pytorch import EfficientNet
from PyPDF2 import PdfFileMerger
from shutil import copyfile
from fpdf import FPDF
import logging
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
from torchvision import datasets, models, transforms
from pytorch_metric_learning import distances, losses, miners, reducers, testers
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
import pandas as pd
import numpy as np
import PIL
import os
import toml
from os.path import isfile, join
from google.colab import drive
import matplotlib
from matplotlib import rc
from sklearn.feature_extraction.image import extract_patches_2d
import umap
from skimage import io
from numpy.core.fromnumeric import mean
rc('text', usetex=False)
matplotlib.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}']
###Output
_____no_output_____
###Markdown
Mount Google Drive Structure: conf; data (04_train, 05_val, 06_test); out (experimentName_Iteration)
###Code
drive.mount('/content/gdrive')
###Output
Mounted at /content/gdrive
###Markdown
Instantiate Logger
###Code
#logging.basicConfig(filename="test.log", level=logging.INFO )
logger = logging.getLogger('log')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('log.txt')
fh.setLevel(logging.INFO)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
logger.propagate = False
###Output
_____no_output_____
###Markdown
PyTorch Dataset Class for FAU Papyri Data
###Code
class FAUPapyrusCollectionDataset(torch.utils.data.Dataset):
"""FAUPapyrusCollection dataset."""
def __init__(self, root_dir, processed_frame, transform=None):
self.root_dir = root_dir
self.processed_frame = processed_frame
self.transform = transform
self.targets = processed_frame["papyID"].unique()
def __len__(self):
return len(self.processed_frame)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir,
self.processed_frame.iloc[idx, 1])
img_name = img_name + '.png'
image = io.imread(img_name)
#image = PIL.Image.open(img_name)
if self.transform:
image = self.transform(image)
papyID = self.processed_frame.iloc[idx,3]
return image, papyID
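# __getitem__ returns a (transformed image, papyID) pair. With the NCrop/TenCrop transforms
# configured further below, the image is a stack of crops with shape (n_crops, C, H, W),
# which train() flattens via input_imgs.view(-1, c, h, w) before computing embeddings.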
###Output
_____no_output_____
###Markdown
PyTorch Network Architectures Simple CNN
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
self.fc1 = nn.Linear(12544, 128)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
return x
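# Where fc1's input size of 12544 comes from (a sketch assuming 32x32 single-channel
# crops, an input size consistent with this layer):
#   conv1 3x3/stride 1: 32 -> 30, conv2 3x3/stride 1: 30 -> 28, max_pool2d(2): 28 -> 14,
#   flatten: 64 channels * 14 * 14 = 12544.
# Uncomment to verify:
# with torch.no_grad():
#     print(Net()(torch.zeros(1, 1, 32, 32)).shape)  # expected: torch.Size([1, 128])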
###Output
_____no_output_____
###Markdown
PyTorch NN Functions Training
###Code
def train(model, loss_func, mining_func, device, train_loader, optimizer, train_set, epoch, accuracy_calculator, scheduler, accumulation_steps):
model.train()
model.zero_grad()
epoch_loss = 0.0
running_loss = 0.0
for batch_idx, (input_imgs, labels) in enumerate(train_loader):
labels = labels.to(device)
input_imgs = input_imgs.to(device)
bs, ncrops, c, h, w = input_imgs.size()
#optimizer.zero_grad()
embeddings = model(input_imgs.view(-1, c, h, w))
embeddings_avg = embeddings.view(bs, ncrops, -1).mean(1)
#Use this if you have to check embedding size
#embedding_size = embeddings_avg.shape
#print(embedding_size)
indices_tuple = mining_func(embeddings_avg, labels)
loss = loss_func(embeddings_avg, labels, indices_tuple)
loss = loss / accumulation_steps
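        # Dividing by accumulation_steps keeps the accumulated gradient on the same scale
        # as one large batch: backward() sums gradients over the accumulated mini-batches
        # before optimizer.step() is called below.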
loss.backward()
if (batch_idx+1) % accumulation_steps == 0:
optimizer.step()
optimizer.zero_grad()
#optimizer.step()
epoch_loss += embeddings_avg.shape[0] * loss.item()
scheduler.step()
train_embeddings, train_labels = get_all_embeddings(train_set, model)
train_labels = train_labels.squeeze(1)
accuracies = accuracy_calculator.get_accuracy(
train_embeddings,
train_embeddings,
train_labels,
train_labels,
False)
#mean_loss = torch.mean(torch.stack(batch_loss_values))
    logger.info(f"Epoch {epoch} accumulated loss from {batch_idx} batches: {epoch_loss}")
map = accuracies["mean_average_precision"]
    logger.info(f"Epoch {epoch} mAP: {map}")
return epoch_loss, accuracies["mean_average_precision"]
###Output
_____no_output_____
###Markdown
Validation
###Code
def val(train_set, test_set, model, accuracy_calculator):
train_embeddings, train_labels = get_all_embeddings(train_set, model)
test_embeddings, test_labels = get_all_embeddings(test_set, model)
train_labels = train_labels.squeeze(1)
test_labels = test_labels.squeeze(1)
print("Computing accuracy")
accuracies = accuracy_calculator.get_accuracy(
test_embeddings, train_embeddings, test_labels, train_labels, False
)
idx = torch.randperm(test_labels.nelement())
test_labels = test_labels.view(-1)[idx].view(test_labels.size())
random_accuracies = accuracy_calculator.get_accuracy(
test_embeddings, train_embeddings, test_labels, train_labels, False
)
map = accuracies["mean_average_precision"]
random_map = random_accuracies["mean_average_precision"]
logger.info(f"Val mAP = {map}")
logger.info(f"Val random mAP) = {random_map}")
return accuracies["mean_average_precision"], random_accuracies["mean_average_precision"]
###Output
_____no_output_____
###Markdown
Python-Helper-Functions Deep Metric Learning
###Code
from pytorch_metric_learning.testers import GlobalEmbeddingSpaceTester
from pytorch_metric_learning.utils import common_functions as c_f
class CustomTester(GlobalEmbeddingSpaceTester):
def get_embeddings_for_eval(self, trunk_model, embedder_model, input_imgs):
input_imgs = c_f.to_device(
input_imgs, device=self.data_device, dtype=self.dtype
)
print('yes')
# from https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.FiveCrop
bs, ncrops, c, h, w = input_imgs.size()
result = embedder_model(trunk_model(input_imgs.view(-1, c, h, w))) # fuse batch size and ncrops
result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops
return result_avg
def visualizer_hook(umapper, umap_embeddings, labels, split_name, keyname, *args):
logging.info(
"UMAP plot for the {} split and label set {}".format(split_name, keyname)
)
label_set = np.unique(labels)
num_classes = len(label_set)
fig = plt.figure(figsize=(20, 15))
plt.gca().set_prop_cycle(
cycler(
"color", [plt.cm.nipy_spectral(i) for i in np.linspace(0, 0.9, num_classes)]
)
)
for i in range(num_classes):
idx = labels == label_set[i]
plt.plot(umap_embeddings[idx, 0], umap_embeddings[idx, 1], ".", markersize=1)
plt.show()
def get_all_embeddings(dataset, model, collate_fn=None, eval=True):
tester = CustomTester(visualizer=umap.UMAP(),visualizer_hook=visualizer_hook,)
return tester.get_all_embeddings(dataset, model, collate_fn=None)
###Output
_____no_output_____
###Markdown
Visualization Gradients
###Code
def gradient_visualization(parameters, output_path):
"""
    Plots the average absolute gradient per layer over the epoch and saves the figure.
    :param parameters: named parameters of the network
    :type parameters: iterator
    :param output_path: path to the results folder
    :type output_path: str
"""
tex_fonts = {
# Use LaTeX to write all text
"text.usetex": False,
"font.family": "serif",
# Use 10pt font in plots, to match 10pt font in document
"axes.labelsize": 10,
"font.size": 10,
# Make the legend/label fonts a little smaller
"legend.fontsize": 8,
"xtick.labelsize": 8,
"ytick.labelsize": 8,
"legend.loc":'lower left'
}
plt.rcParams.update(tex_fonts)
ave_grads = []
layers = []
for n, p in parameters:
if (p.requires_grad) and ("bias" not in n):
layers.append(n)
ave_grads.append(p.grad.abs().mean())
plt.plot(ave_grads, alpha=0.3, color="b")
plt.hlines(0, 0, len(ave_grads) + 1, linewidth=1, color="k")
plt.xticks(range(0, len(ave_grads), 1), layers, rotation="vertical")
plt.xlim(xmin=0, xmax=len(ave_grads))
plt.xlabel("Layers")
plt.ylabel("average gradient")
plt.title("Gradient Visualization")
plt.grid(True)
plt.tight_layout()
plt.savefig(output_path + "/gradients.pdf")
plt.close()
###Output
_____no_output_____
###Markdown
Accuracy
###Code
def plot_acc(map_vals, random_map_vals, train_map, epochs, output_path):
width = 460
plt.style.use('seaborn-bright')
tex_fonts = {
# Use LaTeX to write all text
"text.usetex": False,
"font.family": "serif",
# Use 10pt font in plots, to match 10pt font in document
"axes.labelsize": 10,
"font.size": 10,
# Make the legend/label fonts a little smaller
"legend.fontsize": 8,
"xtick.labelsize": 8,
"ytick.labelsize": 8
}
#linestyle='dotted'
plt.rcParams.update(tex_fonts)
epochs = np.arange(1, epochs + 1)
fig, ax = plt.subplots(1, 1,figsize=set_size(width))
ax.plot(epochs, random_map_vals, 'r', label='random mAP')
ax.plot(epochs, train_map, 'g', label='train mAP')
ax.plot(epochs, map_vals, 'b', label='val mAP')
ax.set_title('Validation Accuracy')
ax.set_xlabel('Epochs')
ax.set_ylabel('Accuracy')
ax.legend()
fig.savefig(output_path + '/acc.pdf', format='pdf', bbox_inches='tight')
plt.close()
###Output
_____no_output_____
###Markdown
Loss
###Code
def plot_loss(train_loss_values, epochs, output_path):
width = 460
plt.style.use('seaborn-bright')
tex_fonts = {
# Use LaTeX to write all text
"text.usetex": False,
"font.family": "serif",
# Use 10pt font in plots, to match 10pt font in document
"axes.labelsize": 10,
"font.size": 10,
# Make the legend/label fonts a little smaller
"legend.fontsize": 8,
"xtick.labelsize": 8,
"ytick.labelsize": 8
}
plt.rcParams.update(tex_fonts)
epochs = np.arange(1, epochs + 1)
train_loss_values = np.array(train_loss_values)
plt.style.use('seaborn')
fig, ax = plt.subplots(1, 1,figsize=set_size(width))
ax.plot(epochs, train_loss_values, 'b', label='Training Loss', linestyle='dotted')
ax.set_title('Training')
ax.set_xlabel('Epochs')
ax.set_ylabel('Loss')
ax.legend()
fig.savefig(output_path + '/loss.pdf', format='pdf', bbox_inches='tight')
plt.close()
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
def plot_table(setting, param, dml_param, output_path):
width = 460
plt.style.use('seaborn-bright')
tex_fonts = {
# Use LaTeX to write all text
"text.usetex": False,
"font.family": "serif",
# Use 10pt font in plots, to match 10pt font in document
"axes.labelsize": 10,
"font.size": 10,
# Make the legend/label fonts a little smaller
"legend.fontsize": 8,
"xtick.labelsize": 8,
"ytick.labelsize": 8
}
plt.rcParams.update(tex_fonts)
########## Plot Settings ##################
setting_name_list = list(setting.keys())
setting_value_list = list(setting.values())
setting_name_list, setting_value_list = replace_helper(setting_name_list, setting_value_list)
vals = np.array([setting_name_list, setting_value_list], dtype=str).T
fig, ax = plt.subplots(1, 1, figsize=set_size(width))
ax.table(cellText=vals, colLabels=['Setting', 'Value'], loc='center', zorder=3, rowLoc='left', cellLoc='left')
ax.set_title('Experiment Settings')
ax.set_xticks([])
ax.set_yticks([])
fig.savefig(output_path + '/settings.pdf', format='pdf', bbox_inches='tight')
plt.close()
########## Plot Params ##################
param_name_list = param.keys()
param_value_list = param.values()
param_name_list, param_value_list = replace_helper(param_name_list, param_value_list)
param_vals = np.array([list(param_name_list), list(param_value_list)], dtype=str).T
fig, ax = plt.subplots(1, 1, figsize=set_size(width))
ax.table(cellText=param_vals, colLabels=['Hyperparameter', 'Value'], loc='center', zorder=3, rowLoc='left', cellLoc='left')
    ax.set_title('Hyperparameters')
ax.set_xticks([])
ax.set_yticks([])
fig.savefig(output_path + '/params.pdf', format='pdf', bbox_inches='tight')
plt.close()
########## Plot DML Params ##################
dml_param_name_list = dml_param.keys()
dml_param_value_list = dml_param.values()
dml_param_name_list, dml_param_value_list = replace_helper(dml_param_name_list, dml_param_value_list)
dml_param_vals = np.array([list(dml_param_name_list), list(dml_param_value_list)], dtype=str).T
fig, ax = plt.subplots(1, 1, figsize=set_size(width))
ax.table(cellText=dml_param_vals, colLabels=['DML Hyperparameter', 'Value'], loc='center', zorder=3, rowLoc='left', cellLoc='left')
ax.set_title('DML Hyperparameters')
ax.set_xticks([])
ax.set_yticks([])
fig.savefig(output_path + '/dml_params.pdf', format='pdf', bbox_inches='tight')
plt.close()
###Output
_____no_output_____
###Markdown
Dataloading
###Code
def create_processed_info(path, debug=False):
if debug:
info_path = join(path, 'debug_processed_info.csv')
else:
info_path = join(path, 'processed_info.csv')
if isfile(info_path):
processed_frame = pd.read_csv(info_path, index_col=0, dtype={'fnames':str,'papyID':int,'posinfo':str, 'pixelCentimer':float}, header=0)
else:
        fnames = [f for f in os.listdir(path) if isfile(join(path, f))]
fnames = [ x for x in fnames if ".png" in x ]
fnames = [f.split('.',1)[0] for f in fnames]
fnames_frame = pd.DataFrame(fnames,columns=['fnames'])
fragmentID = pd.DataFrame([f.split('_',1)[0] for f in fnames], columns=['fragmentID'])
fnames_raw = [f.split('_',1)[1] for f in fnames]
processed_frame = pd.DataFrame(fnames_raw, columns=['fnames_raw'])
processed_frame = pd.concat([processed_frame, fnames_frame], axis=1)
processed_frame = pd.concat([processed_frame, fragmentID], axis=1)
processed_frame['papyID'] = processed_frame.fnames_raw.apply(lambda x: x.split('_',1)[0])
processed_frame['posinfo'] = processed_frame.fnames_raw.apply(lambda x: ''.join(filter(str.isalpha, x)))
processed_frame['pixelCentimer'] = processed_frame.fnames_raw.progress_apply(retrive_size_by_fname)
processed_frame.to_csv(info_path)
return processed_frame
###Output
_____no_output_____
###Markdown
Logging Thesis Settings
###Code
def set_size(width, fraction=1, subplots=(1, 1)):
"""Set figure dimensions to avoid scaling in LaTeX.
Parameters
----------
width: float or string
Document width in points, or string of predined document type
fraction: float, optional
Fraction of the width which you wish the figure to occupy
subplots: array-like, optional
The number of rows and columns of subplots.
Returns
-------
fig_dim: tuple
Dimensions of figure in inches
"""
if width == 'thesis':
width_pt = 426.79135
elif width == 'beamer':
width_pt = 307.28987
else:
width_pt = width
fig_width_pt = width_pt * fraction
inches_per_pt = 1 / 72.27
golden_ratio = (5**.5 - 1) / 2
fig_width_in = fig_width_pt * inches_per_pt
fig_height_in = fig_width_in * golden_ratio * (subplots[0] / subplots[1])
return (fig_width_in, fig_height_in)
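# Worked example of the sizing arithmetic above for the 'thesis' preset:
# fig_width = 426.79135 pt / 72.27 pt per inch ~= 5.91 in, and with the golden-ratio
# height for a single subplot, fig_height ~= 5.91 * 0.618 ~= 3.65 in.
# print(set_size('thesis'))  # ~(5.905, 3.650)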
def replace_helper(some_list_1, some_list_2):
new_list_1 = []
new_list_2 = []
for string_a, string_b in zip(some_list_1,some_list_2):
new_list_1.append(str(string_a).replace("_", " "))
new_list_2.append(str(string_b).replace("_", " "))
return new_list_1, new_list_2
###Output
_____no_output_____
###Markdown
Dir-Management
###Code
def create_output_dir(name, experiment_name, x=1):
while True:
        dir_name = (name + (str(x) + '_iteration_' if x != 0 else '') + '_' + experiment_name).strip()
if not os.path.exists(dir_name):
os.mkdir(dir_name)
return dir_name
else:
x = x + 1
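# Naming sketch (hypothetical arguments): create_output_dir('./out/', 'baseline') with the
# default x=1 would create './out/1_iteration__baseline' (the double underscore comes from
# the '_iteration_' suffix plus the '_' separator); if that folder exists, the counter is
# bumped until a free name such as '2_iteration__baseline' is found.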
###Output
_____no_output_____
###Markdown
Report-PDF
###Code
def create_logging(setting, param, dml_param, train_loss_values, map_vals, random_map_vals, train_map, epochs, output_dir, model):
plot_table(setting, param, dml_param, output_dir)
gradient_visualization(model.named_parameters(), output_dir)
plot_loss(train_loss_values, epochs, output_dir)
plot_acc(map_vals, random_map_vals, train_map, epochs, output_dir)
pdfs = ['/loss.pdf', '/acc.pdf', '/params.pdf','/dml_params.pdf', '/settings.pdf', '/gradients.pdf']
    bookmarks = ['Loss', 'Accuracy', 'Hyperparameters', 'DML Hyperparameters', 'Settings', 'Gradients']
merger = PdfFileMerger()
for i, pdf in enumerate(pdfs):
merger.append(output_dir + pdf, bookmark=bookmarks[i])
pdf = FPDF()
pdf.add_page()
pdf.set_font("Helvetica", size = 6)
# open the text file in read mode
f = open("log.txt", "r")
# insert the texts in pdf
for x in f:
pdf.cell(200, 6, txt = x, ln = 1, align = 'l')
# save the pdf with name .pdf
pdf.output("log.pdf")
merger.append("log.pdf", bookmark='Log')
merger.write(output_dir + "/report.pdf")
merger.close()
copyfile('log.txt', output_dir + '/log.txt')
###Output
_____no_output_____
###Markdown
Initialize Settings
###Code
device = torch.device("cuda")
model = Net().to(device)
config = toml.load('./gdrive/MyDrive/mt/conf/conf.toml')
setting = config.get('settings')
param = config.get('params')
dml_param = config.get('dml_params')
###Output
_____no_output_____
###Markdown
Logging Create Dir
###Code
output_dir = create_output_dir(setting['output'], setting['experiment_name'])
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
batch_size_train = param['batch_size_train']
batch_size_val = param['batch_size_val']
lr = param['lr']
num_epochs = param['num_epochs']
###Output
_____no_output_____
###Markdown
Optimizer
###Code
if param['optimizer'] == 'Adam':
optimizer = optim.Adam(model.parameters(), lr=lr)
elif param['optimizer'] == 'SGD':
    optimizer = optim.SGD(model.parameters(), lr=lr)
elif param['optimizer'] == 'AdamW':
    optimizer = optim.AdamW(model.parameters(), lr=lr)
else:
logger.error(' Optimizer is not supported or you have not specified one.')
raise ValueError()
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
if param['archi'] == 'SimpleCNN':
model = Net().to(device)
elif param['archi'] == 'efficientnetB0':
model = EfficientNet.from_name('efficientnet-b0').to(device)
elif param['archi'] == 'efficientnetB7':
model = EfficientNet.from_name('efficientnet-b7').to(device)
model._fc = torch.nn.Identity()
elif param['archi'] == 'densenet201':
model = torch.hub.load('pytorch/vision:v0.10.0', 'densenet201', pretrained=False).to(device)
model.classifier = torch.nn.Identity()
elif param['archi'] == 'ResNet':
model = models.resnet18(pretrained=True).to(device)
###Output
_____no_output_____
###Markdown
Scheduler
###Code
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[12], gamma=0.1)
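# With milestones=[12] and gamma=0.1, the learning rate is multiplied by 0.1 after the
# 12th call to scheduler.step(); before that it stays at the configured lr.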
###Output
_____no_output_____
###Markdown
PyTorch-Metric-Learning Hyperparameters Distance
###Code
if dml_param['distance'] == 'CosineSimilarity':
distance = distances.CosineSimilarity()
elif dml_param['distance'] == 'LpDistance':
distance = distances.LpDistance(normalize_embeddings=True, p=2, power=1)
else:
logger.error(' Distance is not supported or you have not specified one.')
raise ValueError()
###Output
_____no_output_____
###Markdown
Reducer
###Code
if dml_param['reducer'] == 'ThresholdReducer':
reducer = reducers.ThresholdReducer(low=dml_param['ThresholdReducer_low'])
elif dml_param['reducer'] == 'AvgNonZeroReducer':
reducer = reducers.AvgNonZeroReducer()
else:
logger.error(f'Reducer is not supported or you have not specified one.')
raise ValueError()
###Output
_____no_output_____
###Markdown
Loss Function
###Code
if dml_param['loss_function'] == 'TripletMarginLoss':
loss_func = losses.TripletMarginLoss(margin=dml_param['TripletMarginLoss_margin'], distance=distance, reducer=reducer)
elif dml_param['loss_function'] == 'ContrastiveLoss':
loss_func = losses.ContrastiveLoss(pos_margin=1, neg_margin=0)
elif dml_param['loss_function'] == 'CircleLoss':
loss_func = losses.CircleLoss(m=dml_param['m'], gamma=dml_param['gamma'], distance=distance, reducer=reducer)
else:
logger.error('DML Loss is not supported or you have not specified one.')
raise ValueError()
###Output
_____no_output_____
###Markdown
Mining Function
###Code
if dml_param['miner'] == 'TripletMarginMiner':
mining_func = miners.TripletMarginMiner(
margin=dml_param['TripletMarginMiner_margin'],
distance=distance,
type_of_triplets=dml_param['type_of_triplets']
)
else:
logger.error('DML Miner is not supported or you have not specified one.')
raise ValueError()
###Output
_____no_output_____
###Markdown
Accuracy Calculator
###Code
accuracy_calculator = AccuracyCalculator(include=(dml_param['metric_1'],
dml_param['metric_2']),
k=dml_param['precision_at_1_k'])
###Output
_____no_output_____
###Markdown
Transformations Custom Transformation
###Code
class NCrop(object):
    """Extract n random patches of a given size from an image.
    Args:
        output_size (tuple): Desired (height, width) of each extracted patch.
        n (int): Number of patches to extract per image.
    """
def __init__(self, output_size, n):
self.output_size = output_size
self.n = n
def __call__(self, sample):
out = extract_patches_2d(sample, self.output_size, max_patches=self.n)
out = out.transpose((0,3,2,1))
out = torch.tensor(out)
return out
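# Shape sketch for NCrop (assuming an H x W x 3 array as returned by io.imread):
# extract_patches_2d(..., (h, w), max_patches=n) yields (n, h, w, 3); the
# transpose((0, 3, 2, 1)) then gives n channel-first crops of shape (n, 3, w, h).
# Uncomment to check:
# print(NCrop((64, 64), n=5)(np.zeros((256, 256, 3), dtype=np.uint8)).shape)
# # -> torch.Size([5, 3, 64, 64])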
if param['transform'] == "TenCrop":
train_transform = transforms.Compose([
transforms.TenCrop((param['crop_1'],param['crop_2'])),
transforms.Lambda(lambda crops: torch.stack([transforms.PILToTensor()(crop) for crop in crops])),
transforms.ConvertImageDtype(torch.float),
transforms.Normalize((param['normalize_1'],param['normalize_2'],param['normalize_3']), (param['normalize_4'],param['normalize_5'],param['normalize_6']))]
)
val_transform = transforms.Compose([
transforms.TenCrop((param['crop_1'],param['crop_2'])),
transforms.Lambda(lambda crops: torch.stack([transforms.PILToTensor()(crop) for crop in crops])),
transforms.ConvertImageDtype(torch.float),
transforms.Normalize((param['normalize_1'],param['normalize_2'],param['normalize_3']), (param['normalize_4'],param['normalize_5'],param['normalize_6']))]
)
elif param['transform'] == "Ncrop":
train_transform = transforms.Compose([
NCrop((param['crop_1'],param['crop_2']),n=param['max_patches_train']),
transforms.ConvertImageDtype(torch.float),
transforms.Normalize((param['normalize_1'],param['normalize_2'],param['normalize_3']), (param['normalize_4'],param['normalize_5'],param['normalize_6']))]
)
val_transform = transforms.Compose([
NCrop((param['crop_1'],param['crop_2']),n=param['max_patches_train']),
transforms.ConvertImageDtype(torch.float),
transforms.Normalize((param['normalize_1'],param['normalize_2'],param['normalize_3']), (param['normalize_4'],param['normalize_5'],param['normalize_6']))]
)
if True:  # NOTE: this block always runs and overrides the transforms selected above with the NCrop pipeline
train_transform = transforms.Compose([
NCrop((param['crop_1'],param['crop_2']),n=param['max_patches_train']),
transforms.ConvertImageDtype(torch.float),
transforms.Normalize((param['normalize_1'],param['normalize_2'],param['normalize_3']), (param['normalize_4'],param['normalize_5'],param['normalize_6']))]
)
val_transform = transforms.Compose([
NCrop((param['crop_1'],param['crop_2']),n=param['max_patches_train']),
transforms.ConvertImageDtype(torch.float),
transforms.Normalize((param['normalize_1'],param['normalize_2'],param['normalize_3']), (param['normalize_4'],param['normalize_5'],param['normalize_6']))]
)
###Output
_____no_output_____
###Markdown
Data Helpers
###Code
processed_frame_train = create_processed_info(setting['path_train'])
processed_frame_val = create_processed_info(setting['path_val'])
###Output
_____no_output_____
###Markdown
Dataset
###Code
train_dataset = FAUPapyrusCollectionDataset(setting['path_train'], processed_frame_train, train_transform)
val_dataset = FAUPapyrusCollectionDataset(setting['path_val'], processed_frame_val, val_transform)
###Output
_____no_output_____
###Markdown
Data Loader
###Code
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size_train, shuffle=param["shuffle"], drop_last=True, num_workers=4)
test_loader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size_val, drop_last=param["shuffle"], num_workers=4)
###Output
_____no_output_____
###Markdown
Result Lists
###Code
loss_vals = []
val_loss_vals = []
map_vals = []
random_map_vals = []
train_map_vals = []
###Output
_____no_output_____
###Markdown
Training Log Hyperparameters
###Code
logger.info(f'Debug: {setting["debug"]}')
logger.info(f'Loss Function: {dml_param["loss_function"]}')
logger.info(f'Margin Miner Margin: {dml_param["TripletMarginMiner_margin"]}')
logger.info(f'Triplet Margin Loss: {dml_param["TripletMarginLoss_margin"]}')
logger.info(f'Type of Triplets: {dml_param["type_of_triplets"]}')
logger.info(f'Miner: {dml_param["miner"]}')
logger.info(f'Reducer: {dml_param["reducer"]}')
logger.info(f'Archi: {param["archi"]}')
logger.info(f'Epochs: {param["num_epochs"]}')
logger.info(f'Batch Size Train: {param["batch_size_train"]}')
logger.info(f'Batch Size Val: {param["batch_size_val"]}')
logger.info(f'Optimizer: {param["optimizer"]}')
logger.info(f'Learning Rate: {param["lr"]}')
logger.info(f'Shuffle: {param["shuffle"]}')
###Output
Debug: False
Loss Function: TripletMarginLoss
Margin Miner Margin: 0.2
Triplet Margin Loss: 0.2
Type of Triplets: semihard
Miner: TripletMarginMiner
Reducer: AvgNonZeroReducer
Archi: efficientnetB7
Epochs: 20
Batch Size Train: 64
Batch Size Val: 1
Optimizer: SGD
Learning Rate: 0.0001
Shuffle: True
###Markdown
Train
###Code
if setting["training"]:
old_map = 0
for epoch in range(1, num_epochs + 1):
############### Training ###############
train_loss, train_map = train(
model,
loss_func,mining_func,
device,
train_loader,
optimizer,
train_dataset,
epoch,
accuracy_calculator,
scheduler,
accumulation_steps=param["accumulation_steps"]
)
############### Validation ###############
map, random_map = val(val_dataset, val_dataset, model, accuracy_calculator)
############### Fill Lists ###############
loss_vals.append(train_loss)
map_vals.append(map)
random_map_vals.append(random_map)
train_map_vals.append(train_map)
############### Checkpoint ###############
if map >= old_map:
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': train_loss,
}, output_dir + "/model.pt")
old_map = map
############### Logging ###############
create_logging(setting, param, dml_param, loss_vals, map_vals, random_map_vals, train_map_vals, epoch, output_dir, model)
###Output
_____no_output_____
###Markdown
Inference Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torch
import torchvision
from torchvision import datasets, transforms
import cv2
from pytorch_metric_learning.distances import CosineSimilarity
from pytorch_metric_learning.utils import common_functions as c_f
from pytorch_metric_learning.utils.inference import InferenceModel, MatchFinder
import json
import skimage
###Output
_____no_output_____
###Markdown
Helpers
###Code
def print_decision(is_match):
if is_match:
print("Same class")
else:
print("Different class")
mean = [0.6143, 0.6884, 0.7665]
std = [0.229, 0.224, 0.225]
inv_normalize = transforms.Normalize(
mean=[-m / s for m, s in zip(mean, std)], std=[1 / s for s in std]
)
import numpy as np
import cv2
import json
from matplotlib import pyplot as plt
def imshow(img, figsize=(21, 9), boarder=None, get_img = False):
img = inv_normalize(img)
BLUE = [255,0,0]
npimg = img.numpy()
transposed = np.transpose(npimg, (1, 2, 0))
#boarderized = draw_border(transposed, bt=5, with_plot=False, gray_scale=False, color_name="red")
    # border thickness proportional to the image size (same thickness on both axes)
    x = int(transposed.shape[1] * 0.025)
    y = x
if boarder == 'green':
boarderized = cv2.copyMakeBorder(transposed,x,x,y,y,cv2.BORDER_CONSTANT,value=[0,255,0])
elif boarder == 'red':
boarderized = cv2.copyMakeBorder(transposed,x,x,y,y,cv2.BORDER_CONSTANT,value=[255,0,0])
else:
boarderized = transposed
if get_img:
return boarderized
else:
plt.figure(figsize=figsize)
plt.imshow((boarderized * 255).astype(np.uint8))
plt.show()
###Output
_____no_output_____
###Markdown
Transform
###Code
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize(mean=mean, std=std)]
)
###Output
_____no_output_____
###Markdown
Dataset
###Code
class FAUPapyrusCollectionInferenceDataset(torch.utils.data.Dataset):
"""FAUPapyrusCollection dataset."""
def __init__(self, root_dir, processed_frame, transform=None):
self.root_dir = root_dir
self.processed_frame = processed_frame
self.transform = transform
self.targets = processed_frame["papyID"].unique()
def __len__(self):
return len(self.processed_frame)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir,
self.processed_frame.iloc[idx, 1])
img_name = img_name + '.png'
#image = io.imread(img_name , plugin='matploPILtlib')
image = PIL.Image.open(img_name)
if self.transform:
image = self.transform(image)
#if False:
max_img_size = 2048
if (image.shape[1] > max_img_size) or (image.shape[2] > max_img_size):
image = transforms.CenterCrop(max_img_size)(image)
papyID = self.processed_frame.iloc[idx,3]
return image, papyID
# No longer needed; will be deleted soon
class MyInferenceModel(InferenceModel):
def get_embeddings_from_tensor_or_dataset(self, inputs, batch_size):
inputs = self.process_if_list(inputs)
embeddings = []
if isinstance(inputs, (torch.Tensor, list)):
for i in range(0, len(inputs), batch_size):
embeddings.append(self.get_embeddings(inputs[i : i + batch_size]))
elif isinstance(inputs, torch.utils.data.Dataset):
dataloader = torch.utils.data.DataLoader(inputs, batch_size=batch_size)
for inp, _ in dataloader:
embeddings.append(self.get_embeddings(inp))
else:
raise TypeError(f"Indexing {type(inputs)} is not supported.")
return torch.cat(embeddings)
dataset = FAUPapyrusCollectionInferenceDataset(setting['path_val'], processed_frame_val, transform)
###Output
_____no_output_____
###Markdown
Apply DML-Helper Functions
###Code
def get_labels_to_indices(dataset):
labels_to_indices = {}
for i, sample in enumerate(dataset):
img, label = sample
if label in labels_to_indices.keys():
labels_to_indices[label].append(i)
else:
labels_to_indices[label] = [i]
return labels_to_indices
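# labels_to_indices maps each papyrus ID to the dataset indices of its fragments,
# e.g. {4711: [0, 5, 12], ...} (IDs here are purely illustrative).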
labels_to_indices = get_labels_to_indices(dataset)
###Output
_____no_output_____
###Markdown
Load Checkpoint
###Code
model = EfficientNet.from_name('efficientnet-b7').to(device)
model._fc = torch.nn.Identity()
checkpoint = torch.load(output_dir + "/model.pt")
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.to(device)
###Output
_____no_output_____
###Markdown
Prepare DML Methods
###Code
match_finder = MatchFinder(distance=CosineSimilarity(), threshold=0.2)
inference_model = InferenceModel(model, match_finder=match_finder)
###Output
_____no_output_____
###Markdown
PapyIDs to Index Prepare KNN for Inference on Embeddings
###Code
# create faiss index
inference_model.train_knn(dataset, batch_size=1)
###Output
_____no_output_____
###Markdown
Inferencing
###Code
k = 100
lowest_acc = 1
highest_acc = 0
temp_counter = 0
max_counter = 2
for papyID in labels_to_indices.keys():
if temp_counter >=max_counter:
break
for fragment in labels_to_indices[papyID]:
if temp_counter >=max_counter:
break
temp_counter = temp_counter + 1
img, org_label = dataset[fragment]
img = img.unsqueeze(0)
#print(f"query image: {org_label}")
#imshow(torchvision.utils.make_grid(img))
distances, indices = inference_model.get_nearest_neighbors(img, k=k)
#print(len(distances[0]))
nearest_imgs = [dataset[i][0] for i in indices.cpu()[0]]
#print(f"Nearest Images:\n")
neighbours = []
labels = []
for i in indices.cpu()[0]:
neighbour, label = dataset[i]
#print(f"Label: {label}")
neighbours.append(neighbour)
labels.append(label)
occurrences = labels.count(org_label)
acc = occurrences / 100
if acc < lowest_acc:
lowest_acc = acc
print(f'Found new lowest example with acc {acc}')
input_img_of_lowest_acc = img
input_label_of_lowest_acc = org_label
input_index_of_lowest_acc = fragment
detected_neighbours_of_lowest_acc = neighbours
detected_labels_of_lowest_acc = labels
detected_distances_of_lowest_acc = distances
if acc > highest_acc:
highest_acc = acc
print(f'Found new highest example with acc {acc}')
input_img_of_highest_acc = img
input_label_of_highest_acc = org_label
input_index_of_highest_acc = fragment
detected_neighbours_of_highest_acc = neighbours
detected_labels_of_highest_acc = labels
detected_distances_of_highest_acc = distances
def get_inference_plot(neighbours, labels, distances, org_label, img ,k, lowest):
if lowest:
print(f"query image for lowest acc: {org_label}")
else:
print(f"query image for highest acc: {org_label}")
imshow(torchvision.utils.make_grid(img))
Nr = k
Nc = 10
my_dpi = 96
fig, axs = plt.subplots(Nr, Nc)
fig.set_figheight(320)
fig.set_figwidth(30)
fig.suptitle(f'Neighbour Crops of {org_label}')
for i, neighbour in enumerate(neighbours):
#print(neighbour.shape)
neighbour_crops = extract_patches_2d(image=neighbour.T.numpy(), patch_size=(32,32), max_patches= 10)
neighbour_crops = neighbour_crops.transpose((0,3,2,1))
neighbour_crops = torch.tensor(neighbour_crops)
for j in range(Nc):
if j == 0:
distance = (distances[i].cpu().numpy().round(2))
row_label = f"label: {labels[i]} \n distance: {distance}"
axs[i,j].set_ylabel(row_label)
neighbour_crop = neighbour_crops[j]
img = inv_normalize(neighbour_crop)
npimg = img.numpy()
transposed = np.transpose(npimg, (1, 2, 0))
# find right size for the frame
x = int(transposed.shape[1] * 0.05)
boarder = 'green'
if org_label == labels[i]:
boarderized = cv2.copyMakeBorder(transposed,x,x,x,x,cv2.BORDER_CONSTANT,value=[0,1,0])
elif org_label != labels[i]:
boarderized = cv2.copyMakeBorder(transposed,x,x,x,x,cv2.BORDER_CONSTANT,value=[1,0,0])
else:
boarderized = transposed
axs[i,j].imshow(boarderized, aspect='auto')
plt.tight_layout()
if lowest:
plt.savefig(output_dir + "/results_for_lowest_acc.pdf",bbox_inches='tight',dpi=100)
else:
plt.savefig(output_dir + "/results_for_highest_acc.pdf",bbox_inches='tight',dpi=100)
plt.show()
#get_inference_plot(neighbours, labels, distances[0], org_label, img, k=100)
get_inference_plot(detected_neighbours_of_highest_acc, detected_labels_of_highest_acc, detected_distances_of_highest_acc[0], input_label_of_highest_acc, input_img_of_highest_acc, k=100, lowest=False)
get_inference_plot(detected_neighbours_of_lowest_acc, detected_labels_of_lowest_acc, detected_distances_of_lowest_acc[0], input_label_of_lowest_acc, input_img_of_lowest_acc, k=100, lowest=True)
###Output
_____no_output_____
|
examples/Custom_evaluation.ipynb
|
###Markdown
Using Polara for custom evaluation scenarios Polara is designed to automate the process of model prototyping and evaluation as much as possible. As part of this, Polara follows a certain data management workflow, aimed at maintaining a consistent and predictable internal state. By default, it implements several conventional evaluation scenarios fully controlled by a set of configurational parameters. A user does not have to worry about anything beyond just setting the appropriate values of these parameters (a complete list of them can be obtained by calling the `get_configuration` method of a `RecommenderData` instance). As a result, the input preference data will be automatically pre-processed and converted into a convenient representation with independent access to the training and evaluation parts. This default behaviour, however, can be flexibly manipulated to run custom scenarios with externally provided evaluation data. This flexibility is achieved with the help of the special `set_test_data` method implemented in the `RecommenderData` class. This guide demonstrates how to use the configuration parameters in conjunction with this method to cover various customizations. Prepare data We will use Movielens-1M data for experimentation. The data will be divided into several parts: 1. *observations*, used for training, 2. *holdout*, used for evaluating recommendations against the true preferences, 3. *unseen data*, used for warm-start scenarios, where test users with their preferences are not a part of training. The last two datasets serve as an imitation of external data sources, which are not a part of the initial data model. Also note that the *holdout* dataset contains items of both known and unseen (warm-start) users.
###Code
import numpy as np
from polara.datasets.movielens import get_movielens_data
seed = 0
def random_state(seed=seed): # to fix random state in experiments
return np.random.RandomState(seed=seed)
###Output
_____no_output_____
###Markdown
Downloading the data (alternatively you can provide a path to the local copy of the data as an argument to the function):
###Code
data = get_movielens_data()
###Output
_____no_output_____
###Markdown
Sampling 5% of the preferences data to form the *holdout* dataset:
###Code
data_sampled = data.sample(frac=0.95, random_state=random_state()).sort_values('userid')
holdout = data[~data.index.isin(data_sampled.index)]
###Output
_____no_output_____
###Markdown
Make 20% of all users unseen during the training phase:
###Code
users, unseen_users = np.split(data_sampled.userid.drop_duplicates().values,
[int(0.8*data_sampled.userid.nunique()),])
observations = data_sampled.query('userid in @users')
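# Optional sanity check: the split above keeps the two user groups disjoint.
assert len(set(users) & set(unseen_users)) == 0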
###Output
_____no_output_____
###Markdown
Scenario 0: building a recommender model without any evaluation This is the simplest case, which allows to completely ignore evaluation phase. **This sets an initial configuration for all further evaluation scenarios**.
###Code
from polara.recommender.data import RecommenderData
from polara.recommender.models import SVDModel
data_model = RecommenderData(observations, 'userid', 'movieid', 'rating', seed=seed)
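# The complete list of configuration parameters mentioned above can be obtained with:
# data_model.get_configuration()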
###Output
_____no_output_____
###Markdown
We will use the `prepare_training_only` method instead of the general `prepare`:
###Code
data_model.prepare_training_only()
###Output
Preparing data...
Done.
There are 766928 events in the training and 0 events in the holdout.
###Markdown
This sets all the required configuration parameters and transforms the data accordingly. Let's check that the test data is empty,
###Code
data_model.test
###Output
_____no_output_____
###Markdown
and the whole input was used as a training part:
###Code
data_model.training.shape
observations.shape
###Output
_____no_output_____
###Markdown
Internally, the data was transformed to have a certain numeric representation, which Polara relies on:
###Code
data_model.training.head()
observations.head()
###Output
_____no_output_____
###Markdown
The mapping between external and internal data representations is stored in the `data_model.index` attribute. The transformation can be disabled by setting the `build_index` attribute to `False` before data processing (not recommended). You can easily build a recommendation model now:
###Code
svd = SVDModel(data_model)
svd.build()
###Output
PureSVD training time: 0.128s
###Markdown
However, the recommendations cannot be generated, as there is no testing data. The following function call will raise an error:```pythonsvd.get_recommendations()``` Scenario 1: evaluation with pre-specified holdout data for known users In competitions like the [*Netflix Prize*](https://en.wikipedia.org/wiki/Netflix_Prize) you may be provided with a dedicated evaluation dataset (a *probe* set), which contains hidden preference information about *known* users. In terms of the Polara syntax, this is a *holdout* set. You can assign this holdout set to the data model by calling the `set_test_data` method as follows:
###Code
data_model.set_test_data(holdout=holdout, warm_start=False)
###Output
6 unique movieid's within 6 holdout interactions were filtered. Reason: not in the training data.
1129 unique userid's within 9479 holdout interactions were filtered. Reason: not in the training data.
###Markdown
Mind the `warm_start=False` argument, which tells Polara to work only with known users. If some users from the holdout are not a part of the training data, they will be filtered out and the corresponding notification message will be displayed (you can turn it off by setting `data_model.verbose=False`). In this example 1129 users were filtered out, as initially the holdout set contained both known and unknown users. Note that items not present in the training data are also filtered. This behavior can be changed by setting `data_model.ensure_consistency=False` (not recommended).
###Code
data_model.test.holdout.userid.nunique()
###Output
_____no_output_____
###Markdown
The recommendation model can now be evaluated:
###Code
svd.switch_positive = 4 # treat ratings below 4 as negative feedback
svd.evaluate()
data_model.test.holdout.query('rating>=4').shape[0] # maximum number of possible true_positive hits
svd.evaluate('relevance')
###Output
_____no_output_____
###Markdown
Scenario 2: see recommendations for selected known users without evaluation Polara also allows to handle cases, where you don't have a probe set and the task is to simply generate recommendations for a list of selected test users. The evaluation in that case is to be performed externally.Let's randomly pick a few test users from all known users (i.e. those who are present in the training data):
###Code
test_users = random_state().choice(users, size=5, replace=False)
test_users
###Output
_____no_output_____
###Markdown
You can provide this list by setting the `test_users` argument of the `set_test_data` method:
###Code
data_model.set_test_data(test_users=test_users, warm_start=False)
###Output
_____no_output_____
###Markdown
Recommendations in that case will have a corresponding shape of `number of test users` x `top-n` (by default top-10).
###Code
svd.get_recommendations().shape
print((len(test_users), svd.topk))
###Output
(5, 10)
###Markdown
As the holdout was not provided, its previous state is cleared from the data model:
###Code
print(data_model.test.holdout)
###Output
None
###Markdown
The order of test user ids in the recommendations matrix may not correspond to their order in the `test_users` list. The true order can be obtained via the `index` attribute: the users are sorted in ascending order by their internal index. This order is used to construct the recommendations matrix.
###Code
data_model.index.userid.training.query('old in @test_users')
test_users
###Output
_____no_output_____
###Markdown
Note that **there's no need to provide a *testset* argument in the case of known users**. All the information about test users' preferences is assumed to be fully present in the training data and the following function call will intentionally raise an error: ```pythondata_model.set_test_data(testset=some_test_data, warm_start=False)``` If the testset contains new (unseen) information, you should consider the warm-start scenarios described below.
###Code
unseen_data = data_sampled.query('userid in @unseen_users')
unseen_data.shape
assert unseen_data.userid.nunique() == len(unseen_users)
print(len(unseen_users))
###Output
1208
###Markdown
None of these users are present in the training:
###Code
data_model.index.userid.training.old.isin(unseen_users).any()
###Output
_____no_output_____
###Markdown
In order to generate recommendations for these users, we assign the dataset of their preferences as a *testset* (mind the *warm_start* argument value):
###Code
data_model.set_test_data(testset=unseen_data, warm_start=True)
###Output
18 unique movieid's within 26 testset interactions were filtered. Reason: not in the training data.
###Markdown
As we use an SVD-based model, there is no need for any modifications to generate recommendations - it uses the same analytical formula for both the standard and the warm-start regimes:
###Code
svd.get_recommendations().shape
###Output
_____no_output_____
###Markdown
Note that internally the `unseen_data` dataset is transformed: users are reindexed starting from 0 and items are reindexed based on the current item index of the training set.
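A minimal sketch of translating the resulting recommendations back to the original movieid's; it assumes the recommendation matrix holds these internal item indices and that the item index mapping shown below exposes `old`/`new` columns:
```python
# Hedged sketch: map internal item indices back to the original movieid's.
import numpy as np

recs = svd.get_recommendations()                    # rows follow the new 0-based user index
item_map = data_model.index.itemid                  # assumption: has 'old' and 'new' columns
new_to_old = dict(zip(item_map['new'], item_map['old']))
recs_movieids = np.vectorize(new_to_old.get)(recs)  # same shape, original movieid's
recs_movieids[:3]
```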
###Code
data_model.test.testset.head()
data_model.index.userid.test.head() # test user index mapping, new index starts from 0
data_model.index.itemid.head() # item index mapping
unseen_data.head()
###Output
_____no_output_____
###Markdown
Scenario 4: evaluate recommendations for unseen users with external holdout data This is the most complete scenario. We generate recommendations based on the test users' preferences, encoded in the `testset`, and evaluate them against the `holdout`. You should use this setup only when Polara's built-in warm-start evaluation pipeline (turned on by `data_model.warm_start=True`) is not sufficient, e.g. when the preferences data is fixed and provided externally.
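For comparison, a minimal sketch of the built-in pipeline mentioned above; `prepare` and `build` are assumed here to rebuild the split and refit the model with Polara's default settings:
```python
# Hedged sketch: let Polara construct the warm-start testset/holdout split itself.
data_model.warm_start = True   # switch the data model into warm-start mode
data_model.prepare()           # assumption: re-creates the training/test split internally
svd.build()                    # refit the model on the new training data
svd.evaluate()
```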
###Code
data_model.set_test_data(testset=unseen_data, holdout=holdout, warm_start=True)
###Output
18 unique movieid's within 26 testset interactions were filtered. Reason: not in the training data.
6 unique movieid's within 6 holdout interactions were filtered. Reason: not in the training data.
4484 userid's were filtered out from holdout. Reason: inconsistent with testset.
79 userid's were filtered out from testset. Reason: inconsistent with holdout.
###Markdown
As previously, all unrelated users and items are removed from the datasets and the remaining entities are reindexed.
###Code
data_model.test.testset.head(10)
data_model.test.holdout.head(10)
svd.switch_positive = 4
svd.evaluate()
data_model.test.holdout.query('rating>=4').shape[0] # maximum number of possible true positives
svd.evaluate('relevance')
###Output
_____no_output_____
|
Medicinal Plant Leaf Classification with CNN.ipynb
|
###Markdown
Dataset
###Code
#Downloading Dataset
!wget "http://leafsnap.com/static/dataset/leafsnap-dataset.tar"
#Extracting Dataset
!tar -xvf /content/leafsnap-dataset.tar
!pip install imageio #imageio is used to read the images of various formats
import os #os for traversing into the directory for files
import PIL #PIL is used to resize the images, so that the shape of all images will be same
import imageio #imageio is used to read images
import pandas as pd #pandas is used for reading and writing the csv files
import numpy as np #numpy is used for creating input array
import random #random is used for randomizing (shuffling) the inputs
import math #math provides basic mathematical helper functions
import shutil #shutil is used to delete unnecessary folders
from tensorflow import keras
#The txt file included with the dataset contains the file_id, image path, species and source
dataset = pd.read_csv("./leafsnap-dataset-images.txt", sep="\t")
dataset.head()
print("Total number of species in the dataset: ",len(dataset.species.unique()))
###Output
Total number of species in the dataset: 185
###Markdown
Extracting all filenames from path
###Code
dataset["filename"] = None
filename_index_in_dataframe = dataset.columns.get_loc("filename")
for i in range(len(dataset)):
dataset.iloc[i, filename_index_in_dataframe] = os.path.basename(dataset.image_path[i])
dataset.head()
###Output
_____no_output_____
###Markdown
Removing Extra Directories We remove 155 directories from our dataset and conserve only the 30 directories whose species are listed in Plants_Record.csv
###Code
#Plants_Record.csv contains the names of the 30 medicinal plants and their uses.
df2=pd.read_csv('Plants_Record.csv')
df2.head()
df2.Species.unique()
names_=df2['Species']
names_.values
#Creating a new dataframe that holds the data of 30 medicinal plants that we use in this project
new_df = dataset[dataset.species.isin(names_.values)]  #keep only rows whose species is one of the 30 in names_
dataset=new_df
to_conserve=new_df.species.unique()
for i in range(len(to_conserve)):
to_conserve[i]=to_conserve[i].replace(' ','_').lower()
to_conserve
def remove_extra(to_remove_dir):
    #Delete every species directory under to_remove_dir that is not one of the 30 conserved species
    directory_name = '/content/dataset/' + to_remove_dir
    for i in os.listdir(directory_name):
        if i not in to_conserve:
            t_remove = directory_name + i + '/'
            shutil.rmtree(t_remove)
            print("Removed " + t_remove)
remove_extra('images/field/')
remove_extra('images/lab/')
remove_extra('segmented/field/')
remove_extra('segmented/lab/')
###Output
Removed /content/dataset/images/field/fraxinus_nigra/
Removed /content/dataset/images/field/stewartia_pseudocamellia/
Removed /content/dataset/images/field/taxodium_distichum/
Removed /content/dataset/images/field/amelanchier_laevis/
Removed /content/dataset/images/field/juglans_nigra/
Removed /content/dataset/images/field/pinus_flexilis/
Removed /content/dataset/images/field/prunus_yedoensis/
Removed /content/dataset/images/field/aesculus_glabra/
Removed /content/dataset/images/field/morus_rubra/
Removed /content/dataset/images/field/prunus_sargentii/
Removed /content/dataset/images/field/carpinus_caroliniana/
Removed /content/dataset/images/field/prunus_subhirtella/
Removed /content/dataset/images/field/corylus_colurna/
Removed /content/dataset/images/field/evodia_daniellii/
Removed /content/dataset/images/field/salix_caroliniana/
Removed /content/dataset/images/field/acer_ginnala/
Removed /content/dataset/images/field/cryptomeria_japonica/
Removed /content/dataset/images/field/syringa_reticulata/
Removed /content/dataset/images/field/pinus_strobus/
Removed /content/dataset/images/field/juniperus_virginiana/
Removed /content/dataset/images/field/crataegus_pruinosa/
Removed /content/dataset/images/field/styrax_obassia/
Removed /content/dataset/images/field/prunus_virginiana/
Removed /content/dataset/images/field/quercus_imbricaria/
Removed /content/dataset/images/field/quercus_velutina/
Removed /content/dataset/images/field/chamaecyparis_thyoides/
Removed /content/dataset/images/field/pinus_parviflora/
Removed /content/dataset/images/field/pinus_echinata/
Removed /content/dataset/images/field/acer_palmatum/
Removed /content/dataset/images/field/ailanthus_altissima/
Removed /content/dataset/images/field/ptelea_trifoliata/
Removed /content/dataset/images/field/ilex_opaca/
Removed /content/dataset/images/field/juglans_cinerea/
Removed /content/dataset/images/field/platanus_acerifolia/
Removed /content/dataset/images/field/populus_tremuloides/
Removed /content/dataset/images/field/magnolia_stellata/
Removed /content/dataset/images/field/chionanthus_retusus/
Removed /content/dataset/images/field/quercus_palustris/
Removed /content/dataset/images/field/quercus_virginiana/
Removed /content/dataset/images/field/malus_coronaria/
Removed /content/dataset/images/field/pinus_pungens/
Removed /content/dataset/images/field/acer_saccharinum/
Removed /content/dataset/images/field/quercus_robur/
Removed /content/dataset/images/field/quercus_rubra/
Removed /content/dataset/images/field/quercus_muehlenbergii/
Removed /content/dataset/images/field/betula_lenta/
Removed /content/dataset/images/field/quercus_michauxii/
Removed /content/dataset/images/field/platanus_occidentalis/
Removed /content/dataset/images/field/cornus_kousa/
Removed /content/dataset/images/field/pinus_cembra/
Removed /content/dataset/images/field/pinus_taeda/
Removed /content/dataset/images/field/crataegus_crus-galli/
Removed /content/dataset/images/field/phellodendron_amurense/
Removed /content/dataset/images/field/halesia_tetraptera/
Removed /content/dataset/images/field/salix_babylonica/
Removed /content/dataset/images/field/ulmus_americana/
Removed /content/dataset/images/field/quercus_montana/
Removed /content/dataset/images/field/quercus_coccinea/
Removed /content/dataset/images/field/cladrastis_lutea/
Removed /content/dataset/images/field/pyrus_calleryana/
Removed /content/dataset/images/field/metasequoia_glyptostroboides/
Removed /content/dataset/images/field/celtis_tenuifolia/
Removed /content/dataset/images/field/pinus_rigida/
Removed /content/dataset/images/field/picea_orientalis/
Removed /content/dataset/images/field/acer_pensylvanicum/
Removed /content/dataset/images/field/amelanchier_arborea/
Removed /content/dataset/images/field/malus_hupehensis/
Removed /content/dataset/images/field/pinus_sylvestris/
Removed /content/dataset/images/field/carya_ovata/
Removed /content/dataset/images/field/celtis_occidentalis/
Removed /content/dataset/images/field/magnolia_virginiana/
Removed /content/dataset/images/field/pinus_resinosa/
Removed /content/dataset/images/field/quercus_stellata/
Removed /content/dataset/images/field/malus_angustifolia/
Removed /content/dataset/images/field/pinus_bungeana/
Removed /content/dataset/images/field/chamaecyparis_pisifera/
Removed /content/dataset/images/field/fraxinus_pennsylvanica/
Removed /content/dataset/images/field/salix_matsudana/
Removed /content/dataset/images/field/cercidiphyllum_japonicum/
Removed /content/dataset/images/field/carya_cordiformis/
Removed /content/dataset/images/field/carya_tomentosa/
Removed /content/dataset/images/field/crataegus_laevigata/
Removed /content/dataset/images/field/carya_glabra/
Removed /content/dataset/images/field/amelanchier_canadensis/
Removed /content/dataset/images/field/pinus_wallichiana/
Removed /content/dataset/images/field/nyssa_sylvatica/
Removed /content/dataset/images/field/ginkgo_biloba/
Removed /content/dataset/images/field/ulmus_parvifolia/
Removed /content/dataset/images/field/liriodendron_tulipifera/
Removed /content/dataset/images/field/malus_pumila/
Removed /content/dataset/images/field/catalpa_speciosa/
Removed /content/dataset/images/field/magnolia_tripetala/
Removed /content/dataset/images/field/picea_pungens/
Removed /content/dataset/images/field/liquidambar_styraciflua/
Removed /content/dataset/images/field/quercus_falcata/
Removed /content/dataset/images/field/cedrus_libani/
Removed /content/dataset/images/field/salix_nigra/
Removed /content/dataset/images/field/acer_platanoides/
Removed /content/dataset/images/field/pinus_peucea/
Removed /content/dataset/images/field/quercus_nigra/
Removed /content/dataset/images/field/acer_pseudoplatanus/
Removed /content/dataset/images/field/quercus_macrocarpa/
Removed /content/dataset/images/field/crataegus_viridis/
Removed /content/dataset/images/field/quercus_shumardii/
Removed /content/dataset/images/field/pinus_thunbergii/
Removed /content/dataset/images/field/betula_nigra/
Removed /content/dataset/images/field/quercus_alba/
Removed /content/dataset/images/field/aesculus_hippocastamon/
Removed /content/dataset/images/field/pinus_virginiana/
Removed /content/dataset/images/field/acer_negundo/
Removed /content/dataset/images/field/malus_floribunda/
Removed /content/dataset/images/field/fraxinus_americana/
Removed /content/dataset/images/field/pinus_nigra/
Removed /content/dataset/images/field/crataegus_phaenopyrum/
Removed /content/dataset/images/field/cornus_mas/
Removed /content/dataset/images/field/ostrya_virginiana/
Removed /content/dataset/images/field/quercus_acutissima/
Removed /content/dataset/images/field/betula_populifolia/
Removed /content/dataset/images/field/tilia_tomentosa/
Removed /content/dataset/images/field/oxydendrum_arboreum/
Removed /content/dataset/images/field/pseudolarix_amabilis/
Removed /content/dataset/images/field/pinus_koraiensis/
Removed /content/dataset/images/field/cornus_florida/
Removed /content/dataset/images/field/gymnocladus_dioicus/
Removed /content/dataset/images/field/staphylea_trifolia/
Removed /content/dataset/images/field/tilia_europaea/
Removed /content/dataset/images/field/acer_saccharum/
Removed /content/dataset/images/field/quercus_cerris/
Removed /content/dataset/images/field/cedrus_deodara/
Removed /content/dataset/images/field/quercus_marilandica/
Removed /content/dataset/images/field/pinus_densiflora/
Removed /content/dataset/images/field/prunus_serotina/
Removed /content/dataset/images/field/abies_nordmanniana/
Removed /content/dataset/images/field/broussonettia_papyrifera/
Removed /content/dataset/images/field/quercus_phellos/
Removed /content/dataset/images/field/diospyros_virginiana/
Removed /content/dataset/images/field/acer_griseum/
Removed /content/dataset/images/field/maclura_pomifera/
Removed /content/dataset/images/field/koelreuteria_paniculata/
Removed /content/dataset/images/field/styrax_japonica/
Removed /content/dataset/images/field/magnolia_denudata/
Removed /content/dataset/images/field/aesculus_pavi/
Removed /content/dataset/images/field/tilia_americana/
Removed /content/dataset/images/field/quercus_bicolor/
Removed /content/dataset/images/field/populus_grandidentata/
Removed /content/dataset/images/field/picea_abies/
Removed /content/dataset/images/field/magnolia_macrophylla/
Removed /content/dataset/images/field/aesculus_flava/
Removed /content/dataset/images/field/acer_rubrum/
Removed /content/dataset/images/field/prunus_serrulata/
Removed /content/dataset/images/field/zelkova_serrata/
Removed /content/dataset/images/field/magnolia_soulangiana/
Removed /content/dataset/images/field/prunus_pensylvanica/
Removed /content/dataset/images/field/malus_baccata/
Removed /content/dataset/images/lab/fraxinus_nigra/
Removed /content/dataset/images/lab/stewartia_pseudocamellia/
Removed /content/dataset/images/lab/taxodium_distichum/
Removed /content/dataset/images/lab/amelanchier_laevis/
Removed /content/dataset/images/lab/juglans_nigra/
Removed /content/dataset/images/lab/pinus_flexilis/
Removed /content/dataset/images/lab/prunus_yedoensis/
Removed /content/dataset/images/lab/aesculus_glabra/
Removed /content/dataset/images/lab/morus_rubra/
Removed /content/dataset/images/lab/prunus_sargentii/
Removed /content/dataset/images/lab/carpinus_caroliniana/
Removed /content/dataset/images/lab/prunus_subhirtella/
Removed /content/dataset/images/lab/corylus_colurna/
Removed /content/dataset/images/lab/evodia_daniellii/
Removed /content/dataset/images/lab/salix_caroliniana/
Removed /content/dataset/images/lab/acer_ginnala/
Removed /content/dataset/images/lab/cryptomeria_japonica/
Removed /content/dataset/images/lab/syringa_reticulata/
Removed /content/dataset/images/lab/pinus_strobus/
Removed /content/dataset/images/lab/juniperus_virginiana/
Removed /content/dataset/images/lab/crataegus_pruinosa/
Removed /content/dataset/images/lab/styrax_obassia/
Removed /content/dataset/images/lab/prunus_virginiana/
Removed /content/dataset/images/lab/quercus_imbricaria/
Removed /content/dataset/images/lab/quercus_velutina/
Removed /content/dataset/images/lab/chamaecyparis_thyoides/
Removed /content/dataset/images/lab/pinus_parviflora/
Removed /content/dataset/images/lab/pinus_echinata/
Removed /content/dataset/images/lab/acer_palmatum/
Removed /content/dataset/images/lab/ailanthus_altissima/
Removed /content/dataset/images/lab/ptelea_trifoliata/
Removed /content/dataset/images/lab/ilex_opaca/
Removed /content/dataset/images/lab/juglans_cinerea/
Removed /content/dataset/images/lab/platanus_acerifolia/
Removed /content/dataset/images/lab/populus_tremuloides/
Removed /content/dataset/images/lab/magnolia_stellata/
Removed /content/dataset/images/lab/chionanthus_retusus/
Removed /content/dataset/images/lab/quercus_palustris/
Removed /content/dataset/images/lab/quercus_virginiana/
Removed /content/dataset/images/lab/malus_coronaria/
Removed /content/dataset/images/lab/pinus_pungens/
Removed /content/dataset/images/lab/acer_saccharinum/
Removed /content/dataset/images/lab/quercus_robur/
Removed /content/dataset/images/lab/quercus_rubra/
Removed /content/dataset/images/lab/quercus_muehlenbergii/
Removed /content/dataset/images/lab/betula_lenta/
Removed /content/dataset/images/lab/quercus_michauxii/
Removed /content/dataset/images/lab/platanus_occidentalis/
Removed /content/dataset/images/lab/cornus_kousa/
Removed /content/dataset/images/lab/pinus_cembra/
Removed /content/dataset/images/lab/pinus_taeda/
Removed /content/dataset/images/lab/crataegus_crus-galli/
Removed /content/dataset/images/lab/phellodendron_amurense/
Removed /content/dataset/images/lab/halesia_tetraptera/
Removed /content/dataset/images/lab/salix_babylonica/
Removed /content/dataset/images/lab/ulmus_americana/
Removed /content/dataset/images/lab/quercus_montana/
Removed /content/dataset/images/lab/quercus_coccinea/
Removed /content/dataset/images/lab/cladrastis_lutea/
Removed /content/dataset/images/lab/pyrus_calleryana/
Removed /content/dataset/images/lab/metasequoia_glyptostroboides/
Removed /content/dataset/images/lab/celtis_tenuifolia/
Removed /content/dataset/images/lab/pinus_rigida/
Removed /content/dataset/images/lab/picea_orientalis/
Removed /content/dataset/images/lab/acer_pensylvanicum/
Removed /content/dataset/images/lab/amelanchier_arborea/
Removed /content/dataset/images/lab/malus_hupehensis/
Removed /content/dataset/images/lab/pinus_sylvestris/
Removed /content/dataset/images/lab/carya_ovata/
Removed /content/dataset/images/lab/celtis_occidentalis/
Removed /content/dataset/images/lab/magnolia_virginiana/
Removed /content/dataset/images/lab/pinus_resinosa/
Removed /content/dataset/images/lab/quercus_stellata/
Removed /content/dataset/images/lab/ulmus_procera/
Removed /content/dataset/images/lab/malus_angustifolia/
Removed /content/dataset/images/lab/pinus_bungeana/
Removed /content/dataset/images/lab/chamaecyparis_pisifera/
Removed /content/dataset/images/lab/fraxinus_pennsylvanica/
Removed /content/dataset/images/lab/salix_matsudana/
Removed /content/dataset/images/lab/cercidiphyllum_japonicum/
Removed /content/dataset/images/lab/carya_cordiformis/
Removed /content/dataset/images/lab/carya_tomentosa/
Removed /content/dataset/images/lab/crataegus_laevigata/
Removed /content/dataset/images/lab/carya_glabra/
Removed /content/dataset/images/lab/amelanchier_canadensis/
Removed /content/dataset/images/lab/pinus_wallichiana/
Removed /content/dataset/images/lab/nyssa_sylvatica/
Removed /content/dataset/images/lab/ginkgo_biloba/
Removed /content/dataset/images/lab/ulmus_parvifolia/
Removed /content/dataset/images/lab/liriodendron_tulipifera/
Removed /content/dataset/images/lab/malus_pumila/
Removed /content/dataset/images/lab/catalpa_speciosa/
Removed /content/dataset/images/lab/magnolia_tripetala/
Removed /content/dataset/images/lab/picea_pungens/
Removed /content/dataset/images/lab/liquidambar_styraciflua/
Removed /content/dataset/images/lab/quercus_falcata/
Removed /content/dataset/images/lab/cedrus_libani/
Removed /content/dataset/images/lab/salix_nigra/
Removed /content/dataset/images/lab/acer_platanoides/
Removed /content/dataset/images/lab/pinus_peucea/
Removed /content/dataset/images/lab/quercus_nigra/
Removed /content/dataset/images/lab/acer_pseudoplatanus/
Removed /content/dataset/images/lab/quercus_macrocarpa/
Removed /content/dataset/images/lab/crataegus_viridis/
Removed /content/dataset/images/lab/quercus_shumardii/
Removed /content/dataset/images/lab/pinus_thunbergii/
Removed /content/dataset/images/lab/betula_nigra/
Removed /content/dataset/images/lab/quercus_alba/
Removed /content/dataset/images/lab/aesculus_hippocastamon/
Removed /content/dataset/images/lab/pinus_virginiana/
Removed /content/dataset/images/lab/acer_negundo/
Removed /content/dataset/images/lab/malus_floribunda/
Removed /content/dataset/images/lab/fraxinus_americana/
Removed /content/dataset/images/lab/pinus_nigra/
Removed /content/dataset/images/lab/crataegus_phaenopyrum/
Removed /content/dataset/images/lab/cornus_mas/
Removed /content/dataset/images/lab/ostrya_virginiana/
Removed /content/dataset/images/lab/quercus_acutissima/
Removed /content/dataset/images/lab/betula_populifolia/
Removed /content/dataset/images/lab/tilia_tomentosa/
Removed /content/dataset/images/lab/oxydendrum_arboreum/
Removed /content/dataset/images/lab/pseudolarix_amabilis/
Removed /content/dataset/images/lab/pinus_koraiensis/
Removed /content/dataset/images/lab/cornus_florida/
Removed /content/dataset/images/lab/gymnocladus_dioicus/
Removed /content/dataset/images/lab/staphylea_trifolia/
Removed /content/dataset/images/lab/tilia_europaea/
Removed /content/dataset/images/lab/acer_saccharum/
Removed /content/dataset/images/lab/quercus_cerris/
Removed /content/dataset/images/lab/cedrus_deodara/
Removed /content/dataset/images/lab/quercus_marilandica/
Removed /content/dataset/images/lab/pinus_densiflora/
Removed /content/dataset/images/lab/prunus_serotina/
Removed /content/dataset/images/lab/abies_nordmanniana/
Removed /content/dataset/images/lab/broussonettia_papyrifera/
Removed /content/dataset/images/lab/quercus_phellos/
Removed /content/dataset/images/lab/diospyros_virginiana/
Removed /content/dataset/images/lab/acer_griseum/
Removed /content/dataset/images/lab/maclura_pomifera/
Removed /content/dataset/images/lab/koelreuteria_paniculata/
Removed /content/dataset/images/lab/styrax_japonica/
Removed /content/dataset/images/lab/magnolia_denudata/
Removed /content/dataset/images/lab/aesculus_pavi/
Removed /content/dataset/images/lab/tilia_americana/
Removed /content/dataset/images/lab/quercus_bicolor/
Removed /content/dataset/images/lab/populus_grandidentata/
Removed /content/dataset/images/lab/picea_abies/
Removed /content/dataset/images/lab/magnolia_macrophylla/
Removed /content/dataset/images/lab/aesculus_flava/
Removed /content/dataset/images/lab/acer_rubrum/
Removed /content/dataset/images/lab/prunus_serrulata/
Removed /content/dataset/images/lab/zelkova_serrata/
Removed /content/dataset/images/lab/magnolia_soulangiana/
Removed /content/dataset/images/lab/prunus_pensylvanica/
Removed /content/dataset/images/lab/malus_baccata/
Removed /content/dataset/segmented/field/fraxinus_nigra/
Removed /content/dataset/segmented/field/stewartia_pseudocamellia/
Removed /content/dataset/segmented/field/taxodium_distichum/
Removed /content/dataset/segmented/field/amelanchier_laevis/
Removed /content/dataset/segmented/field/juglans_nigra/
Removed /content/dataset/segmented/field/pinus_flexilis/
Removed /content/dataset/segmented/field/prunus_yedoensis/
Removed /content/dataset/segmented/field/aesculus_glabra/
Removed /content/dataset/segmented/field/morus_rubra/
Removed /content/dataset/segmented/field/prunus_sargentii/
Removed /content/dataset/segmented/field/carpinus_caroliniana/
Removed /content/dataset/segmented/field/prunus_subhirtella/
Removed /content/dataset/segmented/field/corylus_colurna/
Removed /content/dataset/segmented/field/evodia_daniellii/
Removed /content/dataset/segmented/field/salix_caroliniana/
Removed /content/dataset/segmented/field/acer_ginnala/
Removed /content/dataset/segmented/field/cryptomeria_japonica/
Removed /content/dataset/segmented/field/syringa_reticulata/
Removed /content/dataset/segmented/field/pinus_strobus/
Removed /content/dataset/segmented/field/juniperus_virginiana/
Removed /content/dataset/segmented/field/crataegus_pruinosa/
Removed /content/dataset/segmented/field/styrax_obassia/
Removed /content/dataset/segmented/field/prunus_virginiana/
Removed /content/dataset/segmented/field/quercus_imbricaria/
Removed /content/dataset/segmented/field/quercus_velutina/
Removed /content/dataset/segmented/field/chamaecyparis_thyoides/
Removed /content/dataset/segmented/field/pinus_parviflora/
Removed /content/dataset/segmented/field/pinus_echinata/
Removed /content/dataset/segmented/field/acer_palmatum/
Removed /content/dataset/segmented/field/ailanthus_altissima/
Removed /content/dataset/segmented/field/ptelea_trifoliata/
Removed /content/dataset/segmented/field/ilex_opaca/
Removed /content/dataset/segmented/field/juglans_cinerea/
Removed /content/dataset/segmented/field/platanus_acerifolia/
Removed /content/dataset/segmented/field/populus_tremuloides/
Removed /content/dataset/segmented/field/magnolia_stellata/
Removed /content/dataset/segmented/field/chionanthus_retusus/
Removed /content/dataset/segmented/field/quercus_palustris/
Removed /content/dataset/segmented/field/quercus_virginiana/
Removed /content/dataset/segmented/field/malus_coronaria/
Removed /content/dataset/segmented/field/pinus_pungens/
Removed /content/dataset/segmented/field/acer_saccharinum/
Removed /content/dataset/segmented/field/quercus_robur/
Removed /content/dataset/segmented/field/quercus_rubra/
Removed /content/dataset/segmented/field/quercus_muehlenbergii/
Removed /content/dataset/segmented/field/betula_lenta/
Removed /content/dataset/segmented/field/quercus_michauxii/
Removed /content/dataset/segmented/field/platanus_occidentalis/
Removed /content/dataset/segmented/field/cornus_kousa/
Removed /content/dataset/segmented/field/pinus_cembra/
Removed /content/dataset/segmented/field/pinus_taeda/
Removed /content/dataset/segmented/field/crataegus_crus-galli/
Removed /content/dataset/segmented/field/phellodendron_amurense/
Removed /content/dataset/segmented/field/halesia_tetraptera/
Removed /content/dataset/segmented/field/salix_babylonica/
Removed /content/dataset/segmented/field/ulmus_americana/
Removed /content/dataset/segmented/field/quercus_montana/
Removed /content/dataset/segmented/field/quercus_coccinea/
Removed /content/dataset/segmented/field/cladrastis_lutea/
Removed /content/dataset/segmented/field/pyrus_calleryana/
Removed /content/dataset/segmented/field/metasequoia_glyptostroboides/
Removed /content/dataset/segmented/field/celtis_tenuifolia/
Removed /content/dataset/segmented/field/pinus_rigida/
Removed /content/dataset/segmented/field/picea_orientalis/
Removed /content/dataset/segmented/field/acer_pensylvanicum/
Removed /content/dataset/segmented/field/amelanchier_arborea/
Removed /content/dataset/segmented/field/malus_hupehensis/
Removed /content/dataset/segmented/field/pinus_sylvestris/
Removed /content/dataset/segmented/field/carya_ovata/
Removed /content/dataset/segmented/field/celtis_occidentalis/
Removed /content/dataset/segmented/field/magnolia_virginiana/
Removed /content/dataset/segmented/field/pinus_resinosa/
Removed /content/dataset/segmented/field/quercus_stellata/
Removed /content/dataset/segmented/field/malus_angustifolia/
Removed /content/dataset/segmented/field/pinus_bungeana/
Removed /content/dataset/segmented/field/chamaecyparis_pisifera/
Removed /content/dataset/segmented/field/fraxinus_pennsylvanica/
Removed /content/dataset/segmented/field/salix_matsudana/
Removed /content/dataset/segmented/field/cercidiphyllum_japonicum/
Removed /content/dataset/segmented/field/carya_cordiformis/
Removed /content/dataset/segmented/field/carya_tomentosa/
Removed /content/dataset/segmented/field/crataegus_laevigata/
Removed /content/dataset/segmented/field/carya_glabra/
Removed /content/dataset/segmented/field/amelanchier_canadensis/
Removed /content/dataset/segmented/field/pinus_wallichiana/
Removed /content/dataset/segmented/field/nyssa_sylvatica/
Removed /content/dataset/segmented/field/ginkgo_biloba/
Removed /content/dataset/segmented/field/ulmus_parvifolia/
Removed /content/dataset/segmented/field/liriodendron_tulipifera/
Removed /content/dataset/segmented/field/malus_pumila/
Removed /content/dataset/segmented/field/catalpa_speciosa/
Removed /content/dataset/segmented/field/magnolia_tripetala/
Removed /content/dataset/segmented/field/picea_pungens/
Removed /content/dataset/segmented/field/liquidambar_styraciflua/
Removed /content/dataset/segmented/field/quercus_falcata/
Removed /content/dataset/segmented/field/cedrus_libani/
Removed /content/dataset/segmented/field/salix_nigra/
Removed /content/dataset/segmented/field/acer_platanoides/
Removed /content/dataset/segmented/field/pinus_peucea/
Removed /content/dataset/segmented/field/quercus_nigra/
Removed /content/dataset/segmented/field/acer_pseudoplatanus/
Removed /content/dataset/segmented/field/quercus_macrocarpa/
Removed /content/dataset/segmented/field/crataegus_viridis/
Removed /content/dataset/segmented/field/quercus_shumardii/
Removed /content/dataset/segmented/field/pinus_thunbergii/
Removed /content/dataset/segmented/field/betula_nigra/
Removed /content/dataset/segmented/field/quercus_alba/
Removed /content/dataset/segmented/field/aesculus_hippocastamon/
Removed /content/dataset/segmented/field/pinus_virginiana/
Removed /content/dataset/segmented/field/acer_negundo/
Removed /content/dataset/segmented/field/malus_floribunda/
Removed /content/dataset/segmented/field/fraxinus_americana/
Removed /content/dataset/segmented/field/pinus_nigra/
Removed /content/dataset/segmented/field/crataegus_phaenopyrum/
Removed /content/dataset/segmented/field/cornus_mas/
Removed /content/dataset/segmented/field/ostrya_virginiana/
Removed /content/dataset/segmented/field/quercus_acutissima/
Removed /content/dataset/segmented/field/betula_populifolia/
Removed /content/dataset/segmented/field/tilia_tomentosa/
Removed /content/dataset/segmented/field/oxydendrum_arboreum/
Removed /content/dataset/segmented/field/pseudolarix_amabilis/
Removed /content/dataset/segmented/field/pinus_koraiensis/
Removed /content/dataset/segmented/field/cornus_florida/
Removed /content/dataset/segmented/field/gymnocladus_dioicus/
Removed /content/dataset/segmented/field/staphylea_trifolia/
Removed /content/dataset/segmented/field/tilia_europaea/
Removed /content/dataset/segmented/field/acer_saccharum/
Removed /content/dataset/segmented/field/quercus_cerris/
Removed /content/dataset/segmented/field/cedrus_deodara/
Removed /content/dataset/segmented/field/quercus_marilandica/
Removed /content/dataset/segmented/field/pinus_densiflora/
Removed /content/dataset/segmented/field/prunus_serotina/
Removed /content/dataset/segmented/field/abies_nordmanniana/
Removed /content/dataset/segmented/field/broussonettia_papyrifera/
Removed /content/dataset/segmented/field/quercus_phellos/
Removed /content/dataset/segmented/field/diospyros_virginiana/
Removed /content/dataset/segmented/field/acer_griseum/
Removed /content/dataset/segmented/field/maclura_pomifera/
Removed /content/dataset/segmented/field/koelreuteria_paniculata/
Removed /content/dataset/segmented/field/styrax_japonica/
Removed /content/dataset/segmented/field/magnolia_denudata/
Removed /content/dataset/segmented/field/aesculus_pavi/
Removed /content/dataset/segmented/field/tilia_americana/
Removed /content/dataset/segmented/field/quercus_bicolor/
Removed /content/dataset/segmented/field/populus_grandidentata/
Removed /content/dataset/segmented/field/picea_abies/
Removed /content/dataset/segmented/field/magnolia_macrophylla/
Removed /content/dataset/segmented/field/aesculus_flava/
Removed /content/dataset/segmented/field/acer_rubrum/
Removed /content/dataset/segmented/field/prunus_serrulata/
Removed /content/dataset/segmented/field/zelkova_serrata/
Removed /content/dataset/segmented/field/magnolia_soulangiana/
Removed /content/dataset/segmented/field/prunus_pensylvanica/
Removed /content/dataset/segmented/field/malus_baccata/
Removed /content/dataset/segmented/lab/fraxinus_nigra/
Removed /content/dataset/segmented/lab/stewartia_pseudocamellia/
Removed /content/dataset/segmented/lab/taxodium_distichum/
Removed /content/dataset/segmented/lab/amelanchier_laevis/
Removed /content/dataset/segmented/lab/juglans_nigra/
Removed /content/dataset/segmented/lab/pinus_flexilis/
Removed /content/dataset/segmented/lab/prunus_yedoensis/
Removed /content/dataset/segmented/lab/aesculus_glabra/
Removed /content/dataset/segmented/lab/morus_rubra/
Removed /content/dataset/segmented/lab/prunus_sargentii/
Removed /content/dataset/segmented/lab/carpinus_caroliniana/
Removed /content/dataset/segmented/lab/prunus_subhirtella/
Removed /content/dataset/segmented/lab/corylus_colurna/
Removed /content/dataset/segmented/lab/evodia_daniellii/
Removed /content/dataset/segmented/lab/salix_caroliniana/
Removed /content/dataset/segmented/lab/acer_ginnala/
Removed /content/dataset/segmented/lab/cryptomeria_japonica/
Removed /content/dataset/segmented/lab/syringa_reticulata/
Removed /content/dataset/segmented/lab/pinus_strobus/
Removed /content/dataset/segmented/lab/juniperus_virginiana/
Removed /content/dataset/segmented/lab/crataegus_pruinosa/
Removed /content/dataset/segmented/lab/styrax_obassia/
Removed /content/dataset/segmented/lab/prunus_virginiana/
Removed /content/dataset/segmented/lab/quercus_imbricaria/
Removed /content/dataset/segmented/lab/quercus_velutina/
Removed /content/dataset/segmented/lab/chamaecyparis_thyoides/
Removed /content/dataset/segmented/lab/pinus_parviflora/
Removed /content/dataset/segmented/lab/pinus_echinata/
Removed /content/dataset/segmented/lab/acer_palmatum/
Removed /content/dataset/segmented/lab/ailanthus_altissima/
Removed /content/dataset/segmented/lab/ptelea_trifoliata/
Removed /content/dataset/segmented/lab/ilex_opaca/
Removed /content/dataset/segmented/lab/juglans_cinerea/
Removed /content/dataset/segmented/lab/platanus_acerifolia/
Removed /content/dataset/segmented/lab/populus_tremuloides/
Removed /content/dataset/segmented/lab/magnolia_stellata/
Removed /content/dataset/segmented/lab/chionanthus_retusus/
Removed /content/dataset/segmented/lab/quercus_palustris/
Removed /content/dataset/segmented/lab/quercus_virginiana/
Removed /content/dataset/segmented/lab/malus_coronaria/
Removed /content/dataset/segmented/lab/pinus_pungens/
Removed /content/dataset/segmented/lab/acer_saccharinum/
Removed /content/dataset/segmented/lab/quercus_robur/
Removed /content/dataset/segmented/lab/quercus_rubra/
Removed /content/dataset/segmented/lab/quercus_muehlenbergii/
Removed /content/dataset/segmented/lab/betula_lenta/
Removed /content/dataset/segmented/lab/quercus_michauxii/
Removed /content/dataset/segmented/lab/platanus_occidentalis/
Removed /content/dataset/segmented/lab/cornus_kousa/
Removed /content/dataset/segmented/lab/pinus_cembra/
Removed /content/dataset/segmented/lab/pinus_taeda/
Removed /content/dataset/segmented/lab/crataegus_crus-galli/
Removed /content/dataset/segmented/lab/phellodendron_amurense/
Removed /content/dataset/segmented/lab/halesia_tetraptera/
Removed /content/dataset/segmented/lab/salix_babylonica/
Removed /content/dataset/segmented/lab/ulmus_americana/
Removed /content/dataset/segmented/lab/quercus_montana/
Removed /content/dataset/segmented/lab/quercus_coccinea/
Removed /content/dataset/segmented/lab/cladrastis_lutea/
Removed /content/dataset/segmented/lab/pyrus_calleryana/
Removed /content/dataset/segmented/lab/metasequoia_glyptostroboides/
Removed /content/dataset/segmented/lab/celtis_tenuifolia/
Removed /content/dataset/segmented/lab/pinus_rigida/
Removed /content/dataset/segmented/lab/picea_orientalis/
Removed /content/dataset/segmented/lab/acer_pensylvanicum/
Removed /content/dataset/segmented/lab/amelanchier_arborea/
Removed /content/dataset/segmented/lab/malus_hupehensis/
Removed /content/dataset/segmented/lab/pinus_sylvestris/
Removed /content/dataset/segmented/lab/carya_ovata/
Removed /content/dataset/segmented/lab/celtis_occidentalis/
Removed /content/dataset/segmented/lab/magnolia_virginiana/
Removed /content/dataset/segmented/lab/pinus_resinosa/
Removed /content/dataset/segmented/lab/quercus_stellata/
Removed /content/dataset/segmented/lab/ulmus_procera/
Removed /content/dataset/segmented/lab/malus_angustifolia/
Removed /content/dataset/segmented/lab/pinus_bungeana/
Removed /content/dataset/segmented/lab/chamaecyparis_pisifera/
Removed /content/dataset/segmented/lab/fraxinus_pennsylvanica/
Removed /content/dataset/segmented/lab/salix_matsudana/
Removed /content/dataset/segmented/lab/cercidiphyllum_japonicum/
Removed /content/dataset/segmented/lab/carya_cordiformis/
Removed /content/dataset/segmented/lab/carya_tomentosa/
Removed /content/dataset/segmented/lab/crataegus_laevigata/
Removed /content/dataset/segmented/lab/carya_glabra/
Removed /content/dataset/segmented/lab/amelanchier_canadensis/
Removed /content/dataset/segmented/lab/pinus_wallichiana/
Removed /content/dataset/segmented/lab/nyssa_sylvatica/
Removed /content/dataset/segmented/lab/ginkgo_biloba/
Removed /content/dataset/segmented/lab/ulmus_parvifolia/
Removed /content/dataset/segmented/lab/liriodendron_tulipifera/
Removed /content/dataset/segmented/lab/malus_pumila/
Removed /content/dataset/segmented/lab/catalpa_speciosa/
Removed /content/dataset/segmented/lab/magnolia_tripetala/
Removed /content/dataset/segmented/lab/picea_pungens/
Removed /content/dataset/segmented/lab/liquidambar_styraciflua/
Removed /content/dataset/segmented/lab/quercus_falcata/
Removed /content/dataset/segmented/lab/cedrus_libani/
Removed /content/dataset/segmented/lab/salix_nigra/
Removed /content/dataset/segmented/lab/acer_platanoides/
Removed /content/dataset/segmented/lab/pinus_peucea/
Removed /content/dataset/segmented/lab/quercus_nigra/
Removed /content/dataset/segmented/lab/acer_pseudoplatanus/
Removed /content/dataset/segmented/lab/quercus_macrocarpa/
Removed /content/dataset/segmented/lab/crataegus_viridis/
Removed /content/dataset/segmented/lab/quercus_shumardii/
Removed /content/dataset/segmented/lab/pinus_thunbergii/
Removed /content/dataset/segmented/lab/betula_nigra/
Removed /content/dataset/segmented/lab/quercus_alba/
Removed /content/dataset/segmented/lab/aesculus_hippocastamon/
Removed /content/dataset/segmented/lab/pinus_virginiana/
Removed /content/dataset/segmented/lab/acer_negundo/
Removed /content/dataset/segmented/lab/malus_floribunda/
Removed /content/dataset/segmented/lab/fraxinus_americana/
Removed /content/dataset/segmented/lab/pinus_nigra/
Removed /content/dataset/segmented/lab/crataegus_phaenopyrum/
Removed /content/dataset/segmented/lab/cornus_mas/
Removed /content/dataset/segmented/lab/ostrya_virginiana/
Removed /content/dataset/segmented/lab/quercus_acutissima/
Removed /content/dataset/segmented/lab/betula_populifolia/
Removed /content/dataset/segmented/lab/tilia_tomentosa/
Removed /content/dataset/segmented/lab/oxydendrum_arboreum/
Removed /content/dataset/segmented/lab/pseudolarix_amabilis/
Removed /content/dataset/segmented/lab/pinus_koraiensis/
Removed /content/dataset/segmented/lab/cornus_florida/
Removed /content/dataset/segmented/lab/gymnocladus_dioicus/
Removed /content/dataset/segmented/lab/staphylea_trifolia/
Removed /content/dataset/segmented/lab/tilia_europaea/
Removed /content/dataset/segmented/lab/acer_saccharum/
Removed /content/dataset/segmented/lab/quercus_cerris/
Removed /content/dataset/segmented/lab/cedrus_deodara/
Removed /content/dataset/segmented/lab/quercus_marilandica/
Removed /content/dataset/segmented/lab/pinus_densiflora/
Removed /content/dataset/segmented/lab/prunus_serotina/
Removed /content/dataset/segmented/lab/abies_nordmanniana/
Removed /content/dataset/segmented/lab/broussonettia_papyrifera/
Removed /content/dataset/segmented/lab/quercus_phellos/
Removed /content/dataset/segmented/lab/diospyros_virginiana/
Removed /content/dataset/segmented/lab/acer_griseum/
Removed /content/dataset/segmented/lab/maclura_pomifera/
Removed /content/dataset/segmented/lab/koelreuteria_paniculata/
Removed /content/dataset/segmented/lab/styrax_japonica/
Removed /content/dataset/segmented/lab/magnolia_denudata/
Removed /content/dataset/segmented/lab/aesculus_pavi/
Removed /content/dataset/segmented/lab/tilia_americana/
Removed /content/dataset/segmented/lab/quercus_bicolor/
Removed /content/dataset/segmented/lab/populus_grandidentata/
Removed /content/dataset/segmented/lab/picea_abies/
Removed /content/dataset/segmented/lab/magnolia_macrophylla/
Removed /content/dataset/segmented/lab/aesculus_flava/
Removed /content/dataset/segmented/lab/acer_rubrum/
Removed /content/dataset/segmented/lab/prunus_serrulata/
Removed /content/dataset/segmented/lab/zelkova_serrata/
Removed /content/dataset/segmented/lab/magnolia_soulangiana/
Removed /content/dataset/segmented/lab/prunus_pensylvanica/
Removed /content/dataset/segmented/lab/malus_baccata/
###Markdown
Resize all images into 64x64 pixels Resizing all images to a common, small size is beneficial for the CNN: every input gets the same shape, and the lower resolution keeps the amount of computation small so the network trains faster.
###Code
#Creating new resized directory which will contain the resized images.
!rm -rf resized
!mkdir resized
def resizer(input_, filename_, output_dir="", size=(1024,768)):
    outfile = os.path.splitext(filename_)[0] #extracting the filename without extension
    ext = os.path.splitext(input_)[1] #extracting the extension of the image: .jpg, .png, etc.
    if input_ != outfile:
        if not os.path.isfile(output_dir + "/" + outfile + ext): #skip images that were already resized
            try:
                im = PIL.Image.open(input_)
                im = im.resize(size, PIL.Image.ANTIALIAS) #resizing the image with antialiasing
                im.save(output_dir + "/" + outfile + ext) #saving the resized image
            except IOError:
                print("Cannot open file: " + input_)
output_dir = "/content/resized/"
size = (64, 64)
filenames_dir = list(dataset["image_path"])
filenames = list(dataset["filename"])
for i in range(len(filenames)):
resizer(filenames_dir[i], filenames[i], output_dir=output_dir, size=size)
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
w=20
h=20
fig=plt.figure(figsize=(8, 8))
columns = 4
rows = 5
for i in range(1, columns*rows +1):
path=random.choice(os.listdir("/content/resized/"))
image = mpimg.imread("/content/resized/"+path)
fig.add_subplot(rows, columns, i)
plt.imshow(image)
plt.show()
###Output
_____no_output_____
###Markdown
Creating Target
###Code
dataset["target"] = None
#index of new column
index_labels_integer = dataset.columns.get_loc("target")
#index of species column
index_species = dataset.columns.get_loc("species")
#the dataframe is still ordered by species at this point, so the counter increments whenever the species changes, giving every species its own integer target
counter = 0
for i in range(len(dataset)):
if i == 0:
dataset.iloc[i, index_labels_integer] = counter
if i > 0:
if dataset.iloc[i-1, index_species] == dataset.iloc[i, index_species]:
dataset.iloc[i, index_labels_integer] = counter
else:
counter += 1
dataset.iloc[i, index_labels_integer] = counter
dataset.tail()
###Output
_____no_output_____
###Markdown
Preprocessing Creating a list that holds the pixel arrays of the images.
###Code
vectors = []
for i in range(len(dataset)):
file = "resized" + "/" + dataset.iloc[i, filename_index_in_dataframe]
#read as rgb array
img = imageio.imread(file)
vectors.append(img)
###Output
_____no_output_____
###Markdown
Shuffling the whole dataset so that the sequential train/validation/test splits below are not biased by the original ordering of the images by species
###Code
#relevant variables
label = dataset["species"]
source = dataset["source"]
target = dataset["target"]
vectors = vectors
filename = dataset["filename"]
path = dataset["image_path"]
#randomization
allinfo = list(zip(label, source, target, vectors, filename, path))
random.shuffle(allinfo) #shuffle
label, source, target, vectors, filename, path = zip(*allinfo) #decompose again
dataset = pd.DataFrame({"filename":filename, "label":label, "source":source, "target":target, "path":path}) #store picture information in randomized order
dataset.to_csv("dataset_rand.csv", index=False)
###Output
_____no_output_____
###Markdown
Stacking all the image arrays into a single numpy array X and assigning the targets to Y
###Code
X = np.stack((vectors))
Y = np.asarray(target)
###Output
_____no_output_____
###Markdown
Scaling the inputs to the range [0, 1] and converting the targets into one-hot encoding
###Code
X = X/255
Y_one_hot = keras.utils.to_categorical(Y, num_classes=30)
print(Y.shape, Y_one_hot.shape)
###Output
(5248,) (5248, 30)
###Markdown
Reading the randomized/shuffled dataset
###Code
dataset_rand = pd.read_csv("dataset_rand.csv")
###Output
_____no_output_____
###Markdown
Model Creation and Training Importing all required libraries for creating a deep learning model
###Code
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, Activation, Conv2D, MaxPooling2D, Flatten, Dropout, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import ModelCheckpoint
###Output
_____no_output_____
###Markdown
Splitting the data into train, validation and test sets (80% / 10% / 10%)
###Code
split_train = 0.8 #train 0.8, validate 0.1, test 0.1
split_val = 0.9
index_train = int(split_train*len(X))
index_val = int(split_val*len(X))
X_train = X[:index_train]
X_val = X[index_train:index_val]
X_test = X[index_val:]
Y_train = Y_one_hot[:index_train]
Y_val = Y_one_hot[index_train:index_val]
Y_test = Y_one_hot[index_val:]
#for later predictions on test set
target_test = dataset_rand.loc[index_val:len(X), "target"]
labels_test = dataset_rand.loc[index_val:len(X), "label"]
filenames_test = dataset_rand.loc[index_val:len(X), "filename"]
source_test = dataset_rand.loc[index_val:len(X), "source"]
path_test = dataset_rand.loc[index_val:len(X), "path"]
print(X_train.shape, X_val.shape, X_test.shape, Y_train.shape, Y_val.shape, Y_test.shape)
###Output
(4198, 64, 64, 3) (525, 64, 64, 3) (525, 64, 64, 3) (4198, 30) (525, 30) (525, 30)
###Markdown
CNN Feature extraction from the images is performed with a CNN (Convolutional Neural Network). A CNN is trained the same way as the dense networks of a multi-layer perceptron, but its layers work differently: a convolutional layer slides small learned filters (convolutions) over local patches of the image and shares their weights across all positions, whereas a dense layer multiplies the whole flattened input by a weight matrix to form its output.
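To make the difference concrete, a small illustrative sketch (not part of the original notebook) comparing the number of trainable weights of the first convolutional layer defined below with a dense layer applied to the same flattened 64x64x3 input:
```python
# Weight sharing keeps the convolutional layer small: Conv2D(32, 5x5) on an RGB input
# learns one 5x5x3 filter per output channel, while Dense(32) on the flattened
# 64x64x3 image needs a full weight matrix over all 12,288 inputs.
conv_params = (5 * 5 * 3) * 32 + 32     # filter weights + biases = 2,432
dense_params = (64 * 64 * 3) * 32 + 32  # weight matrix + biases = 393,248
print(conv_params, dense_params)
```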
###Code
input_shape = (X_train.shape[1], X_train.shape[2], X_train.shape[3]) #(64, 64, 3)
num_classes = 30
model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), input_shape=input_shape)) #Convolutional Layer with 5x5 filter
model.add(Activation('relu')) #Rectified linear unit (0 for -ve values)
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) #Maxpooling layer with pool size 2x2
model.add(Conv2D(64, (5, 5))) #Convolutional Layer with 5x5 filter
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.7)) #Dropout Layer helps to avoid over fitting
model.add(Flatten()) #converting the 3d array into single dimension array
model.add(Dense(1000))
model.add(Activation('relu'))
model.add(Dropout(rate=0.7))
model.add(Dense(num_classes, activation='softmax')) #Softmax layer to get 30 outputs
'''Adam is an adaptive learning-rate optimizer: it adjusts its per-parameter step sizes during training based on the gradients.
Categorical cross-entropy is the standard loss for multi-class classification tasks.
'''
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
best_model_file = "PlantClassifier.h5"
best_model = ModelCheckpoint(best_model_file, monitor='val_loss', verbose=1, save_best_only=True)
results = model.fit(X_train, Y_train, epochs=100, batch_size=64, validation_data=(X_val, Y_val), callbacks=[best_model])
print('Training finished.')
model = load_model(best_model_file)
###Output
Epoch 1/100
65/66 [============================>.] - ETA: 0s - loss: 2.7204 - accuracy: 0.2317
Epoch 00001: val_loss improved from inf to 1.51132, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 18ms/step - loss: 2.7090 - accuracy: 0.2342 - val_loss: 1.5113 - val_accuracy: 0.5714
Epoch 2/100
64/66 [============================>.] - ETA: 0s - loss: 1.4932 - accuracy: 0.5496
Epoch 00002: val_loss improved from 1.51132 to 0.97782, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 15ms/step - loss: 1.4891 - accuracy: 0.5503 - val_loss: 0.9778 - val_accuracy: 0.6819
Epoch 3/100
66/66 [==============================] - ETA: 0s - loss: 1.0801 - accuracy: 0.6660
Epoch 00003: val_loss improved from 0.97782 to 0.65723, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 1.0801 - accuracy: 0.6660 - val_loss: 0.6572 - val_accuracy: 0.7848
Epoch 4/100
63/66 [===========================>..] - ETA: 0s - loss: 0.8437 - accuracy: 0.7329
Epoch 00004: val_loss improved from 0.65723 to 0.55298, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 14ms/step - loss: 0.8441 - accuracy: 0.7318 - val_loss: 0.5530 - val_accuracy: 0.8133
Epoch 5/100
64/66 [============================>.] - ETA: 0s - loss: 0.7024 - accuracy: 0.7729
Epoch 00005: val_loss improved from 0.55298 to 0.45406, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 0.7010 - accuracy: 0.7742 - val_loss: 0.4541 - val_accuracy: 0.8381
Epoch 6/100
66/66 [==============================] - ETA: 0s - loss: 0.5771 - accuracy: 0.8125
Epoch 00006: val_loss improved from 0.45406 to 0.39809, saving model to PlantClassifier.h5
66/66 [==============================] - 2s 33ms/step - loss: 0.5771 - accuracy: 0.8125 - val_loss: 0.3981 - val_accuracy: 0.8571
Epoch 7/100
65/66 [============================>.] - ETA: 0s - loss: 0.5238 - accuracy: 0.8269
Epoch 00007: val_loss improved from 0.39809 to 0.38215, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 0.5216 - accuracy: 0.8273 - val_loss: 0.3821 - val_accuracy: 0.8514
Epoch 8/100
66/66 [==============================] - ETA: 0s - loss: 0.4728 - accuracy: 0.8433
Epoch 00008: val_loss improved from 0.38215 to 0.35835, saving model to PlantClassifier.h5
66/66 [==============================] - 3s 43ms/step - loss: 0.4728 - accuracy: 0.8433 - val_loss: 0.3584 - val_accuracy: 0.8629
Epoch 9/100
66/66 [==============================] - ETA: 0s - loss: 0.4059 - accuracy: 0.8580
Epoch 00009: val_loss improved from 0.35835 to 0.33998, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 14ms/step - loss: 0.4059 - accuracy: 0.8580 - val_loss: 0.3400 - val_accuracy: 0.8762
Epoch 10/100
61/66 [==========================>...] - ETA: 0s - loss: 0.3817 - accuracy: 0.8727
Epoch 00010: val_loss improved from 0.33998 to 0.31324, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 0.3786 - accuracy: 0.8726 - val_loss: 0.3132 - val_accuracy: 0.8857
Epoch 11/100
66/66 [==============================] - ETA: 0s - loss: 0.3636 - accuracy: 0.8759
Epoch 00011: val_loss improved from 0.31324 to 0.29488, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 0.3636 - accuracy: 0.8759 - val_loss: 0.2949 - val_accuracy: 0.8895
Epoch 12/100
61/66 [==========================>...] - ETA: 0s - loss: 0.3146 - accuracy: 0.8886
Epoch 00012: val_loss did not improve from 0.29488
66/66 [==============================] - 1s 10ms/step - loss: 0.3116 - accuracy: 0.8890 - val_loss: 0.2980 - val_accuracy: 0.8952
Epoch 13/100
65/66 [============================>.] - ETA: 0s - loss: 0.3073 - accuracy: 0.8969
Epoch 00013: val_loss did not improve from 0.29488
66/66 [==============================] - 1s 10ms/step - loss: 0.3069 - accuracy: 0.8969 - val_loss: 0.3441 - val_accuracy: 0.8724
Epoch 14/100
66/66 [==============================] - ETA: 0s - loss: 0.2688 - accuracy: 0.9085
Epoch 00014: val_loss improved from 0.29488 to 0.28923, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 0.2688 - accuracy: 0.9085 - val_loss: 0.2892 - val_accuracy: 0.9010
Epoch 15/100
65/66 [============================>.] - ETA: 0s - loss: 0.2614 - accuracy: 0.9147
Epoch 00015: val_loss improved from 0.28923 to 0.28731, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 15ms/step - loss: 0.2628 - accuracy: 0.9140 - val_loss: 0.2873 - val_accuracy: 0.8990
Epoch 16/100
61/66 [==========================>...] - ETA: 0s - loss: 0.2598 - accuracy: 0.9139
Epoch 00016: val_loss did not improve from 0.28731
66/66 [==============================] - 1s 10ms/step - loss: 0.2608 - accuracy: 0.9140 - val_loss: 0.3066 - val_accuracy: 0.8876
Epoch 17/100
66/66 [==============================] - ETA: 0s - loss: 0.2634 - accuracy: 0.9150
Epoch 00017: val_loss improved from 0.28731 to 0.23614, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 14ms/step - loss: 0.2634 - accuracy: 0.9150 - val_loss: 0.2361 - val_accuracy: 0.9048
Epoch 18/100
61/66 [==========================>...] - ETA: 0s - loss: 0.2226 - accuracy: 0.9206
Epoch 00018: val_loss improved from 0.23614 to 0.22300, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 14ms/step - loss: 0.2258 - accuracy: 0.9190 - val_loss: 0.2230 - val_accuracy: 0.9124
Epoch 19/100
61/66 [==========================>...] - ETA: 0s - loss: 0.2042 - accuracy: 0.9298
Epoch 00019: val_loss did not improve from 0.22300
66/66 [==============================] - 1s 10ms/step - loss: 0.2058 - accuracy: 0.9300 - val_loss: 0.2291 - val_accuracy: 0.9124
Epoch 20/100
61/66 [==========================>...] - ETA: 0s - loss: 0.2131 - accuracy: 0.9270
Epoch 00020: val_loss did not improve from 0.22300
66/66 [==============================] - 1s 10ms/step - loss: 0.2148 - accuracy: 0.9259 - val_loss: 0.2262 - val_accuracy: 0.9048
Epoch 21/100
66/66 [==============================] - ETA: 0s - loss: 0.2120 - accuracy: 0.9304
Epoch 00021: val_loss did not improve from 0.22300
66/66 [==============================] - 1s 10ms/step - loss: 0.2120 - accuracy: 0.9304 - val_loss: 0.2451 - val_accuracy: 0.9048
Epoch 22/100
65/66 [============================>.] - ETA: 0s - loss: 0.1778 - accuracy: 0.9356
Epoch 00022: val_loss did not improve from 0.22300
66/66 [==============================] - 1s 10ms/step - loss: 0.1771 - accuracy: 0.9357 - val_loss: 0.2369 - val_accuracy: 0.9105
Epoch 23/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1880 - accuracy: 0.9393
Epoch 00023: val_loss did not improve from 0.22300
66/66 [==============================] - 1s 10ms/step - loss: 0.1825 - accuracy: 0.9404 - val_loss: 0.2301 - val_accuracy: 0.9219
Epoch 24/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1724 - accuracy: 0.9383
Epoch 00024: val_loss improved from 0.22300 to 0.20914, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 0.1713 - accuracy: 0.9381 - val_loss: 0.2091 - val_accuracy: 0.9276
Epoch 25/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1673 - accuracy: 0.9419
Epoch 00025: val_loss did not improve from 0.20914
66/66 [==============================] - 1s 10ms/step - loss: 0.1697 - accuracy: 0.9424 - val_loss: 0.2614 - val_accuracy: 0.9105
Epoch 26/100
66/66 [==============================] - ETA: 0s - loss: 0.1645 - accuracy: 0.9459
Epoch 00026: val_loss did not improve from 0.20914
66/66 [==============================] - 1s 10ms/step - loss: 0.1645 - accuracy: 0.9459 - val_loss: 0.2540 - val_accuracy: 0.9048
Epoch 27/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1543 - accuracy: 0.9426
Epoch 00027: val_loss did not improve from 0.20914
66/66 [==============================] - 1s 10ms/step - loss: 0.1530 - accuracy: 0.9426 - val_loss: 0.2192 - val_accuracy: 0.9048
Epoch 28/100
66/66 [==============================] - ETA: 0s - loss: 0.1514 - accuracy: 0.9471
Epoch 00028: val_loss improved from 0.20914 to 0.20891, saving model to PlantClassifier.h5
66/66 [==============================] - 2s 30ms/step - loss: 0.1514 - accuracy: 0.9471 - val_loss: 0.2089 - val_accuracy: 0.9162
Epoch 29/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1570 - accuracy: 0.9501
Epoch 00029: val_loss did not improve from 0.20891
66/66 [==============================] - 1s 10ms/step - loss: 0.1562 - accuracy: 0.9502 - val_loss: 0.2290 - val_accuracy: 0.9219
Epoch 30/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1468 - accuracy: 0.9511
Epoch 00030: val_loss did not improve from 0.20891
66/66 [==============================] - 1s 10ms/step - loss: 0.1464 - accuracy: 0.9505 - val_loss: 0.2139 - val_accuracy: 0.9048
Epoch 31/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1662 - accuracy: 0.9439
Epoch 00031: val_loss did not improve from 0.20891
66/66 [==============================] - 1s 10ms/step - loss: 0.1661 - accuracy: 0.9435 - val_loss: 0.2346 - val_accuracy: 0.9086
Epoch 32/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1575 - accuracy: 0.9457
Epoch 00032: val_loss did not improve from 0.20891
66/66 [==============================] - 1s 10ms/step - loss: 0.1554 - accuracy: 0.9466 - val_loss: 0.2417 - val_accuracy: 0.9181
Epoch 33/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1437 - accuracy: 0.9518
Epoch 00033: val_loss did not improve from 0.20891
66/66 [==============================] - 1s 10ms/step - loss: 0.1446 - accuracy: 0.9507 - val_loss: 0.2256 - val_accuracy: 0.9143
Epoch 34/100
62/66 [===========================>..] - ETA: 0s - loss: 0.1401 - accuracy: 0.9498
Epoch 00034: val_loss improved from 0.20891 to 0.18763, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 13ms/step - loss: 0.1408 - accuracy: 0.9500 - val_loss: 0.1876 - val_accuracy: 0.9257
Epoch 35/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1363 - accuracy: 0.9518
Epoch 00035: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1339 - accuracy: 0.9521 - val_loss: 0.2168 - val_accuracy: 0.9219
Epoch 36/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1231 - accuracy: 0.9582
Epoch 00036: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1226 - accuracy: 0.9590 - val_loss: 0.1928 - val_accuracy: 0.9238
Epoch 37/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1368 - accuracy: 0.9559
Epoch 00037: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1415 - accuracy: 0.9535 - val_loss: 0.2270 - val_accuracy: 0.9086
Epoch 38/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1324 - accuracy: 0.9585
Epoch 00038: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1360 - accuracy: 0.9571 - val_loss: 0.2003 - val_accuracy: 0.9200
Epoch 39/100
66/66 [==============================] - ETA: 0s - loss: 0.1494 - accuracy: 0.9533
Epoch 00039: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1494 - accuracy: 0.9533 - val_loss: 0.2527 - val_accuracy: 0.9143
Epoch 40/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1091 - accuracy: 0.9598
Epoch 00040: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1105 - accuracy: 0.9597 - val_loss: 0.2605 - val_accuracy: 0.9200
Epoch 41/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1150 - accuracy: 0.9588
Epoch 00041: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1178 - accuracy: 0.9576 - val_loss: 0.2385 - val_accuracy: 0.9162
Epoch 42/100
66/66 [==============================] - ETA: 0s - loss: 0.1166 - accuracy: 0.9605
Epoch 00042: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1166 - accuracy: 0.9605 - val_loss: 0.2407 - val_accuracy: 0.9181
Epoch 43/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1331 - accuracy: 0.9518
Epoch 00043: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1286 - accuracy: 0.9538 - val_loss: 0.2007 - val_accuracy: 0.9105
Epoch 44/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1180 - accuracy: 0.9575
Epoch 00044: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1182 - accuracy: 0.9583 - val_loss: 0.2073 - val_accuracy: 0.9162
Epoch 45/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1106 - accuracy: 0.9649
Epoch 00045: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1128 - accuracy: 0.9633 - val_loss: 0.2329 - val_accuracy: 0.9105
Epoch 46/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1260 - accuracy: 0.9580
Epoch 00046: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1229 - accuracy: 0.9590 - val_loss: 0.2212 - val_accuracy: 0.9219
Epoch 47/100
65/66 [============================>.] - ETA: 0s - loss: 0.1181 - accuracy: 0.9582
Epoch 00047: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1181 - accuracy: 0.9583 - val_loss: 0.2081 - val_accuracy: 0.9257
Epoch 48/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1189 - accuracy: 0.9593
Epoch 00048: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1156 - accuracy: 0.9600 - val_loss: 0.2124 - val_accuracy: 0.9276
Epoch 49/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1094 - accuracy: 0.9626
Epoch 00049: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1114 - accuracy: 0.9624 - val_loss: 0.2142 - val_accuracy: 0.9162
Epoch 50/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0935 - accuracy: 0.9659
Epoch 00050: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0968 - accuracy: 0.9645 - val_loss: 0.2351 - val_accuracy: 0.9143
Epoch 51/100
66/66 [==============================] - ETA: 0s - loss: 0.1004 - accuracy: 0.9664
Epoch 00051: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1004 - accuracy: 0.9664 - val_loss: 0.2027 - val_accuracy: 0.9200
Epoch 52/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1025 - accuracy: 0.9644
Epoch 00052: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0987 - accuracy: 0.9652 - val_loss: 0.2171 - val_accuracy: 0.9200
Epoch 53/100
66/66 [==============================] - ETA: 0s - loss: 0.0891 - accuracy: 0.9714
Epoch 00053: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0891 - accuracy: 0.9714 - val_loss: 0.2547 - val_accuracy: 0.9181
Epoch 54/100
66/66 [==============================] - ETA: 0s - loss: 0.0832 - accuracy: 0.9712
Epoch 00054: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0832 - accuracy: 0.9712 - val_loss: 0.2889 - val_accuracy: 0.9067
Epoch 55/100
66/66 [==============================] - ETA: 0s - loss: 0.0997 - accuracy: 0.9681
Epoch 00055: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0997 - accuracy: 0.9681 - val_loss: 0.2628 - val_accuracy: 0.9143
Epoch 56/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0907 - accuracy: 0.9713
Epoch 00056: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0920 - accuracy: 0.9712 - val_loss: 0.2583 - val_accuracy: 0.9181
Epoch 57/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0798 - accuracy: 0.9731
Epoch 00057: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0808 - accuracy: 0.9731 - val_loss: 0.2292 - val_accuracy: 0.9200
Epoch 58/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1142 - accuracy: 0.9631
Epoch 00058: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1136 - accuracy: 0.9633 - val_loss: 0.2380 - val_accuracy: 0.9200
Epoch 59/100
63/66 [===========================>..] - ETA: 0s - loss: 0.0783 - accuracy: 0.9742
Epoch 00059: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0777 - accuracy: 0.9743 - val_loss: 0.2210 - val_accuracy: 0.9276
Epoch 60/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0954 - accuracy: 0.9672
Epoch 00060: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0974 - accuracy: 0.9674 - val_loss: 0.2104 - val_accuracy: 0.9333
Epoch 61/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0909 - accuracy: 0.9680
Epoch 00061: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0884 - accuracy: 0.9688 - val_loss: 0.2142 - val_accuracy: 0.9219
Epoch 62/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0986 - accuracy: 0.9657
Epoch 00062: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0963 - accuracy: 0.9667 - val_loss: 0.2102 - val_accuracy: 0.9314
Epoch 63/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0806 - accuracy: 0.9723
Epoch 00063: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0838 - accuracy: 0.9721 - val_loss: 0.2426 - val_accuracy: 0.9295
Epoch 64/100
62/66 [===========================>..] - ETA: 0s - loss: 0.1080 - accuracy: 0.9662
Epoch 00064: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1049 - accuracy: 0.9664 - val_loss: 0.2050 - val_accuracy: 0.9314
Epoch 65/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0752 - accuracy: 0.9757
Epoch 00065: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0749 - accuracy: 0.9762 - val_loss: 0.2526 - val_accuracy: 0.9257
Epoch 66/100
62/66 [===========================>..] - ETA: 0s - loss: 0.0888 - accuracy: 0.9695
Epoch 00066: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0907 - accuracy: 0.9697 - val_loss: 0.2492 - val_accuracy: 0.9162
Epoch 67/100
66/66 [==============================] - ETA: 0s - loss: 0.0917 - accuracy: 0.9731
Epoch 00067: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0917 - accuracy: 0.9731 - val_loss: 0.3068 - val_accuracy: 0.9029
Epoch 68/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1023 - accuracy: 0.9680
Epoch 00068: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0979 - accuracy: 0.9683 - val_loss: 0.2963 - val_accuracy: 0.9010
Epoch 69/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0910 - accuracy: 0.9718
Epoch 00069: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0884 - accuracy: 0.9726 - val_loss: 0.2597 - val_accuracy: 0.9181
Epoch 70/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0860 - accuracy: 0.9695
Epoch 00070: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0849 - accuracy: 0.9707 - val_loss: 0.2506 - val_accuracy: 0.9200
Epoch 71/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0895 - accuracy: 0.9741
Epoch 00071: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0908 - accuracy: 0.9738 - val_loss: 0.2314 - val_accuracy: 0.9181
Epoch 72/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0705 - accuracy: 0.9762
Epoch 00072: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0711 - accuracy: 0.9762 - val_loss: 0.2590 - val_accuracy: 0.9257
Epoch 73/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0874 - accuracy: 0.9695
Epoch 00073: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0901 - accuracy: 0.9690 - val_loss: 0.2313 - val_accuracy: 0.9257
Epoch 74/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1109 - accuracy: 0.9695
Epoch 00074: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1068 - accuracy: 0.9705 - val_loss: 0.2185 - val_accuracy: 0.9314
Epoch 75/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0829 - accuracy: 0.9705
Epoch 00075: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0843 - accuracy: 0.9702 - val_loss: 0.2301 - val_accuracy: 0.9162
Epoch 76/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0858 - accuracy: 0.9723
Epoch 00076: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0847 - accuracy: 0.9731 - val_loss: 0.2077 - val_accuracy: 0.9238
Epoch 77/100
66/66 [==============================] - ETA: 0s - loss: 0.0856 - accuracy: 0.9702
Epoch 00077: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0856 - accuracy: 0.9702 - val_loss: 0.2249 - val_accuracy: 0.9181
Epoch 78/100
62/66 [===========================>..] - ETA: 0s - loss: 0.0817 - accuracy: 0.9710
Epoch 00078: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0815 - accuracy: 0.9709 - val_loss: 0.2719 - val_accuracy: 0.9162
Epoch 79/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0889 - accuracy: 0.9708
Epoch 00079: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0889 - accuracy: 0.9702 - val_loss: 0.2582 - val_accuracy: 0.9124
Epoch 80/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1066 - accuracy: 0.9675
Epoch 00080: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.1047 - accuracy: 0.9683 - val_loss: 0.2157 - val_accuracy: 0.9219
Epoch 81/100
66/66 [==============================] - ETA: 0s - loss: 0.0783 - accuracy: 0.9745
Epoch 00081: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0783 - accuracy: 0.9745 - val_loss: 0.2437 - val_accuracy: 0.9219
Epoch 82/100
60/66 [==========================>...] - ETA: 0s - loss: 0.0836 - accuracy: 0.9745
Epoch 00082: val_loss did not improve from 0.18763
66/66 [==============================] - 1s 10ms/step - loss: 0.0842 - accuracy: 0.9745 - val_loss: 0.2439 - val_accuracy: 0.9181
Epoch 83/100
60/66 [==========================>...] - ETA: 0s - loss: 0.0758 - accuracy: 0.9771
Epoch 00083: val_loss improved from 0.18763 to 0.18145, saving model to PlantClassifier.h5
66/66 [==============================] - 1s 14ms/step - loss: 0.0743 - accuracy: 0.9774 - val_loss: 0.1815 - val_accuracy: 0.9314
Epoch 84/100
62/66 [===========================>..] - ETA: 0s - loss: 0.0692 - accuracy: 0.9768
Epoch 00084: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0722 - accuracy: 0.9757 - val_loss: 0.2284 - val_accuracy: 0.9333
Epoch 85/100
66/66 [==============================] - ETA: 0s - loss: 0.0720 - accuracy: 0.9771
Epoch 00085: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0720 - accuracy: 0.9771 - val_loss: 0.2206 - val_accuracy: 0.9181
Epoch 86/100
65/66 [============================>.] - ETA: 0s - loss: 0.0746 - accuracy: 0.9750
Epoch 00086: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0753 - accuracy: 0.9747 - val_loss: 0.2646 - val_accuracy: 0.9162
Epoch 87/100
62/66 [===========================>..] - ETA: 0s - loss: 0.0884 - accuracy: 0.9728
Epoch 00087: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0897 - accuracy: 0.9721 - val_loss: 0.2671 - val_accuracy: 0.9105
Epoch 88/100
62/66 [===========================>..] - ETA: 0s - loss: 0.0923 - accuracy: 0.9718
Epoch 00088: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0947 - accuracy: 0.9717 - val_loss: 0.2327 - val_accuracy: 0.9124
Epoch 89/100
61/66 [==========================>...] - ETA: 0s - loss: 0.1077 - accuracy: 0.9623
Epoch 00089: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.1066 - accuracy: 0.9633 - val_loss: 0.2379 - val_accuracy: 0.9219
Epoch 90/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0719 - accuracy: 0.9772
Epoch 00090: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0722 - accuracy: 0.9774 - val_loss: 0.2016 - val_accuracy: 0.9333
Epoch 91/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0629 - accuracy: 0.9803
Epoch 00091: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0633 - accuracy: 0.9798 - val_loss: 0.2081 - val_accuracy: 0.9238
Epoch 92/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0863 - accuracy: 0.9716
Epoch 00092: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0848 - accuracy: 0.9719 - val_loss: 0.2036 - val_accuracy: 0.9181
Epoch 93/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0717 - accuracy: 0.9780
Epoch 00093: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0745 - accuracy: 0.9771 - val_loss: 0.2034 - val_accuracy: 0.9200
Epoch 94/100
62/66 [===========================>..] - ETA: 0s - loss: 0.0891 - accuracy: 0.9700
Epoch 00094: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0894 - accuracy: 0.9705 - val_loss: 0.1955 - val_accuracy: 0.9295
Epoch 95/100
66/66 [==============================] - ETA: 0s - loss: 0.0621 - accuracy: 0.9793
Epoch 00095: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0621 - accuracy: 0.9793 - val_loss: 0.2141 - val_accuracy: 0.9295
Epoch 96/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0546 - accuracy: 0.9818
Epoch 00096: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0535 - accuracy: 0.9817 - val_loss: 0.2351 - val_accuracy: 0.9276
Epoch 97/100
66/66 [==============================] - ETA: 0s - loss: 0.0598 - accuracy: 0.9807
Epoch 00097: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0598 - accuracy: 0.9807 - val_loss: 0.2117 - val_accuracy: 0.9124
Epoch 98/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0742 - accuracy: 0.9759
Epoch 00098: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0740 - accuracy: 0.9759 - val_loss: 0.2465 - val_accuracy: 0.9257
Epoch 99/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0755 - accuracy: 0.9782
Epoch 00099: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0715 - accuracy: 0.9788 - val_loss: 0.2244 - val_accuracy: 0.9276
Epoch 100/100
61/66 [==========================>...] - ETA: 0s - loss: 0.0693 - accuracy: 0.9790
Epoch 00100: val_loss did not improve from 0.18145
66/66 [==============================] - 1s 10ms/step - loss: 0.0672 - accuracy: 0.9793 - val_loss: 0.1868 - val_accuracy: 0.9333
Training finished.
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
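###Markdown
The optimizer warning above is typically emitted when a saved `.h5` model is reloaded. As a hedged sketch (the checkpoint path is taken from the training log above; the optimizer and loss below are placeholders, not necessarily the ones used in this notebook), the best checkpoint can be reloaded and recompiled explicitly:
###Code
import tensorflow as tf

# reload the best checkpoint saved by ModelCheckpoint during training;
# compile=False skips restoring the optimizer state, so recompile before any further training
best_model = tf.keras.models.load_model('PlantClassifier.h5', compile=False)
best_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])  # placeholder settings
###Output
_____no_output_____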
###Markdown
Evaluating the performance of the classifier
###Code
model.evaluate(X_test, Y_test)
###Output
17/17 [==============================] - 0s 3ms/step - loss: 0.3144 - accuracy: 0.9333
###Markdown
Prediction
###Code
pred = model.predict_classes(X_test)
predictions= pd.DataFrame({'prediction': pred, 'target': target_test, 'label': labels_test})
#order columns
predictions = predictions[['prediction', 'target', 'label']]
predictions.head()
###Output
WARNING:tensorflow:From <ipython-input-44-ceab7f49c6db>:1: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.
Instructions for updating:
Please use instead:
* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).
* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).
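###Markdown
Since `predict_classes` is deprecated, the same result can be obtained with the replacement suggested in the warning above. A minimal sketch, assuming (as the notice describes for multi-class models) a softmax last layer:
###Code
import numpy as np

# equivalent of model.predict_classes(X_test) for a multi-class (softmax) model
pred = np.argmax(model.predict(X_test), axis=-1)
###Output
_____no_output_____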
###Markdown
Accuracy and Loss Plots
###Code
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15,10)
plt.plot(results.history['accuracy'])
plt.plot(results.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['Accuracy', 'Validation Accuracy'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(results.history['loss'])
plt.plot(results.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['Loss', 'Validation Loss'], loc='upper left')
plt.show()
###Output
_____no_output_____
|
inprogress/.ipynb_checkpoints/ML_Med_pancreas-checkpoint.ipynb
|
###Markdown
Pancreas (patho)PhysiologyThe pancreas sits at the center of one of the most common diseases we deal with: diabetes.In this notebook we'll use the pancreas and diabetes to illustrate the usefulness of machine learning. Diabetes ExampleScience, and medicine, cares about whether and how variables relate to each other.Let's use a **diabetes example**.In assessing whether a patient has diabetes we can focus on two variables: their Hemoglobin A1c ($A$) and whether they have diabetes ($D$).Based off of scientific understanding and data-driven studies, we know there's a link between the two.If the A1c is high, they're very likely to have diabetes. If it's low, they're not as likely to have diabetes.If a new patient walks in and you measure a high A1c $A=10$ you know almost 100\% that $D = \text{yes}$. Imports
###Code
import numpy as np
import networkx as nx
import scipy.signal as sig
import matplotlib.pyplot as plt
###Output
_____no_output_____
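###Markdown
As a toy illustration of the diabetes example above, the A1c-to-diabetes relationship can be sketched as a logistic curve around the conventional diagnostic cutoff of roughly 6.5%. The steepness and cutoff used here are illustrative assumptions, not fitted or clinical values.
###Code
def p_diabetes_given_a1c(a1c, cutoff=6.5, steepness=2.0):
    """Toy P(D = yes | A = a1c): rises sharply around the diagnostic cutoff."""
    return 1.0 / (1.0 + np.exp(-steepness * (a1c - cutoff)))

print(p_diabetes_given_a1c(10.0))  # ~0.999: a patient with A1c = 10 is almost certainly diabetic
print(p_diabetes_given_a1c(5.0))   # ~0.05: a low A1c makes diabetes unlikely under this toy model
###Output
_____no_output_____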
###Markdown
Model OverviewThe pancreas consists of two types of cells: $\alpha$ and $\beta$ cells. $\beta$ cells detect glucose in the blood and release insulin into the bloodstream.The insulin then acts on all the cells of the body, telling them to *take up glucose from the blood*. The PancreasWe've got two compartments to the Pancreas: $\alpha$ and $\beta$ cell activity.
###Code
alpha = 0            # alpha-cell activity (defined but not used below)
beta = 0             # beta-cell activity
glucose_blood = 0    # blood glucose level
insulin = 0          # circulating insulin
c,d,e = 1,1,1        # coupling constants between the compartments
beta_dot = c*glucose_blood       # beta-cell activity rises with blood glucose
insulin_dot = d * beta           # active beta cells release insulin
glucose_blood_dot = -e*insulin   # insulin drives glucose uptake out of the blood
k_dot = insulin                  # rate of change of a further (unnamed) compartment driven by insulin
###Output
_____no_output_____
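###Markdown
A minimal forward-Euler sketch of the feedback loop described above (beta cells sense glucose, secrete insulin, and insulin drives glucose uptake). The coupling constants, time step, and initial glucose value are illustrative assumptions, not physiological parameters.
###Code
dt, T = 0.01, 10.0
steps = int(T / dt)
c, d, e = 1.0, 1.0, 1.0
beta, insulin, glucose_blood = 0.0, 0.0, 5.0  # start from an elevated blood glucose
history = np.zeros((steps, 3))
for t in range(steps):
    beta_dot = c * glucose_blood      # beta-cell activity follows blood glucose
    insulin_dot = d * beta            # active beta cells secrete insulin
    glucose_blood_dot = -e * insulin  # insulin promotes glucose uptake from the blood
    beta += dt * beta_dot
    insulin += dt * insulin_dot
    glucose_blood += dt * glucose_blood_dot
    history[t] = (beta, insulin, glucose_blood)
plt.plot(history)
plt.legend(['beta activity', 'insulin', 'blood glucose'])
plt.show()
###Output
_____no_output_____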
|
code-data/dataTransformation-SI.ipynb
|
###Markdown
We start data cleaning by checking the publication years. Our original choice of 2010-2017 remains fundamentally sound here. Next, check document types.
###Code
myvars <- c("AuthorNames", "FoSNames", "Year", "DocType", "DisplayName" , "Doi", "OriginalTitle", "URLs")
data <- block[myvars]
data <- data[data$Year >=2010 & data$Year <=2017,]
data <- data[is.na(data$DocType),]
#write_csv(data, "noTypeOpen.csv")
data <- block2[myvars]
data <- data[data$Year >=2010 & data$Year <=2017,]
data <- data[is.na(data$DocType),]
#write_csv(data, "noTypeRep.csv")
data <- block[myvars]
data <- data[data$Year >=2010 & data$Year <=2017,]
data <- data[!is.na(data$DocType),]
data2 <- data[is.na(data$URLs),]
#write_csv(data2, "noURLOpen.csv")
#write_csv(data, "typeOpen.csv")
data <- block2[myvars]
data <- data[data$Year >=2010 & data$Year <=2017,]
data <- data[!is.na(data$DocType),]
data2 <- data[is.na(data$URLs),]
#write_csv(data2, "noURLRep.csv")
#write_csv(data, "typeRep.csv")
###Output
_____no_output_____
###Markdown
The new data now contains additional document types, including books and book chapters; together with the Patent document type, these are all excluded. The data quality of the remaining document types (journal and conference) is much higher than that of the originally crawled data, so manual cleaning is no longer necessary (check the tables in the depreciated folder). DOI coverage is missing for only 7 records in reproducibility and 1 in open science, which is good news for matching WoS records.
###Code
myvars <- c("PaperId", "AuthorIds", "AuthorIdsOrder", "Year", "DocType", "OriginalTitle", "EstimatedCitation")
data <- block[myvars]
data <- data[data$Year >=2010 & data$Year <=2017,]
data <- data[!is.na(data$DocType),]
data <- data[data$DocType!="Book",]
data <- data[data$DocType!="BookChapter",]
data <- data[data$DocType!="Patent",]
data$authorCount <- str_count(data$AuthorIdsOrder, '; ')+1
data <- data[data$authorCount>1,]
data <- arrange(data, tolower(as.character(OriginalTitle)), EstimatedCitation, Year, DocType) # sort so that, within each set of duplicated titles, the most cited / most recently published journal version appears last (that version is kept below)
titles <- tolower(as.character(data$OriginalTitle))
duplicated1 <- which(duplicated(titles)) #
duplicated2 <- which(duplicated(titles, fromLast=TRUE)) #remove these (keep the last appearing one of each set of duplicates)
duplicated_all <- sort(unique(c(duplicated1,duplicated2)))
duplicated_keep <- setdiff(duplicated_all, duplicated2)
data_duplicated <- data[duplicated_all,]
data_duplicated_keep <- data[duplicated_keep,]
#write.csv(data[duplicated2,], 'duplicated_remove0.csv', row.names=FALSE)
data <- data[-(duplicated2),]
AuthorTable <- data %>% separate(AuthorIdsOrder, into = sprintf('%s.%s', rep('Author',100), rep(1:100)), sep = "; ") #max author has exceeded 90
AuthorTable <- AuthorTable %>% gather(authorOrder, AuthorIdsOrder, into = sprintf('%s.%s', rep('Author',100), rep(1:100)))
AuthorTable <- AuthorTable[!is.na(AuthorTable$AuthorIdsOrder), ]
#AuthorTable
PaperCollab <- pairwise_count(AuthorTable, PaperId, AuthorIdsOrder, sort = TRUE)
openG <- graph_from_data_frame(d = PaperCollab, directed=FALSE, vertices=data)
write_graph(openG, "openRaw.graphml", format="graphml")
n1=vcount(openG)
m1=ecount(openG)/2
n1
m1
myvars <- c("PaperId", "AuthorIds", "AuthorIdsOrder", "Year", "DocType", "OriginalTitle", "EstimatedCitation")
data2 <- block2[myvars]
data2 <- data2[data2$Year >=2010 & data2$Year <=2017,]
data2 <- data2[!is.na(data2$DocType),]
data2 <- data2[data2$DocType!="Book",]
data2 <- data2[data2$DocType!="BookChapter",]
data2 <- data2[data2$DocType!="Patent",]
data2$authorCount <- str_count(data2$AuthorIdsOrder, '; ')+1
data2 <- data2[data2$authorCount>1,]
data2 <- arrange(data2, tolower(as.character(OriginalTitle)), EstimatedCitation, Year, DocType) # sort so that, within each set of duplicated titles, the most cited / most recently published journal version appears last (that version is kept below)
titles <- tolower(as.character(data2$OriginalTitle))
duplicated1 <- which(duplicated(titles)) #
duplicated2 <- which(duplicated(titles, fromLast=TRUE)) #remove these (keep the last appearing one of each set of duplicates)
duplicated_all <- sort(unique(c(duplicated1,duplicated2)))
duplicated_keep <- setdiff(duplicated_all, duplicated2)
data2_duplicated <- data2[duplicated_all,]
data2_duplicated_keep <- data2[duplicated_keep,]
#write.csv(data2[duplicated2,], 'duplicated_remove1.csv', row.names=FALSE)
data2 <- data2[-(duplicated2),]
AuthorTable2 <- data2 %>% separate(AuthorIdsOrder, into = sprintf('%s.%s', rep('Author',100), rep(1:100)), sep = "; ") #max author has exceeded 90
AuthorTable2 <- AuthorTable2 %>% gather(authorOrder, AuthorIdsOrder, into = sprintf('%s.%s', rep('Author',100), rep(1:100)))
AuthorTable2 <- AuthorTable2[!is.na(AuthorTable2$AuthorIdsOrder), ]
#AuthorTable2
PaperCollab2 <- pairwise_count(AuthorTable2, PaperId, AuthorIdsOrder, sort = TRUE)
repG <- graph_from_data_frame(d = PaperCollab2, directed=FALSE, vertices=data2)
write_graph(repG, "reproduceRaw.graphml", "graphml")
n2=vcount(repG)
m2=ecount(repG)/2
n2
m2
###Output
_____no_output_____
###Markdown
After creating the networks, we proceed to conduct network analysis
###Code
nrow(data)+nrow(data2)
AuthorTable %>% summarize(n = n_distinct(AuthorIdsOrder))
AuthorTable2 %>% summarize(n = n_distinct(AuthorIdsOrder))
AuthorTable0 <- dplyr::bind_rows(list(OpenScience=AuthorTable, Reproducibility=AuthorTable2), .id = 'Tag')
AuthorTable0 %>% summarize(n = n_distinct(AuthorIdsOrder))
nrow(block)+nrow(block2)
nrow(block)
nrow(block2)
nrow(block01)
nrow(block02)
###Output
_____no_output_____
###Markdown
Network density measures and Fisher's Exact Test
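For reference, the density computed in the next cell is $\delta = \frac{2m}{n(n-1)}$, i.e. the number of observed co-authorship edges $m$ divided by the $\binom{n}{2}$ possible author pairs. The 2×2 table passed to `fisher.test` contrasts realized versus unrealized author pairs in the two networks.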
###Code
2*m1/n1/(n1-1)
2*m2/n2/(n2-1)
table <- cbind(c(m1,m2),c(n1*(n1-1)/2-m1,n2*(n2-1)/2-m2))
fisher.test(table, alternative="greater")
table <- rbind(c(m1,m2),c(n1*(n1-1)/2-m1,n2*(n2-1)/2-m2))
fisher.test(table, alternative="greater")
n1/452
n2/1364
###Output
_____no_output_____
###Markdown
Network analysis ends with average component sizes (other analyses are done in Gephi). Finally, we merge the two data sets into "newdataCombined.csv" for downstream gender detection. Papers from all years are kept for plotting purposes.
###Code
myvars <- c("PaperId", "AuthorIdsOrder", "AuthorNamesOrder", "FoSNames", "Year", "DocType", "DisplayName", "Publisher", "Doi",
"OriginalTitle", "EstimatedCitation", "URLs", "IndexedAbstract")
data01 <- block[myvars]
data02 <- block2[myvars]
data0 <- dplyr::bind_rows(list(OpenScience=data01, Reproducibility=data02), .id = 'Tag')
#data0 <- data0[data0$Year >=2010 & data0$Year <=2017,]
data0 <- data0[!is.na(data0$DocType),]
data0 <- data0[data0$DocType!="Book",]
data0 <- data0[data0$DocType!="BookChapter",]
data0 <- data0[data0$DocType!="Patent",]
data0 <- arrange(data0, tolower(as.character(OriginalTitle)), EstimatedCitation, Year, DocType) # sort so that, within each set of duplicated titles, the most cited / most recently published journal version appears last (that version is kept below)
titles <- tolower(as.character(data0$OriginalTitle))
duplicated1 <- which(duplicated(titles)) #
duplicated2 <- which(duplicated(titles, fromLast=TRUE)) #remove these (keep the last appearing one of each set of duplicates)
duplicated_all <- sort(unique(c(duplicated1,duplicated2)))
duplicated_keep <- setdiff(duplicated_all, duplicated2)
data0_duplicated <- data0[duplicated_all,]
data0_duplicated_keep <- data0[duplicated_keep,]
write.csv(data0[duplicated2,], 'duplicated_remove.csv', row.names=FALSE)
data0 <- data0[-(duplicated2),]
#data0$ID <- seq.int(nrow(data0))
data0 <- rename(data0, "Journal" = "DisplayName", "Title" = "OriginalTitle")
write_csv(data0, "newdataCombined.csv")
#sum(is.na(data0$IndexedAbstract)&(data0$Tag == "OpenScience"))
#sum(is.na(data0$IndexedAbstract)&(data0$Tag == "Reproducibility"))
233/n1
1704/n2
data9 = read_csv("output/OpenSci3Discipline.csv", col_names = TRUE)
#data9$disc_name
sum((data9$DocType == "Journal")&(data9$Tag == "OpenScience"))
sum((data9$DocType == "Journal")&(data9$Tag == "Reproducibility"))
sum(!is.na(data9$disc_name)&(data9$Tag == "OpenScience")&(data9$DocType == "Conference"))
sum(!is.na(data9$disc_name)&(data9$Tag == "Reproducibility")&(data9$DocType == "Conference"))
sum(!is.na(data9$disc_name)&(data9$Tag == "OpenScience")&(data9$DocType == "Journal"))
sum(!is.na(data9$disc_name)&(data9$Tag == "Reproducibility")&(data9$DocType == "Journal"))
dataO <- data9[data9$Tag == "OpenScience",]
dataR <- data9[data9$Tag == "Reproducibility",]
nrow(filter(dataO, grepl("workflow", FoSNames, fixed = TRUE)))
nrow(filter(dataR, grepl("workflow", FoSNames, fixed = TRUE)))
nrow(filter(dataO, grepl("repeatability", FoSNames, fixed = TRUE)))
nrow(filter(dataR, grepl("repeatability", FoSNames, fixed = TRUE)))
nrow(filter(dataO, grepl("data sharing", FoSNames, fixed = TRUE)))
nrow(filter(dataR, grepl("data sharing", FoSNames, fixed = TRUE)))
nrow(filter(dataO, grepl("open data", FoSNames, fixed = TRUE)))
nrow(filter(dataR, grepl("open data", FoSNames, fixed = TRUE)))
nrow(filter(dataO, grepl("software", FoSNames, fixed = TRUE)))
nrow(filter(dataR, grepl("software", FoSNames, fixed = TRUE)))
###Output
_____no_output_____
###Markdown
Counting papers for supplementary information
###Code
block01 = read_tsv("input/Open-old.csv", col_names = TRUE, quote = "\"")
block02 = read_tsv("input/Reproduce-old.csv", col_names = TRUE, quote = "\"")
myvars <- c("PaperId", "AuthorIds", "AuthorNames", "FoSNames", "Year", "DocType", "DisplayName", "Publisher", "Doi",
"OriginalTitle", "EstimatedCitation", "IndexedAbstract")
data01 <- block01[myvars]
data02 <- block02[myvars]
AuthorTable <- data01 %>% separate(AuthorIds, into = sprintf('%s.%s', rep('Author',100), rep(1:100)), sep = "; ") #max author has exceeded 90
AuthorTable <- AuthorTable %>% gather(authorOrder, AuthorIds, into = sprintf('%s.%s', rep('Author',100), rep(1:100)))
AuthorTable <- AuthorTable[!is.na(AuthorTable$AuthorIds), ]
AuthorTable2 <- data02 %>% separate(AuthorIds, into = sprintf('%s.%s', rep('Author',100), rep(1:100)), sep = "; ") #max author has exceeded 90
AuthorTable2 <- AuthorTable2 %>% gather(authorOrder, AuthorIds, into = sprintf('%s.%s', rep('Author',100), rep(1:100)))
AuthorTable2 <- AuthorTable2[!is.na(AuthorTable2$AuthorIds), ]
AuthorTable %>% summarize(n = n_distinct(AuthorIds))
AuthorTable2 %>% summarize(n = n_distinct(AuthorIds))
AuthorTable0 <- dplyr::bind_rows(list(OpenScience=AuthorTable, Reproducibility=AuthorTable2), .id = 'Tag')
AuthorTable0 %>% summarize(n = n_distinct(AuthorIds))
length(intersect(AuthorTable$AuthorIds, AuthorTable2$AuthorIds))
nrow(block01)-68
nrow(block02)-68
nrow(block02)+nrow(block01)-68-68
###Output
_____no_output_____
|
Machine Learning/Course files/mean_median_mode/.ipynb_checkpoints/meanmedianmode-checkpoint.ipynb
|
###Markdown
Mean, Median, Mode, and introducing NumPy Mean vs. Median Let's create some fake income data, centered around 27,000 with a normal distribution and standard deviation of 15,000, with 10,000 data points. (We'll discuss those terms more later, if you're not familiar with them.)Then, compute the mean (average) - it should be close to 27,000:
###Code
import numpy as np
incomes = np.random.normal(27000, 15000, 10000)
print(incomes)
np.mean(incomes)
###Output
_____no_output_____
###Markdown
We can segment the income data into 50 buckets, and plot it as a histogram:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()
###Output
_____no_output_____
###Markdown
Now compute the median - since we have a nice, even distribution it too should be close to 27,000:
###Code
np.median(incomes)
###Output
_____no_output_____
###Markdown
Now we'll add Donald Trump into the mix. Darn income inequality!
###Code
incomes = np.append(incomes, [1000000000])
###Output
10002
###Markdown
The median won't change much, but the mean does:
###Code
np.median(incomes)
np.mean(incomes)
###Output
_____no_output_____
###Markdown
Mode Next, let's generate some fake age data for 500 people:
###Code
ages = np.random.randint(18, high=90, size=500)
ages
from scipy import stats
stats.mode(ages)
###Output
_____no_output_____
|
nbs/utils/utils.matrix.ipynb
|
###Markdown
Tests
###Code
def test_generate_rating_matrix(num_users=3, num_items=8, n=2, neg_case=False):
"""
Tests the `generate_rating_matrix` method
"""
user_seq = [
[0,2,1,4],
[1,2,5,7],
[0,7,4,4,6,1]
]
if neg_case:
user_seq[0,2] = -1
result = generate_rating_matrix(user_seq, num_users, num_items, n)
return result.todense().astype('int32')
output = test_generate_rating_matrix(num_users=3, num_items=8)
expected = np.array([[1, 0, 1, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 2, 0, 0, 1]])
test_eq(output, expected)
output = test_generate_rating_matrix(num_users=4, num_items=8)
expected = np.array([[1, 0, 1, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 2, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0]])
test_eq(output, expected)
test_fail(lambda: test_generate_rating_matrix(num_users=4, num_items=8, neg_case=True),
msg='list indices must be integers or slices, not tuple')
test_generate_rating_matrix(num_users=2, num_items=8, neg_case=False)
test_fail(lambda: test_generate_rating_matrix(num_users=3, num_items=5, neg_case=True),
msg='column index exceeds matrix dimensions')
#hide
!pip install -q watermark
%reload_ext watermark
%watermark -a "Sparsh A." -m -iv -u -t -d
###Output
Author: Sparsh A.
Last updated: 2021-12-18 06:58:48
Compiler : GCC 7.5.0
OS : Linux
Release : 5.4.104+
Machine : x86_64
Processor : x86_64
CPU cores : 2
Architecture: 64bit
pandas : 1.1.5
numpy : 1.19.5
IPython: 5.5.0
###Markdown
Matrix> Implementation of utilities for transforming data into matrix formats.
###Code
#hide
from nbdev.showdoc import *
from fastcore.nb_imports import *
from fastcore.test import *
#export
import numpy as np
from scipy.sparse import csr_matrix
###Output
_____no_output_____
###Markdown
generate rating matrix
###Code
#export
def generate_rating_matrix(user_seq, num_users, num_items, n):
"""
Converts user-items sequences into a sparse rating matrix
Args:
user_seq (list): a list of list where each inner list is a sequence of items for a user
num_users (int): number of users
num_items (int): number of items
n (int): number of items to ignore from the last for each inner list, for valid/test samples
Returns:
csr_matrix: user item rating matrix
"""
row = []
col = []
data = []
for user_id, item_list in enumerate(user_seq):
for item in item_list[:-n]: #
row.append(user_id)
col.append(item)
data.append(1)
row = np.array(row)
col = np.array(col)
data = np.array(data)
return csr_matrix((data, (row, col)), shape=(num_users, num_items))
###Output
_____no_output_____
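###Markdown
A small usage sketch with made-up sequences (illustrative only; it mirrors the behaviour exercised in the tests above):
###Code
toy_seq = [[0, 2, 1, 4], [1, 2, 5, 7]]
# with n=2, the last two items of each sequence are held out (e.g. for validation/test)
toy_matrix = generate_rating_matrix(toy_seq, num_users=2, num_items=8, n=2)
print(toy_matrix.todense())
###Output
_____no_output_____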
|
MusicVAE_with_Led_Zeppelin_songs.ipynb
|
###Markdown
Copyright 2017 Google LLC.Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttps://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music. ___Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, and Douglas Eck___[MusicVAE](https://g.co/magenta/music-vae) learns a latent space of musical scores, providing different modesof interactive musical creation, including:* Random sampling from the prior distribution.* Interpolation between existing sequences.* Manipulation of existing sequences via attribute vectors.Examples of these interactions can be generated below, and selections can be heard in our[YouTube playlist](https://www.youtube.com/playlist?list=PLBUMAYA6kvGU8Cgqh709o5SUvo-zHGTxr).For short sequences (e.g., 2-bar "loops"), we use a bidirectional LSTM encoderand LSTM decoder. For longer sequences, we use a novel hierarchical LSTMdecoder, which helps the model learn longer-term structures.We also model the interdependencies between instruments by training multipledecoders on the lowest-level embeddings of the hierarchical decoder.For additional details, check out our [blog post](https://g.co/magenta/music-vae) and [paper](https://goo.gl/magenta/musicvae-paper).___This colab notebook is self-contained and should run natively on google cloud. The [code](https://github.com/tensorflow/magenta/tree/master/magenta/models/music_vae) and [checkpoints](http://download.magenta.tensorflow.org/models/music_vae/checkpoints.tar.gz) can be downloaded separately and run locally, which is required if you want to train your own model. Basic Instructions1. Double click on the hidden cells to make them visible, or select "View > Expand Sections" in the menu at the top.2. Hover over the "`[ ]`" in the top-left corner of each cell and click on the "Play" button to run it, in order.3. Listen to the generated samples.4. Make it your own: copy the notebook, modify the code, train your own models, upload your own MIDI, etc.! Environment SetupIncludes package installation for sequence synthesis. Will take a few minutes.
###Code
#@title Setup Environment { display-mode: "both" }
#@test {"output": "ignore"}
import glob
print 'Copying checkpoints and example MIDI from GCS. This will take a few minutes...'
!gsutil -q -m cp -R gs://download.magenta.tensorflow.org/models/music_vae/colab2/* /content/
print 'Installing dependencies...'
!apt-get update -qq && apt-get install -qq libfluidsynth1 fluid-soundfont-gm build-essential libasound2-dev libjack-dev
!pip install -q pyfluidsynth
!pip install -qU magenta
# Hack to allow python to pick up the newly-installed fluidsynth lib.
# This is only needed for the hosted Colab environment.
import ctypes.util
orig_ctypes_util_find_library = ctypes.util.find_library
def proxy_find_library(lib):
if lib == 'fluidsynth':
return 'libfluidsynth.so.1'
else:
return orig_ctypes_util_find_library(lib)
ctypes.util.find_library = proxy_find_library
print 'Importing libraries and defining some helper functions...'
from google.colab import files
import magenta.music as mm
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel
import numpy as np
import os
import tensorflow as tf
# Necessary until pyfluidsynth is updated (>1.2.5).
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
def play(note_sequence):
mm.play_sequence(note_sequence, synth=mm.fluidsynth)
def interpolate(model, start_seq, end_seq, num_steps, max_length=32,
assert_same_length=True, temperature=0.5,
individual_duration=4.0):
"""Interpolates between a start and end sequence."""
note_sequences = model.interpolate(
start_seq, end_seq,num_steps=num_steps, length=max_length,
temperature=temperature,
assert_same_length=assert_same_length)
print 'Start Seq Reconstruction'
play(note_sequences[0])
print 'End Seq Reconstruction'
play(note_sequences[-1])
print 'Mean Sequence'
play(note_sequences[num_steps // 2])
print 'Start -> End Interpolation'
interp_seq = mm.sequences_lib.concatenate_sequences(
note_sequences, [individual_duration] * len(note_sequences))
play(interp_seq)
mm.plot_sequence(interp_seq)
return interp_seq if num_steps > 3 else note_sequences[num_steps // 2]
def download(note_sequence, filename):
mm.sequence_proto_to_midi_file(note_sequence, filename)
files.download(filename)
print 'Done'
###Output
Copying checkpoints and example MIDI from GCS. This will take a few minutes...
Installing dependencies...
Selecting previously unselected package fluid-soundfont-gm.
(Reading database ... 131284 files and directories currently installed.)
Preparing to unpack .../fluid-soundfont-gm_3.1-5.1_all.deb ...
Unpacking fluid-soundfont-gm (3.1-5.1) ...
Selecting previously unselected package libfluidsynth1:amd64.
Preparing to unpack .../libfluidsynth1_1.1.9-1_amd64.deb ...
Unpacking libfluidsynth1:amd64 (1.1.9-1) ...
Setting up fluid-soundfont-gm (3.1-5.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up libfluidsynth1:amd64 (1.1.9-1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
100% |████████████████████████████████| 1.4MB 10.2MB/s
100% |████████████████████████████████| 2.5MB 9.1MB/s
100% |████████████████████████████████| 204kB 22.8MB/s
100% |████████████████████████████████| 2.3MB 9.1MB/s
100% |████████████████████████████████| 51kB 18.6MB/s
100% |████████████████████████████████| 11.6MB 2.6MB/s
100% |████████████████████████████████| 1.1MB 14.1MB/s
100% |████████████████████████████████| 133kB 29.2MB/s
100% |████████████████████████████████| 81kB 19.6MB/s
100% |████████████████████████████████| 808kB 18.3MB/s
Building wheel for pygtrie (setup.py) ... done
Building wheel for sonnet (setup.py) ... done
Building wheel for python-rtmidi (setup.py) ... done
Building wheel for avro (setup.py) ... done
Building wheel for hdfs (setup.py) ... done
Building wheel for pyvcf (setup.py) ... done
Building wheel for pydot (setup.py) ... done
Building wheel for oauth2client (setup.py) ... done
Building wheel for networkx (setup.py) ... done
Building wheel for docopt (setup.py) ... done
Importing libraries and defining some helper functions...
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
Done
###Markdown
My Model trained on Led Zeppelin song MIDIs The first order of business is to upload the checkpoint; I have numerous checkpoints for a single model, based on the iteration number.
###Code
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
ls
led_zep_models = {}
led_zep_config = configs.CONFIG_MAP['cat-mel_2bar_small']
led_zep_models['617'] = TrainedModel(led_zep_config, batch_size=4, checkpoint_dir_or_path='model.ckpt-617')
#@title Generate 4 samples from the prior of one of the models listed above.
ledzep_sample_model = "617" #@param ["617"]
temperature = 1 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
ledzep_samples = led_zep_models[ledzep_sample_model].sample(n=4, length=32, temperature=temperature)
for ns in ledzep_samples:
play(ns)
###Output
_____no_output_____
###Markdown
2-Bar Drums ModelBelow are 4 pre-trained models to experiment with. The first 3 map the 61 MIDI drum "pitches" to a reduced set of 9 classes (bass, snare, closed hi-hat, open hi-hat, low tom, mid tom, high tom, crash cymbal, ride cymbal) for a simplified but less expressive output space. The last model uses a [NADE](http://homepages.inf.ed.ac.uk/imurray2/pub/11nade/) to represent all possible MIDI drum "pitches".* **drums_2bar_oh_lokl**: This *low* KL model was trained for more *realistic* sampling. The output is a one-hot encoding of 2^9 combinations of hits. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM decoder with 256 nodes in each layer, and a Z with 256 dimensions. During training it was given 0 free bits, and had a fixed beta value of 0.8. After 300k steps, the final accuracy is 0.73 and KL divergence is 11 bits.* **drums_2bar_oh_hikl**: This *high* KL model was trained for *better reconstruction and interpolation*. The output is a one-hot encoding of 2^9 combinations of hits. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM decoder with 256 nodes in each layer, and a Z with 256 dimensions. During training it was given 96 free bits and had a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k, steps the final accuracy is 0.97 and KL divergence is 107 bits.* **drums_2bar_nade_reduced**: This model outputs a multi-label "pianoroll" with 9 classes. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM-NADE decoder with 512 nodes in each layer and 9-dimensional NADE with 128 hidden units, and a Z with 256 dimensions. During training it was given 96 free bits and has a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k steps, the final accuracy is 0.98 and KL divergence is 110 bits.* **drums_2bar_nade_full**: The output is a multi-label "pianoroll" with 61 classes. A single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM-NADE decoder with 512 nodes in each layer and 61-dimensional NADE with 128 hidden units, and a Z with 256 dimensions. During training it was given 0 free bits and has a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k steps, the final accuracy is 0.90 and KL divergence is 116 bits.
###Code
#@title Load Pretrained Models { display-mode: "both" }
drums_models = {}
# One-hot encoded.
drums_config = configs.CONFIG_MAP['cat-drums_2bar_small']
drums_models['drums_2bar_oh_lokl'] = TrainedModel(drums_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_small.lokl.ckpt')
drums_models['drums_2bar_oh_hikl'] = TrainedModel(drums_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_small.hikl.ckpt')
# Multi-label NADE.
drums_nade_reduced_config = configs.CONFIG_MAP['nade-drums_2bar_reduced']
drums_models['drums_2bar_nade_reduced'] = TrainedModel(drums_nade_reduced_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_nade.reduced.ckpt')
drums_nade_full_config = configs.CONFIG_MAP['nade-drums_2bar_full']
drums_models['drums_2bar_nade_full'] = TrainedModel(drums_nade_full_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/drums_2bar_nade.full.ckpt')
###Output
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, CategoricalLstmDecoder, and hparams:
{'grad_clip': 1.0, 'z_size': 256, 'decay_rate': 0.9999, 'dec_rnn_size': [256, 256], 'free_bits': 48, 'use_cudnn': False, 'residual_encoder': False, 'grad_norm_clip_to_zero': 10000, 'learning_rate': 0.001, 'conditional': True, 'batch_size': 4, 'min_learning_rate': 1e-05, 'sampling_schedule': 'inverse_sigmoid', 'max_seq_len': 32, 'residual_decoder': False, 'beta_rate': 0.0, 'enc_rnn_size': [512], 'sampling_rate': 1000, 'max_beta': 0.2, 'dropout_keep_prob': 1.0, 'clip_mode': 'global_norm'}
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [512]
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/magenta/models/music_vae/lstm_utils.py:44: __init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.
INFO:tensorflow:
Decoder Cells:
units: [256, 256]
WARNING:tensorflow:Setting non-training sampling schedule from inverse_sigmoid:1000.000000 to constant:1.0.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/magenta/models/music_vae/lstm_utils.py:201: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/magenta/models/music_vae/lstm_utils.py:155: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow_probability/python/distributions/onehot_categorical.py:172: multinomial (from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.random.categorical instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/contrib/rnn/python/ops/rnn.py:233: bidirectional_dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py:443: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py:626: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from /content/checkpoints/drums_2bar_small.lokl.ckpt
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, CategoricalLstmDecoder, and hparams:
{'grad_clip': 1.0, 'z_size': 256, 'decay_rate': 0.9999, 'dec_rnn_size': [256, 256], 'free_bits': 48, 'use_cudnn': False, 'residual_encoder': False, 'grad_norm_clip_to_zero': 10000, 'learning_rate': 0.001, 'conditional': True, 'batch_size': 4, 'min_learning_rate': 1e-05, 'sampling_schedule': 'inverse_sigmoid', 'max_seq_len': 32, 'residual_decoder': False, 'beta_rate': 0.0, 'enc_rnn_size': [512], 'sampling_rate': 1000, 'max_beta': 0.2, 'dropout_keep_prob': 1.0, 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [512]
INFO:tensorflow:
Decoder Cells:
units: [256, 256]
WARNING:tensorflow:Setting non-training sampling schedule from inverse_sigmoid:1000.000000 to constant:1.0.
INFO:tensorflow:Restoring parameters from /content/checkpoints/drums_2bar_small.hikl.ckpt
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, MultiLabelRnnNadeDecoder, and hparams:
{'grad_clip': 1.0, 'free_bits': 48, 'grad_norm_clip_to_zero': 10000, 'conditional': True, 'min_learning_rate': 1e-05, 'max_beta': 0.2, 'use_cudnn': False, 'nade_num_hidden': 128, 'dropout_keep_prob': 1.0, 'max_seq_len': 32, 'beta_rate': 0.0, 'sampling_rate': 1000, 'z_size': 256, 'residual_encoder': False, 'learning_rate': 0.001, 'batch_size': 4, 'decay_rate': 0.9999, 'enc_rnn_size': [1024], 'sampling_schedule': 'inverse_sigmoid', 'residual_decoder': False, 'dec_rnn_size': [512, 512], 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [1024]
INFO:tensorflow:
Decoder Cells:
units: [512, 512]
WARNING:tensorflow:Setting non-training sampling schedule from inverse_sigmoid:1000.000000 to constant:1.0.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/magenta/common/nade.py:241: calling squeeze (from tensorflow.python.ops.array_ops) with squeeze_dims is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
INFO:tensorflow:Restoring parameters from /content/checkpoints/drums_2bar_nade.reduced.ckpt
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, MultiLabelRnnNadeDecoder, and hparams:
{'grad_clip': 1.0, 'free_bits': 48, 'grad_norm_clip_to_zero': 10000, 'conditional': True, 'min_learning_rate': 1e-05, 'max_beta': 0.2, 'use_cudnn': False, 'nade_num_hidden': 128, 'dropout_keep_prob': 1.0, 'max_seq_len': 32, 'beta_rate': 0.0, 'sampling_rate': 1000, 'z_size': 256, 'residual_encoder': False, 'learning_rate': 0.001, 'batch_size': 4, 'decay_rate': 0.9999, 'enc_rnn_size': [1024], 'sampling_schedule': 'inverse_sigmoid', 'residual_decoder': False, 'dec_rnn_size': [512, 512], 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [1024]
INFO:tensorflow:
Decoder Cells:
units: [512, 512]
WARNING:tensorflow:Setting non-training sampling schedule from inverse_sigmoid:1000.000000 to constant:1.0.
INFO:tensorflow:Restoring parameters from /content/checkpoints/drums_2bar_nade.full.ckpt
###Markdown
Generate Samples
###Code
#@title Generate 4 samples from the prior of one of the models listed above.
drums_sample_model = "drums_2bar_nade_full" #@param ["drums_2bar_oh_lokl", "drums_2bar_oh_hikl", "drums_2bar_nade_reduced", "drums_2bar_nade_full"]
temperature = 0.4 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
drums_samples = drums_models[drums_sample_model].sample(n=4, length=32, temperature=temperature)
for ns in drums_samples:
play(ns)
#@title Optionally download generated MIDI samples.
for i, ns in enumerate(drums_samples):
download(ns, '%s_sample_%d.mid' % (drums_sample_model, i))
###Output
_____no_output_____
###Markdown
Generate Interpolations
###Code
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_drums_midi_data = [
tf.gfile.Open(fn).read()
for fn in sorted(tf.gfile.Glob('/content/midi/drums_2bar*.mid'))]
#@title Option 2: upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_drums_midi_data = files.upload().values() or input_drums_midi_data
#@title Extract drums from MIDI files. This will extract all unique 2-bar drum beats using a sliding window with a stride of 1 bar.
drums_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_drums_midi_data]
extracted_beats = []
for ns in drums_input_seqs:
extracted_beats.extend(drums_nade_full_config.data_converter.to_notesequences(
drums_nade_full_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_beats):
print "Beat", i
play(ns)
#@title Interpolate between 2 beats, selected from those in the previous cell.
drums_interp_model = "drums_2bar_oh_hikl" #@param ["drums_2bar_oh_lokl", "drums_2bar_oh_hikl", "drums_2bar_nade_reduced", "drums_2bar_nade_full"]
start_beat = 0 #@param {type:"integer"}
end_beat = 1 #@param {type:"integer"}
start_beat = extracted_beats[start_beat]
end_beat = extracted_beats[end_beat]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
num_steps = 13 #@param {type:"integer"}
drums_interp = interpolate(drums_models[drums_interp_model], start_beat, end_beat, num_steps=num_steps, temperature=temperature)
#@title Optionally download interpolation MIDI file.
download(drums_interp, '%s_interp.mid' % drums_interp_model)
###Output
_____no_output_____
###Markdown
2-Bar Melody Model. The pre-trained model consists of a single-layer bidirectional LSTM encoder with 2048 nodes in each direction, a 3-layer LSTM decoder with 2048 nodes in each layer, and Z with 512 dimensions. The model was given 0 free bits, and had its beta value annealed at an exponential rate of 0.99999 from 0 to 0.43 over 200k steps. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. The final accuracy is 0.95 and KL divergence is 58 bits.
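The exponential beta annealing described above can be sketched numerically. The snippet below is our own hedged illustration of one schedule consistent with the `beta_rate` and `max_beta` hyperparameters shown in the model-loading output below; it is not code taken from MusicVAE, and the helper name `annealed_beta` is hypothetical.
```
# Hypothetical sketch: beta rises from 0 toward max_beta as training proceeds.
def annealed_beta(step, max_beta=0.5, beta_rate=0.99999):
    return (1.0 - beta_rate ** step) * max_beta

for step in (0, 100000, 200000):
    print("step %d -> beta %.3f" % (step, annealed_beta(step)))
# With these settings beta reaches roughly 0.43 after 200k steps, matching the description above.
```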
###Code
#@title Load the pre-trained model.
mel_2bar_config = configs.CONFIG_MAP['cat-mel_2bar_big']
mel_2bar = TrainedModel(mel_2bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/mel_2bar_big.ckpt')
###Output
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, CategoricalLstmDecoder, and hparams:
{'grad_clip': 1.0, 'z_size': 512, 'decay_rate': 0.9999, 'dec_rnn_size': [2048, 2048, 2048], 'free_bits': 0, 'use_cudnn': False, 'residual_encoder': False, 'grad_norm_clip_to_zero': 10000, 'learning_rate': 0.001, 'conditional': True, 'batch_size': 4, 'min_learning_rate': 1e-05, 'sampling_schedule': 'inverse_sigmoid', 'max_seq_len': 32, 'residual_decoder': False, 'beta_rate': 0.99999, 'enc_rnn_size': [2048], 'sampling_rate': 1000, 'max_beta': 0.5, 'dropout_keep_prob': 1.0, 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [2048]
INFO:tensorflow:
Decoder Cells:
units: [2048, 2048, 2048]
WARNING:tensorflow:Setting non-training sampling schedule from inverse_sigmoid:1000.000000 to constant:1.0.
INFO:tensorflow:Restoring parameters from /content/checkpoints/mel_2bar_big.ckpt
###Markdown
Generate Samples
###Code
#@title Generate 4 samples from the prior.
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
mel_2_samples = mel_2bar.sample(n=4, length=32, temperature=temperature)
for ns in mel_2_samples:
play(ns)
#@title Optionally download samples.
for i, ns in enumerate(mel_2_samples):
download(ns, 'mel_2bar_sample_%d.mid' % i)
###Output
_____no_output_____
###Markdown
Generate Interpolations
###Code
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_mel_midi_data = [
tf.gfile.Open(fn).read()
for fn in sorted(tf.gfile.Glob('/content/midi/mel_2bar*.mid'))]
#@title Option 2: Upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_mel_midi_data = files.upload().values() or input_mel_midi_data
#@title Extract melodies from MIDI files. This will extract all unique 2-bar melodies using a sliding window with a stride of 1 bar.
mel_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_mel_midi_data]
extracted_mels = []
for ns in mel_input_seqs:
extracted_mels.extend(
mel_2bar_config.data_converter.to_notesequences(
mel_2bar_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_mels):
print "Melody", i
play(ns)
#@title Interpolate between 2 melodies, selected from those in the previous cell.
start_melody = 0 #@param {type:"integer"}
end_melody = 1 #@param {type:"integer"}
start_mel = extracted_mels[start_melody]
end_mel = extracted_mels[end_melody]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
num_steps = 13 #@param {type:"integer"}
mel_2bar_interp = interpolate(mel_2bar, start_mel, end_mel, num_steps=num_steps, temperature=temperature)
#@title Optionally download interpolation MIDI file.
download(mel_2bar_interp, 'mel_2bar_interp.mid')
###Output
_____no_output_____
###Markdown
16-bar Melody Models. The pre-trained hierarchical model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 16-step 2-layer LSTM "conductor" decoder with 1024 nodes in each layer, a 2-layer LSTM core decoder with 1024 nodes in each layer, and a Z with 512 dimensions. It was given 256 free bits, and had a fixed beta value of 0.2. After 25k steps, the final accuracy is 0.90 and KL divergence is 277 bits.
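As a rough, hypothetical sketch of the hierarchical decoding idea (not the notebook's actual TensorFlow implementation): the conductor unrolls once per bar, turning the single latent vector z into one embedding per bar, and a core decoder then expands each embedding into that bar's steps. The random matrices below are placeholders standing in for the real LSTMs, and all names here are ours.
```
import numpy as np

Z_SIZE, NUM_BARS, STEPS_PER_BAR, EMB = 512, 16, 16, 64

rng = np.random.RandomState(0)
W_in = rng.normal(size=(Z_SIZE, EMB)) * 0.01    # placeholder for the conductor input weights
W_state = rng.normal(size=(EMB, EMB)) * 0.01    # placeholder for the conductor recurrence
W_core = rng.normal(size=(EMB, STEPS_PER_BAR))  # placeholder for the per-bar core decoder

def conductor(z):
    # Unroll once per bar, emitting one embedding per bar, all conditioned on the same z.
    h, embeddings = np.zeros(EMB), []
    for _ in range(NUM_BARS):
        h = np.tanh(np.dot(h, W_state) + np.dot(z, W_in))
        embeddings.append(h)
    return embeddings

def decode_bar(embedding):
    # Expand a single bar embedding into that bar's per-step outputs.
    return np.dot(embedding, W_core)

z = rng.normal(size=Z_SIZE)
bars = [decode_bar(e) for e in conductor(z)]
print("%d bars x %d steps = a %d-step sequence" % (len(bars), bars[0].shape[0], len(bars) * bars[0].shape[0]))
```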
###Code
#@title Load the pre-trained models.
mel_16bar_models = {}
hierdec_mel_16bar_config = configs.CONFIG_MAP['hierdec-mel_16bar']
mel_16bar_models['hierdec_mel_16bar'] = TrainedModel(hierdec_mel_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/mel_16bar_hierdec.ckpt')
flat_mel_16bar_config = configs.CONFIG_MAP['flat-mel_16bar']
mel_16bar_models['baseline_flat_mel_16bar'] = TrainedModel(flat_mel_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/mel_16bar_flat.ckpt')
###Output
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, HierarchicalLstmDecoder, and hparams:
{'grad_clip': 1.0, 'z_size': 512, 'decay_rate': 0.9999, 'dec_rnn_size': [1024, 1024], 'free_bits': 256, 'use_cudnn': False, 'residual_encoder': False, 'grad_norm_clip_to_zero': 10000, 'learning_rate': 0.001, 'conditional': True, 'batch_size': 4, 'min_learning_rate': 1e-05, 'sampling_schedule': 'constant', 'max_seq_len': 256, 'residual_decoder': False, 'beta_rate': 0.0, 'enc_rnn_size': [2048, 2048], 'sampling_rate': 0.0, 'max_beta': 0.2, 'dropout_keep_prob': 1.0, 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [2048, 2048]
INFO:tensorflow:
Hierarchical Decoder:
input length: 256
level output lengths: [16, 16]
INFO:tensorflow:
Decoder Cells:
units: [1024, 1024]
INFO:tensorflow:Restoring parameters from /content/checkpoints/mel_16bar_hierdec.ckpt
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, CategoricalLstmDecoder, and hparams:
{'grad_clip': 1.0, 'z_size': 512, 'decay_rate': 0.9999, 'dec_rnn_size': [2048, 2048, 2048], 'free_bits': 256, 'use_cudnn': False, 'residual_encoder': False, 'grad_norm_clip_to_zero': 10000, 'learning_rate': 0.001, 'conditional': True, 'batch_size': 4, 'min_learning_rate': 1e-05, 'sampling_schedule': 'constant', 'max_seq_len': 256, 'residual_decoder': False, 'beta_rate': 0.0, 'enc_rnn_size': [2048, 2048], 'sampling_rate': 0.0, 'max_beta': 0.2, 'dropout_keep_prob': 1.0, 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [2048, 2048]
INFO:tensorflow:
Decoder Cells:
units: [2048, 2048, 2048]
INFO:tensorflow:Restoring parameters from /content/checkpoints/mel_16bar_flat.ckpt
###Markdown
Generate Samples
###Code
#@title Generate 4 samples from the selected model prior.
mel_sample_model = "hierdec_mel_16bar" #@param ["hierdec_mel_16bar", "baseline_flat_mel_16bar"]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
mel_16_samples = mel_16bar_models[mel_sample_model].sample(n=4, length=256, temperature=temperature)
for ns in mel_16_samples:
play(ns)
#@title Optionally download MIDI samples.
for i, ns in enumerate(mel_16_samples):
download(ns, '%s_sample_%d.mid' % (mel_sample_model, i))
###Output
_____no_output_____
###Markdown
Generate Means
###Code
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_mel_16_midi_data = [
tf.gfile.Open(fn).read()
for fn in sorted(tf.gfile.Glob('/content/midi/mel_16bar*.mid'))]
#@title Option 2: upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_mel_16_midi_data = files.upload().values() or input_mel_16_midi_data
#@title Extract melodies from MIDI files. This will extract all unique 16-bar melodies using a sliding window with a stride of 1 bar.
mel_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_mel_16_midi_data]
extracted_16_mels = []
for ns in mel_input_seqs:
extracted_16_mels.extend(
hierdec_mel_16bar_config.data_converter.to_notesequences(
hierdec_mel_16bar_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_16_mels):
print "Melody", i
play(ns)
#@title Compute the reconstructions and mean of the two melodies, selected from the previous cell.
mel_interp_model = "hierdec_mel_16bar" #@param ["hierdec_mel_16bar", "baseline_flat_mel_16bar"]
start_melody = 0 #@param {type:"integer"}
end_melody = 1 #@param {type:"integer"}
start_mel = extracted_16_mels[start_melody]
end_mel = extracted_16_mels[end_melody]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
mel_16bar_mean = interpolate(mel_16bar_models[mel_interp_model], start_mel, end_mel, num_steps=3, max_length=256, individual_duration=32, temperature=temperature)
#@title Optionally download mean MIDI file.
download(mel_16bar_mean, '%s_mean.mid' % mel_interp_model)
###Output
_____no_output_____
###Markdown
16-bar "Trio" Models (lead, bass, drums). We present two pre-trained models for 16-bar trios: a hierarchical model and a flat (baseline) model. The pre-trained hierarchical model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 16-step 2-layer LSTM "conductor" decoder with 1024 nodes in each layer, 3 (lead, bass, drums) 2-layer LSTM core decoders with 1024 nodes in each layer, and a Z with 512 dimensions. It was given 1024 free bits, and had a fixed beta value of 0.1. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 50k steps, the final accuracy is 0.82 for lead, 0.87 for bass, and 0.90 for drums, and the KL divergence is 1027 bits. The pre-trained flat model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 3-layer LSTM decoder with 2048 nodes in each layer, and a Z with 512 dimensions. It was given 1024 free bits, and had a fixed beta value of 0.1. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 50k steps, the final accuracy is 0.67 for lead, 0.66 for bass, and 0.79 for drums, and the KL divergence is 1016 bits.
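The "free bits" quoted throughout these descriptions can be illustrated with a small numeric sketch. This is our own hedged illustration of the usual hinge formulation (KL divergence under the free-bit budget is not penalized); it is not code extracted from MusicVAE, and the helper name `kl_cost_with_free_bits` is hypothetical.
```
import math

def kl_cost_with_free_bits(kl_bits, free_bits, beta):
    # Only the part of the KL divergence above the free-bit budget enters the loss (hinge).
    ln2 = math.log(2.0)
    return beta * max(kl_bits * ln2 - free_bits * ln2, 0.0)

# Toy numbers: with 1024 free bits and beta = 0.1, a 1016-bit KL contributes nothing,
# while a 1200-bit KL is penalized only for the 176 bits above the budget.
for kl_bits in (1016, 1200):
    print("KL of %d bits -> weighted KL loss %.3f nats" % (kl_bits, kl_cost_with_free_bits(kl_bits, 1024, 0.1)))
```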
###Code
#@title Load the pre-trained models.
trio_models = {}
hierdec_trio_16bar_config = configs.CONFIG_MAP['hierdec-trio_16bar']
trio_models['hierdec_trio_16bar'] = TrainedModel(hierdec_trio_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/trio_16bar_hierdec.ckpt')
flat_trio_16bar_config = configs.CONFIG_MAP['flat-trio_16bar']
trio_models['baseline_flat_trio_16bar'] = TrainedModel(flat_trio_16bar_config, batch_size=4, checkpoint_dir_or_path='/content/checkpoints/trio_16bar_flat.ckpt')
###Output
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, HierarchicalLstmDecoder, and hparams:
{'grad_clip': 1.0, 'z_size': 512, 'decay_rate': 0.9999, 'dec_rnn_size': [1024, 1024], 'free_bits': 256, 'use_cudnn': False, 'residual_encoder': False, 'grad_norm_clip_to_zero': 10000, 'learning_rate': 0.001, 'conditional': True, 'batch_size': 4, 'min_learning_rate': 1e-05, 'sampling_schedule': 'constant', 'max_seq_len': 256, 'residual_decoder': False, 'beta_rate': 0.0, 'enc_rnn_size': [2048, 2048], 'sampling_rate': 0.0, 'max_beta': 0.2, 'dropout_keep_prob': 1.0, 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [2048, 2048]
INFO:tensorflow:
Hierarchical Decoder:
input length: 256
level output lengths: [16, 16]
INFO:tensorflow:
Decoder Cells:
units: [1024, 1024]
INFO:tensorflow:
Decoder Cells:
units: [1024, 1024]
INFO:tensorflow:
Decoder Cells:
units: [1024, 1024]
INFO:tensorflow:Restoring parameters from /content/checkpoints/trio_16bar_hierdec.ckpt
INFO:tensorflow:Building MusicVAE model with BidirectionalLstmEncoder, MultiOutCategoricalLstmDecoder, and hparams:
{'grad_clip': 1.0, 'z_size': 512, 'decay_rate': 0.9999, 'dec_rnn_size': [2048, 2048, 2048], 'free_bits': 0.0, 'use_cudnn': False, 'residual_encoder': False, 'grad_norm_clip_to_zero': 10000, 'learning_rate': 0.001, 'conditional': True, 'batch_size': 4, 'min_learning_rate': 1e-05, 'sampling_schedule': 'constant', 'max_seq_len': 256, 'residual_decoder': False, 'beta_rate': 0.0, 'enc_rnn_size': [2048, 2048], 'sampling_rate': 0.0, 'max_beta': 1.0, 'dropout_keep_prob': 1.0, 'clip_mode': 'global_norm'}
INFO:tensorflow:
Encoder Cells (bidirectional):
units: [2048, 2048]
INFO:tensorflow:
Decoder Cells:
units: [2048, 2048, 2048]
INFO:tensorflow:Restoring parameters from /content/checkpoints/trio_16bar_flat.ckpt
###Markdown
Generate Samples
###Code
#@title Generate 4 samples from the selected model prior.
trio_sample_model = "hierdec_trio_16bar" #@param ["hierdec_trio_16bar", "baseline_flat_trio_16bar"]
temperature = 0.3 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
trio_16_samples = trio_models[trio_sample_model].sample(n=4, length=256, temperature=temperature)
for ns in trio_16_samples:
play(ns)
#@title Optionally download MIDI samples.
for i, ns in enumerate(trio_16_samples):
download(ns, '%s_sample_%d.mid' % (trio_sample_model, i))
###Output
_____no_output_____
###Markdown
Generate Means
###Code
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_trio_midi_data = [
tf.gfile.Open(fn).read()
for fn in sorted(tf.gfile.Glob('/content/midi/trio_16bar*.mid'))]
#@title Option 2: Upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_trio_midi_data = files.upload().values() or input_trio_midi_data
#@title Extract trios from MIDI files. This will extract all unique 16-bar trios using a sliding window with a stride of 1 bar.
trio_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_trio_midi_data]
extracted_trios = []
for ns in trio_input_seqs:
extracted_trios.extend(
hierdec_trio_16bar_config.data_converter.to_notesequences(
hierdec_trio_16bar_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_trios):
print "Trio", i
play(ns)
#@title Compute the reconstructions and mean of the two trios, selected from the previous cell.
trio_interp_model = "hierdec_trio_16bar" #@param ["hierdec_trio_16bar", "baseline_flat_trio_16bar"]
start_trio = 0 #@param {type:"integer"}
end_trio = 1 #@param {type:"integer"}
start_trio = extracted_trios[start_trio]
end_trio = extracted_trios[end_trio]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
trio_16bar_mean = interpolate(trio_models[trio_interp_model], start_trio, end_trio, num_steps=3, max_length=256, individual_duration=32, temperature=temperature)
#@title Optionally download mean MIDI file.
download(trio_16bar_mean, '%s_mean.mid' % trio_interp_model)
###Output
_____no_output_____
|
lp-manager-jupyter-frontend.ipynb
|
###Markdown
LP Workout Manager author = ['Matt Guan'] Inputs
###Code
from io import StringIO
import sqlite3
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
import numpy as np
import pandas as pd
from workout_maker import WorkoutMaker
from functions.db_funcs import get_db_con, create_db
###Output
_____no_output_____
###Markdown
__Create Tables if they Don't Exist__
###Code
con = get_db_con(db='workout.db')
create_db(con)
import datetime
from functions.dt_funcs import now
import json
###Output
_____no_output_____
###Markdown
__initialize module__
###Code
workout = WorkoutMaker(con, 'mguan', '[email protected]')
###Output
Welcome back mguan!
###Markdown
__First Run Only__ Enter your workout inputs here: * `accessory` : pandas df containing accessory workout weights * `prog` : dict containing the monthly progression weight for each main workout * `orm` : dict containing the one-rep-max weight for each main workout. Note: You only need to run this part if it is your first run OR you want to update something.
###Code
s = '''
me_name ae_name ae_weight sets reps
deadlift Rows 100 4 10
deadlift Pullups 0 4 10
deadlift Curls 30 4 10
deadlift Back Extension 225 4 10
squat Leg Press 290 4 10
squat Lying Leg Curl 85 4 10
squat Calf Raise 225 4 10
squat Hip Abd/Add-uction 250 4 10
bench Dips 0 4 10
bench Incline D.Bell Press 45 4 10
bench Cable Tricep Ext. 42.5 4 10
bench Machine Pec Fly 100 4 10
ohp Sitting D.Bell Press 50 5 5
ohp Cable Rear Delt Fly 25 4 10
ohp Machine Rear Delt Fly 70 4 10
ohp Machine Lat Raise 60 4 10
'''
accessory = pd.read_csv(StringIO(s), sep='\t')
prog = {'squat':2.5 , 'deadlift':2.5, 'ohp':0, 'bench':2.5}
orm = {'squat':240 , 'deadlift':290, 'ohp':127.5, 'bench':182.5}
workout.set_accessory(accessory)
workout.set_dim_prog(prog)
workout.set_one_rep_max(orm)
###Output
2018-06-29 16:03:31,826 | INFO | duplicate entry deleted
2018-06-29 16:03:31,828 | INFO | new entry loaded
2018-06-29 16:03:31,846 | INFO | dict is valid - one_rep_max overwitten
###Markdown
Pull Workout
###Code
workout.run()
###Output
2018-06-29 16:03:40,308 | INFO | using current orm - use self.set_one_rep_max if you wish to modify it
2018-06-29 16:03:40,427 | INFO | workout saved to C:/Users/Matt/Dropbox/lp-workout/mguan-lp-workout.html
###Markdown
View ORM Progression
###Code
workout.viz_orm()
###Output
_____no_output_____
###Markdown
Extras __Check out some of the intermediate Output__
###Code
workout.get_accessory().head(3)
orm = workout.get_orm()
orm
from functions.workout_funcs import get_workout
from functions.db_funcs import retrieve_json
orm_dict = retrieve_json(orm, 'orm_dict')
orm_dict
get_workout(orm_dict, [1])
###Output
_____no_output_____
###Markdown
__Check out the Database__
###Code
schema = pd.read_sql('SELECT name FROM sqlite_master WHERE type="table";', con)
schema
pd.read_sql('select * from dim_user', con)
pd.read_sql('select * from one_rep_max', con)
pd.read_sql('select * from dim_progression', con)
###Output
_____no_output_____
|
Course I/Алгоритмы Python/Part2/семинары/pract6/task1/task.ipynb
|
###Markdown
Task 1. Georgy Demenchuk, PI19-4. Implement the set-based and tape (array) representations of the binary tree shown in Fig. 1. Software implementation 
###Code
from tree_module import BinaryTree
def main():
    # Level 1 element (root)
r = BinaryTree('8')
    # Level 2 elements
r_1 = r.insert_left('4')
r_2 = r.insert_right('12')
    # Level 3 elements
r_11 = r_1.insert_left('2')
r_12 = r_1.insert_right('6')
r_21 = r_2.insert_left('10')
r_22 = r_2.insert_right('14')
    # Add level 4 elements
r_11.insert_left('1')
r_11.insert_right('3')
r_12.insert_left('5')
r_12.insert_right('7')
r_21.insert_left('9')
r_21.insert_right('11')
r_22.insert_left('13')
r_22.insert_right('15')
    print("Constructed tree:")
print(r)
if __name__ == "__main__":
main()
###Output
Constructed tree:
______8________
/ \
__4__ ____12___
/ \ / \
2 6 10 _14
/ \ / \ / \ / \
1 3 5 7 9 11 13 15
|
notebooks/ethics/raw/ex5.ipynb
|
###Markdown
In the tutorial, you learned how to use model cards. In this exercise, you'll sharpen your understanding of model cards by engaging with them in a couple of scenarios. Introduction. Run the next code cell to get started.
###Code
from learntools.core import binder
binder.bind(globals())
from learntools.ethics.ex5 import *
###Output
_____no_output_____
###Markdown
Scenario A. You are the creator of the _Simple Zoom_ video editing tool, which uses AI to automatically zoom the video camera in on a presenter as they walk across a room during a presentation. You are launching the _Simple Zoom_ tool and releasing a model card along with the tool, in the interest of transparency. 1) Audience. Which audiences should you write the model card for? Once you have thought about your answer, check it against the solution below.
###Code
# Check your answer (Run this code cell to receive credit!)
q_1.check()
###Output
_____no_output_____
###Markdown
Scenario B. You are the product manager for _Presenter Pro_, a popular video and audio recording product for people delivering talks and presentations. As a new feature based on customer demand, your team has been planning to add the AI-powered ability for a single video camera to automatically track a presenter, focusing on them as they walk across the room or stage, zooming in and out automatically and continuously adjusting lighting and focus within the video frame. You are hoping to incorporate a different company’s AI tool (called _Simple Zoom_) into your product (_Presenter Pro_). To determine whether _Simple Zoom_ is a good fit for _Presenter Pro_, you are reviewing *Simple Zoom*’s model card. 2) Intended Use. The **Intended Use** section of the model card includes the following bullets: * _Simple Zoom_ is intended to be used for automatic zoom, focus, and lighting adjustment in the real-time video recording of individual presenters by a single camera * _Simple Zoom_ is not suitable for presentations in which there is more than one presenter or for presentations in which the presenter is partially or fully hidden at any time. As a member of the team evaluating _Simple Zoom_ for potential integration into _Presenter Pro_, you are aware that _Presenter Pro_ only supports one presenter per video. However, you are also aware that some _Presenter Pro_ customers use large props in their presentation videos. Given the information in the Intended Use section of the model card, what problem do you foresee for these customers if you integrate _Simple Zoom_ into _Presenter Pro_? What are some ways in which you could address this issue?
###Code
# Run this code cell to receive a hint
q_2.hint()
# Check your answer (Run this code cell to receive credit!)
q_2.check()
###Output
_____no_output_____
###Markdown
3) Factors, Evaluation Data, Metrics, and Quantitative Analyses. We'll continue with **Scenario B**, where you are the product manager for _Presenter Pro_. Four more sections of the model card for _Simple Zoom_ are described below. **Factors**: The model card lists the following factors as potentially relevant to model performance: * Group Factors * Self-reported gender * Skin tone * Self-reported age * Other Factors * Camera angle * Presenter distance from camera * Camera type * Lighting **Evaluation Data**: To generate the performance metrics reported in the **Quantitative Analysis** section (discussed below), the _Simple Zoom_ team used an evaluation data set of 500 presentation videos, each between two and five minutes long. The videos included both regular and unusual presentation and recording scenarios and included presenters from various demographic backgrounds. **Metrics**: Since _Simple Zoom_ model performance is subjective (involving questions like whether a zoom is of appropriate speed or smoothness; or whether a lighting adjustment is well-executed), the _Simple Zoom_ team tested the tool’s performance by asking a diverse viewer group to view _Simple Zoom_’s output videos (using the evaluation dataset’s 500 videos as inputs). Each viewer was asked to rate the quality of video editing for each video on a scale of 1 to 10, and each video’s average rating was used as a proxy for _Simple Zoom_’s performance on that video. **Quantitative Analyses**: The quantitative analyses section of the model card includes a brief summary of performance results. According to the summary, the model generally performs equally well across all the listed demographic groups (gender, skin tone and age). The quantitative analyses section also includes interactive graphs, which allow you to view the performance of the _Simple Zoom_ tool by each factor and by intersections of ‘Group’ and ‘Other’ factors. As a member of the team evaluating _Simple Zoom_ for potential integration into _Presenter Pro_, what are some questions you might be interested in answering and exploring via the interactive graphs?
###Code
# Check your answer (Run this code cell to receive credit!)
q_3.check()
###Output
_____no_output_____
|
notebooks_good_bad_and_ugly/2_worst_notebook_ever.ipynb
|
###Markdown
Normal bugs (opposed to the abnormal ones)
###Code
import pandas as pd
df = pd.read_csv('zoo.data', header=None)
df.columns
df1 = df.drop(17, axis=1)
df1[13] = df1[13] == 4
df1 = df1[df1[13] == True]
print(len(df1))
df1
###Output
_____no_output_____
###Markdown
Order!!!
###Code
arr = np.array([1, 2, 3])
import numpy as np
arr.mean()
###Output
_____no_output_____
###Markdown
I have no idea what environment I used
###Code
from gensim.models import Word2Vec
from sklearn.model_selection import train_test_split
!nvidia-smi
###Output
/bin/sh: 1: nvidia-smi: not found
###Markdown
It's reproducible, ok
###Code
import csv
# load the data
data = []
with open('/home/georgi/dev/data/question_classification/clean.tsv', 'r') as rf:
csv_reader = csv.reader(rf, delimiter='\t')
for item in csv_reader:
data.append(item)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, random_state=42)
w2v_path = "/mnt/r2d2/large_models/uncased_gigaword_w2v/word2vec.model"
w2v = Word2Vec.load(w2v_path)
###Output
_____no_output_____
###Markdown
It's also easy to modify
###Code
from keras.models import Sequential
from keras import layers
from keras.preprocessing.text import Tokenizer
EMBEDDING_SIZE = 300 # Is it really ?
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(sentences_train)
X_train = tokenizer.texts_to_sequences(X_train)
X_val = tokenizer.texts_to_sequences(X_val)
vocab_size = len(tokenizer.word_index) + 1 # Actually the only good line in this notebook
X_train = pad_sequences(X_train, padding='post', maxlen=50)
X_val = pad_sequences(X_val, padding='post', maxlen=50)
model = Sequential()
model.add(layers.Embedding(input_dim=vocab_size,
output_dim=EMBEDDING_SIZE,
input_length=50))
model.add(layers.LSTM(100))
model.add(layers.Dense(1000, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, verbose=False, validation_data=(X_val, y_val), batch_size=10)
###Output
_____no_output_____
|
notebook/test_FeatureGenerate.ipynb
|
###Markdown
Feature generation test 1. Import the modules
###Code
import sys
sys.path.append('../src/')
import pointcloud as pc
import validpoint as vp
import matplotlib.pyplot as plt
import featuregenerate as fg
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Test
###Code
tst=pc.PointCloud() # instantiate tst as a PointCloud object
tst.ReadFromBinFile('../data/test.pcd') # read the pcd file
vldpc=vp.ValidPoints(tst,640,640,5,-5,60) # instantiate the ValidPoints object vldpc
vldpc.GetValid() # call GetValid to extract the ROI point cloud data
fgg=fg.FeatureGenerator() # instantiate fgg as a FeatureGenerator
outblob=fgg.Generate(vldpc) # call Generate to extract the per-channel features; returns a numpy array
print(outblob.shape)
###Output
(1, 8, 640, 640)
###Markdown
3. Visualize each channel 3.1 Maximum and mean height per cell
###Code
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.imshow(outblob[0,0])
plt.title("max_height_data")
plt.colorbar()
plt.subplot(1,2,2)
plt.imshow(outblob[0,1])
plt.title("mean_height_data")
plt.colorbar()
###Output
_____no_output_____
###Markdown
3.2 Distance from the origin and direction angle
###Code
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.imshow(outblob[0,3])
plt.title("direction_data")
plt.colorbar()
plt.subplot(1,2,2)
plt.imshow(outblob[0,6])
plt.title("distance_data")
plt.colorbar()
###Output
_____no_output_____
###Markdown
3.3 Maximum and mean reflection intensity per cell
###Code
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.imshow(outblob[0,4])
plt.title("top_intensity_data")
plt.colorbar()
plt.subplot(1,2,2)
plt.imshow(outblob[0,5])
plt.title("mean_intensity_data")
plt.colorbar()
###Output
_____no_output_____
###Markdown
3.4 Point count per cell and occupancy mask
###Code
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.imshow(outblob[0,2])
plt.title("Points in each grid")
plt.colorbar()
plt.subplot(1,2,2)
plt.imshow(outblob[0,7])
plt.title("Whether the grid is occupied")
plt.colorbar()
###Output
_____no_output_____
|
blogs/nexrad2/visualize/radardata.ipynb
|
###Markdown
Reading NEXRAD Level II data from Google Cloud public datasets. This notebook demonstrates how to use PyART to visualize data from the Google Cloud public dataset.
###Code
%%bash
rm -rf data
mkdir data
cd data
RADAR=KIWA
YEAR=2013
MONTH=07
DAY=23
HOUR=23
gsutil cp gs://gcp-public-data-nexrad-l2/$YEAR/$MONTH/$DAY/$RADAR/*_${RADAR}_${YEAR}${MONTH}${DAY}${HOUR}0000_${YEAR}${MONTH}${DAY}${HOUR}5959.tar temp.tar
tar xvf temp.tar
rm *.tar
ls
###Output
_____no_output_____
###Markdown
Install Py-ART See https://github.com/ARM-DOE/pyart/wiki/Simple-Install-of-Py-ART-using-Anaconda Plot volume scans using Py-ART within Jupyter
###Code
# Based on
# http://arm-doe.github.io/pyart/dev/auto_examples/plotting/plot_nexrad_multiple_moments.html
# by Jonathan J. Helmus ([email protected])
import matplotlib.pyplot as plt
import pyart
def plot_data(infilename):
radar = pyart.io.read_nexrad_archive(infilename)
display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(221)
display.plot('velocity', 1, ax=ax, title='Doppler Velocity',
colorbar_label='',
axislabels=('', 'North South distance from radar (km)'))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(222)
display.plot('reflectivity', 0, ax=ax,
title='Reflectivity lowest', colorbar_label='',
axislabels=('', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(223)
display.plot('reflectivity', 1, ax=ax,
title='Reflectivity second', colorbar_label='')
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(224)
display.plot('cross_correlation_ratio', 0, ax=ax,
title='Correlation Coefficient', colorbar_label='',
axislabels=('East West distance from radar (km)', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
Plot into png
###Code
%%writefile plot_pngs.py
import matplotlib.pyplot as plt
import pyart
def plot_data(infilename, outpng):
radar = pyart.io.read_nexrad_archive(infilename)
display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(221)
display.plot('velocity', 1, ax=ax, title='Doppler Velocity',
colorbar_label='',
axislabels=('', 'North South distance from radar (km)'))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(222)
display.plot('reflectivity', 0, ax=ax,
title='Reflectivity lowest', colorbar_label='',
axislabels=('', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(223)
display.plot('reflectivity', 1, ax=ax,
title='Reflectivity second', colorbar_label='')
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(224)
display.plot('cross_correlation_ratio', 0, ax=ax,
title='Correlation Coefficient', colorbar_label='',
axislabels=('East West distance from radar (km)', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
fig.savefig(outpng)
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='plot some radar data')
parser.add_argument('nexrad', help="volume scan filename")
parser.add_argument('png', help="output png filename")
args = parser.parse_args()
print "Plotting {} into {}".format(args.nexrad, args.png)
plot_data(args.nexrad, args.png)
%%bash
python plot_pngs.py data/KIWA20130723_235451_V06.gz radarplot.png
###Output
_____no_output_____
###Markdown
Create animating PNG
###Code
%%bash
rm -rf images
mkdir images
for volumefile in $(ls data); do
base=$(basename $volumefile)
python plot_pngs.py data/$volumefile images/$base.png
done
###Output
_____no_output_____
|
face_recognition_yale.ipynb
|
###Markdown
Implementation of face recognition using neural net
###Code
%matplotlib inline
import cv2
import numpy as np
import os
from skimage import io
from sklearn.cross_validation import train_test_split
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report,accuracy_score
from sklearn.neural_network import MLPClassifier
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras import metrics
###Output
_____no_output_____
###Markdown
Listing the path of all the images
###Code
DatasetPath = []
for i in os.listdir("yalefaces"):
DatasetPath.append(os.path.join("yalefaces", i))
###Output
_____no_output_____
###Markdown
Reading each image and assigning respective labels
###Code
imageData = []
imageLabels = []
for i in DatasetPath:
imgRead = io.imread(i,as_grey=True)
imageData.append(imgRead)
labelRead = int(os.path.split(i)[1].split(".")[0].replace("subject", "")) - 1
imageLabels.append(labelRead)
###Output
_____no_output_____
###Markdown
Preprocessing: Face Detection using OpenCV and cropping the image to a size of 150 * 150
###Code
faceDetectClassifier = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
imageDataFin = []
for i in imageData:
facePoints = faceDetectClassifier.detectMultiScale(i)
x,y = facePoints[0][:2]
cropped = i[y: y + 150, x: x + 150]
imageDataFin.append(cropped)
c = np.array(imageDataFin)
c.shape
###Output
_____no_output_____
###Markdown
Splitting Dataset into train and test
###Code
X_train, X_test, y_train, y_test = train_test_split(np.array(imageDataFin),np.array(imageLabels), train_size=0.5, random_state = 20)
X_train = np.array(X_train)
X_test = np.array(X_test)
X_train.shape
X_test.shape
nb_classes = 15
y_train = np.array(y_train)
y_test = np.array(y_test)
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
###Output
_____no_output_____
###Markdown
Converting each 2d image into 1D vector
###Code
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1]*X_train.shape[2])
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1]*X_test.shape[2])
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# normalize the data
X_train /= 255
X_test /= 255
###Output
_____no_output_____
###Markdown
Preprocessing -PCA
###Code
computed_pca = PCA(n_components = 20,whiten=True).fit(X_train)
XTr_pca = computed_pca.transform(X_train)
print("Plot of amount of variance explained vs pcs")
plt.plot(range(len(computed_pca.explained_variance_)),np.cumsum(computed_pca.explained_variance_ratio_))
plt.show()
XTs_pca = computed_pca.transform(X_test)
print("Training PCA shape",XTr_pca.shape)
print("Test PCA shape",XTs_pca.shape)
def plot_eigenfaces(images, h, w, rows=5, cols=4):
plt.figure()
for i in range(rows * cols):
plt.subplot(rows, cols, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.xticks(())
plt.yticks(())
plot_eigenfaces(computed_pca.components_,150,150)
print("Eigen Faces")
print("Original Training matrix shape", X_train.shape)
print("Original Testing matrix shape", X_test.shape)
###Output
('Original Training matrix shape', (82, 22500))
('Original Testing matrix shape', (83, 22500))
###Markdown
Defining the model
###Code
model = Sequential()
model.add(Dense(512,input_shape=(XTr_pca.shape[1],)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=[metrics.mae,metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Training
###Code
model.fit(XTr_pca, Y_train, batch_size=64, epochs=50, verbose=1, validation_data=(XTs_pca, Y_test))
###Output
Train on 82 samples, validate on 83 samples
Epoch 1/50
82/82 [==============================] - 1s 8ms/step - loss: 2.7087 - mean_absolute_error: 0.1242 - categorical_accuracy: 0.0854 - val_loss: 2.4871 - val_mean_absolute_error: 0.1220 - val_categorical_accuracy: 0.4337
Epoch 2/50
82/82 [==============================] - 0s 303us/step - loss: 2.2800 - mean_absolute_error: 0.1191 - categorical_accuracy: 0.6098 - val_loss: 2.2995 - val_mean_absolute_error: 0.1191 - val_categorical_accuracy: 0.6024
Epoch 3/50
82/82 [==============================] - 0s 280us/step - loss: 1.9458 - mean_absolute_error: 0.1130 - categorical_accuracy: 0.7805 - val_loss: 2.1263 - val_mean_absolute_error: 0.1155 - val_categorical_accuracy: 0.6024
Epoch 4/50
82/82 [==============================] - 0s 295us/step - loss: 1.6055 - mean_absolute_error: 0.1046 - categorical_accuracy: 0.9268 - val_loss: 1.9518 - val_mean_absolute_error: 0.1107 - val_categorical_accuracy: 0.6145
Epoch 5/50
82/82 [==============================] - 0s 243us/step - loss: 1.3512 - mean_absolute_error: 0.0948 - categorical_accuracy: 0.9146 - val_loss: 1.7630 - val_mean_absolute_error: 0.1047 - val_categorical_accuracy: 0.6386
Epoch 6/50
82/82 [==============================] - 0s 263us/step - loss: 1.0628 - mean_absolute_error: 0.0818 - categorical_accuracy: 0.9146 - val_loss: 1.5787 - val_mean_absolute_error: 0.0975 - val_categorical_accuracy: 0.6627
Epoch 7/50
82/82 [==============================] - 0s 233us/step - loss: 0.7722 - mean_absolute_error: 0.0651 - categorical_accuracy: 0.9512 - val_loss: 1.4048 - val_mean_absolute_error: 0.0895 - val_categorical_accuracy: 0.6747
Epoch 8/50
82/82 [==============================] - 0s 223us/step - loss: 0.5998 - mean_absolute_error: 0.0530 - categorical_accuracy: 0.9634 - val_loss: 1.2571 - val_mean_absolute_error: 0.0816 - val_categorical_accuracy: 0.6988
Epoch 9/50
82/82 [==============================] - 0s 274us/step - loss: 0.4439 - mean_absolute_error: 0.0417 - categorical_accuracy: 0.9756 - val_loss: 1.1312 - val_mean_absolute_error: 0.0742 - val_categorical_accuracy: 0.7108
Epoch 10/50
82/82 [==============================] - 0s 224us/step - loss: 0.3447 - mean_absolute_error: 0.0331 - categorical_accuracy: 0.9878 - val_loss: 1.0202 - val_mean_absolute_error: 0.0675 - val_categorical_accuracy: 0.7108
Epoch 11/50
82/82 [==============================] - 0s 243us/step - loss: 0.2699 - mean_absolute_error: 0.0267 - categorical_accuracy: 1.0000 - val_loss: 0.9152 - val_mean_absolute_error: 0.0615 - val_categorical_accuracy: 0.7229
Epoch 12/50
82/82 [==============================] - 0s 315us/step - loss: 0.2214 - mean_absolute_error: 0.0222 - categorical_accuracy: 0.9878 - val_loss: 0.8186 - val_mean_absolute_error: 0.0562 - val_categorical_accuracy: 0.7831
Epoch 13/50
82/82 [==============================] - 0s 263us/step - loss: 0.1399 - mean_absolute_error: 0.0151 - categorical_accuracy: 1.0000 - val_loss: 0.7452 - val_mean_absolute_error: 0.0519 - val_categorical_accuracy: 0.7952
Epoch 14/50
82/82 [==============================] - 0s 268us/step - loss: 0.1304 - mean_absolute_error: 0.0135 - categorical_accuracy: 1.0000 - val_loss: 0.6854 - val_mean_absolute_error: 0.0482 - val_categorical_accuracy: 0.8313
Epoch 15/50
82/82 [==============================] - 0s 270us/step - loss: 0.0820 - mean_absolute_error: 0.0095 - categorical_accuracy: 1.0000 - val_loss: 0.6429 - val_mean_absolute_error: 0.0452 - val_categorical_accuracy: 0.8434
Epoch 16/50
82/82 [==============================] - 0s 247us/step - loss: 0.0736 - mean_absolute_error: 0.0084 - categorical_accuracy: 1.0000 - val_loss: 0.6132 - val_mean_absolute_error: 0.0428 - val_categorical_accuracy: 0.8434
Epoch 17/50
82/82 [==============================] - 0s 284us/step - loss: 0.0509 - mean_absolute_error: 0.0062 - categorical_accuracy: 1.0000 - val_loss: 0.5944 - val_mean_absolute_error: 0.0409 - val_categorical_accuracy: 0.8434
Epoch 18/50
82/82 [==============================] - 0s 319us/step - loss: 0.0570 - mean_absolute_error: 0.0068 - categorical_accuracy: 1.0000 - val_loss: 0.5852 - val_mean_absolute_error: 0.0396 - val_categorical_accuracy: 0.8434
Epoch 19/50
82/82 [==============================] - 0s 311us/step - loss: 0.0398 - mean_absolute_error: 0.0049 - categorical_accuracy: 1.0000 - val_loss: 0.5797 - val_mean_absolute_error: 0.0386 - val_categorical_accuracy: 0.8313
Epoch 20/50
82/82 [==============================] - 0s 230us/step - loss: 0.0331 - mean_absolute_error: 0.0041 - categorical_accuracy: 1.0000 - val_loss: 0.5774 - val_mean_absolute_error: 0.0378 - val_categorical_accuracy: 0.8072
Epoch 21/50
82/82 [==============================] - 0s 385us/step - loss: 0.0276 - mean_absolute_error: 0.0035 - categorical_accuracy: 1.0000 - val_loss: 0.5745 - val_mean_absolute_error: 0.0370 - val_categorical_accuracy: 0.8072
Epoch 22/50
82/82 [==============================] - 0s 299us/step - loss: 0.0225 - mean_absolute_error: 0.0029 - categorical_accuracy: 1.0000 - val_loss: 0.5724 - val_mean_absolute_error: 0.0363 - val_categorical_accuracy: 0.8072
Epoch 23/50
82/82 [==============================] - 0s 238us/step - loss: 0.0182 - mean_absolute_error: 0.0024 - categorical_accuracy: 1.0000 - val_loss: 0.5727 - val_mean_absolute_error: 0.0358 - val_categorical_accuracy: 0.7831
Epoch 24/50
82/82 [==============================] - 0s 265us/step - loss: 0.0186 - mean_absolute_error: 0.0024 - categorical_accuracy: 1.0000 - val_loss: 0.5759 - val_mean_absolute_error: 0.0354 - val_categorical_accuracy: 0.7831
Epoch 25/50
82/82 [==============================] - 0s 236us/step - loss: 0.0173 - mean_absolute_error: 0.0022 - categorical_accuracy: 1.0000 - val_loss: 0.5783 - val_mean_absolute_error: 0.0351 - val_categorical_accuracy: 0.7711
Epoch 26/50
82/82 [==============================] - 0s 284us/step - loss: 0.0169 - mean_absolute_error: 0.0022 - categorical_accuracy: 1.0000 - val_loss: 0.5794 - val_mean_absolute_error: 0.0348 - val_categorical_accuracy: 0.7711
Epoch 27/50
82/82 [==============================] - 0s 237us/step - loss: 0.0131 - mean_absolute_error: 0.0017 - categorical_accuracy: 1.0000 - val_loss: 0.5783 - val_mean_absolute_error: 0.0344 - val_categorical_accuracy: 0.7711
Epoch 28/50
82/82 [==============================] - 0s 296us/step - loss: 0.0121 - mean_absolute_error: 0.0016 - categorical_accuracy: 1.0000 - val_loss: 0.5764 - val_mean_absolute_error: 0.0340 - val_categorical_accuracy: 0.7831
Epoch 29/50
82/82 [==============================] - 0s 347us/step - loss: 0.0122 - mean_absolute_error: 0.0016 - categorical_accuracy: 1.0000 - val_loss: 0.5741 - val_mean_absolute_error: 0.0337 - val_categorical_accuracy: 0.7831
Epoch 30/50
82/82 [==============================] - 0s 305us/step - loss: 0.0112 - mean_absolute_error: 0.0015 - categorical_accuracy: 1.0000 - val_loss: 0.5711 - val_mean_absolute_error: 0.0333 - val_categorical_accuracy: 0.7831
Epoch 31/50
82/82 [==============================] - 0s 315us/step - loss: 0.0078 - mean_absolute_error: 0.0010 - categorical_accuracy: 1.0000 - val_loss: 0.5677 - val_mean_absolute_error: 0.0330 - val_categorical_accuracy: 0.7952
Epoch 32/50
82/82 [==============================] - 0s 216us/step - loss: 0.0095 - mean_absolute_error: 0.0012 - categorical_accuracy: 1.0000 - val_loss: 0.5643 - val_mean_absolute_error: 0.0328 - val_categorical_accuracy: 0.7952
Epoch 33/50
82/82 [==============================] - 0s 295us/step - loss: 0.0101 - mean_absolute_error: 0.0013 - categorical_accuracy: 1.0000 - val_loss: 0.5598 - val_mean_absolute_error: 0.0325 - val_categorical_accuracy: 0.7952
Epoch 34/50
82/82 [==============================] - 0s 256us/step - loss: 0.0085 - mean_absolute_error: 0.0011 - categorical_accuracy: 1.0000 - val_loss: 0.5553 - val_mean_absolute_error: 0.0322 - val_categorical_accuracy: 0.7952
Epoch 35/50
82/82 [==============================] - 0s 310us/step - loss: 0.0068 - mean_absolute_error: 8.8939e-04 - categorical_accuracy: 1.0000 - val_loss: 0.5517 - val_mean_absolute_error: 0.0320 - val_categorical_accuracy: 0.7952
Epoch 36/50
###Markdown
Evaluating the performance
###Code
loss,mean_absolute_error,accuracy = model.evaluate(XTs_pca,Y_test, verbose=0)
print("Loss:", loss)
print("Categorical Accuracy: ", accuracy)
print("Mean absolute error: ", mean_absolute_error)
predicted_classes = model.predict_classes(XTs_pca)
correct_classified_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_classified_indices = np.nonzero(predicted_classes != y_test)[0]
print("Correctly Classified: ", len(correct_classified_indices))
print("Incorrectly Classified: ", len(incorrect_classified_indices))
# Visualization
def plot_gallery(images, titles, h, w, rows=3, cols=3):
plt.figure()
for i in range(rows * cols):
plt.subplot(rows, cols, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.title(titles[i])
plt.xticks(())
plt.yticks(())
def titles(y_pred, y_test):
for i in range(y_pred.shape[0]):
pred_name = y_pred[i]
true_name = y_test[i]
yield 'predicted: {0}\ntrue: {1}'.format(pred_name, true_name)
prediction_titles = list(titles(predicted_classes, y_test))
plot_gallery(X_test, prediction_titles, 150, 150)
###Output
_____no_output_____
|
Comandos Git/Comandos Git.ipynb
|
###Markdown
GIT COURSE Full course: https://www.youtube.com/watch?v=zH3I1DZNovk&index=1&list=PL9xYXqvLX2kMUrXTvDY6GI2hgacfy0rId Configure name - git config --global user.name "Name" Configure email - git config --global user.email "Email" Configure colored Git messages - git config --global color.ui true List all configurations - git config --global --list Clear the console - clear Initialize Git (navigate to the folder where the files will live) - git init Check the status of the files and their modifications - git status Stage the files that will go into the commit - git add . Make a commit and leave a message describing what was done to the files being committed - git commit -m "Did this and that" See what the commands do - git help - git help status Check which branch we are on - git branch Check the log - git log Connect to GitHub - git remote add origin githubRepositoryAddress
###Code
from IPython.display import Image
Image(filename='git.png')
###Output
_____no_output_____
|
notebooks/.ipynb_checkpoints/transfer_to_server-checkpoint.ipynb
|
###Markdown
Part 0: Initialize
###Code
host = 'gcgc_21mer'
time_interval = '0_1us' # '0_1us', '1_2us', '2_3us', '3_4us', '4_5us'
t_agent = TransferAgent(allsys_folder, host, time_interval)
###Output
/home/yizaochen/codes/dna_rna/collect_folder_to_multiscale/atat_21mer exists
/home/yizaochen/codes/dna_rna/collect_folder_to_multiscale/atat_21mer/0_1us exists
###Markdown
Part 1: Convert perfect.gro to perfect.pdb
###Code
t_agent.perfect_gro_to_pdb()
###Output
/usr/bin/gmx editconf -f /home/yizaochen/codes/dna_rna/all_systems/atat_21mer/bdna+bdna/input/allatoms/bdna+bdna.perfect.gro -o /home/yizaochen/codes/dna_rna/all_systems/atat_21mer/bdna+bdna/input/allatoms/bdna+bdna.perfect.pdb
###Markdown
Part 2: Fitting xtc to perfect structure
###Code
t_agent.rmsd_fit_to_perfect()
###Output
/usr/bin/gmx trjconv -fit rot+trans -s /home/yizaochen/codes/dna_rna/all_systems/atat_21mer/bdna+bdna/input/allatoms/bdna+bdna.perfect.pdb -f /home/yizaochen/codes/dna_rna/all_systems/atat_21mer/bdna+bdna/input/allatoms/bdna+bdna.all.xtc -o /home/yizaochen/codes/dna_rna/all_systems/atat_21mer/bdna+bdna/input/allatoms/bdna+bdna.all.fitperfect.xtc
###Markdown
Part 3: Copy to collect folder
###Code
t_agent.copy_to_collect_folder()
###Output
cp /home/yizaochen/codes/dna_rna/all_systems/atat_21mer/bdna+bdna/input/allatoms/bdna+bdna.perfect.pdb /home/yizaochen/codes/dna_rna/collect_folder_to_multiscale/atat_21mer/0_1us/bdna+bdna.perfect.pdb
cp /home/yizaochen/codes/dna_rna/all_systems/atat_21mer/bdna+bdna/input/allatoms/bdna+bdna.all.fitperfect.xtc /home/yizaochen/codes/dna_rna/collect_folder_to_multiscale/atat_21mer/0_1us/bdna+bdna.all.fitperfect.xtc
###Markdown
Part 4: Compress Input
###Code
t_agent.compress_input()
###Output
tar -jcv -f /home/yizaochen/codes/dna_rna/collect_folder_to_multiscale/atat_21mer/0_1us/x3dna.required.tar.bz2 /home/yizaochen/codes/dna_rna/collect_folder_to_multiscale/atat_21mer/0_1us
###Markdown
Part 5: Transfer to server
###Code
server_ip = '140.113.120.131'
t_agent.scp_to_server(server_ip)
###Output
Please excute the following in the terminal:
scp /home/yizaochen/codes/dna_rna/collect_folder_to_multiscale/atat_21mer/0_1us/x3dna.required.tar.bz2 [email protected]:/home/yizaochen/x3dna/paper_2021
###Markdown
Part 6: Decompress in server
###Code
t_agent.decompress_in_server()
###Output
Please excute the following in the terminal:
cd /home/yizaochen/x3dna/paper_2021
tar -jxv -f x3dna.required.tar.bz2 -C ./
|
coding/Intro to Python/Matplotlib.ipynb
|
###Markdown
Plotting data with Matplotlib``` {index} Matplotlib plotting```[Matplotlib](https://matplotlib.org/) is a commonly used Python library for plotting data. Matplotlib functions are not built-in, so we need to import the library in every notebook or script in which we want to plot something. The convention is to import _matplotlib.pyplot_ as _plt_; however, any prefix can be used:
###Code
import matplotlib.pyplot as plt
# Import other libraries to generate data
import numpy as np
###Output
_____no_output_____
###Markdown
Matplotlib functions usually return something, but in most cases you can have them as statements on their own as they update the object (plot) itself. Useful graph generation functions:- plt.plot(xdata, ydata, format, _kwargs_) - plots [point data](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.plot.html#matplotlib.pyplot.plot) that can be either markers or joined by lines. The most commonly used graph type.- plt.scatter(xdata, ydata, *kwargs*) - a [scatter plot](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.scatter.html#matplotlib.pyplot.scatter) of ydata vs. xdata with varying marker size and/or colour.- plt.hist(data, _kwargs_) - plots a [histogram](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.hist.html#matplotlib.pyplot.hist), _kwargs_ include _bins_, i.e. the number of bins to use. Graph decoration functions (used after graph generation function):- plt.grid(_kwargs_) - creates gridlines, commonly used with plt.plot() type plots- plt.legend(_kwargs_) - create a legend in the plot, keyword argument _loc_ can determine legend's position, e.g. _plt.legend(loc="upper left")_ or _plt.legend(loc="best")_. For documentation check [here](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.legend.html#matplotlib.pyplot.legend).- plt.text(x, y, text, _kwargs_) - insert text at a specified position, _kwargs_ include _fontsize_, _style_, _weight_, _color_.- plt.title(text, *kwargs*) - sets the title.- plt.xlabel(text, *kwargs*) and plt.ylabel(text, *kwargs*) - sets x- and y-axis labels.- plt.xlim(minx, maxx) and plt.ylim(miny, maxy) - sets x- and y-axis limits. Graph finalising function- plt.show() - displays the graph Example plots Line and scatter data```{index} Plots: line``````{index} Plots: scatter```
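Putting the three groups together, a typical plot follows the pattern generate, then decorate, then finalise. A minimal sketch:
```
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [0, 1, 4, 9, 16]

plt.plot(x, y, label="y = x squared")  # graph generation
plt.grid(True)                         # graph decoration
plt.legend(loc="best")
plt.title("Generate, decorate, finalise")
plt.xlabel("x")
plt.ylabel("y")
plt.show()                             # graph finalising
```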
###Code
x = np.linspace(0, 2*np.pi, 50) # Create x values from 0 to 2pi
y1 = np.sin(x) # Compute y1 values as sin(x)
y2 = np.sin(x+np.pi/2) # Compute y2 values as sin(x+pi/2)
# Create figure (optional, but can set figure size)
plt.figure(figsize=(7,5))
# Create line data
plt.plot(x, y1, label="$\sin{(x)}$", color="red")
plt.plot(x, y2, "--", label="$\sin{(x+\pi/2)}$", color="navy", linewidth=3)
# Create scatter data
y3 = np.cos(x+np.pi/2)
plt.scatter(x, y3, label="$\cos{(x+\pi/2)}$", color="yellow",
edgecolor="black", s=80, marker="*")
# Set extra decorations
plt.title("This is my first graph in Matplotlib")
plt.legend(loc="lower right", fontsize=10)
plt.xlabel("x")
plt.ylabel("y")
plt.grid(True)
plt.ylim(-1.5,1.5)
# Show the graph
plt.show()
###Output
_____no_output_____
###Markdown
Histogram``` {index} Plots: histogram```
###Code
# Create random data with normal distribution
data = np.random.randn(1000)
# Create figure
plt.figure(figsize=(7,5))
# Plot histogram
plt.hist(data, bins=10, color="pink", edgecolor="black",
linewidth=2)
# Add extras
plt.xlabel("x")
plt.ylabel("Number of samples")
plt.title("My first histogram")
# Display
plt.show()
###Output
_____no_output_____
###Markdown
Image plot``` {index} Plots: image```
###Code
# Create some 2D data
x = np.linspace(0, 50, 50)
y = np.linspace(0, 50, 50)
# Create a 2D array out of x and y
X, Y = np.meshgrid(x, y)
# For each point in (X, Y) create Z value
Z = X**2 + Y**2
# Create plot
plt.figure(figsize=(5,5))
# Create image plot
# Interpolation methods can be "nearest", "bilinear" and "bicubic"
# Matplotlib provides a range of colour maps (cmap kwarg)
plt.imshow(Z, interpolation='bilinear', cmap="jet",
origin='lower', extent=[0, 50, 0, 50],
vmax=np.max(Z), vmin=np.min(Z))
# Create colour bar
plt.colorbar(label="Some colourbar", fraction=0.046, pad=0.04)
# Add extras
plt.xlabel("x")
plt.ylabel("y")
plt.title("My first imshow graph")
# Display
plt.show()
###Output
_____no_output_____
###Markdown
3D plots``` {index} Plots: 3D```
###Code
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Create plot
fig = plt.figure(figsize=(7,5))
# Add subplot with 3D axes projection
ax = fig.add_subplot(111, projection='3d')
# Create some data
x = np.linspace(-20, 20, 100)
y = np.linspace(-20, 20, 100)
z = np.sin(x+y)
# Plot line and scatter data
ax.plot(x,y,z, color="k")
ax.scatter(x,y,z, marker="*", s=50, color="magenta")
# Add extras
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_title("My first 3D plot")
# Display
plt.show()
###Output
_____no_output_____
|
plot_loss_validation_kana_to_alpha.ipynb
|
###Markdown
Plot the loss function output and validation value over epochsKana => Alpha with Attention
###Code
import re
import numpy as np
import matplotlib.pyplot as plt
# Matches lines like "Epoch 12 Loss 0.3456" and captures the loss value
RE_LOSS = re.compile(r'Epoch \d+ Loss (\d*\.\d*)')
# Matches lines like "Validation Accuracy 0.8123" and captures the accuracy value
RE_VALIDATION = re.compile(r'Validation Accuracy (\d*\.\d*)')
def get_loss_validation_from_file(file_name):
losses = []
validations = []
with open(file_name) as f:
for line in f:
line = line.strip()
match_loss= RE_LOSS.match(line)
if match_loss:
losses.append(float(match_loss.group(1)))
match_valid= RE_VALIDATION.match(line)
if match_valid:
validations.append(float(match_valid.group(1)))
x = np.linspace(0,len(losses)-1,len(losses))
y_losses = np.array(losses)
y_validations = np.array(validations)
return x, y_losses, y_validations
INPUT_FILE_TEMPLATE = "./training_output/kana_to_alpha_{}/log.txt"
FIG_OUTPUT = 'figs/learning_curve_kana_to_alpha.png'
x_16, L_16, V_16 = get_loss_validation_from_file(INPUT_FILE_TEMPLATE.format("16"))
x_32, L_32, V_32 = get_loss_validation_from_file(INPUT_FILE_TEMPLATE.format("32"))
x_64, L_64, V_64 = get_loss_validation_from_file(INPUT_FILE_TEMPLATE.format("64"))
x_128, L_128, V_128 = get_loss_validation_from_file(INPUT_FILE_TEMPLATE.format("128"))
x_256, L_256, V_256 = get_loss_validation_from_file(INPUT_FILE_TEMPLATE.format("256"))
color_16 = 'tab:blue'
color_32 = 'tab:orange'
color_64 = 'tab:green'
color_128 = 'tab:red'
color_256 = 'tab:purple'
color_512 = 'tab:brown'
fig, ax = plt.subplots(figsize=(10,10))
ax.plot(x_16, L_16, label='loss 16', color=color_16)
ax.plot(x_32, L_32, label='loss 32', color=color_32)
ax.plot(x_64, L_64, label='loss 64', color=color_64)
ax.plot(x_128, L_128, label='loss 128', color=color_128)
ax.plot(x_256, L_256, label='loss 256', color=color_256)
ax.plot(x_16, V_16, '--', label='validation 16', color=color_16)
ax.plot(x_32, V_32, '--', label='validation 32', color=color_32)
ax.plot(x_64, V_64, '--', label='validation 64', color=color_64)
ax.plot(x_128, V_128, '--', label='validation 128', color=color_128)
ax.plot(x_256, V_256, '--', label='validation 256', color=color_256)
ax.set_ylim([0,2.0])
ax.set_xlim([0,100])
ax.set_xlabel('Epochs', fontsize=15)
ax.set_ylabel('Loss / Validation', fontsize=15)
ax.set_title('Learning Curve Kana to Alpha with Attention', fontsize=18)
ax.tick_params(axis='both', which='major', labelsize=12)
ax.legend(prop={'size':12})
plt.savefig(FIG_OUTPUT)
plt.show()
###Output
_____no_output_____
|
Day 9/Copy_of_KalpanaLabs_Day9.ipynb
|
###Markdown
While loops
###Code
from IPython.display import HTML
HTML('<img src="https://www.ultraupdates.com/wp-content/uploads/2015/03/looping-gif-2.gif">')
###Output
_____no_output_____
###Markdown
Revise- Boolean values- if else statements What are loops?They are statements that allow us to repeat certain statements for a certain amount of time.ex: shift + enter in jupyter notebook Okay dada! But till when are these statements executed?We can determine that beforehand, or give a certain exit condition. Examples- Brushing teeth- Watering plants- Batsmen in a cricket match- Clash Royale- Mobile applications Syntax```while (condition): statements```What does this mean?```while the condition is true: execute these statements``` Coded examples Sample code for example I: Brushing teeth
###Code
mouthIsClean = False
def brush():
print("I am brushing my teeth!")
while not mouthIsClean:
brush()
check = str(input("Is your mouth clean? (Y/N)"))
if (check == "Y" or check == "y"):
mouthIsClean = True
print("Great!")
###Output
_____no_output_____
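The notebook lists several everyday examples but only codes the first one. Here is a second small sketch, not from the original, using the same `while` pattern with a counter as the exit condition:

```
# Example II (illustrative): watering plants until every pot is done
pots_to_water = 3

while pots_to_water > 0:
    print("Watering a plant...")
    pots_to_water = pots_to_water - 1  # one step closer to the exit condition

print("All plants are watered!")
```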
|
scalable-machine-learning-on-big-data-using-apache-spark/Week 2/Exercise on PCA.ipynb
|
###Markdown
This notebook is designed to run in an IBM Watson Studio default runtime (NOT the Watson Studio Apache Spark Runtime, as the default runtime with 1 vCPU is free of charge). Therefore, we install Apache Spark in local mode for test purposes only. Please don't use it in production. In case you are facing issues, please read the following two documents first: https://github.com/IBM/skillsnetwork/wiki/Environment-Setup https://github.com/IBM/skillsnetwork/wiki/FAQ Then, please feel free to ask: https://coursera.org/learn/machine-learning-big-data-apache-spark/discussions/all Please make sure to follow the guidelines before asking a question: https://github.com/IBM/skillsnetwork/wiki/FAQ#im-feeling-lost-and-confused-please-help-me If running outside Watson Studio, this should work as well. In case you are running in an Apache Spark context outside Watson Studio, please remove the Apache Spark setup in the first notebook cells.
###Code
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">' + string + "</span>"))
if "sc" in locals() or "sc" in globals():
printmd(
"<<<<<!!!!! It seems that you are running in a IBM Watson Studio Apache Spark Notebook. Please run it in an IBM Watson Studio Default Runtime (without Apache Spark) !!!!!>>>>>"
)
!pip install pyspark==2.4.5
try:
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
except ImportError as e:
printmd(
"<<<<<!!!!! Please restart your kernel after installing Apache Spark !!!!!>>>>>"
)
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession.builder.getOrCreate()
###Output
_____no_output_____
###Markdown
Exercise 3.2Welcome to the last exercise of this course. This is also the most advanced one because it somehow glues everything together you've learned. These are the steps you will do:- load a data frame from cloudant/ApacheCouchDB- perform feature transformation by calculating minimal and maximal values of different properties on time windows (we'll explain what a time windows is later in here)- reduce these now twelve dimensions to three using the PCA (Principal Component Analysis) algorithm of SparkML (Spark Machine Learning) => We'll actually make use of SparkML a lot more in the next course- plot the dimensionality reduced data set Now it is time to grab a PARQUET file and create a dataframe out of it. Using SparkSQL you can handle it like a database.
###Code
!wget https://github.com/IBM/coursera/blob/master/coursera_ds/washing.parquet?raw=true
!mv washing.parquet?raw=true washing.parquet
df = spark.read.parquet("washing.parquet")
df.createOrReplaceTempView("washing")
df.show()
###Output
+--------------------+--------------------+-----+--------+----------+---------+--------+-----+-----------+-------------+-------+
| _id| _rev|count|flowrate|fluidlevel|frequency|hardness|speed|temperature| ts|voltage|
+--------------------+--------------------+-----+--------+----------+---------+--------+-----+-----------+-------------+-------+
|0d86485d0f88d1f9d...|1-57940679fb8a713...| 4| 11|acceptable| null| 77| null| 100|1547808723923| null|
|0d86485d0f88d1f9d...|1-15ff3a0b304d789...| 2| null| null| null| null| 1046| null|1547808729917| null|
|0d86485d0f88d1f9d...|1-97c2742b68c7b07...| 4| null| null| 71| null| null| null|1547808731918| 236|
|0d86485d0f88d1f9d...|1-eefb903dbe45746...| 19| 11|acceptable| null| 75| null| 86|1547808738999| null|
|0d86485d0f88d1f9d...|1-5f68b4c72813c25...| 7| null| null| 75| null| null| null|1547808740927| 235|
|0d86485d0f88d1f9d...|1-cd4b6c57ddbe77e...| 5| null| null| null| null| 1014| null|1547808744923| null|
|0d86485d0f88d1f9d...|1-a35b25b5bf43aaf...| 32| 11|acceptable| null| 73| null| 84|1547808752028| null|
|0d86485d0f88d1f9d...|1-b717f7289a8476d...| 48| 11|acceptable| null| 79| null| 84|1547808768065| null|
|0d86485d0f88d1f9d...|1-c2f1f8fcf178b2f...| 18| null| null| 73| null| null| null|1547808773944| 228|
|0d86485d0f88d1f9d...|1-15033dd9eebb4a8...| 59| 11|acceptable| null| 72| null| 96|1547808779093| null|
|0d86485d0f88d1f9d...|1-753dae825f9a6c2...| 62| 11|acceptable| null| 73| null| 88|1547808782113| null|
|0d86485d0f88d1f9d...|1-b168089f44f03f0...| 13| null| null| null| null| 1097| null|1547808784940| null|
|0d86485d0f88d1f9d...|1-403b687c6be0dea...| 23| null| null| 80| null| null| null|1547808788955| 236|
|0d86485d0f88d1f9d...|1-195551e0455a24b...| 72| 11|acceptable| null| 77| null| 87|1547808792134| null|
|0d86485d0f88d1f9d...|1-060a39fc6c2ddee...| 26| null| null| 62| null| null| null|1547808797959| 233|
|0d86485d0f88d1f9d...|1-2234514bffee465...| 27| null| null| 61| null| null| null|1547808800960| 226|
|0d86485d0f88d1f9d...|1-4265898bb401db0...| 82| 11|acceptable| null| 79| null| 96|1547808802154| null|
|0d86485d0f88d1f9d...|1-2fbf7ca9a0425a0...| 94| 11|acceptable| null| 73| null| 90|1547808814186| null|
|0d86485d0f88d1f9d...|1-203c0ee6d7fbd21...| 97| 11|acceptable| null| 77| null| 88|1547808817190| null|
|0d86485d0f88d1f9d...|1-47e1965db94fcab...| 104| 11|acceptable| null| 75| null| 80|1547808824198| null|
+--------------------+--------------------+-----+--------+----------+---------+--------+-----+-----------+-------------+-------+
only showing top 20 rows
###Markdown
This is the feature transformation part of this exercise. Since our table is mixing schemas from different sensor data sources we are creating new features. In other words, we use existing columns to calculate new ones. We only use min and max for now, but using more advanced aggregations as we've learned in week three may improve the results. We are calculating those aggregations over a sliding window "w". This window is defined in the SQL statement and basically reads the table with a stride of one in the direction of increasing timestamp. Whenever a row leaves the window a new one is included. Therefore this window is called a sliding window (in contrast to tumbling, time or count windows). More on this can be found here: https://flink.apache.org/news/2015/12/04/Introducing-windows.html
###Code
result = spark.sql(
"""
SELECT * from (
SELECT
min(temperature) over w as min_temperature,
max(temperature) over w as max_temperature,
min(voltage) over w as min_voltage,
max(voltage) over w as max_voltage,
min(flowrate) over w as min_flowrate,
max(flowrate) over w as max_flowrate,
min(frequency) over w as min_frequency,
max(frequency) over w as max_frequency,
min(hardness) over w as min_hardness,
max(hardness) over w as max_hardness,
min(speed) over w as min_speed,
max(speed) over w as max_speed
FROM washing
WINDOW w AS (ORDER BY ts ROWS BETWEEN CURRENT ROW AND 10 FOLLOWING)
)
WHERE min_temperature is not null
AND max_temperature is not null
AND min_voltage is not null
AND max_voltage is not null
AND min_flowrate is not null
AND max_flowrate is not null
AND min_frequency is not null
AND max_frequency is not null
AND min_hardness is not null
AND min_speed is not null
AND max_speed is not null
"""
)
###Output
_____no_output_____
###Markdown
Since this table contains null values, our window might contain them as well. If, for a certain feature, all values in a window are null, the aggregate is also null. As we can see here (in my dataset) this is the case for 9 rows.
###Code
df.count() - result.count()
###Output
_____no_output_____
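Not part of the original exercise: if you want to see which columns of the raw dataframe actually contain nulls before windowing, one assumed way is to count them per column with Spark SQL functions:

```
from pyspark.sql import functions as F

# Count null entries per column of the raw dataframe
df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]).show()
```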
###Markdown
Now we import some classes from SparkML. PCA for the actual algorithm. Vectors for the data structure expected by PCA and VectorAssembler to transform data into these vector structures.
###Code
from pyspark.ml.feature import PCA, VectorAssembler
from pyspark.ml.linalg import Vectors
###Output
_____no_output_____
###Markdown
Let's define a vector transformation helper class which takes all our input features (result.columns) and creates one additional column called "features" which contains all our input features as one single column wrapped in "DenseVector" objects
###Code
assembler = VectorAssembler(inputCols=result.columns, outputCol="features")
###Output
_____no_output_____
###Markdown
Now we actually transform the data. Note that this is highly optimized code and runs really fast compared to an implementation we would have written ourselves.
###Code
features = assembler.transform(result)
###Output
_____no_output_____
###Markdown
Let's have a look at what this new additional column "features" looks like:
###Code
features.rdd.map(lambda r: r.features).take(10)
###Output
_____no_output_____
###Markdown
Since the source data set has been prepared as a list of DenseVectors we can now apply PCA. Note that the first line again only prepares the algorithm by finding the transformation matrices (fit method)
###Code
pca = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
model = pca.fit(features)
###Output
_____no_output_____
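Not required for the exercise, but the fitted `PCAModel` also exposes the principal components and the variance explained by each of the three components, which can be inspected like this (assuming the `model` from the cell above):

```
# Optional inspection of the fitted PCA model
print(model.pc)                 # DenseMatrix with the principal components (12 x 3)
print(model.explainedVariance)  # DenseVector with the fraction of variance per component
```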
###Markdown
Now we can actually transform the data. Let's have a look at the first 20 rows
###Code
result_pca = model.transform(features).select("pcaFeatures")
result_pca.show(truncate=False)
###Output
+-----------------------------------------------------------+
|pcaFeatures |
+-----------------------------------------------------------+
|[1459.9789705814187,-18.745237781780922,70.78430794796873] |
|[1459.995481828676,-19.11343146165273,70.72738871425986] |
|[1460.0895843561282,-20.969471062922928,70.75630600322052] |
|[1469.6993929419532,-20.403124647615513,62.013569674880955]|
|[1469.7159041892107,-20.771318327487293,61.95665044117209] |
|[1469.7128317338704,-20.790751117222456,61.896106678330966]|
|[1478.3530264572928,-20.294557029728722,71.67550104809607] |
|[1478.3530264572928,-20.294557029728722,71.67550104809607] |
|[1478.3686036138165,-20.260626897636314,71.63355353606426] |
|[1478.3686036138165,-20.260626897636314,71.63355353606426] |
|[1483.5412027684088,-20.006222577501354,66.82710394284209] |
|[1483.5171090223353,-20.867020421583753,66.86707301954084] |
|[1483.4224268542928,-19.87574823665505,66.93027077913985] |
|[1483.4224268542928,-19.87574823665505,66.93027077913985] |
|[1488.103073547271,-19.311848573386925,72.1626182636411] |
|[1488.1076926849646,-19.311945711095063,72.27621605605316] |
|[1488.0135901575127,-17.455906109824838,72.2472987670925] |
|[1488.026374556614,-17.47632766649086,72.2214703423] |
|[1465.1644738447062,-17.50333829280811,47.06072898272612] |
|[1465.1644738447062,-17.50333829280811,47.06072898272612] |
+-----------------------------------------------------------+
only showing top 20 rows
###Markdown
So we obtained three completely new columns which we can plot now. Let's run a final check that the number of rows is the same.
###Code
result_pca.count()
###Output
_____no_output_____
###Markdown
Cool, this works as expected. Now we obtain a sample and read each of the three columns into a python list
###Code
rdd = result_pca.rdd.sample(False, 0.8)
x = rdd.map(lambda a: a.pcaFeatures).map(lambda a: a[0]).collect()
y = rdd.map(lambda a: a.pcaFeatures).map(lambda a: a[1]).collect()
z = rdd.map(lambda a: a.pcaFeatures).map(lambda a: a[2]).collect()
###Output
_____no_output_____
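A design note (not from the original notebook): each of the three `map`/`collect` passes above triggers a separate Spark job. An equivalent single pass would collect the sampled features once and unpack the coordinates locally:

```
# Collect the sampled PCA features once, then split locally
points = rdd.map(lambda a: a.pcaFeatures).collect()
x = [p[0] for p in points]
y = [p[1] for p in points]
z = [p[2] for p in points]
```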
###Markdown
Finally we plot the three lists and name each of them as dimension 1-3 in the plot
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(x, y, z, c="r", marker="o")
ax.set_xlabel("dimension1")
ax.set_ylabel("dimension2")
ax.set_zlabel("dimension3")
plt.show()
###Output
_____no_output_____
###Markdown
This notebook is designed to run in an IBM Watson Studio default runtime (NOT the Watson Studio Apache Spark Runtime, as the default runtime with 1 vCPU is free of charge). Therefore, we install Apache Spark in local mode for test purposes only. Please don't use it in production. In case you are facing issues, please read the following two documents first: https://github.com/IBM/skillsnetwork/wiki/Environment-Setup https://github.com/IBM/skillsnetwork/wiki/FAQ Then, please feel free to ask: https://coursera.org/learn/machine-learning-big-data-apache-spark/discussions/all Please make sure to follow the guidelines before asking a question: https://github.com/IBM/skillsnetwork/wiki/FAQ#im-feeling-lost-and-confused-please-help-me If running outside Watson Studio, this should work as well. In case you are running in an Apache Spark context outside Watson Studio, please remove the Apache Spark setup in the first notebook cells.
###Code
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">'+string+'</span>'))
if ('sc' in locals() or 'sc' in globals()):
printmd('<<<<<!!!!! It seems that you are running in a IBM Watson Studio Apache Spark Notebook. Please run it in an IBM Watson Studio Default Runtime (without Apache Spark) !!!!!>>>>>')
!pip install pyspark==2.4.5
try:
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
except ImportError as e:
printmd('<<<<<!!!!! Please restart your kernel after installing Apache Spark !!!!!>>>>>')
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession \
.builder \
.getOrCreate()
###Output
_____no_output_____
###Markdown
Exercise 3.2Welcome to the last exercise of this course. This is also the most advanced one because it somehow glues everything together you've learned. These are the steps you will do:- load a data frame from cloudant/ApacheCouchDB- perform feature transformation by calculating minimal and maximal values of different properties on time windows (we'll explain what a time windows is later in here)- reduce these now twelve dimensions to three using the PCA (Principal Component Analysis) algorithm of SparkML (Spark Machine Learning) => We'll actually make use of SparkML a lot more in the next course- plot the dimensionality reduced data set Now it is time to grab a PARQUET file and create a dataframe out of it. Using SparkSQL you can handle it like a database.
###Code
!wget https://github.com/IBM/coursera/blob/master/coursera_ds/washing.parquet?raw=true
!mv washing.parquet?raw=true washing.parquet
df = spark.read.parquet('washing.parquet')
df.createOrReplaceTempView('washing')
df.show()
###Output
+--------------------+--------------------+-----+--------+----------+---------+--------+-----+-----------+-------------+-------+
| _id| _rev|count|flowrate|fluidlevel|frequency|hardness|speed|temperature| ts|voltage|
+--------------------+--------------------+-----+--------+----------+---------+--------+-----+-----------+-------------+-------+
|0d86485d0f88d1f9d...|1-57940679fb8a713...| 4| 11|acceptable| null| 77| null| 100|1547808723923| null|
|0d86485d0f88d1f9d...|1-15ff3a0b304d789...| 2| null| null| null| null| 1046| null|1547808729917| null|
|0d86485d0f88d1f9d...|1-97c2742b68c7b07...| 4| null| null| 71| null| null| null|1547808731918| 236|
|0d86485d0f88d1f9d...|1-eefb903dbe45746...| 19| 11|acceptable| null| 75| null| 86|1547808738999| null|
|0d86485d0f88d1f9d...|1-5f68b4c72813c25...| 7| null| null| 75| null| null| null|1547808740927| 235|
|0d86485d0f88d1f9d...|1-cd4b6c57ddbe77e...| 5| null| null| null| null| 1014| null|1547808744923| null|
|0d86485d0f88d1f9d...|1-a35b25b5bf43aaf...| 32| 11|acceptable| null| 73| null| 84|1547808752028| null|
|0d86485d0f88d1f9d...|1-b717f7289a8476d...| 48| 11|acceptable| null| 79| null| 84|1547808768065| null|
|0d86485d0f88d1f9d...|1-c2f1f8fcf178b2f...| 18| null| null| 73| null| null| null|1547808773944| 228|
|0d86485d0f88d1f9d...|1-15033dd9eebb4a8...| 59| 11|acceptable| null| 72| null| 96|1547808779093| null|
|0d86485d0f88d1f9d...|1-753dae825f9a6c2...| 62| 11|acceptable| null| 73| null| 88|1547808782113| null|
|0d86485d0f88d1f9d...|1-b168089f44f03f0...| 13| null| null| null| null| 1097| null|1547808784940| null|
|0d86485d0f88d1f9d...|1-403b687c6be0dea...| 23| null| null| 80| null| null| null|1547808788955| 236|
|0d86485d0f88d1f9d...|1-195551e0455a24b...| 72| 11|acceptable| null| 77| null| 87|1547808792134| null|
|0d86485d0f88d1f9d...|1-060a39fc6c2ddee...| 26| null| null| 62| null| null| null|1547808797959| 233|
|0d86485d0f88d1f9d...|1-2234514bffee465...| 27| null| null| 61| null| null| null|1547808800960| 226|
|0d86485d0f88d1f9d...|1-4265898bb401db0...| 82| 11|acceptable| null| 79| null| 96|1547808802154| null|
|0d86485d0f88d1f9d...|1-2fbf7ca9a0425a0...| 94| 11|acceptable| null| 73| null| 90|1547808814186| null|
|0d86485d0f88d1f9d...|1-203c0ee6d7fbd21...| 97| 11|acceptable| null| 77| null| 88|1547808817190| null|
|0d86485d0f88d1f9d...|1-47e1965db94fcab...| 104| 11|acceptable| null| 75| null| 80|1547808824198| null|
+--------------------+--------------------+-----+--------+----------+---------+--------+-----+-----------+-------------+-------+
only showing top 20 rows
###Markdown
This is the feature transformation part of this exercise. Since our table is mixing schemas from different sensor data sources we are creating new features. In other words, we use existing columns to calculate new ones. We only use min and max for now, but using more advanced aggregations as we've learned in week three may improve the results. We are calculating those aggregations over a sliding window "w". This window is defined in the SQL statement and basically reads the table with a stride of one in the direction of increasing timestamp. Whenever a row leaves the window a new one is included. Therefore this window is called a sliding window (in contrast to tumbling, time or count windows). More on this can be found here: https://flink.apache.org/news/2015/12/04/Introducing-windows.html
###Code
result = spark.sql("""
SELECT * from (
SELECT
min(temperature) over w as min_temperature,
max(temperature) over w as max_temperature,
min(voltage) over w as min_voltage,
max(voltage) over w as max_voltage,
min(flowrate) over w as min_flowrate,
max(flowrate) over w as max_flowrate,
min(frequency) over w as min_frequency,
max(frequency) over w as max_frequency,
min(hardness) over w as min_hardness,
max(hardness) over w as max_hardness,
min(speed) over w as min_speed,
max(speed) over w as max_speed
FROM washing
WINDOW w AS (ORDER BY ts ROWS BETWEEN CURRENT ROW AND 10 FOLLOWING)
)
WHERE min_temperature is not null
AND max_temperature is not null
AND min_voltage is not null
AND max_voltage is not null
AND min_flowrate is not null
AND max_flowrate is not null
AND min_frequency is not null
AND max_frequency is not null
AND min_hardness is not null
AND min_speed is not null
AND max_speed is not null
""")
###Output
_____no_output_____
###Markdown
Since this table contains null values, our window might contain them as well. If, for a certain feature, all values in a window are null, the aggregate is also null. As we can see here (in my dataset) this is the case for 9 rows.
###Code
df.count()-result.count()
###Output
_____no_output_____
###Markdown
Now we import some classes from SparkML. PCA for the actual algorithm. Vectors for the data structure expected by PCA and VectorAssembler to transform data into these vector structures.
###Code
from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
###Output
_____no_output_____
###Markdown
Let's define a vector transformation helper class which takes all our input features (result.columns) and creates one additional column called "features" which contains all our input features as one single column wrapped in "DenseVector" objects
###Code
assembler = VectorAssembler(inputCols=result.columns, outputCol="features")
###Output
_____no_output_____
###Markdown
Now we actually transform the data. Note that this is highly optimized code and runs really fast compared to an implementation we would have written ourselves.
###Code
features = assembler.transform(result)
###Output
_____no_output_____
###Markdown
Let's have a look at what this new additional column "features" looks like:
###Code
features.rdd.map(lambda r : r.features).take(10)
###Output
_____no_output_____
###Markdown
Since the source data set has been prepared as a list of DenseVectors we can now apply PCA. Note that the first line again only prepares the algorithm by finding the transformation matrices (fit method)
###Code
pca = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
model = pca.fit(features)
###Output
_____no_output_____
###Markdown
Now we can actually transform the data. Let's have a look at the first 20 rows
###Code
result_pca = model.transform(features).select("pcaFeatures")
result_pca.show(truncate=False)
###Output
+-----------------------------------------------------------+
|pcaFeatures |
+-----------------------------------------------------------+
|[1459.9789705814187,-18.745237781780922,70.78430794796873] |
|[1459.995481828676,-19.11343146165273,70.72738871425986] |
|[1460.0895843561282,-20.969471062922928,70.75630600322052] |
|[1469.6993929419532,-20.403124647615513,62.013569674880955]|
|[1469.7159041892107,-20.771318327487293,61.95665044117209] |
|[1469.7128317338704,-20.790751117222456,61.896106678330966]|
|[1478.3530264572928,-20.294557029728722,71.67550104809607] |
|[1478.3530264572928,-20.294557029728722,71.67550104809607] |
|[1478.3686036138165,-20.260626897636314,71.63355353606426] |
|[1478.3686036138165,-20.260626897636314,71.63355353606426] |
|[1483.5412027684088,-20.006222577501354,66.82710394284209] |
|[1483.5171090223353,-20.867020421583753,66.86707301954084] |
|[1483.4224268542928,-19.87574823665505,66.93027077913985] |
|[1483.4224268542928,-19.87574823665505,66.93027077913985] |
|[1488.103073547271,-19.311848573386925,72.1626182636411] |
|[1488.1076926849646,-19.311945711095063,72.27621605605316] |
|[1488.0135901575127,-17.455906109824838,72.2472987670925] |
|[1488.026374556614,-17.47632766649086,72.2214703423] |
|[1465.1644738447062,-17.50333829280811,47.06072898272612] |
|[1465.1644738447062,-17.50333829280811,47.06072898272612] |
+-----------------------------------------------------------+
only showing top 20 rows
###Markdown
So we obtained three completely new columns which we can plot now. Let's run a final check that the number of rows is the same.
###Code
result_pca.count()
###Output
_____no_output_____
###Markdown
Cool, this works as expected. Now we obtain a sample and read each of the three columns into a python list
###Code
rdd = result_pca.rdd.sample(False,0.8)
x = rdd.map(lambda a : a.pcaFeatures).map(lambda a : a[0]).collect()
y = rdd.map(lambda a : a.pcaFeatures).map(lambda a : a[1]).collect()
z = rdd.map(lambda a : a.pcaFeatures).map(lambda a : a[2]).collect()
###Output
_____no_output_____
###Markdown
Finally we plot the three lists and name each of them as dimension 1-3 in the plot
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x,y,z, c='r', marker='o')
ax.set_xlabel('dimension1')
ax.set_ylabel('dimension2')
ax.set_zlabel('dimension3')
plt.show()
###Output
_____no_output_____
|
notebooks/case_study_weather/3-dwd_konverter_build_df.ipynb
|
###Markdown
Import temperature data from the DWD and process itThis notebook pulls historical temperature data from the DWD server and formats it for future use in other projects. The data is delivered at an hourly frequency in a .zip file for each of the available weather stations. To use the data, we need everything in a single .csv-file, all stations side-by-side. Also, we need the daily average. To reduce computing time, we also crop all data earlier than 2007. Files should be executed in the following pipeline:* 1-dwd_konverter_download* 2-dwd_konverter_extract* 3-dwd_konverter_build_df* 4-dwd_konverter_final_processing 3.) Import the .csv files into pandas and concat into a single dfNow we need to import everything that we have extracted. This operation is going to take some time (approx. 20 mins). If you want to save time, you can just delete a few of the .csv-files in the 'import' folder. The script works as well with only a few files. Process individual filesThe files are imported into a single df, stripped of unnecessary columns and filtered by date. Then we set a DateTimeIndex and concatenate them into the main_df. Because the loop takes a long time, we output some status messages to ensure the process is still running. Process the concatenated main_dfThen we display some info about the main_df so we can ensure that there are no errors, mainly that all data-types are recognized correctly. Also, we drop duplicate entries, in case some of the .csv files were copied. Unstack and exportFor the final step, we unstack the main_df and save it to a .csv and a .pkl file for the next step. Also, we display some output to get a grasp of what is going on.
###Code
import numpy as np
import pandas as pd
from IPython.display import clear_output
from pathlib import Path
import glob
import_files = glob.glob('import/*')
out_file = Path.cwd() / "export_uncleaned" / "to_clean"
#msum_file= Path.cwd() / "export" / "monatssumme.csv"
obsolete_columns = [
'QN_9',
'RF_TU',
'eor'
]
main_df = pd.DataFrame()
i = 1
for file in import_files:
# Read in the next file
df = pd.read_csv(file, delimiter=";")
# Prepare the df befor merging (Drop obsolete, convert to datetime, filter to date, set index)
df.drop(columns=obsolete_columns, inplace=True)
df["MESS_DATUM"] = pd.to_datetime(df["MESS_DATUM"], format="%Y%m%d%H")
df = df[df['MESS_DATUM']>= "2007-01-01"]
df.set_index(['MESS_DATUM', 'STATIONS_ID'], inplace=True)
# Merge to the main_df
main_df = pd.concat([main_df, df])
# Display some status messages
clear_output(wait=True)
display('Finished file: {}'.format(file), 'This is file {}'.format(i))
display('Shape of the main_df is: {}'.format(main_df.shape))
i+=1
# Check if all types are correct
display(main_df['TT_TU'].apply(lambda x: type(x).__name__).value_counts())
# Make sure that no files or observations are duplicated, e.g. scan the index for duplicate entries.
# The ~ inverts the boolean mask, so only rows whose index is not duplicated are kept.
main_df = main_df[~main_df.index.duplicated(keep='last')]
# Unstack the main_df
main_df = main_df.unstack('STATIONS_ID')
display('Shape of the main_df is: {}'.format(main_df.shape))
# Save main_df to a .csv file and a pickle to continue working in the next cell.
main_df.to_pickle(Path(out_file).with_suffix('.pkl'))
main_df.to_csv(Path(out_file).with_suffix('.csv'), sep=";")
display(main_df.head())
display(main_df.describe())
###Output
_____no_output_____
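The daily average mentioned in the introduction is presumably handled in the next notebook of the pipeline (4-dwd_konverter_final_processing); a minimal sketch of that step, assuming the unstacked `main_df` from above with its DateTimeIndex, could look like this:

```
# Hypothetical sketch: daily mean temperature per station from the hourly values
daily_mean = main_df['TT_TU'].resample('D').mean()
daily_mean.head()
```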
|
00_soft_dependencies.ipynb
|
###Markdown
00_soft_dependencies> Filterizer for checking what version we have installed (vision, tab, or text) This form of checking is heavily influenced by the [mantisshrimp](https://github.com/airctic/mantisshrimp) library
###Code
#export
from importlib import import_module
from typing import *
#hide
from fastcore.test import test_eq
#export
def soft_import(name:str):
"Tries to import a module"
try:
import_module(name)
return True
except ModuleNotFoundError as e:
if str(e) != f"No module named '{name}'": raise e
return False
#hide
test_eq(soft_import('fastinference[vision]'), False)
#export
def soft_imports(names:list):
"Tries to import a list of modules"
for name in names:
o = soft_import(name)
if not o: return False
return True
#hide
test_eq(soft_imports(['PIL', 'pandas', 'torchvision', 'spacy']), True)
#export
class _SoftDependencies:
"A class which checks what dependencies can be loaded"
def __init__(self):
self.all = soft_imports(['PIL', 'pandas', 'torchvision', 'spacy'])
self.vision = soft_import('torchvision')
self.text = soft_import('spacy')
self.tab = soft_import('pandas')
def check(self) -> Dict[str, bool]: return self.__dict__.copy()
#export
SoftDependencies = _SoftDependencies()
#hide
test_dep = {'all':True, 'vision':True, 'text':True, 'tab':True}
test_eq(SoftDependencies.check(), test_dep)
###Output
_____no_output_____
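A typical (hypothetical) use of this check is to guard optional imports, for example:

```
# Hypothetical usage: only expose vision helpers when torchvision is importable
if SoftDependencies.vision:
    import torchvision  # safe: soft_import already confirmed it can be imported
else:
    print("torchvision not installed; vision features are disabled")
```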
|
site/en/r1/tutorials/eager/eager_basics.ipynb
|
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution basics Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This is an introductory tutorial for using TensorFlow. It will cover:* Importing required packages* Creating and using Tensors* Using GPU acceleration* Datasets Import TensorFlowTo get started, import the `tensorflow` module and enable eager execution.Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
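The cell above only imports the compat module; on a genuine TF1 install the "enable eager execution" step mentioned in the text would also need an explicit call (under TF2's compatibility behaviour eager is already on). A hedged sketch, assuming the `tf` import from the cell above:

```
# Only needed on a genuine TF1 install; under TF2 eager execution is already enabled
tf.enable_eager_execution()
```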
###Markdown
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
###Code
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
###Output
_____no_output_____
###Markdown
Each Tensor has a shape and a datatype
###Code
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
###Output
_____no_output_____
###Markdown
The most obvious differences between NumPy arrays and TensorFlow Tensors are:1. Tensors can be backed by accelerator memory (like GPU, TPU).2. Tensors are immutable. NumPy CompatibilityConversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:* TensorFlow operations automatically convert NumPy ndarrays to Tensors.* NumPy operations automatically convert Tensors to NumPy ndarrays.Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them.These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
###Code
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
###Output
_____no_output_____
###Markdown
GPU accelerationMany TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
###Code
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.config.list_physical_devices('GPU'))
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
###Output
_____no_output_____
###Markdown
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementThe term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
###Code
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.config.list_physical_devices('GPU'):
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
###Output
_____no_output_____
###Markdown
DatasetsThis section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:* Creating a `Dataset`.* Iteration over a `Dataset` with eager execution enabled.We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create an `tf.data.Iterator` object.As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled. Create a source `Dataset`Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasetsreading_input_data) for more information.
###Code
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
###Output
_____no_output_____
###Markdown
Apply transformationsUse the transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
###Code
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
###Output
_____no_output_____
###Markdown
IterateWhen eager execution is enabled `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need for calls to `Dataset.make_one_shot_iterator()` or `get_next()` calls.
###Code
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
###Output
_____no_output_____
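Not in the original notebook: with eager execution you can also peek at a fixed number of elements using `take`, assuming the `ds_tensors` dataset from the cells above:

```
# Look at just the first batch of the transformed dataset
for batch in ds_tensors.take(1):
    print(batch)
```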
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution basics Run in Google Colab View source on GitHub This is an introductory tutorial for using TensorFlow. It will cover:* Importing required packages* Creating and using Tensors* Using GPU acceleration* Datasets Import TensorFlowTo get started, import the `tensorflow` module and enable eager execution.Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
###Code
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
###Output
_____no_output_____
###Markdown
Each Tensor has a shape and a datatype
###Code
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
###Output
_____no_output_____
###Markdown
The most obvious differences between NumPy arrays and TensorFlow Tensors are:1. Tensors can be backed by accelerator memory (like GPU, TPU).2. Tensors are immutable. NumPy CompatibilityConversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:* TensorFlow operations automatically convert NumPy ndarrays to Tensors.* NumPy operations automatically convert Tensors to NumPy ndarrays.Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them.These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
###Code
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
###Output
_____no_output_____
###Markdown
GPU accelerationMany TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
###Code
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
###Output
_____no_output_____
###Markdown
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementThe term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
###Code
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
###Output
_____no_output_____
###Markdown
DatasetsThis section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:* Creating a `Dataset`.* Iteration over a `Dataset` with eager execution enabled.We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create an `tf.data.Iterator` object.As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled. Create a source `Dataset`Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasetsreading_input_data) for more information.
###Code
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
###Output
_____no_output_____
###Markdown
Apply transformationsUse the transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
###Code
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
###Output
_____no_output_____
###Markdown
IterateWhen eager execution is enabled `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need for calls to `Dataset.make_one_shot_iterator()` or `get_next()` calls.
###Code
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution basics Run in Google Colab View source on GitHub This is an introductory tutorial for using TensorFlow. It will cover:* Importing required packages* Creating and using Tensors* Using GPU acceleration* Datasets Import TensorFlowTo get started, import the `tensorflow` module and enable eager execution.Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
###Code
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
###Output
_____no_output_____
###Markdown
Each Tensor has a shape and a datatype
###Code
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
###Output
_____no_output_____
###Markdown
The most obvious differences between NumPy arrays and TensorFlow Tensors are:1. Tensors can be backed by accelerator memory (like GPU, TPU).2. Tensors are immutable. NumPy CompatibilityConversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:* TensorFlow operations automatically convert NumPy ndarrays to Tensors.* NumPy operations automatically convert Tensors to NumPy ndarrays.Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them.These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
###Code
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
###Output
_____no_output_____
###Markdown
GPU accelerationMany TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
###Code
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
###Output
_____no_output_____
###Markdown
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementThe term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
###Code
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
###Output
_____no_output_____
###Markdown
DatasetsThis section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:* Creating a `Dataset`.* Iteration over a `Dataset` with eager execution enabled.We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create an `tf.data.Iterator` object.As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled. Create a source `Dataset`Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasetsreading_input_data) for more information.
###Code
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
###Output
_____no_output_____
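###Markdown
A brief hedged sketch (not in the original text) of the difference between the two factory functions mentioned above: `Dataset.from_tensors` wraps the input as a single element, while `Dataset.from_tensor_slices` slices it along the first dimension.
###Code
# Hedged example comparing the two factories (assumes eager execution, as in this notebook).
ds_single = tf.data.Dataset.from_tensors([1, 2, 3])        # one element: the whole vector
ds_sliced = tf.data.Dataset.from_tensor_slices([1, 2, 3])  # three scalar elements
for elem in ds_single:
    print("from_tensors element:", elem.numpy())
for elem in ds_sliced:
    print("from_tensor_slices element:", elem.numpy())
###Output
_____no_output_____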
###Markdown
Apply transformationsUse the transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
###Code
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
###Output
_____no_output_____
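###Markdown
As a further hedged sketch (an addition, not from the original tutorial), other `Dataset` transformations such as `filter` and `repeat` chain in exactly the same way:
###Code
# Hedged example: keep only even squares, repeat the dataset twice, then batch.
ds_even = (tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
           .map(tf.square)
           .filter(lambda v: tf.equal(v % 2, 0))
           .repeat(2)
           .batch(2))
for batch in ds_even:
    print(batch)
###Output
_____no_output_____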
###Markdown
IterateWhen eager execution is enabled `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need to call `Dataset.make_one_shot_iterator()` or `get_next()`.
###Code
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
###Output
_____no_output_____
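###Markdown
As an additional hedged sketch (assuming eager execution, as above), each element yielded by the loops is itself an eager `Tensor`, so `.numpy()` can be used to recover plain NumPy values:
###Code
# Hedged example: convert each batch to a NumPy array while iterating.
for batch in ds_tensors:
    print(batch.numpy())
###Output
_____no_output_____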
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution basics Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This is an introductory tutorial for using TensorFlow. It will cover:* Importing required packages* Creating and using Tensors* Using GPU acceleration* Datasets Import TensorFlowTo get started, import the `tensorflow` module and enable eager execution.Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
###Code
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
###Output
_____no_output_____
###Markdown
Each Tensor has a shape and a datatype
###Code
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
###Output
_____no_output_____
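###Markdown
A further hedged sketch (not part of the original text): the dtype can be changed explicitly with `tf.cast`, and the shape inspected at runtime with `tf.shape`.
###Code
# Hedged example: explicit dtype conversion and runtime shape inspection.
x_int = tf.constant([[1, 2, 3]])
x_float = tf.cast(x_int, tf.float32)
print(x_float.dtype)       # <dtype: 'float32'>
print(tf.shape(x_float))   # runtime shape as a Tensor, here [1 3]
###Output
_____no_output_____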
###Markdown
The most obvious differences between NumPy arrays and TensorFlow Tensors are:1. Tensors can be backed by accelerator memory (like GPU, TPU).2. Tensors are immutable. NumPy CompatibilityConversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:* TensorFlow operations automatically convert NumPy ndarrays to Tensors.* NumPy operations automatically convert Tensors to NumPy ndarrays.Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them.These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
###Code
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
###Output
_____no_output_____
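###Markdown
A small hedged sketch (an addition) illustrating the immutability point above: unlike a NumPy array, a `Tensor` does not support in-place item assignment.
###Code
# Hedged example: NumPy arrays are mutable, Tensors are not.
arr = np.ones([2, 2])
arr[0, 0] = 5.0               # fine: ndarrays are mutable
t = tf.ones([2, 2])
try:
    t[0, 0] = 5.0             # raises TypeError: Tensors are immutable
except TypeError as err:
    print("Tensor assignment failed as expected:", err)
###Output
_____no_output_____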
###Markdown
GPU accelerationMany TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
###Code
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.config.list_physical_devices('GPU'))
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
###Output
_____no_output_____
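###Markdown
A small hedged sketch (an addition, not from the original text): `tf.test.gpu_device_name` reports the name of a GPU device when one is available, and an empty string otherwise.
###Code
# Hedged example: query the GPU device name directly.
gpu_name = tf.test.gpu_device_name()
print("GPU device name:", gpu_name if gpu_name else "(none found)")
###Output
_____no_output_____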
###Markdown
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementThe term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation on, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
###Code
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.config.list_physical_devices('GPU'):
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
###Output
_____no_output_____
###Markdown
DatasetsThis section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:* Creating a `Dataset`.* Iteration over a `Dataset` with eager execution enabled.We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create a `tf.data.Iterator` object.As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled. Create a source `Dataset`Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets#reading_input_data) for more information.
###Code
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
###Output
_____no_output_____
###Markdown
Apply transformationsUse the transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
###Code
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
###Output
_____no_output_____
###Markdown
IterateWhen eager execution is enabled `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need to call `Dataset.make_one_shot_iterator()` or `get_next()`.
###Code
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution basics Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This is an introductory tutorial for using TensorFlow. It will cover:* Importing required packages* Creating and using Tensors* Using GPU acceleration* Datasets Import TensorFlowTo get started, import the `tensorflow` module and enable eager execution.Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
###Code
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
###Output
_____no_output_____
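###Markdown
A hedged sketch (an assumption layered on the text above): like NumPy, these operations broadcast operands with compatible shapes.
###Code
# Hedged example: a (1, 3) tensor added to a (2, 1) tensor broadcasts to shape (2, 3).
a = tf.constant([[1, 2, 3]])
b = tf.constant([[10], [20]])
print(tf.add(a, b))
###Output
_____no_output_____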
###Markdown
Each Tensor has a shape and a datatype
###Code
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
###Output
_____no_output_____
###Markdown
The most obvious differences between NumPy arrays and TensorFlow Tensors are:1. Tensors can be backed by accelerator memory (like GPU, TPU).2. Tensors are immutable. NumPy CompatibilityConversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:* TensorFlow operations automatically convert NumPy ndarrays to Tensors.* NumPy operations automatically convert Tensors to NumPy ndarrays.Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them.These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
###Code
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
###Output
_____no_output_____
###Markdown
GPU accelerationMany TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
###Code
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
###Output
_____no_output_____
###Markdown
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementThe term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation on, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
###Code
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
###Output
_____no_output_____
###Markdown
DatasetsThis section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:* Creating a `Dataset`.* Iteration over a `Dataset` with eager execution enabled.We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create a `tf.data.Iterator` object.As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled. Create a source `Dataset`Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets#reading_input_data) for more information.
###Code
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
###Output
_____no_output_____
###Markdown
Apply transformationsUse the transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
###Code
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
###Output
_____no_output_____
###Markdown
IterateWhen eager execution is enabled `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need to call `Dataset.make_one_shot_iterator()` or `get_next()`.
###Code
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution basics Run in Google Colab View source on GitHub This is an introductory tutorial for using TensorFlow. It will cover:* Importing required packages* Creating and using Tensors* Using GPU acceleration* Datasets Import TensorFlowTo get started, import the `tensorflow` module and enable eager execution.Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
###Code
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
###Output
_____no_output_____
###Markdown
Each Tensor has a shape and a datatype
###Code
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
###Output
_____no_output_____
###Markdown
The most obvious differences between NumPy arrays and TensorFlow Tensors are:1. Tensors can be backed by accelerator memory (like GPU, TPU).2. Tensors are immutable. NumPy CompatibilityConversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:* TensorFlow operations automatically convert NumPy ndarrays to Tensors.* NumPy operations automatically convert Tensors to NumPy ndarrays.Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them.These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
###Code
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
###Output
_____no_output_____
###Markdown
GPU accelerationMany TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
###Code
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
###Output
_____no_output_____
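###Markdown
As a hedged alternative sketch (assuming TF 2.x is installed, as the `%tensorflow_version 2.x` cell above suggests), the devices visible to TensorFlow can also be listed explicitly:
###Code
# Hedged example: enumerate physical devices instead of only asking for GPU availability.
print("Physical devices:", tf.config.list_physical_devices())
print("GPUs:", tf.config.list_physical_devices('GPU'))
###Output
_____no_output_____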
###Markdown
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementThe term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation on, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
###Code
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
###Output
_____no_output_____
###Markdown
DatasetsThis section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:* Creating a `Dataset`.* Iteration over a `Dataset` with eager execution enabled.We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create a `tf.data.Iterator` object.As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled. Create a source `Dataset`Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets#reading_input_data) for more information.
###Code
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
###Output
_____no_output_____
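###Markdown
As a hedged aside (assuming the small file created above), line-oriented datasets such as `TextLineDataset` are often combined with `skip` to drop a header row:
###Code
# Hedged example: skip the first line of the file created above.
ds_no_header = tf.data.TextLineDataset(filename).skip(1)
for line in ds_no_header:
    print(line.numpy())
###Output
_____no_output_____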
###Markdown
Apply transformationsUse the transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
###Code
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
###Output
_____no_output_____
###Markdown
IterateWhen eager execution is enabled `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need to call `Dataset.make_one_shot_iterator()` or `get_next()`.
###Code
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution basics Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This is an introductory tutorial for using TensorFlow. It will cover:* Importing required packages* Creating and using Tensors* Using GPU acceleration* Datasets Import TensorFlowTo get started, import the `tensorflow` module and enable eager execution.Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
###Code
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
###Output
_____no_output_____
###Markdown
Each Tensor has a shape and a datatype
###Code
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
###Output
_____no_output_____
###Markdown
The most obvious differences between NumPy arrays and TensorFlow Tensors are:1. Tensors can be backed by accelerator memory (like GPU, TPU).2. Tensors are immutable. NumPy CompatibilityConversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:* TensorFlow operations automatically convert NumPy ndarrays to Tensors.* NumPy operations automatically convert Tensors to NumPy ndarrays.Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them.These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
###Code
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
###Output
_____no_output_____
###Markdown
GPU accelerationMany TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
###Code
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
###Output
_____no_output_____
###Markdown
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementThe term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
###Code
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
###Output
_____no_output_____
###Markdown
DatasetsThis section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:* Creating a `Dataset`.* Iteration over a `Dataset` with eager execution enabled.We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create an `tf.data.Iterator` object.As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled. Create a source `Dataset`Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetfrom_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasetsreading_input_data) for more information.
###Code
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a small text file with a few lines
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
###Output
_____no_output_____
###Markdown
Apply transformationsUse the transformations functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetmap), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetbatch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Datasetshuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
###Code
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
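# Note: a shuffle buffer of 2 gives only a very weak shuffle; a buffer size closer
# to the number of elements shuffles more thoroughly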
ds_file = ds_file.batch(2)
###Output
_____no_output_____
###Markdown
IterateWhen eager execution is enabled `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need for calls to `Dataset.make_one_shot_iterator()` or `get_next()` calls.
###Code
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
###Output
_____no_output_____
FEC/FEC_Creer_un_dashboard_PowerBI.ipynb
###Markdown
FEC - Create a PowerBI dashboard **Tags:** fec powerbi dataviz analytics **Author:** [Alexandre STEVENS](https://www.linkedin.com/in/alexandrestevenspbix/) This notebook turns your company's FEC files into a Microsoft Power BI dashboard. The FEC (fichier des écritures comptables) is a standard export of accounting software and, since 2014, a legal requirement in France for filing accounts electronically with the tax authorities. -Installation time = 5 minutes -Installation guide = [Notion page](https://www.notion.so/Mode-d-emploi-FECthis-7fc142f2d7ae4a3889fbca28a83acba2/) -Level = Easy Input Libraries
###Code
import pandas as pd
from datetime import datetime, timedelta
import os
import re
import naas
###Output
_____no_output_____
###Markdown
URL link to the company logo
###Code
LOGO = "https://landen.imgix.net/e5hx7wyzf53f/assets/26u7xg7u.png?w=400"
COLOR_1 = None
COLOR_2 = None
###Output
_____no_output_____
###Markdown
Read the FEC files
###Code
def get_all_fec(file_regex,
sep=",",
decimal=".",
encoding=None,
header=None,
usecols=None,
names=None,
dtype=None):
# Create df init
df = pd.DataFrame()
# Get all files in INPUT_FOLDER
files = [f for f in os.listdir() if re.search(file_regex, f)]
if len(files) == 0:
print(f"Aucun fichier FEC ne correspond au standard de nomination")
else:
for file in files:
# Open file and create df
print(file)
tmp_df = pd.read_csv(file,
sep=sep,
decimal=decimal,
encoding=encoding,
header=header,
usecols=usecols,
names=names,
dtype=dtype)
# Add filename to df
tmp_df['NOM_FICHIER'] = file
# Concat df
df = pd.concat([df, tmp_df], axis=0, sort=False)
return df
file_regex = r"^\d{9}FEC\d{8}\.txt"
db_init = get_all_fec(file_regex,
sep='\t',
decimal=',',
encoding='ISO-8859-1',
header=0)
db_init
###Output
_____no_output_____
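###Markdown
The regex above encodes the official FEC naming convention, `<SIREN>FEC<YYYYMMDD>.txt` (the 9-digit SIREN of the company followed by the closing date of the fiscal year). A quick illustration with a made-up filename:
###Code
# Hypothetical filename, for illustration only
import re
print(bool(re.search(r"^\d{9}FEC\d{8}\.txt", "123456789FEC20211231.txt"))) # True
###Output
_____no_output_____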
###Markdown
Model FEC database Data cleaning
###Code
db_clean = db_init.copy()
# Selection des colonnes à conserver
to_select = ['NOM_FICHIER',
'EcritureDate',
'CompteNum',
'CompteLib',
'EcritureLib',
'Debit',
'Credit']
db_clean = db_clean[to_select]
# Renommage des colonnes
to_rename = {'EcritureDate': "DATE",
'CompteNum': "COMPTE_NUM",
'CompteLib': "RUBRIQUE_N3",
'EcritureLib': "RUBRIQUE_N4",
'Debit': "DEBIT",
'Credit': "CREDIT"}
db_clean = db_clean.rename(columns=to_rename)
#suppression des espaces colonne "COMPTE_NUM"
db_clean["COMPTE_NUM"] = db_clean["COMPTE_NUM"].astype(str).str.strip()
# Mise au format des colonnes
db_clean = db_clean.astype({"NOM_FICHIER" : str,
"DATE" : str,
"COMPTE_NUM" : str,
"RUBRIQUE_N3" : str,
"RUBRIQUE_N4" : str,
"DEBIT" : float,
"CREDIT" : float,
})
# Mise au format colonne date
db_clean["DATE"] = pd.to_datetime(db_clean["DATE"])
db_clean.head(5)
###Output
_____no_output_____
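###Markdown
Optional sanity check (not in the original notebook): in a balanced FEC export total debits equal total credits, so the net difference per file should be close to zero. The `check_balance` and `ECART` names below are illustrative.
###Code
# Net debit/credit balance per source file - should be ~0 for a valid FEC
check_balance = db_clean.groupby("NOM_FICHIER", as_index=False).agg({"DEBIT": "sum", "CREDIT": "sum"})
check_balance["ECART"] = check_balance["DEBIT"] - check_balance["CREDIT"]
check_balance
###Output
_____no_output_____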
###Markdown
Enriching the database
###Code
db_enr = db_clean.copy()
# Ajout colonnes entité et période
db_enr['ENTITY'] = db_enr['NOM_FICHIER'].str[:9]
db_enr['PERIOD'] = db_enr['NOM_FICHIER'].str[12:-6]
db_enr['PERIOD'] = pd.to_datetime(db_enr['PERIOD'], format='%Y%m')
db_enr['PERIOD'] = db_enr['PERIOD'].dt.strftime("%Y-%m")
# Ajout colonne month et month_index
db_enr['MONTH'] = db_enr['DATE'].dt.strftime("%b")
db_enr['MONTH_INDEX'] = db_enr['DATE'].dt.month
# Calcul de la valeur debit-crédit
db_enr["VALUE"] = (db_enr["DEBIT"]) - (db_enr["CREDIT"])
db_enr.head(5)
# Calcul résultat pour équilibrage bilan dans capitaux propre
db_rn = db_enr.copy()
db_rn = db_rn[db_rn['COMPTE_NUM'].str.contains(r'^6|^7')]
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
db_rn = db_rn.groupby(to_group, as_index=False).agg(to_agg)
db_rn ["COMPTE_NUM"] = "10999999"
db_rn ["RUBRIQUE_N3"] = "RESULTAT"
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N3',
'VALUE']
db_rn = db_rn[to_select]
db_rn
###Output
_____no_output_____
###Markdown
Aggregated FEC database with variations Aggregation by RUBRIQUE N3
###Code
# Calcul var v = création de dataset avec Period_comp pour merge
db_var = db_enr.copy()
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"COMPTE_NUM",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
db_var = db_var.groupby(to_group, as_index=False).agg(to_agg)
# Ajout des résultats au dataframe
db_var = pd.concat([db_var, db_rn], axis=0, sort=False)
# Creation colonne COMP
db_var['PERIOD_COMP'] = (db_var['PERIOD'].str[:4].astype(int) - 1).astype(str) + db_var['PERIOD'].str[-3:]
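# e.g. PERIOD "2023-04" -> PERIOD_COMP "2022-04" (same month, previous year); values shown for illustration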
db_var
###Output
_____no_output_____
###Markdown
Building the comparison base
###Code
db_comp = db_var.copy()
# Suppression de la colonne période
db_comp = db_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
db_comp = db_comp.rename(columns=to_rename)
db_comp.head(5)
###Output
_____no_output_____
###Markdown
Joining the two tables and computing the variations
###Code
# Jointure entre les 2 tables
join_on = ["ENTITY",
"PERIOD_COMP",
"COMPTE_NUM",
"RUBRIQUE_N3"]
db_var = pd.merge(db_var, db_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
db_var["VARV"] = db_var["VALUE"] - db_var["VALUE_N-1"]
#Création colonne Var P (%)
db_var["VARP"] = db_var["VARV"] / db_var["VALUE_N-1"]
db_var
db_cat = db_var.copy()
# Calcul des rubriques niveau 2
def rubrique_N2(row):
numero_compte = str(row["COMPTE_NUM"])
value = float(row["VALUE"])
# BILAN SIMPLIFIE type IFRS NIV2
to_check = ["^10", "^11", "^12", "^13", "^14"]
if any (re.search(x,numero_compte) for x in to_check):
return "CAPITAUX_PROPRES"
to_check = ["^15", "^16", "^17", "^18", "^19"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FINANCIERES"
to_check = ["^20", "^21", "^22", "^23", "^25", "^26", "^27", "^28", "^29"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMMOBILISATIONS"
to_check = ["^31", "^32", "^33", "^34", "^35", "^36", "^37", "^38", "^39"]
if any (re.search(x,numero_compte) for x in to_check):
return "STOCKS"
to_check = ["^40"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FOURNISSEURS"
to_check = ["^41"]
if any (re.search(x,numero_compte) for x in to_check):
return "CREANCES_CLIENTS"
to_check = ["^42", "^43", "^44", "^45", "^46", "^47", "^48", "^49"]
if any (re.search(x,numero_compte) for x in to_check):
if value > 0:
return "AUTRES_CREANCES"
else:
return "AUTRES_DETTES"
to_check = ["^50", "^51", "^52", "^53", "^54", "^58", "^59"]
if any (re.search(x,numero_compte) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT DETAILLE NIV2
to_check = ["^60"]
if any (re.search(x,numero_compte) for x in to_check):
return "ACHATS"
to_check= ["^61", "^62"]
if any (re.search(x,numero_compte) for x in to_check):
return "SERVICES_EXTERIEURS"
to_check = ["^63"]
if any (re.search(x,numero_compte) for x in to_check):
return "TAXES"
to_check = ["^64"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_PERSONNEL"
to_check = ["^65"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_CHARGES"
to_check = ["^66"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["^67"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["^68", "^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "AMORTISSEMENTS"
to_check = ["^69"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMPOT"
to_check = ["^70"]
if any (re.search(x,numero_compte) for x in to_check):
return "VENTES"
to_check = ["^71", "^72"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUCTION_STOCKEE_IMMOBILISEE"
to_check = ["^74"]
if any (re.search(x,numero_compte) for x in to_check):
return "SUBVENTIONS_D'EXPL."
to_check = ["^75", "^791"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_PRODUITS_GESTION_COURANTE"
to_check = ["^76", "^796"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["^77", "^797"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
to_check = ["^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "REPRISES_AMORT._DEP."
to_check = ["^8"]
if any (re.search(x,numero_compte) for x in to_check):
return "COMPTES_SPECIAUX"
# Calcul des rubriques niveau 1
def rubrique_N1(row):
categorisation = row.RUBRIQUE_N2
# BILAN SIMPLIFIE type IFRS N1
to_check = ["CAPITAUX_PROPRES", "DETTES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_NON_COURANT"
to_check = ["IMMOBILISATIONS"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_NON_COURANT"
to_check = ["STOCKS", "CREANCES_CLIENTS", "AUTRES_CREANCES"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_COURANT"
to_check = ["DETTES_FOURNISSEURS", "AUTRES_DETTES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_COURANT"
to_check = ["DISPONIBILITES"]
if any(re.search(x, categorisation) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT SIMPLIFIE N1
to_check = ["ACHATS"]
if any(re.search(x, categorisation) for x in to_check):
return "COUTS_DIRECTS"
to_check = ["SERVICES_EXTERIEURS", "TAXES", "CHARGES_PERSONNEL", "AUTRES_CHARGES", "AMORTISSEMENTS"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXPLOITATION"
to_check = ["CHARGES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["CHARGES_EXCEPTIONNELLES", "IMPOT"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["VENTES", "PRODUCTION_STOCKEE_IMMOBILISEE"]
if any(re.search(x, categorisation) for x in to_check):
return "CHIFFRE_D'AFFAIRES"
to_check = ["SUBVENTIONS_D'EXPL.", "AUTRES_PRODUITS_GESTION_COURANTE", "REPRISES_AMORT._DEP."]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXPLOITATION"
to_check = ["PRODUITS_FINANCIERS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
# Calcul des rubriques niveau 0
def rubrique_N0(row):
masse = row.RUBRIQUE_N1
to_check = ["ACTIF_NON_COURANT", "ACTIF_COURANT", "DISPONIBILITES"]
if any(re.search(x, masse) for x in to_check):
return "ACTIF"
to_check = ["PASSIF_NON_COURANT", "PASSIF_COURANT"]
if any(re.search(x, masse) for x in to_check):
return "PASSIF"
to_check = ["COUTS_DIRECTS", "CHARGES_EXPLOITATION", "CHARGES_FINANCIERES", "CHARGES_EXCEPTIONNELLES"]
if any(re.search(x, masse) for x in to_check):
return "CHARGES"
to_check = ["CHIFFRE_D'AFFAIRES", "PRODUITS_EXPLOITATION", "PRODUITS_FINANCIERS", "PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, masse) for x in to_check):
return "PRODUITS"
# Mapping des rubriques
db_cat["RUBRIQUE_N2"] = db_cat.apply(lambda row: rubrique_N2(row), axis=1)
db_cat["RUBRIQUE_N1"] = db_cat.apply(lambda row: rubrique_N1(row), axis=1)
db_cat["RUBRIQUE_N0"] = db_cat.apply(lambda row: rubrique_N0(row), axis=1)
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N0',
'RUBRIQUE_N1',
'RUBRIQUE_N2',
'RUBRIQUE_N3',
'VALUE',
'VALUE_N-1',
'VARV',
'VARP']
db_cat = db_cat[to_select]
db_cat
###Output
_____no_output_____
###Markdown
Data models for the charts REF_ENTITE
###Code
# Creation du dataset ref_entite
dataset_entite = db_cat.copy()
# Regrouper par entite
to_group = ["ENTITY"]
to_agg = {"ENTITY": "max"}
dataset_entite = dataset_entite.groupby(to_group, as_index=False).agg(to_agg)
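# Equivalent to dataset_entite[["ENTITY"]].drop_duplicates(): one row per entity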
# Affichage du modèle de donnée
dataset_entite
###Output
_____no_output_____
###Markdown
REF_SCENARIO
###Code
# Creation du dataset ref_scenario
dataset_scenario = db_cat.copy()
# Regrouper par entite
to_group = ["PERIOD"]
to_agg = {"PERIOD": "max"}
dataset_scenario = dataset_scenario.groupby(to_group, as_index=False).agg(to_agg)
# Affichage du modèle de donnée
dataset_scenario
###Output
_____no_output_____
###Markdown
KPIS
###Code
# Creation du dataset KPIS (CA, MARGE, EBE, BFR, CC, DF)
dataset_kpis = db_cat.copy()
# KPIs CA
dataset_kpis_ca = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["CHIFFRE_D'AFFAIRES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ca = dataset_kpis_ca.groupby(to_group, as_index=False).agg(to_agg)
# Passage value positif
dataset_kpis_ca["VALUE"] = dataset_kpis_ca["VALUE"]*-1
# COUTS_DIRECTS
dataset_kpis_ha = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["COUTS_DIRECTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ha = dataset_kpis_ha.groupby(to_group, as_index=False).agg(to_agg)
# Passage value négatif
dataset_kpis_ha["VALUE"] = dataset_kpis_ha["VALUE"]*-1
# KPIs MARGE BRUTE (CA - COUTS DIRECTS)
dataset_kpis_mb = dataset_kpis_ca.copy()
dataset_kpis_mb = pd.concat([dataset_kpis_mb, dataset_kpis_ha], axis=0, sort=False)
to_group = ["ENTITY",
"PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_mb = dataset_kpis_mb.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_mb["RUBRIQUE_N1"] = "MARGE"
dataset_kpis_mb = dataset_kpis_mb[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# CHARGES EXTERNES
dataset_kpis_ce = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SERVICES_EXTERIEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ce = dataset_kpis_ce.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ce["VALUE"] = dataset_kpis_ce["VALUE"]*-1
# IMPOTS
dataset_kpis_ip = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["TAXES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ip = dataset_kpis_ip.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ip["VALUE"] = dataset_kpis_ip["VALUE"]*-1
# CHARGES DE PERSONNEL
dataset_kpis_cp = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CHARGES_PERSONNEL"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cp = dataset_kpis_cp.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_cp["VALUE"] = dataset_kpis_cp["VALUE"]*-1
# AUTRES_CHARGES
dataset_kpis_ac = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["AUTRES_CHARGES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ac = dataset_kpis_ac.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ac["VALUE"] = dataset_kpis_ac["VALUE"]*-1
# SUBVENTIONS D'EXPLOITATION
# Kept in its own variable (dataset_kpis_sub) so that dataset_kpis_ac still holds AUTRES_CHARGES for the EBE below
dataset_kpis_sub = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SUBVENTIONS_D'EXPL."])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_sub = dataset_kpis_sub.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign so that subsidies (credit balances, negative VALUE) increase the EBE
dataset_kpis_sub["VALUE"] = dataset_kpis_sub["VALUE"]*-1
# KPIs EBE = MARGE - CHARGES EXTERNES - TAXES - CHARGES PERSONNEL - AUTRES CHARGES + SUBVENTION D'EXPLOITATION
dataset_kpis_ebe = dataset_kpis_mb.copy()
dataset_kpis_ebe = pd.concat([dataset_kpis_ebe, dataset_kpis_ce, dataset_kpis_ip, dataset_kpis_cp, dataset_kpis_ac, dataset_kpis_sub], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ebe = dataset_kpis_ebe.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_ebe["RUBRIQUE_N1"] = "EBE"
dataset_kpis_ebe = dataset_kpis_ebe[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# KPIs CREANCES CLIENTS
dataset_kpis_cc = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CREANCES_CLIENTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cc = dataset_kpis_cc.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_cc = dataset_kpis_cc.rename(columns=to_rename)
# KPIs STOCKS
dataset_kpis_st = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["STOCKS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_st = dataset_kpis_st.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_st = dataset_kpis_st.rename(columns=to_rename)
# KPIs DETTES FOURNISSEURS
dataset_kpis_df = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["DETTES_FOURNISSEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_df = dataset_kpis_df.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_df = dataset_kpis_df.rename(columns=to_rename)
# Passage value positif
dataset_kpis_df["VALUE"] = dataset_kpis_df["VALUE"].abs()
# KPIs BFR = CREANCES + STOCKS - DETTES FOURNISSEURS
dataset_kpis_bfr_df = dataset_kpis_df.copy()
# Passage dette fournisseur value négatif
dataset_kpis_bfr_df["VALUE"] = dataset_kpis_bfr_df["VALUE"]*-1
dataset_kpis_bfr_df = pd.concat([dataset_kpis_cc, dataset_kpis_st, dataset_kpis_bfr_df], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_bfr_df = dataset_kpis_bfr_df.groupby(to_group, as_index=False).agg(to_agg)
# Creation colonne Rubrique_N1 = BFR
dataset_kpis_bfr_df["RUBRIQUE_N1"] = "BFR"
# Reorganisation colonne
dataset_kpis_bfr_df = dataset_kpis_bfr_df[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# Creation du dataset final
dataset_kpis_final = pd.concat([dataset_kpis_ca, dataset_kpis_mb, dataset_kpis_ebe, dataset_kpis_cc, dataset_kpis_st, dataset_kpis_df, dataset_kpis_bfr_df], axis=0, sort=False)
# Creation colonne COMP
dataset_kpis_final['PERIOD_COMP'] = (dataset_kpis_final['PERIOD'].str[:4].astype(int) - 1).astype(str) + dataset_kpis_final['PERIOD'].str[-3:]
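# e.g. PERIOD "2023-04" -> PERIOD_COMP "2022-04" (same month, previous year); values shown for illustration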
dataset_kpis_final
# creation base comparable pour dataset_kpis
dataset_kpis_final_comp = dataset_kpis_final.copy()
# Suppression de la colonne période
dataset_kpis_final_comp = dataset_kpis_final_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
dataset_kpis_final_comp = dataset_kpis_final_comp.rename(columns=to_rename)
dataset_kpis_final_comp
# Jointure entre les 2 tables dataset_kpis_final et dataset_kpis_vf
join_on = ["ENTITY",
"PERIOD_COMP",
"RUBRIQUE_N1"]
dataset_kpis_final = pd.merge(dataset_kpis_final, dataset_kpis_final_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
dataset_kpis_final["VARV"] = dataset_kpis_final["VALUE"] - dataset_kpis_final["VALUE_N-1"]
#Création colonne Var P (%)
dataset_kpis_final["VARP"] = dataset_kpis_final["VARV"] / dataset_kpis_final["VALUE_N-1"]
dataset_kpis_final
###Output
_____no_output_____
###Markdown
EVOLUTION CA
###Code
# Creation du dataset evol_ca
dataset_evol_ca = db_enr.copy()
# Filtre COMPTE_NUM = Chiffre d'Affaire (RUBRIQUE N1)
dataset_evol_ca = dataset_evol_ca[dataset_evol_ca['COMPTE_NUM'].str.contains(r'^70|^71|^72')]
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
dataset_evol_ca = dataset_evol_ca.groupby(to_group, as_index=False).agg(to_agg)
dataset_evol_ca["VALUE"] = dataset_evol_ca["VALUE"].abs()
# Calcul de la somme cumulée
dataset_evol_ca = dataset_evol_ca.sort_values(by=["ENTITY", 'PERIOD', 'MONTH_INDEX']).reset_index(drop=True)
dataset_evol_ca['MONTH_INDEX'] = pd.to_datetime(dataset_evol_ca['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_evol_ca['VALUE_CUM'] = dataset_evol_ca.groupby(["ENTITY", "PERIOD"], as_index=True).agg({"VALUE": "cumsum"})
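# VALUE_CUM = cumulative (year-to-date) revenue per entity and period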
# Affichage du modèle de donnée
dataset_evol_ca
###Output
_____no_output_____
###Markdown
CHARGES
###Code
#Creation du dataset charges
dataset_charges = db_cat.copy()
# Filtre RUBRIQUE_N0 = CHARGES
dataset_charges = dataset_charges[dataset_charges["RUBRIQUE_N0"] == "CHARGES"]
# Mettre en valeur positive VALUE
dataset_charges["VALUE"] = dataset_charges["VALUE"].abs()
# Affichage du modèle de donnée
dataset_charges
###Output
_____no_output_____
###Markdown
POSITIONS TRESORERIE
###Code
# Creation du dataset trésorerie
dataset_treso = db_enr.copy()
# Filtre RUBRIQUE_N1 = TRESORERIE
dataset_treso = dataset_treso[dataset_treso['COMPTE_NUM'].str.contains(r'^5')].reset_index(drop=True)
# Cash in / Cash out ?
dataset_treso.loc[dataset_treso.VALUE > 0, "CASH_IN"] = dataset_treso.VALUE
dataset_treso.loc[dataset_treso.VALUE < 0, "CASH_OUT"] = dataset_treso.VALUE
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX"]
to_agg = {"VALUE": "sum",
"CASH_IN": "sum",
"CASH_OUT": "sum"}
dataset_treso = dataset_treso.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Cumul par période
dataset_treso = dataset_treso.sort_values(["ENTITY", "PERIOD", "MONTH_INDEX"])
dataset_treso['MONTH_INDEX'] = pd.to_datetime(dataset_treso['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_treso['VALUE_LINE'] = dataset_treso.groupby(["ENTITY", 'PERIOD'], as_index=True).agg({"VALUE": "cumsum"})
# Mettre en valeur positive CASH_OUT
dataset_treso["CASH_OUT"] = dataset_treso["CASH_OUT"].abs()
# Affichage du modèle de donnée
dataset_treso
###Output
_____no_output_____
###Markdown
BILAN
###Code
# Creation du dataset Bilan
dataset_bilan = db_cat.copy()
# Filtre RUBRIQUE_N0 = ACTIF & PASSIF
dataset_bilan = dataset_bilan[(dataset_bilan["RUBRIQUE_N0"].isin(["ACTIF", "PASSIF"]))]
# Regroupement R0/R1/R2
to_group = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_bilan = dataset_bilan.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Mettre en valeur positive VALUE
dataset_bilan["VALUE"] = dataset_bilan["VALUE"].abs()
# Selectionner les colonnes
to_select = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2",
"VALUE"]
dataset_bilan = dataset_bilan[to_select]
# Affichage du modèle de donnée
dataset_bilan
###Output
_____no_output_____
###Markdown
Output Saving the files as csv
###Code
def df_to_csv(df, filename):
# Sauvegarde en csv
df.to_csv(filename,
sep=";",
decimal=",",
index=False)
# Création du lien url
naas_link = naas.asset.add(filename)
# Création de la ligne
data = {
"OBJET": filename,
"URL": naas_link,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
return pd.DataFrame([data])
dataset_logo = {
"OBJET": "Logo",
"URL": LOGO,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
logo = pd.DataFrame([dataset_logo])
logo
import json
color = {"name":"Color",
"dataColors":[COLOR_1, COLOR_2]}
with open("color.json", "w") as write_file:
json.dump(color, write_file)
dataset_color = {
"OBJET": "Color",
"URL": naas.asset.add("color.json"),
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
pbi_color = pd.DataFrame([dataset_color])
pbi_color
entite = df_to_csv(dataset_entite, "dataset_entite.csv")
entite
scenario = df_to_csv(dataset_scenario, "dataset_scenario.csv")
scenario
kpis = df_to_csv(dataset_kpis_final, "dataset_kpis_final.csv")
kpis
evol_ca = df_to_csv(dataset_evol_ca, "dataset_evol_ca.csv")
evol_ca
charges = df_to_csv(dataset_charges, "dataset_charges.csv")
charges
treso = df_to_csv(dataset_treso, "dataset_treso.csv")
treso
bilan = df_to_csv(dataset_bilan, "dataset_bilan.csv")
bilan
###Output
_____no_output_____
###Markdown
Building the file to load into PowerBI
###Code
db_powerbi = pd.concat([logo, pbi_color, entite, scenario, kpis, evol_ca, charges, treso, bilan], axis=0)
db_powerbi
df_to_csv(db_powerbi, "powerbi.csv")
###Output
_____no_output_____
###Markdown
FEC - Create a PowerBI dashboard **Tags:** fec powerbi dataviz analytics finance **Author:** [Alexandre STEVENS](https://www.linkedin.com/in/alexandrestevenspbix/) This notebook turns your company's FEC files into a Microsoft Power BI dashboard. The FEC (fichier des écritures comptables) is a standard export of accounting software and, since 2014, a legal requirement in France for filing accounts electronically with the tax authorities. -Installation time = 5 minutes -Installation guide = [Notion page](https://www.notion.so/Mode-d-emploi-FECthis-7fc142f2d7ae4a3889fbca28a83acba2/) -Level = Easy Input Libraries
###Code
import pandas as pd
from datetime import datetime, timedelta
import os
import re
import naas
###Output
_____no_output_____
###Markdown
URL link to the company logo
###Code
LOGO = "https://landen.imgix.net/e5hx7wyzf53f/assets/26u7xg7u.png?w=400"
COLOR_1 = None
COLOR_2 = None
###Output
_____no_output_____
###Markdown
Read the FEC files
###Code
def get_all_fec(file_regex,
sep=",",
decimal=".",
encoding=None,
header=None,
usecols=None,
names=None,
dtype=None):
# Create df init
df = pd.DataFrame()
# Get all files in INPUT_FOLDER
files = [f for f in os.listdir() if re.search(file_regex, f)]
if len(files) == 0:
print(f"Aucun fichier FEC ne correspond au standard de nomination")
else:
for file in files:
# Open file and create df
print(file)
tmp_df = pd.read_csv(file,
sep=sep,
decimal=decimal,
encoding=encoding,
header=header,
usecols=usecols,
names=names,
dtype=dtype)
# Add filename to df
tmp_df['NOM_FICHIER'] = file
# Concat df
df = pd.concat([df, tmp_df], axis=0, sort=False)
return df
file_regex = r"^\d{9}FEC\d{8}\.txt"
db_init = get_all_fec(file_regex,
sep='\t',
decimal=',',
encoding='ISO-8859-1',
header=0)
db_init
###Output
_____no_output_____
###Markdown
Model FEC database Data cleaning
###Code
db_clean = db_init.copy()
# Selection des colonnes à conserver
to_select = ['NOM_FICHIER',
'EcritureDate',
'CompteNum',
'CompteLib',
'EcritureLib',
'Debit',
'Credit']
db_clean = db_clean[to_select]
# Renommage des colonnes
to_rename = {'EcritureDate': "DATE",
'CompteNum': "COMPTE_NUM",
'CompteLib': "RUBRIQUE_N3",
'EcritureLib': "RUBRIQUE_N4",
'Debit': "DEBIT",
'Credit': "CREDIT"}
db_clean = db_clean.rename(columns=to_rename)
#suppression des espaces colonne "COMPTE_NUM"
db_clean["COMPTE_NUM"] = db_clean["COMPTE_NUM"].astype(str).str.strip()
# Mise au format des colonnes
db_clean = db_clean.astype({"NOM_FICHIER" : str,
"DATE" : str,
"COMPTE_NUM" : str,
"RUBRIQUE_N3" : str,
"RUBRIQUE_N4" : str,
"DEBIT" : float,
"CREDIT" : float,
})
# Mise au format colonne date
db_clean["DATE"] = pd.to_datetime(db_clean["DATE"])
db_clean.head(5)
###Output
_____no_output_____
###Markdown
Enriching the database
###Code
db_enr = db_clean.copy()
# Ajout colonnes entité et période
db_enr['ENTITY'] = db_enr['NOM_FICHIER'].str[:9]
db_enr['PERIOD'] = db_enr['NOM_FICHIER'].str[12:-6]
db_enr['PERIOD'] = pd.to_datetime(db_enr['PERIOD'], format='%Y%m')
db_enr['PERIOD'] = db_enr['PERIOD'].dt.strftime("%Y-%m")
# Ajout colonne month et month_index
db_enr['MONTH'] = db_enr['DATE'].dt.strftime("%b")
db_enr['MONTH_INDEX'] = db_enr['DATE'].dt.month
# Calcul de la valeur debit-crédit
db_enr["VALUE"] = (db_enr["DEBIT"]) - (db_enr["CREDIT"])
db_enr.head(5)
# Calcul résultat pour équilibrage bilan dans capitaux propre
db_rn = db_enr.copy()
db_rn = db_rn[db_rn['COMPTE_NUM'].str.contains(r'^6|^7')]
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
db_rn = db_rn.groupby(to_group, as_index=False).agg(to_agg)
db_rn ["COMPTE_NUM"] = "10999999"
db_rn ["RUBRIQUE_N3"] = "RESULTAT"
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N3',
'VALUE']
db_rn = db_rn[to_select]
db_rn
###Output
_____no_output_____
###Markdown
Aggregated FEC database with variations Aggregation by RUBRIQUE N3
###Code
# Calcul var v = création de dataset avec Period_comp pour merge
db_var = db_enr.copy()
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"COMPTE_NUM",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
db_var = db_var.groupby(to_group, as_index=False).agg(to_agg)
# Ajout des résultats au dataframe
db_var = pd.concat([db_var, db_rn], axis=0, sort=False)
# Creation colonne COMP
db_var['PERIOD_COMP'] = (db_var['PERIOD'].str[:4].astype(int) - 1).astype(str) + db_var['PERIOD'].str[-3:]
db_var
###Output
_____no_output_____
###Markdown
Building the comparison base
###Code
db_comp = db_var.copy()
# Suppression de la colonne période
db_comp = db_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
db_comp = db_comp.rename(columns=to_rename)
db_comp.head(5)
###Output
_____no_output_____
###Markdown
Joining the two tables and computing the variations
###Code
# Jointure entre les 2 tables
join_on = ["ENTITY",
"PERIOD_COMP",
"COMPTE_NUM",
"RUBRIQUE_N3"]
db_var = pd.merge(db_var, db_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
db_var["VARV"] = db_var["VALUE"] - db_var["VALUE_N-1"]
#Création colonne Var P (%)
db_var["VARP"] = db_var["VARV"] / db_var["VALUE_N-1"]
db_var
db_cat = db_var.copy()
# Calcul des rubriques niveau 2
def rubrique_N2(row):
numero_compte = str(row["COMPTE_NUM"])
value = float(row["VALUE"])
# BILAN SIMPLIFIE type IFRS NIV2
to_check = ["^10", "^11", "^12", "^13", "^14"]
if any (re.search(x,numero_compte) for x in to_check):
return "CAPITAUX_PROPRES"
to_check = ["^15", "^16", "^17", "^18", "^19"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FINANCIERES"
to_check = ["^20", "^21", "^22", "^23", "^25", "^26", "^27", "^28", "^29"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMMOBILISATIONS"
to_check = ["^31", "^32", "^33", "^34", "^35", "^36", "^37", "^38", "^39"]
if any (re.search(x,numero_compte) for x in to_check):
return "STOCKS"
to_check = ["^40"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FOURNISSEURS"
to_check = ["^41"]
if any (re.search(x,numero_compte) for x in to_check):
return "CREANCES_CLIENTS"
to_check = ["^42", "^43", "^44", "^45", "^46", "^47", "^48", "^49"]
if any (re.search(x,numero_compte) for x in to_check):
if value > 0:
return "AUTRES_CREANCES"
else:
return "AUTRES_DETTES"
to_check = ["^50", "^51", "^52", "^53", "^54", "^58", "^59"]
if any (re.search(x,numero_compte) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT DETAILLE NIV2
to_check = ["^60"]
if any (re.search(x,numero_compte) for x in to_check):
return "ACHATS"
to_check= ["^61", "^62"]
if any (re.search(x,numero_compte) for x in to_check):
return "SERVICES_EXTERIEURS"
to_check = ["^63"]
if any (re.search(x,numero_compte) for x in to_check):
return "TAXES"
to_check = ["^64"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_PERSONNEL"
to_check = ["^65"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_CHARGES"
to_check = ["^66"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["^67"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["^68", "^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "AMORTISSEMENTS"
to_check = ["^69"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMPOT"
to_check = ["^70"]
if any (re.search(x,numero_compte) for x in to_check):
return "VENTES"
to_check = ["^71", "^72"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUCTION_STOCKEE_IMMOBILISEE"
to_check = ["^74"]
if any (re.search(x,numero_compte) for x in to_check):
return "SUBVENTIONS_D'EXPL."
to_check = ["^75", "^791"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_PRODUITS_GESTION_COURANTE"
to_check = ["^76", "^796"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["^77", "^797"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
to_check = ["^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "REPRISES_AMORT._DEP."
to_check = ["^8"]
if any (re.search(x,numero_compte) for x in to_check):
return "COMPTES_SPECIAUX"
# Calcul des rubriques niveau 1
def rubrique_N1(row):
categorisation = row.RUBRIQUE_N2
# BILAN SIMPLIFIE type IFRS N1
to_check = ["CAPITAUX_PROPRES", "DETTES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_NON_COURANT"
to_check = ["IMMOBILISATIONS"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_NON_COURANT"
to_check = ["STOCKS", "CREANCES_CLIENTS", "AUTRES_CREANCES"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_COURANT"
to_check = ["DETTES_FOURNISSEURS", "AUTRES_DETTES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_COURANT"
to_check = ["DISPONIBILITES"]
if any(re.search(x, categorisation) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT SIMPLIFIE N1
to_check = ["ACHATS"]
if any(re.search(x, categorisation) for x in to_check):
return "COUTS_DIRECTS"
to_check = ["SERVICES_EXTERIEURS", "TAXES", "CHARGES_PERSONNEL", "AUTRES_CHARGES", "AMORTISSEMENTS"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXPLOITATION"
to_check = ["CHARGES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["CHARGES_EXCEPTIONNELLES", "IMPOT"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["VENTES", "PRODUCTION_STOCKEE_IMMOBILISEE"]
if any(re.search(x, categorisation) for x in to_check):
return "CHIFFRE_D'AFFAIRES"
to_check = ["SUBVENTIONS_D'EXPL.", "AUTRES_PRODUITS_GESTION_COURANTE", "REPRISES_AMORT._DEP."]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXPLOITATION"
to_check = ["PRODUITS_FINANCIERS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
# Calcul des rubriques niveau 0
def rubrique_N0(row):
masse = row.RUBRIQUE_N1
to_check = ["ACTIF_NON_COURANT", "ACTIF_COURANT", "DISPONIBILITES"]
if any(re.search(x, masse) for x in to_check):
return "ACTIF"
to_check = ["PASSIF_NON_COURANT", "PASSIF_COURANT"]
if any(re.search(x, masse) for x in to_check):
return "PASSIF"
to_check = ["COUTS_DIRECTS", "CHARGES_EXPLOITATION", "CHARGES_FINANCIERES", "CHARGES_EXCEPTIONNELLES"]
if any(re.search(x, masse) for x in to_check):
return "CHARGES"
to_check = ["CHIFFRE_D'AFFAIRES", "PRODUITS_EXPLOITATION", "PRODUITS_FINANCIERS", "PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, masse) for x in to_check):
return "PRODUITS"
# Mapping des rubriques
db_cat["RUBRIQUE_N2"] = db_cat.apply(lambda row: rubrique_N2(row), axis=1)
db_cat["RUBRIQUE_N1"] = db_cat.apply(lambda row: rubrique_N1(row), axis=1)
db_cat["RUBRIQUE_N0"] = db_cat.apply(lambda row: rubrique_N0(row), axis=1)
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N0',
'RUBRIQUE_N1',
'RUBRIQUE_N2',
'RUBRIQUE_N3',
'VALUE',
'VALUE_N-1',
'VARV',
'VARP']
db_cat = db_cat[to_select]
db_cat
###Output
_____no_output_____
###Markdown
Data models for the charts REF_ENTITE
###Code
# Creation du dataset ref_entite
dataset_entite = db_cat.copy()
# Regrouper par entite
to_group = ["ENTITY"]
to_agg = {"ENTITY": "max"}
dataset_entite = dataset_entite.groupby(to_group, as_index=False).agg(to_agg)
# Affichage du modèle de donnée
dataset_entite
###Output
_____no_output_____
###Markdown
REF_SCENARIO
###Code
# Creation du dataset ref_scenario
dataset_scenario = db_cat.copy()
# Regrouper par entite
to_group = ["PERIOD"]
to_agg = {"PERIOD": "max"}
dataset_scenario = dataset_scenario.groupby(to_group, as_index=False).agg(to_agg)
# Affichage du modèle de donnée
dataset_scenario
###Output
_____no_output_____
###Markdown
KPIS
###Code
# Creation du dataset KPIS (CA, MARGE, EBE, BFR, CC, DF)
dataset_kpis = db_cat.copy()
# KPIs CA
dataset_kpis_ca = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["CHIFFRE_D'AFFAIRES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ca = dataset_kpis_ca.groupby(to_group, as_index=False).agg(to_agg)
# Passage value positif
dataset_kpis_ca["VALUE"] = dataset_kpis_ca["VALUE"]*-1
# COUTS_DIRECTS
dataset_kpis_ha = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["COUTS_DIRECTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ha = dataset_kpis_ha.groupby(to_group, as_index=False).agg(to_agg)
# Passage value négatif
dataset_kpis_ha["VALUE"] = dataset_kpis_ha["VALUE"]*-1
# KPIs MARGE BRUTE (CA - COUTS DIRECTS)
dataset_kpis_mb = dataset_kpis_ca.copy()
dataset_kpis_mb = pd.concat([dataset_kpis_mb, dataset_kpis_ha], axis=0, sort=False)
to_group = ["ENTITY",
"PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_mb = dataset_kpis_mb.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_mb["RUBRIQUE_N1"] = "MARGE"
dataset_kpis_mb = dataset_kpis_mb[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# CHARGES EXTERNES
dataset_kpis_ce = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SERVICES_EXTERIEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ce = dataset_kpis_ce.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ce["VALUE"] = dataset_kpis_ce["VALUE"]*-1
# IMPOTS
dataset_kpis_ip = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["TAXES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ip = dataset_kpis_ip.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ip["VALUE"] = dataset_kpis_ip["VALUE"]*-1
# CHARGES DE PERSONNEL
dataset_kpis_cp = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CHARGES_PERSONNEL"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cp = dataset_kpis_cp.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_cp["VALUE"] = dataset_kpis_cp["VALUE"]*-1
# AUTRES_CHARGES
dataset_kpis_ac = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["AUTRES_CHARGES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ac = dataset_kpis_ac.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ac["VALUE"] = dataset_kpis_ac["VALUE"]*-1
# SUBVENTIONS D'EXPLOITATION (kept in its own variable so the AUTRES_CHARGES dataset above is not overwritten)
dataset_kpis_sub = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SUBVENTIONS_D'EXPL."])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_sub = dataset_kpis_sub.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign to positive (credit-side account, VALUE is negative) so the subsidy adds to EBE
dataset_kpis_sub["VALUE"] = dataset_kpis_sub["VALUE"]*-1
# KPIs EBE = MARGE - CHARGES EXTERNES - TAXES - CHARGES PERSONNEL - AUTRES CHARGES + SUBVENTION D'EXPLOITATION
dataset_kpis_ebe = dataset_kpis_mb.copy()
dataset_kpis_ebe = pd.concat([dataset_kpis_ebe, dataset_kpis_ce, dataset_kpis_ip, dataset_kpis_cp, dataset_kpis_ac, dataset_kpis_sub], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ebe = dataset_kpis_ebe.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_ebe["RUBRIQUE_N1"] = "EBE"
dataset_kpis_ebe = dataset_kpis_ebe[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# KPIs CREANCES CLIENTS
dataset_kpis_cc = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CREANCES_CLIENTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cc = dataset_kpis_cc.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_cc = dataset_kpis_cc.rename(columns=to_rename)
# KPIs STOCKS
dataset_kpis_st = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["STOCKS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_st = dataset_kpis_st.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_st = dataset_kpis_st.rename(columns=to_rename)
# KPIs DETTES FOURNISSEURS
dataset_kpis_df = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["DETTES_FOURNISSEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_df = dataset_kpis_df.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_df = dataset_kpis_df.rename(columns=to_rename)
# Passage value positif
dataset_kpis_df["VALUE"] = dataset_kpis_df["VALUE"].abs()
# KPIs BFR = CREANCES + STOCKS - DETTES FOURNISSEURS
dataset_kpis_bfr_df = dataset_kpis_df.copy()
# Passage dette fournisseur value négatif
dataset_kpis_bfr_df["VALUE"] = dataset_kpis_bfr_df["VALUE"]*-1
dataset_kpis_bfr_df = pd.concat([dataset_kpis_cc, dataset_kpis_st, dataset_kpis_bfr_df], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_bfr_df = dataset_kpis_bfr_df.groupby(to_group, as_index=False).agg(to_agg)
# Creation colonne Rubrique_N1 = BFR
dataset_kpis_bfr_df["RUBRIQUE_N1"] = "BFR"
# Reorganisation colonne
dataset_kpis_bfr_df = dataset_kpis_bfr_df[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# Creation du dataset final
dataset_kpis_final = pd.concat([dataset_kpis_ca, dataset_kpis_mb, dataset_kpis_ebe, dataset_kpis_cc, dataset_kpis_st, dataset_kpis_df, dataset_kpis_bfr_df], axis=0, sort=False)
# Creation colonne COMP
dataset_kpis_final['PERIOD_COMP'] = (dataset_kpis_final['PERIOD'].str[:4].astype(int) - 1).astype(str) + dataset_kpis_final['PERIOD'].str[-3:]
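# Worked example of the PERIOD_COMP convention above (illustrative value): "2021-05" -> "2020-05"
_p = "2021-05"
print(str(int(_p[:4]) - 1) + _p[-3:])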
dataset_kpis_final
# creation base comparable pour dataset_kpis
dataset_kpis_final_comp = dataset_kpis_final.copy()
# Suppression de la colonne période
dataset_kpis_final_comp = dataset_kpis_final_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
dataset_kpis_final_comp = dataset_kpis_final_comp.rename(columns=to_rename)
dataset_kpis_final_comp
# Jointure entre les 2 tables dataset_kpis_final et dataset_kpis_vf
join_on = ["ENTITY",
"PERIOD_COMP",
"RUBRIQUE_N1"]
dataset_kpis_final = pd.merge(dataset_kpis_final, dataset_kpis_final_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
dataset_kpis_final["VARV"] = dataset_kpis_final["VALUE"] - dataset_kpis_final["VALUE_N-1"]
#Création colonne Var P (%)
dataset_kpis_final["VARP"] = dataset_kpis_final["VARV"] / dataset_kpis_final["VALUE_N-1"]
dataset_kpis_final
###Output
_____no_output_____
###Markdown
EVOLUTION CA
###Code
# Creation du dataset evol_ca
dataset_evol_ca = db_enr.copy()
# Filtre COMPTE_NUM = Chiffre d'Affaire (RUBRIQUE N1)
dataset_evol_ca = dataset_evol_ca[dataset_evol_ca['COMPTE_NUM'].str.contains(r'^70|^71|^72')]
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
dataset_evol_ca = dataset_evol_ca.groupby(to_group, as_index=False).agg(to_agg)
dataset_evol_ca["VALUE"] = dataset_evol_ca["VALUE"].abs()
# Calcul de la somme cumulée
dataset_evol_ca = dataset_evol_ca.sort_values(by=["ENTITY", 'PERIOD', 'MONTH_INDEX']).reset_index(drop=True)
dataset_evol_ca['MONTH_INDEX'] = pd.to_datetime(dataset_evol_ca['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_evol_ca['VALUE_CUM'] = dataset_evol_ca.groupby(["ENTITY", "PERIOD"], as_index=True).agg({"VALUE": "cumsum"})
# Affichage du modèle de donnée
dataset_evol_ca
###Output
_____no_output_____
###Markdown
CHARGES
###Code
#Creation du dataset charges
dataset_charges = db_cat.copy()
# Filtre RUBRIQUE_N0 = CHARGES
dataset_charges = dataset_charges[dataset_charges["RUBRIQUE_N0"] == "CHARGES"]
# Mettre en valeur positive VALUE
dataset_charges["VALUE"] = dataset_charges["VALUE"].abs()
# Affichage du modèle de donnée
dataset_charges
###Output
_____no_output_____
###Markdown
POSITIONS TRESORERIE
###Code
# Creation du dataset trésorerie
dataset_treso = db_enr.copy()
# Filtre RUBRIQUE_N1 = TRESORERIE
dataset_treso = dataset_treso[dataset_treso['COMPTE_NUM'].str.contains(r'^5')].reset_index(drop=True)
# Cash in / Cash out ?
dataset_treso.loc[dataset_treso.VALUE > 0, "CASH_IN"] = dataset_treso.VALUE
dataset_treso.loc[dataset_treso.VALUE < 0, "CASH_OUT"] = dataset_treso.VALUE
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX"]
to_agg = {"VALUE": "sum",
"CASH_IN": "sum",
"CASH_OUT": "sum"}
dataset_treso = dataset_treso.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Cumul par période
dataset_treso = dataset_treso.sort_values(["ENTITY", "PERIOD", "MONTH_INDEX"])
dataset_treso['MONTH_INDEX'] = pd.to_datetime(dataset_treso['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_treso['VALUE_LINE'] = dataset_treso.groupby(["ENTITY", 'PERIOD'], as_index=True).agg({"VALUE": "cumsum"})
# Mettre en valeur positive CASH_OUT
dataset_treso["CASH_OUT"] = dataset_treso["CASH_OUT"].abs()
# Affichage du modèle de donnée
dataset_treso
###Output
_____no_output_____
###Markdown
BILAN
###Code
# Creation du dataset Bilan
dataset_bilan = db_cat.copy()
# Filtre RUBRIQUE_N0 = ACTIF & PASSIF
dataset_bilan = dataset_bilan[(dataset_bilan["RUBRIQUE_N0"].isin(["ACTIF", "PASSIF"]))]
# Regroupement R0/R1/R2
to_group = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_bilan = dataset_bilan.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Mettre en valeur positive VALUE
dataset_bilan["VALUE"] = dataset_bilan["VALUE"].abs()
# Selectionner les colonnes
to_select = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2",
"VALUE"]
dataset_bilan = dataset_bilan[to_select]
# Affichage du modèle de donnée
dataset_bilan
###Output
_____no_output_____
###Markdown
Output Sauvegarde des fichiers en csv
###Code
def df_to_csv(df, filename):
# Sauvegarde en csv
df.to_csv(filename,
sep=";",
decimal=",",
index=False)
# Création du lien url
naas_link = naas.asset.add(filename)
# Création de la ligne
data = {
"OBJET": filename,
"URL": naas_link,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
return pd.DataFrame([data])
dataset_logo = {
"OBJET": "Logo",
"URL": LOGO,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
logo = pd.DataFrame([dataset_logo])
logo
import json
color = {"name":"Color",
"dataColors":[COLOR_1, COLOR_2]}
with open("color.json", "w") as write_file:
json.dump(color, write_file)
dataset_color = {
"OBJET": "Color",
"URL": naas.asset.add("color.json"),
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
pbi_color = pd.DataFrame([dataset_color])
pbi_color
entite = df_to_csv(dataset_entite, "dataset_entite.csv")
entite
scenario = df_to_csv(dataset_scenario, "dataset_scenario.csv")
scenario
kpis = df_to_csv(dataset_kpis_final, "dataset_kpis_final.csv")
kpis
evol_ca = df_to_csv(dataset_evol_ca, "dataset_evol_ca.csv")
evol_ca
charges = df_to_csv(dataset_charges, "dataset_charges.csv")
charges
treso = df_to_csv(dataset_treso, "dataset_treso.csv")
treso
bilan = df_to_csv(dataset_bilan, "dataset_bilan.csv")
bilan
###Output
_____no_output_____
###Markdown
Création du fichier à intégrer dans PowerBI
###Code
db_powerbi = pd.concat([logo, pbi_color, entite, scenario, kpis, evol_ca, charges, treso, bilan], axis=0)
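# db_powerbi is a small catalogue (OBJET / URL / DATE_EXTRACT) of the published assets; presumably
# the Power BI template loads this single CSV and then fetches each dataset through its naas URL.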
db_powerbi
df_to_csv(db_powerbi, "powerbi.csv")
###Output
_____no_output_____
###Markdown
FEC - Create a Power BI dashboard **Tags:** fec powerbi dataviz analytics finance **Author:** [Alexandre STEVENS](https://www.linkedin.com/in/alexandrestevenspbix/) This notebook turns your company's FEC files into a Microsoft Power BI dashboard. The FEC (fichier des écritures comptables) is a standard export from accounting software and has been a legal requirement in France since 2014 for filing accounts electronically with the tax authorities. -Installation time = 5 minutes -Installation guide = [Notion page](https://www.notion.so/Mode-d-emploi-FECthis-7fc142f2d7ae4a3889fbca28a83acba2/) -Level = Easy Input Library
###Code
import pandas as pd
from datetime import datetime, timedelta
import os
import re
import naas
###Output
_____no_output_____
###Markdown
Lien URL vers le logo de l'entreprise
###Code
LOGO = "https://landen.imgix.net/e5hx7wyzf53f/assets/26u7xg7u.png?w=400"
COLOR_1 = None
COLOR_2 = None
###Output
_____no_output_____
###Markdown
Lire les fichiers FEC
###Code
def get_all_fec(file_regex,
sep=",",
decimal=".",
encoding=None,
header=None,
usecols=None,
names=None,
dtype=None):
# Create df init
df = pd.DataFrame()
# Get all files in INPUT_FOLDER
files = [f for f in os.listdir() if re.search(file_regex, f)]
if len(files) == 0:
print(f"Aucun fichier FEC ne correspond au standard de nomination")
else:
for file in files:
# Open file and create df
print(file)
tmp_df = pd.read_csv(file,
sep=sep,
decimal=decimal,
encoding=encoding,
header=header,
usecols=usecols,
names=names,
dtype=dtype)
# Add filename to df
tmp_df['NOM_FICHIER'] = file
# Concat df
df = pd.concat([df, tmp_df], axis=0, sort=False)
return df
file_regex = "^\d{9}FEC\d{8}.txt"
db_init = get_all_fec(file_regex,
sep='\t',
decimal=',',
encoding='ISO-8859-1',
header=0)
db_init
###Output
_____no_output_____
###Markdown
Model Base de donnée FEC Nettoyage des données
###Code
db_clean = db_init.copy()
# Selection des colonnes à conserver
to_select = ['NOM_FICHIER',
'EcritureDate',
'CompteNum',
'CompteLib',
'EcritureLib',
'Debit',
'Credit']
db_clean = db_clean[to_select]
# Renommage des colonnes
to_rename = {'EcritureDate': "DATE",
'CompteNum': "COMPTE_NUM",
'CompteLib': "RUBRIQUE_N3",
'EcritureLib': "RUBRIQUE_N4",
'Debit': "DEBIT",
'Credit': "CREDIT"}
db_clean = db_clean.rename(columns=to_rename)
#suppression des espaces colonne "COMPTE_NUM"
db_clean["COMPTE_NUM"] = db_clean["COMPTE_NUM"].astype(str).str.strip()
# Mise au format des colonnes
db_clean = db_clean.astype({"NOM_FICHIER" : str,
"DATE" : str,
"COMPTE_NUM" : str,
"RUBRIQUE_N3" : str,
"RUBRIQUE_N4" : str,
"DEBIT" : float,
"CREDIT" : float,
})
# Mise au format colonne date
db_clean["DATE"] = pd.to_datetime(db_clean["DATE"])
db_clean.head(5)
###Output
_____no_output_____
###Markdown
Enrichissement de la base
###Code
db_enr = db_clean.copy()
# Ajout colonnes entité et période
db_enr['ENTITY'] = db_enr['NOM_FICHIER'].str[:9]
db_enr['PERIOD'] = db_enr['NOM_FICHIER'].str[12:-6]
db_enr['PERIOD'] = pd.to_datetime(db_enr['PERIOD'], format='%Y%m')
db_enr['PERIOD'] = db_enr['PERIOD'].dt.strftime("%Y-%m")
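# Worked example of the slicing above (illustrative filename):
# "123456789FEC20211231.txt"[:9] -> "123456789" (ENTITY), [12:-6] -> "202112" -> PERIOD "2021-12"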
# Ajout colonne month et month_index
db_enr['MONTH'] = db_enr['DATE'].dt.strftime("%b")
db_enr['MONTH_INDEX'] = db_enr['DATE'].dt.month
# Calcul de la valeur debit-crédit
db_enr["VALUE"] = (db_enr["DEBIT"]) - (db_enr["CREDIT"])
db_enr.head(5)
# Calcul résultat pour équilibrage bilan dans capitaux propre
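# (Accounts 6/7 hold the P&L; their net result is re-injected into equity under a synthetic
#  account "10999999" / RESULTAT so that the simplified balance sheet ACTIF = PASSIF balances.)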
db_rn = db_enr.copy()
db_rn = db_rn[db_rn['COMPTE_NUM'].str.contains(r'^6|^7')]
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
db_rn = db_rn.groupby(to_group, as_index=False).agg(to_agg)
db_rn ["COMPTE_NUM"] = "10999999"
db_rn ["RUBRIQUE_N3"] = "RESULTAT"
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N3',
'VALUE']
db_rn = db_rn[to_select]
db_rn
###Output
_____no_output_____
###Markdown
Base de données FEC aggrégée avec variation Aggrégation RUBRIQUE N3
###Code
# Calcul var v = création de dataset avec Period_comp pour merge
db_var = db_enr.copy()
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"COMPTE_NUM",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
db_var = db_var.groupby(to_group, as_index=False).agg(to_agg)
# Ajout des résultats au dataframe
db_var = pd.concat([db_var, db_rn], axis=0, sort=False)
# Creation colonne COMP
db_var['PERIOD_COMP'] = (db_var['PERIOD'].str[:4].astype(int) - 1).astype(str) + db_var['PERIOD'].str[-3:]
db_var
###Output
_____no_output_____
###Markdown
Création de la base comparable
###Code
db_comp = db_var.copy()
# Suppression de la colonne période
db_comp = db_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
db_comp = db_comp.rename(columns=to_rename)
db_comp.head(5)
###Output
_____no_output_____
###Markdown
Jointure des 2 tables et calcul des variations
###Code
# Jointure entre les 2 tables
join_on = ["ENTITY",
"PERIOD_COMP",
"COMPTE_NUM",
"RUBRIQUE_N3"]
db_var = pd.merge(db_var, db_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
db_var["VARV"] = db_var["VALUE"] - db_var["VALUE_N-1"]
#Création colonne Var P (%)
db_var["VARP"] = db_var["VARV"] / db_var["VALUE_N-1"]
db_var
db_cat = db_var.copy()
# Calcul des rubriques niveau 2
def rubrique_N2(row):
numero_compte = str(row["COMPTE_NUM"])
value = float(row["VALUE"])
# BILAN SIMPLIFIE type IFRS NIV2
to_check = ["^10", "^11", "^12", "^13", "^14"]
if any (re.search(x,numero_compte) for x in to_check):
return "CAPITAUX_PROPRES"
to_check = ["^15", "^16", "^17", "^18", "^19"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FINANCIERES"
to_check = ["^20", "^21", "^22", "^23", "^25", "^26", "^27", "^28", "^29"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMMOBILISATIONS"
to_check = ["^31", "^32", "^33", "^34", "^35", "^36", "^37", "^38", "^39"]
if any (re.search(x,numero_compte) for x in to_check):
return "STOCKS"
to_check = ["^40"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FOURNISSEURS"
to_check = ["^41"]
if any (re.search(x,numero_compte) for x in to_check):
return "CREANCES_CLIENTS"
to_check = ["^42", "^43", "^44", "^45", "^46", "^47", "^48", "^49"]
if any (re.search(x,numero_compte) for x in to_check):
if value > 0:
return "AUTRES_CREANCES"
else:
return "AUTRES_DETTES"
to_check = ["^50", "^51", "^52", "^53", "^54", "^58", "^59"]
if any (re.search(x,numero_compte) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT DETAILLE NIV2
to_check = ["^60"]
if any (re.search(x,numero_compte) for x in to_check):
return "ACHATS"
to_check= ["^61", "^62"]
if any (re.search(x,numero_compte) for x in to_check):
return "SERVICES_EXTERIEURS"
to_check = ["^63"]
if any (re.search(x,numero_compte) for x in to_check):
return "TAXES"
to_check = ["^64"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_PERSONNEL"
to_check = ["^65"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_CHARGES"
to_check = ["^66"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["^67"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["^68", "^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "AMORTISSEMENTS"
to_check = ["^69"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMPOT"
to_check = ["^70"]
if any (re.search(x,numero_compte) for x in to_check):
return "VENTES"
to_check = ["^71", "^72"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUCTION_STOCKEE_IMMOBILISEE"
to_check = ["^74"]
if any (re.search(x,numero_compte) for x in to_check):
return "SUBVENTIONS_D'EXPL."
to_check = ["^75", "^791"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_PRODUITS_GESTION_COURANTE"
to_check = ["^76", "^796"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["^77", "^797"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
to_check = ["^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "REPRISES_AMORT._DEP."
to_check = ["^8"]
if any (re.search(x,numero_compte) for x in to_check):
return "COMPTES_SPECIAUX"
# Calcul des rubriques niveau 1
def rubrique_N1(row):
categorisation = row.RUBRIQUE_N2
# BILAN SIMPLIFIE type IFRS N1
to_check = ["CAPITAUX_PROPRES", "DETTES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_NON_COURANT"
to_check = ["IMMOBILISATIONS"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_NON_COURANT"
to_check = ["STOCKS", "CREANCES_CLIENTS", "AUTRES_CREANCES"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_COURANT"
to_check = ["DETTES_FOURNISSEURS", "AUTRES_DETTES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_COURANT"
to_check = ["DISPONIBILITES"]
if any(re.search(x, categorisation) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT SIMPLIFIE N1
to_check = ["ACHATS"]
if any(re.search(x, categorisation) for x in to_check):
return "COUTS_DIRECTS"
to_check = ["SERVICES_EXTERIEURS", "TAXES", "CHARGES_PERSONNEL", "AUTRES_CHARGES", "AMORTISSEMENTS"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXPLOITATION"
to_check = ["CHARGES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["CHARGES_EXCEPTIONNELLES", "IMPOT"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["VENTES", "PRODUCTION_STOCKEE_IMMOBILISEE"]
if any(re.search(x, categorisation) for x in to_check):
return "CHIFFRE_D'AFFAIRES"
to_check = ["SUBVENTIONS_D'EXPL.", "AUTRES_PRODUITS_GESTION_COURANTE", "REPRISES_AMORT._DEP."]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXPLOITATION"
to_check = ["PRODUITS_FINANCIERS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
# Calcul des rubriques niveau 0
def rubrique_N0(row):
masse = row.RUBRIQUE_N1
to_check = ["ACTIF_NON_COURANT", "ACTIF_COURANT", "DISPONIBILITES"]
if any(re.search(x, masse) for x in to_check):
return "ACTIF"
to_check = ["PASSIF_NON_COURANT", "PASSIF_COURANT"]
if any(re.search(x, masse) for x in to_check):
return "PASSIF"
to_check = ["COUTS_DIRECTS", "CHARGES_EXPLOITATION", "CHARGES_FINANCIERES", "CHARGES_EXCEPTIONNELLES"]
if any(re.search(x, masse) for x in to_check):
return "CHARGES"
to_check = ["CHIFFRE_D'AFFAIRES", "PRODUITS_EXPLOITATION", "PRODUITS_FINANCIERS", "PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, masse) for x in to_check):
return "PRODUITS"
# Mapping des rubriques
db_cat["RUBRIQUE_N2"] = db_cat.apply(lambda row: rubrique_N2(row), axis=1)
db_cat["RUBRIQUE_N1"] = db_cat.apply(lambda row: rubrique_N1(row), axis=1)
db_cat["RUBRIQUE_N0"] = db_cat.apply(lambda row: rubrique_N0(row), axis=1)
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N0',
'RUBRIQUE_N1',
'RUBRIQUE_N2',
'RUBRIQUE_N3',
'VALUE',
'VALUE_N-1',
'VARV',
'VARP']
db_cat = db_cat[to_select]
db_cat
###Output
_____no_output_____
###Markdown
Modèles de données des graphiques REF_ENTITE
###Code
# Creation du dataset ref_entite
dataset_entite = db_cat.copy()
# Regrouper par entite
to_group = ["ENTITY"]
to_agg = {"ENTITY": "max"}
dataset_entite = dataset_entite.groupby(to_group, as_index=False).agg(to_agg)
# Affichage du modèle de donnée
dataset_entite
###Output
_____no_output_____
###Markdown
REF_SCENARIO
###Code
# Creation du dataset ref_scenario
dataset_scenario = db_cat.copy()
# Regrouper par entite
to_group = ["PERIOD"]
to_agg = {"PERIOD": "max"}
dataset_scenario = dataset_scenario.groupby(to_group, as_index=False).agg(to_agg)
# Affichage du modèle de donnée
dataset_scenario
###Output
_____no_output_____
###Markdown
KPIS
###Code
# Creation du dataset KPIS (CA, MARGE, EBE, BFR, CC, DF)
dataset_kpis = db_cat.copy()
# KPIs CA
dataset_kpis_ca = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["CHIFFRE_D'AFFAIRES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ca = dataset_kpis_ca.groupby(to_group, as_index=False).agg(to_agg)
# Passage value positif
dataset_kpis_ca["VALUE"] = dataset_kpis_ca["VALUE"]*-1
# COUTS_DIRECTS
dataset_kpis_ha = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["COUTS_DIRECTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ha = dataset_kpis_ha.groupby(to_group, as_index=False).agg(to_agg)
# Passage value négatif
dataset_kpis_ha["VALUE"] = dataset_kpis_ha["VALUE"]*-1
# KPIs MARGE BRUTE (CA - COUTS DIRECTS)
dataset_kpis_mb = dataset_kpis_ca.copy()
dataset_kpis_mb = pd.concat([dataset_kpis_mb, dataset_kpis_ha], axis=0, sort=False)
to_group = ["ENTITY",
"PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_mb = dataset_kpis_mb.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_mb["RUBRIQUE_N1"] = "MARGE"
dataset_kpis_mb = dataset_kpis_mb[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# CHARGES EXTERNES
dataset_kpis_ce = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SERVICES_EXTERIEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ce = dataset_kpis_ce.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ce["VALUE"] = dataset_kpis_ce["VALUE"]*-1
# IMPOTS
dataset_kpis_ip = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["TAXES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ip = dataset_kpis_ip.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ip["VALUE"] = dataset_kpis_ip["VALUE"]*-1
# CHARGES DE PERSONNEL
dataset_kpis_cp = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CHARGES_PERSONNEL"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cp = dataset_kpis_cp.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_cp["VALUE"] = dataset_kpis_cp["VALUE"]*-1
# AUTRES_CHARGES
dataset_kpis_ac = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["AUTRES_CHARGES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ac = dataset_kpis_ac.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ac["VALUE"] = dataset_kpis_ac["VALUE"]*-1
# SUBVENTIONS D'EXPLOITATION (kept in its own variable so the AUTRES_CHARGES dataset above is not overwritten)
dataset_kpis_sub = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SUBVENTIONS_D'EXPL."])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_sub = dataset_kpis_sub.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign to positive (credit-side account, VALUE is negative) so the subsidy adds to EBE
dataset_kpis_sub["VALUE"] = dataset_kpis_sub["VALUE"]*-1
# KPIs EBE = MARGE - CHARGES EXTERNES - TAXES - CHARGES PERSONNEL - AUTRES CHARGES + SUBVENTION D'EXPLOITATION
dataset_kpis_ebe = dataset_kpis_mb.copy()
dataset_kpis_ebe = pd.concat([dataset_kpis_ebe, dataset_kpis_ce, dataset_kpis_ip, dataset_kpis_cp, dataset_kpis_ac, dataset_kpis_sub], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ebe = dataset_kpis_ebe.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_ebe["RUBRIQUE_N1"] = "EBE"
dataset_kpis_ebe = dataset_kpis_ebe[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# KPIs CREANCES CLIENTS
dataset_kpis_cc = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CREANCES_CLIENTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cc = dataset_kpis_cc.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_cc = dataset_kpis_cc.rename(columns=to_rename)
# KPIs STOCKS
dataset_kpis_st = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["STOCKS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_st = dataset_kpis_st.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_st = dataset_kpis_st.rename(columns=to_rename)
# KPIs DETTES FOURNISSEURS
dataset_kpis_df = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["DETTES_FOURNISSEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_df = dataset_kpis_df.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_df = dataset_kpis_df.rename(columns=to_rename)
# Passage value positif
dataset_kpis_df["VALUE"] = dataset_kpis_df["VALUE"].abs()
# KPIs BFR = CREANCES + STOCKS - DETTES FOURNISSEURS
dataset_kpis_bfr_df = dataset_kpis_df.copy()
# Passage dette fournisseur value négatif
dataset_kpis_bfr_df["VALUE"] = dataset_kpis_bfr_df["VALUE"]*-1
dataset_kpis_bfr_df = pd.concat([dataset_kpis_cc, dataset_kpis_st, dataset_kpis_bfr_df], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_bfr_df = dataset_kpis_bfr_df.groupby(to_group, as_index=False).agg(to_agg)
# Creation colonne Rubrique_N1 = BFR
dataset_kpis_bfr_df["RUBRIQUE_N1"] = "BFR"
# Reorganisation colonne
dataset_kpis_bfr_df = dataset_kpis_bfr_df[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# Creation du dataset final
dataset_kpis_final = pd.concat([dataset_kpis_ca, dataset_kpis_mb, dataset_kpis_ebe, dataset_kpis_cc, dataset_kpis_st, dataset_kpis_df, dataset_kpis_bfr_df], axis=0, sort=False)
# Creation colonne COMP
dataset_kpis_final['PERIOD_COMP'] = (dataset_kpis_final['PERIOD'].str[:4].astype(int) - 1).astype(str) + dataset_kpis_final['PERIOD'].str[-3:]
dataset_kpis_final
# creation base comparable pour dataset_kpis
dataset_kpis_final_comp = dataset_kpis_final.copy()
# Suppression de la colonne période
dataset_kpis_final_comp = dataset_kpis_final_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
dataset_kpis_final_comp = dataset_kpis_final_comp.rename(columns=to_rename)
dataset_kpis_final_comp
# Jointure entre les 2 tables dataset_kpis_final et dataset_kpis_vf
join_on = ["ENTITY",
"PERIOD_COMP",
"RUBRIQUE_N1"]
dataset_kpis_final = pd.merge(dataset_kpis_final, dataset_kpis_final_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
dataset_kpis_final["VARV"] = dataset_kpis_final["VALUE"] - dataset_kpis_final["VALUE_N-1"]
#Création colonne Var P (%)
dataset_kpis_final["VARP"] = dataset_kpis_final["VARV"] / dataset_kpis_final["VALUE_N-1"]
dataset_kpis_final
###Output
_____no_output_____
###Markdown
EVOLUTION CA
###Code
# Creation du dataset evol_ca
dataset_evol_ca = db_enr.copy()
# Filtre COMPTE_NUM = Chiffre d'Affaire (RUBRIQUE N1)
dataset_evol_ca = dataset_evol_ca[dataset_evol_ca['COMPTE_NUM'].str.contains(r'^70|^71|^72')]
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
dataset_evol_ca = dataset_evol_ca.groupby(to_group, as_index=False).agg(to_agg)
dataset_evol_ca["VALUE"] = dataset_evol_ca["VALUE"].abs()
# Calcul de la somme cumulée
dataset_evol_ca = dataset_evol_ca.sort_values(by=["ENTITY", 'PERIOD', 'MONTH_INDEX']).reset_index(drop=True)
dataset_evol_ca['MONTH_INDEX'] = pd.to_datetime(dataset_evol_ca['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_evol_ca['VALUE_CUM'] = dataset_evol_ca.groupby(["ENTITY", "PERIOD"], as_index=True).agg({"VALUE": "cumsum"})
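# VALUE_CUM is the year-to-date revenue within each (ENTITY, PERIOD); MONTH_INDEX was zero-padded
# to "01".."12" above, presumably so it sorts correctly as text on the Power BI axis.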
# Affichage du modèle de donnée
dataset_evol_ca
###Output
_____no_output_____
###Markdown
CHARGES
###Code
#Creation du dataset charges
dataset_charges = db_cat.copy()
# Filtre RUBRIQUE_N0 = CHARGES
dataset_charges = dataset_charges[dataset_charges["RUBRIQUE_N0"] == "CHARGES"]
# Mettre en valeur positive VALUE
dataset_charges["VALUE"] = dataset_charges["VALUE"].abs()
# Affichage du modèle de donnée
dataset_charges
###Output
_____no_output_____
###Markdown
POSITIONS TRESORERIE
###Code
# Creation du dataset trésorerie
dataset_treso = db_enr.copy()
# Filtre RUBRIQUE_N1 = TRESORERIE
dataset_treso = dataset_treso[dataset_treso['COMPTE_NUM'].str.contains(r'^5')].reset_index(drop=True)
# Cash in / Cash out ?
dataset_treso.loc[dataset_treso.VALUE > 0, "CASH_IN"] = dataset_treso.VALUE
dataset_treso.loc[dataset_treso.VALUE < 0, "CASH_OUT"] = dataset_treso.VALUE
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX"]
to_agg = {"VALUE": "sum",
"CASH_IN": "sum",
"CASH_OUT": "sum"}
dataset_treso = dataset_treso.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Cumul par période
dataset_treso = dataset_treso.sort_values(["ENTITY", "PERIOD", "MONTH_INDEX"])
dataset_treso['MONTH_INDEX'] = pd.to_datetime(dataset_treso['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_treso['VALUE_LINE'] = dataset_treso.groupby(["ENTITY", 'PERIOD'], as_index=True).agg({"VALUE": "cumsum"})
# Mettre en valeur positive CASH_OUT
dataset_treso["CASH_OUT"] = dataset_treso["CASH_OUT"].abs()
# Affichage du modèle de donnée
dataset_treso
###Output
_____no_output_____
###Markdown
BILAN
###Code
# Creation du dataset Bilan
dataset_bilan = db_cat.copy()
# Filtre RUBRIQUE_N0 = ACTIF & PASSIF
dataset_bilan = dataset_bilan[(dataset_bilan["RUBRIQUE_N0"].isin(["ACTIF", "PASSIF"]))]
# Regroupement R0/R1/R2
to_group = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_bilan = dataset_bilan.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Mettre en valeur positive VALUE
dataset_bilan["VALUE"] = dataset_bilan["VALUE"].abs()
# Selectionner les colonnes
to_select = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2",
"VALUE"]
dataset_bilan = dataset_bilan[to_select]
# Affichage du modèle de donnée
dataset_bilan
###Output
_____no_output_____
###Markdown
Output Sauvegarde des fichiers en csv
###Code
def df_to_csv(df, filename):
# Sauvegarde en csv
df.to_csv(filename,
sep=";",
decimal=",",
index=False)
# Création du lien url
naas_link = naas.asset.add(filename)
# Création de la ligne
data = {
"OBJET": filename,
"URL": naas_link,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
return pd.DataFrame([data])
dataset_logo = {
"OBJET": "Logo",
"URL": LOGO,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
logo = pd.DataFrame([dataset_logo])
logo
import json
color = {"name":"Color",
"dataColors":[COLOR_1, COLOR_2]}
with open("color.json", "w") as write_file:
json.dump(color, write_file)
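# Note: COLOR_1/COLOR_2 are still None at this point, so "dataColors" is serialized as [null, null];
# assumption: they are meant to hold hex strings such as "#0A66C2" to theme the Power BI report.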
dataset_color = {
"OBJET": "Color",
"URL": naas.asset.add("color.json"),
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
pbi_color = pd.DataFrame([dataset_color])
pbi_color
entite = df_to_csv(dataset_entite, "dataset_entite.csv")
entite
scenario = df_to_csv(dataset_scenario, "dataset_scenario.csv")
scenario
kpis = df_to_csv(dataset_kpis_final, "dataset_kpis_final.csv")
kpis
evol_ca = df_to_csv(dataset_evol_ca, "dataset_evol_ca.csv")
evol_ca
charges = df_to_csv(dataset_charges, "dataset_charges.csv")
charges
treso = df_to_csv(dataset_treso, "dataset_treso.csv")
treso
bilan = df_to_csv(dataset_bilan, "dataset_bilan.csv")
bilan
###Output
_____no_output_____
###Markdown
Création du fichier à intégrer dans PowerBI
###Code
db_powerbi = pd.concat([logo, pbi_color, entite, scenario, kpis, evol_ca, charges, treso, bilan], axis=0)
db_powerbi
df_to_csv(db_powerbi, "powerbi.csv")
###Output
_____no_output_____
###Markdown
FEC - Create a Power BI dashboard This notebook turns your company's FEC files into a Microsoft Power BI dashboard. The FEC (fichier des écritures comptables) is a standard export from accounting software and has been a legal requirement in France since 2014 for filing accounts electronically with the tax authorities. -Installation time = 5 minutes -Installation guide = [Notion page](https://www.notion.so/Mode-d-emploi-FECthis-7fc142f2d7ae4a3889fbca28a83acba2/) -Level = Easy **Author:** [Alexandre STEVENS](https://www.linkedin.com/in/alexandrestevenspbix/) **Tag:** powerbi dashboard tableaudebord FEC comptabilite accounting Input Library
###Code
import pandas as pd
from datetime import datetime, timedelta
import os
import re
import naas
###Output
_____no_output_____
###Markdown
Lien URL vers le logo de l'entreprise
###Code
LOGO = "https://landen.imgix.net/e5hx7wyzf53f/assets/26u7xg7u.png?w=400"
###Output
_____no_output_____
###Markdown
Lire les fichiers FEC
###Code
def get_all_fec(file_regex,
sep=",",
decimal=".",
encoding=None,
header=None,
usecols=None,
names=None,
dtype=None):
# Create df init
df = pd.DataFrame()
# Get all files in INPUT_FOLDER
files = [f for f in os.listdir() if re.search(file_regex, f)]
if len(files) == 0:
print(f"Aucun fichier FEC ne correspond au standard de nomination")
else:
for file in files:
# Open file and create df
print(file)
tmp_df = pd.read_csv(file,
sep=sep,
decimal=decimal,
encoding=encoding,
header=header,
usecols=usecols,
names=names,
dtype=dtype)
# Add filename to df
tmp_df['NOM_FICHIER'] = file
# Concat df
df = pd.concat([df, tmp_df], axis=0, sort=False)
return df
file_regex = "^\d{9}FEC\d{8}.txt"
db_init = get_all_fec(file_regex,
sep='\t',
decimal=',',
encoding='ISO-8859-1',
header=0)
db_init
###Output
_____no_output_____
###Markdown
Model Base de donnée FEC Nettoyage des données
###Code
db_clean = db_init.copy()
# Selection des colonnes à conserver
to_select = ['NOM_FICHIER',
'EcritureDate',
'CompteNum',
'CompteLib',
'EcritureLib',
'Debit',
'Credit']
db_clean = db_clean[to_select]
# Renommage des colonnes
to_rename = {'EcritureDate': "DATE",
'CompteNum': "COMPTE_NUM",
'CompteLib': "RUBRIQUE_N3",
'EcritureLib': "RUBRIQUE_N4",
'Debit': "DEBIT",
'Credit': "CREDIT" }
db_clean = db_clean.rename(columns=to_rename)
#suppression des espaces colonne "COMPTE_NUM"
db_clean["COMPTE_NUM"] = db_clean["COMPTE_NUM"].str.strip()
# Mise au format des colonnes
db_clean = db_clean.astype({"NOM_FICHIER" : str,
"DATE" : str,
"COMPTE_NUM" : str,
"RUBRIQUE_N3" : str,
"RUBRIQUE_N4" : str,
"DEBIT" : float,
"CREDIT" : float,
})
# Mise au format colonne date
db_clean["DATE"] = pd.to_datetime(db_clean["DATE"])
db_clean.head(5)
###Output
_____no_output_____
###Markdown
Enrichissement de la base
###Code
db_enr = db_clean.copy()
# Ajout colonnes entité et période
db_enr['ENTITY'] = db_enr['NOM_FICHIER'].str[:9]
db_enr['PERIOD'] = db_enr['NOM_FICHIER'].str[12:-6]
db_enr['PERIOD'] = pd.to_datetime(db_enr['PERIOD'], format='%Y%m')
db_enr['PERIOD'] = db_enr['PERIOD'].dt.strftime("%Y-%m")
# Ajout colonne month et month_index
db_enr['MONTH'] = db_enr['DATE'].dt.strftime("%b")
db_enr['MONTH_INDEX'] = db_enr['DATE'].dt.month
# Calcul de la valeur debit-crédit
db_enr["VALUE"] = (db_enr["DEBIT"]) - (db_enr["CREDIT"])
db_enr.head(5)
# Calcul résultat pour équilibrage bilan dans capitaux propre
db_rn = db_enr.copy()
db_rn = db_rn[db_rn['COMPTE_NUM'].str.contains(r'^6|^7')]
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
db_rn = db_rn.groupby(to_group, as_index=False).agg(to_agg)
db_rn ["COMPTE_NUM"] = "10999999"
db_rn ["RUBRIQUE_N3"] = "RESULTAT"
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N3',
'VALUE']
db_rn = db_rn[to_select]
db_rn
###Output
_____no_output_____
###Markdown
Base de données FEC aggrégée avec variation Aggrégation RUBRIQUE N3
###Code
# Calcul var v = création de dataset avec Period_comp pour merge
db_var = db_enr.copy()
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"COMPTE_NUM",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
db_var = db_var.groupby(to_group, as_index=False).agg(to_agg)
# Ajout des résultats au dataframe
db_var = pd.concat([db_var, db_rn], axis=0, sort=False)
# Creation colonne COMP
db_var['PERIOD_COMP'] = (db_var['PERIOD'].str[:4].astype(int) - 1).astype(str) + db_var['PERIOD'].str[-3:]
db_var
###Output
_____no_output_____
###Markdown
Création de la base comparable
###Code
db_comp = db_var.copy()
# Suppression de la colonne période
db_comp = db_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
db_comp = db_comp.rename(columns=to_rename)
db_comp.head(5)
###Output
_____no_output_____
###Markdown
Jointure des 2 tables et calcul des variations
###Code
# Jointure entre les 2 tables
join_on = ["ENTITY",
"PERIOD_COMP",
"COMPTE_NUM",
"RUBRIQUE_N3"]
db_var = pd.merge(db_var, db_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
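# Self-join: each (ENTITY, account, period) row picks up last year's value as VALUE_N-1 via
# PERIOD_COMP; accounts with no activity in N-1 get 0 through the fillna, which also means
# VARP below can divide by zero and yield inf for them.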
#Création colonne Var V
db_var["VARV"] = db_var["VALUE"] - db_var["VALUE_N-1"]
#Création colonne Var P (%)
db_var["VARP"] = db_var["VARV"] / db_var["VALUE_N-1"]
db_var
db_cat = db_var.copy()
# Calcul des rubriques niveau 2
def rubrique_N2(row):
numero_compte = str(row["COMPTE_NUM"])
value = float(row["VALUE"])
# BILAN SIMPLIFIE type IFRS NIV2
to_check = ["^10", "^11", "^12", "^13", "^14"]
if any (re.search(x,numero_compte) for x in to_check):
return "CAPITAUX_PROPRES"
to_check = ["^15", "^16", "^17", "^18", "^19"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FINANCIERES"
to_check = ["^20", "^21", "^22", "^23", "^25", "^26", "^27", "^28", "^29"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMMOBILISATIONS"
to_check = ["^31", "^32", "^33", "^34", "^35", "^36", "^37", "^38", "^39"]
if any (re.search(x,numero_compte) for x in to_check):
return "STOCKS"
to_check = ["^40"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FOURNISSEURS"
to_check = ["^41"]
if any (re.search(x,numero_compte) for x in to_check):
return "CREANCES_CLIENTS"
to_check = ["^42", "^43", "^44", "^45", "^46", "^47", "^48", "^49"]
if any (re.search(x,numero_compte) for x in to_check):
if value > 0:
return "AUTRES_CREANCES"
else:
return "AUTRES_DETTES"
to_check = ["^50", "^51", "^52", "^53", "^54", "^58", "^59"]
if any (re.search(x,numero_compte) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT DETAILLE NIV2
to_check = ["^60"]
if any (re.search(x,numero_compte) for x in to_check):
return "ACHATS"
to_check= ["^61", "^62"]
if any (re.search(x,numero_compte) for x in to_check):
return "SERVICES_EXTERIEURS"
to_check = ["^63"]
if any (re.search(x,numero_compte) for x in to_check):
return "TAXES"
to_check = ["^64"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_PERSONNEL"
to_check = ["^65"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_CHARGES"
to_check = ["^66"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["^67"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["^68", "^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "AMORTISSEMENTS"
to_check = ["^69"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMPOT"
to_check = ["^70"]
if any (re.search(x,numero_compte) for x in to_check):
return "VENTES"
to_check = ["^71", "^72"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUCTION_STOCKEE_IMMOBILISEE"
to_check = ["^74"]
if any (re.search(x,numero_compte) for x in to_check):
return "SUBVENTIONS_D'EXPL."
to_check = ["^75", "^791"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_PRODUITS_GESTION_COURANTE"
to_check = ["^76", "^796"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["^77", "^797"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
to_check = ["^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "REPRISES_AMORT._DEP."
to_check = ["^8"]
if any (re.search(x,numero_compte) for x in to_check):
return "COMPTES_SPECIAUX"
# Calcul des rubriques niveau 1
def rubrique_N1(row):
categorisation = row.RUBRIQUE_N2
# BILAN SIMPLIFIE type IFRS N1
to_check = ["CAPITAUX_PROPRES", "DETTES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_NON_COURANT"
to_check = ["IMMOBILISATIONS"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_NON_COURANT"
to_check = ["STOCKS", "CREANCES_CLIENTS", "AUTRES_CREANCES"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_COURANT"
to_check = ["DETTES_FOURNISSEURS", "AUTRES_DETTES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_COURANT"
to_check = ["DISPONIBILITES"]
if any(re.search(x, categorisation) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT SIMPLIFIE N1
to_check = ["ACHATS"]
if any(re.search(x, categorisation) for x in to_check):
return "COUTS_DIRECTS"
to_check = ["SERVICES_EXTERIEURS", "TAXES", "CHARGES_PERSONNEL", "AUTRES_CHARGES", "AMORTISSEMENTS"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXPLOITATION"
to_check = ["CHARGES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["CHARGES_EXCEPTIONNELLES", "IMPOT"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["VENTES", "PRODUCTION_STOCKEE_IMMOBILISEE"]
if any(re.search(x, categorisation) for x in to_check):
return "CHIFFRE_D'AFFAIRES"
to_check = ["SUBVENTIONS_D'EXPL.", "AUTRES_PRODUITS_GESTION_COURANTE", "REPRISES_AMORT._DEP."]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXPLOITATION"
to_check = ["PRODUITS_FINANCIERS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
# Calcul des rubriques niveau 0
def rubrique_N0(row):
masse = row.RUBRIQUE_N1
to_check = ["ACTIF_NON_COURANT", "ACTIF_COURANT", "DISPONIBILITES"]
if any(re.search(x, masse) for x in to_check):
return "ACTIF"
to_check = ["PASSIF_NON_COURANT", "PASSIF_COURANT"]
if any(re.search(x, masse) for x in to_check):
return "PASSIF"
to_check = ["COUTS_DIRECTS", "CHARGES_EXPLOITATION", "CHARGES_FINANCIERES", "CHARGES_EXCEPTIONNELLES"]
if any(re.search(x, masse) for x in to_check):
return "CHARGES"
to_check = ["CHIFFRE_D'AFFAIRES", "PRODUITS_EXPLOITATION", "PRODUITS_FINANCIERS", "PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, masse) for x in to_check):
return "PRODUITS"
# Mapping des rubriques
db_cat["RUBRIQUE_N2"] = db_cat.apply(lambda row: rubrique_N2(row), axis=1)
db_cat["RUBRIQUE_N1"] = db_cat.apply(lambda row: rubrique_N1(row), axis=1)
db_cat["RUBRIQUE_N0"] = db_cat.apply(lambda row: rubrique_N0(row), axis=1)
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N0',
'RUBRIQUE_N1',
'RUBRIQUE_N2',
'RUBRIQUE_N3',
'VALUE',
'VALUE_N-1',
'VARV',
'VARP']
db_cat = db_cat[to_select]
db_cat
###Output
_____no_output_____
###Markdown
Modèles de données des graphiques REF_ENTITE
###Code
# Creation du dataset ref_entite
dataset_entite = db_cat.copy()
# Regrouper par entite
to_group = ["ENTITY"]
to_agg = {"ENTITY": "max"}
dataset_entite = dataset_entite.groupby(to_group, as_index=False).agg(to_agg)
# Affichage du modèle de donnée
dataset_entite
###Output
_____no_output_____
###Markdown
REF_SCENARIO
###Code
# Creation du dataset ref_scenario
dataset_scenario = db_cat.copy()
# Regrouper par entite
to_group = ["PERIOD"]
to_agg = {"PERIOD": "max"}
dataset_scenario = dataset_scenario.groupby(to_group, as_index=False).agg(to_agg)
# Affichage du modèle de donnée
dataset_scenario
###Output
_____no_output_____
###Markdown
KPIS
###Code
# Creation du dataset KPIS (CA, MARGE, EBE, BFR, CC, DF)
dataset_kpis = db_cat.copy()
# KPIs CA
dataset_kpis_ca = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["CHIFFRE_D'AFFAIRES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ca = dataset_kpis_ca.groupby(to_group, as_index=False).agg(to_agg)
# Passage value positif
dataset_kpis_ca["VALUE"] = dataset_kpis_ca["VALUE"]*-1
# COUTS_DIRECTS
dataset_kpis_ha = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["COUTS_DIRECTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ha = dataset_kpis_ha.groupby(to_group, as_index=False).agg(to_agg)
# Passage value négatif
dataset_kpis_ha["VALUE"] = dataset_kpis_ha["VALUE"]*-1
# KPIs MARGE BRUTE (CA - COUTS DIRECTS)
dataset_kpis_mb = dataset_kpis_ca.copy()
dataset_kpis_mb = pd.concat([dataset_kpis_mb, dataset_kpis_ha], axis=0, sort=False)
to_group = ["ENTITY",
"PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_mb = dataset_kpis_mb.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_mb["RUBRIQUE_N1"] = "MARGE"
dataset_kpis_mb = dataset_kpis_mb[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# CHARGES EXTERNES
dataset_kpis_ce = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SERVICES_EXTERIEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ce = dataset_kpis_ce.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ce["VALUE"] = dataset_kpis_ce["VALUE"]*-1
# IMPOTS
dataset_kpis_ip = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["TAXES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ip = dataset_kpis_ip.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ip["VALUE"] = dataset_kpis_ip["VALUE"]*-1
# CHARGES DE PERSONNEL
dataset_kpis_cp = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CHARGES_PERSONNEL"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cp = dataset_kpis_cp.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_cp["VALUE"] = dataset_kpis_cp["VALUE"]*-1
# AUTRES_CHARGES
dataset_kpis_ac = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["AUTRES_CHARGES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ac = dataset_kpis_ac.groupby(to_group, as_index=False).agg(to_agg)
# Passage value negatif
dataset_kpis_ac["VALUE"] = dataset_kpis_ac["VALUE"]*-1
# SUBVENTIONS D'EXPLOITATION (kept in its own variable so the AUTRES_CHARGES dataset above is not overwritten)
dataset_kpis_sub = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SUBVENTIONS_D'EXPL."])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_sub = dataset_kpis_sub.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign to positive (credit-side account, VALUE is negative) so the subsidy adds to EBE
dataset_kpis_sub["VALUE"] = dataset_kpis_sub["VALUE"]*-1
# KPIs EBE = MARGE - CHARGES EXTERNES - TAXES - CHARGES PERSONNEL - AUTRES CHARGES + SUBVENTION D'EXPLOITATION
dataset_kpis_ebe = dataset_kpis_mb.copy()
dataset_kpis_ebe = pd.concat([dataset_kpis_ebe, dataset_kpis_ce, dataset_kpis_ip, dataset_kpis_cp, dataset_kpis_ac, dataset_kpis_sub], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ebe = dataset_kpis_ebe.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_ebe["RUBRIQUE_N1"] = "EBE"
dataset_kpis_ebe = dataset_kpis_ebe[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# KPIs CREANCES CLIENTS
dataset_kpis_cc = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CREANCES_CLIENTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cc = dataset_kpis_cc.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_cc = dataset_kpis_cc.rename(columns=to_rename)
# KPIs STOCKS
dataset_kpis_st = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["STOCKS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_st = dataset_kpis_st.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_st = dataset_kpis_st.rename(columns=to_rename)
# KPIs DETTES FOURNISSEURS
dataset_kpis_df = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["DETTES_FOURNISSEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_df = dataset_kpis_df.groupby(to_group, as_index=False).agg(to_agg)
# Renommage colonne
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_df = dataset_kpis_df.rename(columns=to_rename)
# Passage value positif
dataset_kpis_df["VALUE"] = dataset_kpis_df["VALUE"].abs()
# KPIs BFR = CREANCES + STOCKS - DETTES FOURNISSEURS
dataset_kpis_bfr_df = dataset_kpis_df.copy()
# Passage dette fournisseur value négatif
dataset_kpis_bfr_df["VALUE"] = dataset_kpis_bfr_df["VALUE"]*-1
dataset_kpis_bfr_df = pd.concat([dataset_kpis_cc, dataset_kpis_st, dataset_kpis_bfr_df], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_bfr_df = dataset_kpis_bfr_df.groupby(to_group, as_index=False).agg(to_agg)
# Creation colonne Rubrique_N1 = BFR
dataset_kpis_bfr_df["RUBRIQUE_N1"] = "BFR"
# Reorganisation colonne
dataset_kpis_bfr_df = dataset_kpis_bfr_df[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# Creation du dataset final
dataset_kpis_final = pd.concat([dataset_kpis_ca, dataset_kpis_mb, dataset_kpis_ebe, dataset_kpis_cc, dataset_kpis_st, dataset_kpis_df, dataset_kpis_bfr_df], axis=0, sort=False)
# Creation colonne COMP
dataset_kpis_final['PERIOD_COMP'] = (dataset_kpis_final['PERIOD'].str[:4].astype(int) - 1).astype(str) + dataset_kpis_final['PERIOD'].str[-3:]
dataset_kpis_final
# creation base comparable pour dataset_kpis
dataset_kpis_final_comp = dataset_kpis_final.copy()
# Suppression de la colonne période
dataset_kpis_final_comp = dataset_kpis_final_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
dataset_kpis_final_comp = dataset_kpis_final_comp.rename(columns=to_rename)
dataset_kpis_final_comp
# Jointure entre les 2 tables dataset_kpis_final et dataset_kpis_vf
join_on = ["ENTITY",
"PERIOD_COMP",
"RUBRIQUE_N1"]
dataset_kpis_final = pd.merge(dataset_kpis_final, dataset_kpis_final_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
dataset_kpis_final["VARV"] = dataset_kpis_final["VALUE"] - dataset_kpis_final["VALUE_N-1"]
#Création colonne Var P (%)
dataset_kpis_final["VARP"] = dataset_kpis_final["VARV"] / dataset_kpis_final["VALUE_N-1"]
dataset_kpis_final
###Output
_____no_output_____
###Markdown
EVOLUTION CA
###Code
# Creation du dataset evol_ca
dataset_evol_ca = db_enr.copy()
# Filtre COMPTE_NUM = Chiffre d'Affaire (RUBRIQUE N1)
dataset_evol_ca = dataset_evol_ca[dataset_evol_ca['COMPTE_NUM'].str.contains(r'^70|^71|^72')]
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
dataset_evol_ca = dataset_evol_ca.groupby(to_group, as_index=False).agg(to_agg)
dataset_evol_ca["VALUE"] = dataset_evol_ca["VALUE"].abs()
# Calcul de la somme cumulée
dataset_evol_ca = dataset_evol_ca.sort_values(by=["ENTITY", 'PERIOD', 'MONTH_INDEX']).reset_index(drop=True)
dataset_evol_ca['MONTH_INDEX'] = pd.to_datetime(dataset_evol_ca['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_evol_ca['VALUE_CUM'] = dataset_evol_ca.groupby(["ENTITY", "PERIOD"], as_index=True).agg({"VALUE": "cumsum"})
# Affichage du modèle de donnée
dataset_evol_ca
###Output
_____no_output_____
###Markdown
CHARGES
###Code
#Creation du dataset charges
dataset_charges = db_cat.copy()
# Filtre RUBRIQUE_N0 = CHARGES
dataset_charges = dataset_charges[dataset_charges["RUBRIQUE_N0"] == "CHARGES"]
# Mettre en valeur positive VALUE
dataset_charges["VALUE"] = dataset_charges["VALUE"].abs()
# Affichage du modèle de donnée
dataset_charges
###Output
_____no_output_____
###Markdown
POSITIONS TRESORERIE
###Code
# Creation du dataset trésorerie
dataset_treso = db_enr.copy()
# Filtre RUBRIQUE_N1 = TRESORERIE
dataset_treso = dataset_treso[dataset_treso['COMPTE_NUM'].str.contains(r'^5')].reset_index(drop=True)
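# Class 5 accounts (banque/caisse) hold the cash and cash-equivalent movements, so this filter
# isolates treasury entries before the monthly cash-in / cash-out split below.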
# Cash in / Cash out ?
dataset_treso.loc[dataset_treso.VALUE > 0, "CASH_IN"] = dataset_treso.VALUE
dataset_treso.loc[dataset_treso.VALUE < 0, "CASH_OUT"] = dataset_treso.VALUE
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX"]
to_agg = {"VALUE": "sum",
"CASH_IN": "sum",
"CASH_OUT": "sum"}
dataset_treso = dataset_treso.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Cumul par période
dataset_treso = dataset_treso.sort_values(["ENTITY", "PERIOD", "MONTH_INDEX"])
dataset_treso['MONTH_INDEX'] = pd.to_datetime(dataset_treso['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_treso['VALUE_LINE'] = dataset_treso.groupby(["ENTITY", 'PERIOD'], as_index=True).agg({"VALUE": "cumsum"})
# Mettre en valeur positive CASH_OUT
dataset_treso["CASH_OUT"] = dataset_treso["CASH_OUT"].abs()
# Affichage du modèle de donnée
dataset_treso
###Output
_____no_output_____
###Markdown
BILAN
###Code
# Creation du dataset Bilan
dataset_bilan = db_cat.copy()
# Filtre RUBRIQUE_N0 = ACTIF & PASSIF
dataset_bilan = dataset_bilan[(dataset_bilan["RUBRIQUE_N0"].isin(["ACTIF", "PASSIF"]))]
# Regroupement R0/R1/R2
to_group = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_bilan = dataset_bilan.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Mettre en valeur positive VALUE
dataset_bilan["VALUE"] = dataset_bilan["VALUE"].abs()
# Selectionner les colonnes
to_select = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2",
"VALUE"]
dataset_bilan = dataset_bilan[to_select]
# Display the data model
dataset_bilan
###Output
_____no_output_____
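###Markdown
As a quick way to inspect whether the balance sheet balances after the RESULTAT adjustment, the optional cell below (not part of the original template) compares total ACTIF and total PASSIF per entity and period.
###Code
# Compare total ACTIF vs total PASSIF by entity and period
check_bilan = dataset_bilan.groupby(["ENTITY", "PERIOD", "RUBRIQUE_N0"], as_index=False).agg({"VALUE": "sum"})
check_bilan.pivot_table(index=["ENTITY", "PERIOD"], columns="RUBRIQUE_N0", values="VALUE")
###Output
_____no_output_____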
###Markdown
Output Saving the files to csv
###Code
def df_to_csv(df, filename):
# Save to csv
df.to_csv(filename,
sep=";",
decimal=",",
index=False)
# Create the asset URL
naas_link = naas.asset.add(filename)
# Create the output row
data = {
"OBJET": filename,
"URL": naas_link,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
return pd.DataFrame([data])
dataset_logo = {
"OBJET": "Logo",
"URL": LOGO,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
logo = pd.DataFrame([dataset_logo])
logo
import json
color = {"name":"Color",
"dataColors":[COLOR_1, COLOR_2]}
with open("color.json", "w") as write_file:
json.dump(color, write_file)
dataset_color = {
"OBJET": "Color",
"URL": naas.asset.add("color.json"),
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
pbi_color = pd.DataFrame([dataset_color])
pbi_color
entite = df_to_csv(dataset_entite, "dataset_entite.csv")
entite
scenario = df_to_csv(dataset_scenario, "dataset_scenario.csv")
scenario
kpis = df_to_csv(dataset_kpis_final, "dataset_kpis_final.csv")
kpis
evol_ca = df_to_csv(dataset_evol_ca, "dataset_evol_ca.csv")
evol_ca
charges = df_to_csv(dataset_charges, "dataset_charges.csv")
charges
treso = df_to_csv(dataset_treso, "dataset_treso.csv")
treso
bilan = df_to_csv(dataset_bilan, "dataset_bilan.csv")
bilan
###Output
_____no_output_____
###Markdown
Create the file to load into PowerBI
###Code
db_powerbi = pd.concat([logo, pbi_color, entite, scenario, kpis, evol_ca, charges, treso, bilan], axis=0)
db_powerbi
df_to_csv(db_powerbi, "powerbi.csv")
###Output
_____no_output_____
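###Markdown
To double-check the export settings, the optional cell below (not in the original template) reads `powerbi.csv` back with the same separator and decimal convention used by `df_to_csv`.
###Code
# Re-read the exported file to verify the separator and decimal settings
pd.read_csv("powerbi.csv", sep=";", decimal=",").head(5)
###Output
_____no_output_____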
###Markdown
FEC - Creer un dashboard PowerBI **Tags:** fec powerbi dataviz Ce Notebook permet de transformer des fichiers FEC de votre entreprise en un tableau de bord Microsoft Power BI.Le FEC (fichier des écritures comptables) est un export standard des logiciels de comptabilite et une obligation légale en france depuis 2014 afin de déposer ses comptes de manière electronique auprès des services fiscaux.-Durée de l’installation = 5 minutes-Support d’installation = [Page Notion](https://www.notion.so/Mode-d-emploi-FECthis-7fc142f2d7ae4a3889fbca28a83acba2/)-Niveau = Facile **Author:** [Alexandre STEVENS](https://www.linkedin.com/in/alexandrestevenspbix/) Input Librairie
###Code
import pandas as pd
from datetime import datetime, timedelta
import os
import re
import naas
###Output
_____no_output_____
###Markdown
Lien URL vers le logo de l'entreprise
###Code
LOGO = "https://landen.imgix.net/e5hx7wyzf53f/assets/26u7xg7u.png?w=400"
COLOR_1 = None
COLOR_2 = None
###Output
_____no_output_____
###Markdown
Lire les fichiers FEC
###Code
def get_all_fec(file_regex,
sep=",",
decimal=".",
encoding=None,
header=None,
usecols=None,
names=None,
dtype=None):
# Create df init
df = pd.DataFrame()
# Get all files in INPUT_FOLDER
files = [f for f in os.listdir() if re.search(file_regex, f)]
if len(files) == 0:
print(f"Aucun fichier FEC ne correspond au standard de nomination")
else:
for file in files:
# Open file and create df
print(file)
tmp_df = pd.read_csv(file,
sep=sep,
decimal=decimal,
encoding=encoding,
header=header,
usecols=usecols,
names=names,
dtype=dtype)
# Add filename to df
tmp_df['NOM_FICHIER'] = file
# Concat df
df = pd.concat([df, tmp_df], axis=0, sort=False)
return df
file_regex = "^\d{9}FEC\d{8}.txt"
db_init = get_all_fec(file_regex,
sep='\t',
decimal=',',
encoding='ISO-8859-1',
header=0)
db_init
###Output
_____no_output_____
###Markdown
Model Base de donnée FEC Nettoyage des données
###Code
db_clean = db_init.copy()
# Selection des colonnes à conserver
to_select = ['NOM_FICHIER',
'EcritureDate',
'CompteNum',
'CompteLib',
'EcritureLib',
'Debit',
'Credit']
db_clean = db_clean[to_select]
# Renommage des colonnes
to_rename = {'EcritureDate': "DATE",
'CompteNum': "COMPTE_NUM",
'CompteLib': "RUBRIQUE_N3",
'EcritureLib': "RUBRIQUE_N4",
'Debit': "DEBIT",
'Credit': "CREDIT"}
db_clean = db_clean.rename(columns=to_rename)
#suppression des espaces colonne "COMPTE_NUM"
db_clean["COMPTE_NUM"] = db_clean["COMPTE_NUM"].astype(str).str.strip()
# Mise au format des colonnes
db_clean = db_clean.astype({"NOM_FICHIER" : str,
"DATE" : str,
"COMPTE_NUM" : str,
"RUBRIQUE_N3" : str,
"RUBRIQUE_N4" : str,
"DEBIT" : float,
"CREDIT" : float,
})
# Mise au format colonne date
db_clean["DATE"] = pd.to_datetime(db_clean["DATE"])
db_clean.head(5)
###Output
_____no_output_____
###Markdown
Enrichissement de la base
###Code
db_enr = db_clean.copy()
# Ajout colonnes entité et période
db_enr['ENTITY'] = db_enr['NOM_FICHIER'].str[:9]
db_enr['PERIOD'] = db_enr['NOM_FICHIER'].str[12:-6]
db_enr['PERIOD'] = pd.to_datetime(db_enr['PERIOD'], format='%Y%m')
db_enr['PERIOD'] = db_enr['PERIOD'].dt.strftime("%Y-%m")
# Ajout colonne month et month_index
db_enr['MONTH'] = db_enr['DATE'].dt.strftime("%b")
db_enr['MONTH_INDEX'] = db_enr['DATE'].dt.month
# Calcul de la valeur debit-crédit
db_enr["VALUE"] = (db_enr["DEBIT"]) - (db_enr["CREDIT"])
db_enr.head(5)
# Calcul résultat pour équilibrage bilan dans capitaux propre
db_rn = db_enr.copy()
db_rn = db_rn[db_rn['COMPTE_NUM'].str.contains(r'^6|^7')]
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
db_rn = db_rn.groupby(to_group, as_index=False).agg(to_agg)
db_rn ["COMPTE_NUM"] = "10999999"
db_rn ["RUBRIQUE_N3"] = "RESULTAT"
# Reorganisation colonne
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N3',
'VALUE']
db_rn = db_rn[to_select]
db_rn
###Output
_____no_output_____
###Markdown
Base de données FEC aggrégée avec variation Aggrégation RUBRIQUE N3
###Code
# Calcul var v = création de dataset avec Period_comp pour merge
db_var = db_enr.copy()
# Regroupement
to_group = ["ENTITY",
"PERIOD",
"COMPTE_NUM",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
db_var = db_var.groupby(to_group, as_index=False).agg(to_agg)
# Ajout des résultats au dataframe
db_var = pd.concat([db_var, db_rn], axis=0, sort=False)
# Creation colonne COMP
db_var['PERIOD_COMP'] = (db_var['PERIOD'].str[:4].astype(int) - 1).astype(str) + db_var['PERIOD'].str[-3:]
db_var
###Output
_____no_output_____
###Markdown
Création de la base comparable
###Code
db_comp = db_var.copy()
# Suppression de la colonne période
db_comp = db_comp.drop("PERIOD_COMP", axis=1)
# Renommage des colonnes
to_rename = {'VALUE': "VALUE_N-1",
'PERIOD': "PERIOD_COMP"}
db_comp = db_comp.rename(columns=to_rename)
db_comp.head(5)
###Output
_____no_output_____
###Markdown
Jointure des 2 tables et calcul des variations
###Code
# Jointure entre les 2 tables
join_on = ["ENTITY",
"PERIOD_COMP",
"COMPTE_NUM",
"RUBRIQUE_N3"]
db_var = pd.merge(db_var, db_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
#Création colonne Var V
db_var["VARV"] = db_var["VALUE"] - db_var["VALUE_N-1"]
#Création colonne Var P (%)
db_var["VARP"] = db_var["VARV"] / db_var["VALUE_N-1"]
db_var
db_cat = db_var.copy()
# Calcul des rubriques niveau 2
def rubrique_N2(row):
numero_compte = str(row["COMPTE_NUM"])
value = float(row["VALUE"])
# BILAN SIMPLIFIE type IFRS NIV2
to_check = ["^10", "^11", "^12", "^13", "^14"]
if any (re.search(x,numero_compte) for x in to_check):
return "CAPITAUX_PROPRES"
to_check = ["^15", "^16", "^17", "^18", "^19"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FINANCIERES"
to_check = ["^20", "^21", "^22", "^23", "^25", "^26", "^27", "^28", "^29"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMMOBILISATIONS"
to_check = ["^31", "^32", "^33", "^34", "^35", "^36", "^37", "^38", "^39"]
if any (re.search(x,numero_compte) for x in to_check):
return "STOCKS"
to_check = ["^40"]
if any (re.search(x,numero_compte) for x in to_check):
return "DETTES_FOURNISSEURS"
to_check = ["^41"]
if any (re.search(x,numero_compte) for x in to_check):
return "CREANCES_CLIENTS"
to_check = ["^42", "^43", "^44", "^45", "^46", "^47", "^48", "^49"]
if any (re.search(x,numero_compte) for x in to_check):
if value > 0:
return "AUTRES_CREANCES"
else:
return "AUTRES_DETTES"
to_check = ["^50", "^51", "^52", "^53", "^54", "^58", "^59"]
if any (re.search(x,numero_compte) for x in to_check):
return "DISPONIBILITES"
# COMPTE DE RESULTAT DETAILLE NIV2
to_check = ["^60"]
if any (re.search(x,numero_compte) for x in to_check):
return "ACHATS"
to_check= ["^61", "^62"]
if any (re.search(x,numero_compte) for x in to_check):
return "SERVICES_EXTERIEURS"
to_check = ["^63"]
if any (re.search(x,numero_compte) for x in to_check):
return "TAXES"
to_check = ["^64"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_PERSONNEL"
to_check = ["^65"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_CHARGES"
to_check = ["^66"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["^67"]
if any (re.search(x,numero_compte) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["^68", "^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "AMORTISSEMENTS"
to_check = ["^69"]
if any (re.search(x,numero_compte) for x in to_check):
return "IMPOT"
to_check = ["^70"]
if any (re.search(x,numero_compte) for x in to_check):
return "VENTES"
to_check = ["^71", "^72"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUCTION_STOCKEE_IMMOBILISEE"
to_check = ["^74"]
if any (re.search(x,numero_compte) for x in to_check):
return "SUBVENTIONS_D'EXPL."
to_check = ["^75", "^791"]
if any (re.search(x,numero_compte) for x in to_check):
return "AUTRES_PRODUITS_GESTION_COURANTE"
to_check = ["^76", "^796"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["^77", "^797"]
if any (re.search(x,numero_compte) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
to_check = ["^78"]
if any (re.search(x,numero_compte) for x in to_check):
return "REPRISES_AMORT._DEP."
to_check = ["^8"]
if any (re.search(x,numero_compte) for x in to_check):
return "COMPTES_SPECIAUX"
# Calcul des rubriques niveau 1
def rubrique_N1(row):
categorisation = row.RUBRIQUE_N2
# BILAN SIMPLIFIE type IFRS N1
to_check = ["CAPITAUX_PROPRES", "DETTES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_NON_COURANT"
to_check = ["IMMOBILISATIONS"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_NON_COURANT"
to_check = ["STOCKS", "CREANCES_CLIENTS", "AUTRES_CREANCES"]
if any(re.search(x, categorisation) for x in to_check):
return "ACTIF_COURANT"
to_check = ["DETTES_FOURNISSEURS", "AUTRES_DETTES"]
if any(re.search(x, categorisation) for x in to_check):
return "PASSIF_COURANT"
to_check = ["DISPONIBILITES"]
if any(re.search(x, categorisation) for x in to_check):
return "DISPONIBILITES"
    # SIMPLIFIED INCOME STATEMENT, LEVEL 1
to_check = ["ACHATS"]
if any(re.search(x, categorisation) for x in to_check):
return "COUTS_DIRECTS"
to_check = ["SERVICES_EXTERIEURS", "TAXES", "CHARGES_PERSONNEL", "AUTRES_CHARGES", "AMORTISSEMENTS"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXPLOITATION"
to_check = ["CHARGES_FINANCIERES"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_FINANCIERES"
to_check = ["CHARGES_EXCEPTIONNELLES", "IMPOT"]
if any(re.search(x, categorisation) for x in to_check):
return "CHARGES_EXCEPTIONNELLES"
to_check = ["VENTES", "PRODUCTION_STOCKEE_IMMOBILISEE"]
if any(re.search(x, categorisation) for x in to_check):
return "CHIFFRE_D'AFFAIRES"
to_check = ["SUBVENTIONS_D'EXPL.", "AUTRES_PRODUITS_GESTION_COURANTE", "REPRISES_AMORT._DEP."]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXPLOITATION"
to_check = ["PRODUITS_FINANCIERS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_FINANCIERS"
to_check = ["PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, categorisation) for x in to_check):
return "PRODUITS_EXCEPTIONNELS"
# Compute level-0 line items
def rubrique_N0(row):
masse = row.RUBRIQUE_N1
to_check = ["ACTIF_NON_COURANT", "ACTIF_COURANT", "DISPONIBILITES"]
if any(re.search(x, masse) for x in to_check):
return "ACTIF"
to_check = ["PASSIF_NON_COURANT", "PASSIF_COURANT"]
if any(re.search(x, masse) for x in to_check):
return "PASSIF"
to_check = ["COUTS_DIRECTS", "CHARGES_EXPLOITATION", "CHARGES_FINANCIERES", "CHARGES_EXCEPTIONNELLES"]
if any(re.search(x, masse) for x in to_check):
return "CHARGES"
to_check = ["CHIFFRE_D'AFFAIRES", "PRODUITS_EXPLOITATION", "PRODUITS_FINANCIERS", "PRODUITS_EXCEPTIONNELS"]
if any(re.search(x, masse) for x in to_check):
return "PRODUITS"
# Map the line items onto the dataframe
db_cat["RUBRIQUE_N2"] = db_cat.apply(lambda row: rubrique_N2(row), axis=1)
db_cat["RUBRIQUE_N1"] = db_cat.apply(lambda row: rubrique_N1(row), axis=1)
db_cat["RUBRIQUE_N0"] = db_cat.apply(lambda row: rubrique_N0(row), axis=1)
# Reorder the columns
to_select = ['ENTITY',
'PERIOD',
'COMPTE_NUM',
'RUBRIQUE_N0',
'RUBRIQUE_N1',
'RUBRIQUE_N2',
'RUBRIQUE_N3',
'VALUE',
'VALUE_N-1',
'VARV',
'VARP']
db_cat = db_cat[to_select]
db_cat
###Output
_____no_output_____
###Markdown
Chart data models REF_ENTITE
###Code
# Build the ref_entite dataset
dataset_entite = db_cat.copy()
# Group by entity
to_group = ["ENTITY"]
to_agg = {"ENTITY": "max"}
dataset_entite = dataset_entite.groupby(to_group, as_index=False).agg(to_agg)
# Display the data model
dataset_entite
###Output
_____no_output_____
###Markdown
REF_SCENARIO
###Code
# Build the ref_scenario dataset
dataset_scenario = db_cat.copy()
# Group by period
to_group = ["PERIOD"]
to_agg = {"PERIOD": "max"}
dataset_scenario = dataset_scenario.groupby(to_group, as_index=False).agg(to_agg)
# Display the data model
dataset_scenario
###Output
_____no_output_____
###Markdown
KPIS
###Code
# Build the KPI dataset (revenue, margin, EBITDA, working capital, receivables, payables)
dataset_kpis = db_cat.copy()
# Revenue KPI (CHIFFRE_D'AFFAIRES)
dataset_kpis_ca = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["CHIFFRE_D'AFFAIRES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ca = dataset_kpis_ca.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign: make the value positive
dataset_kpis_ca["VALUE"] = dataset_kpis_ca["VALUE"]*-1
# Direct costs (COUTS_DIRECTS)
dataset_kpis_ha = dataset_kpis[dataset_kpis.RUBRIQUE_N1.isin(["COUTS_DIRECTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N1"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ha = dataset_kpis_ha.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign: make the value negative
dataset_kpis_ha["VALUE"] = dataset_kpis_ha["VALUE"]*-1
# Gross margin KPI (MARGE = revenue - direct costs)
dataset_kpis_mb = dataset_kpis_ca.copy()
dataset_kpis_mb = pd.concat([dataset_kpis_mb, dataset_kpis_ha], axis=0, sort=False)
to_group = ["ENTITY",
"PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_mb = dataset_kpis_mb.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_mb["RUBRIQUE_N1"] = "MARGE"
dataset_kpis_mb = dataset_kpis_mb[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# External charges (SERVICES_EXTERIEURS)
dataset_kpis_ce = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SERVICES_EXTERIEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ce = dataset_kpis_ce.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign: make the value negative
dataset_kpis_ce["VALUE"] = dataset_kpis_ce["VALUE"]*-1
# Taxes (TAXES)
dataset_kpis_ip = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["TAXES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ip = dataset_kpis_ip.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign: make the value negative
dataset_kpis_ip["VALUE"] = dataset_kpis_ip["VALUE"]*-1
# Personnel costs (CHARGES_PERSONNEL)
dataset_kpis_cp = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CHARGES_PERSONNEL"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cp = dataset_kpis_cp.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign: make the value negative
dataset_kpis_cp["VALUE"] = dataset_kpis_cp["VALUE"]*-1
# Other operating charges (AUTRES_CHARGES)
dataset_kpis_ac = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["AUTRES_CHARGES"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ac = dataset_kpis_ac.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign: make the value negative
dataset_kpis_ac["VALUE"] = dataset_kpis_ac["VALUE"]*-1
# Operating subsidies (SUBVENTIONS_D'EXPL.)
# Use a dedicated variable so the AUTRES_CHARGES dataset above is not overwritten
dataset_kpis_se = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["SUBVENTIONS_D'EXPL."])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_se = dataset_kpis_se.groupby(to_group, as_index=False).agg(to_agg)
# Flip sign: subsidies are credits in the ledger, so they must add to EBITDA
dataset_kpis_se["VALUE"] = dataset_kpis_se["VALUE"]*-1
# EBITDA KPI: EBE = margin - external charges - taxes - personnel costs - other charges + operating subsidies
dataset_kpis_ebe = dataset_kpis_mb.copy()
dataset_kpis_ebe = pd.concat([dataset_kpis_ebe, dataset_kpis_ce, dataset_kpis_ip, dataset_kpis_cp, dataset_kpis_ac, dataset_kpis_se], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_ebe = dataset_kpis_ebe.groupby(to_group, as_index=False).agg(to_agg)
dataset_kpis_ebe["RUBRIQUE_N1"] = "EBE"
dataset_kpis_ebe = dataset_kpis_ebe[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# Trade receivables KPI (CREANCES_CLIENTS)
dataset_kpis_cc = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["CREANCES_CLIENTS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_cc = dataset_kpis_cc.groupby(to_group, as_index=False).agg(to_agg)
# Rename the column
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_cc = dataset_kpis_cc.rename(columns=to_rename)
# Inventory KPI (STOCKS)
dataset_kpis_st = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["STOCKS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_st = dataset_kpis_st.groupby(to_group, as_index=False).agg(to_agg)
# Rename the column
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_st = dataset_kpis_st.rename(columns=to_rename)
# Trade payables KPI (DETTES_FOURNISSEURS)
dataset_kpis_df = dataset_kpis[dataset_kpis.RUBRIQUE_N2.isin(["DETTES_FOURNISSEURS"])]
to_group = ["ENTITY", "PERIOD", "RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_kpis_df = dataset_kpis_df.groupby(to_group, as_index=False).agg(to_agg)
# Rename the column
to_rename = {'RUBRIQUE_N2': "RUBRIQUE_N1"}
dataset_kpis_df = dataset_kpis_df.rename(columns=to_rename)
# Make the value positive
dataset_kpis_df["VALUE"] = dataset_kpis_df["VALUE"].abs()
# Working capital requirement KPI: BFR = receivables + inventory - trade payables
dataset_kpis_bfr_df = dataset_kpis_df.copy()
# Flip sign: trade payables enter the BFR negatively
dataset_kpis_bfr_df["VALUE"] = dataset_kpis_bfr_df["VALUE"]*-1
dataset_kpis_bfr_df = pd.concat([dataset_kpis_cc, dataset_kpis_st, dataset_kpis_bfr_df], axis=0, sort=False)
to_group = ["ENTITY", "PERIOD"]
to_agg = {"VALUE": "sum"}
dataset_kpis_bfr_df = dataset_kpis_bfr_df.groupby(to_group, as_index=False).agg(to_agg)
# Create the RUBRIQUE_N1 column = BFR
dataset_kpis_bfr_df["RUBRIQUE_N1"] = "BFR"
# Reorder the columns
dataset_kpis_bfr_df = dataset_kpis_bfr_df[["ENTITY", "PERIOD", "RUBRIQUE_N1", "VALUE"]]
# Build the final KPI dataset
dataset_kpis_final = pd.concat([dataset_kpis_ca, dataset_kpis_mb, dataset_kpis_ebe, dataset_kpis_cc, dataset_kpis_st, dataset_kpis_df, dataset_kpis_bfr_df], axis=0, sort=False)
# Create the PERIOD_COMP column (same period, previous year)
dataset_kpis_final['PERIOD_COMP'] = (dataset_kpis_final['PERIOD'].str[:4].astype(int) - 1).astype(str) + dataset_kpis_final['PERIOD'].str[-3:]
dataset_kpis_final
# Build the prior-year comparison base for dataset_kpis
dataset_kpis_final_comp = dataset_kpis_final.copy()
# Drop the PERIOD_COMP column
dataset_kpis_final_comp = dataset_kpis_final_comp.drop("PERIOD_COMP", axis=1)
# Rename the columns
to_rename = {'VALUE': "VALUE_N-1", 'PERIOD': "PERIOD_COMP"}
dataset_kpis_final_comp = dataset_kpis_final_comp.rename(columns=to_rename)
dataset_kpis_final_comp
# Join dataset_kpis_final with dataset_kpis_final_comp (prior-year values)
join_on = ["ENTITY",
"PERIOD_COMP",
"RUBRIQUE_N1"]
dataset_kpis_final = pd.merge(dataset_kpis_final, dataset_kpis_final_comp, how='left', on=join_on).drop("PERIOD_COMP", axis=1).fillna(0)
# Create the value variation column (VARV)
dataset_kpis_final["VARV"] = dataset_kpis_final["VALUE"] - dataset_kpis_final["VALUE_N-1"]
# Create the percentage variation column (VARP)
dataset_kpis_final["VARP"] = dataset_kpis_final["VARV"] / dataset_kpis_final["VALUE_N-1"]
dataset_kpis_final
###Output
_____no_output_____
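###Markdown
Optional guard (an illustrative addition, not part of the original flow): because missing prior-year values were filled with 0, `VARP` can contain infinite or undefined ratios. One possible way to neutralise them:
###Code
import numpy as np
# Replace divisions by a zero prior-year value (inf or NaN) with 0
dataset_kpis_final["VARP"] = dataset_kpis_final["VARP"].replace([np.inf, -np.inf], np.nan).fillna(0)
dataset_kpis_final
###Output
_____no_output_____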
###Markdown
REVENUE EVOLUTION (CA)
###Code
# Build the evol_ca dataset
dataset_evol_ca = db_enr.copy()
# Filter COMPTE_NUM on revenue accounts (classes 70-72)
dataset_evol_ca = dataset_evol_ca[dataset_evol_ca['COMPTE_NUM'].str.contains(r'^70|^71|^72')]
# Group
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX",
"RUBRIQUE_N3"]
to_agg = {"VALUE": "sum"}
dataset_evol_ca = dataset_evol_ca.groupby(to_group, as_index=False).agg(to_agg)
dataset_evol_ca["VALUE"] = dataset_evol_ca["VALUE"].abs()
# Compute the cumulative sum
dataset_evol_ca = dataset_evol_ca.sort_values(by=["ENTITY", 'PERIOD', 'MONTH_INDEX']).reset_index(drop=True)
dataset_evol_ca['MONTH_INDEX'] = pd.to_datetime(dataset_evol_ca['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_evol_ca['VALUE_CUM'] = dataset_evol_ca.groupby(["ENTITY", "PERIOD"], as_index=True).agg({"VALUE": "cumsum"})
# Display the data model
dataset_evol_ca
###Output
_____no_output_____
###Markdown
CHARGES
###Code
# Build the charges dataset
dataset_charges = db_cat.copy()
# Filter RUBRIQUE_N0 = CHARGES
dataset_charges = dataset_charges[dataset_charges["RUBRIQUE_N0"] == "CHARGES"]
# Make VALUE positive
dataset_charges["VALUE"] = dataset_charges["VALUE"].abs()
# Display the data model
dataset_charges
###Output
_____no_output_____
###Markdown
CASH POSITIONS (TRESORERIE)
###Code
# Build the cash (trésorerie) dataset
dataset_treso = db_enr.copy()
# Filter on cash accounts (class 5)
dataset_treso = dataset_treso[dataset_treso['COMPTE_NUM'].str.contains(r'^5')].reset_index(drop=True)
# Cash in / cash out?
dataset_treso.loc[dataset_treso.VALUE > 0, "CASH_IN"] = dataset_treso.VALUE
dataset_treso.loc[dataset_treso.VALUE < 0, "CASH_OUT"] = dataset_treso.VALUE
# Group
to_group = ["ENTITY",
"PERIOD",
"MONTH",
"MONTH_INDEX"]
to_agg = {"VALUE": "sum",
"CASH_IN": "sum",
"CASH_OUT": "sum"}
dataset_treso = dataset_treso.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Cumulative sum per period
dataset_treso = dataset_treso.sort_values(["ENTITY", "PERIOD", "MONTH_INDEX"])
dataset_treso['MONTH_INDEX'] = pd.to_datetime(dataset_treso['MONTH_INDEX'], format="%m").dt.strftime("%m")
dataset_treso['VALUE_LINE'] = dataset_treso.groupby(["ENTITY", 'PERIOD'], as_index=True).agg({"VALUE": "cumsum"})
# Make CASH_OUT positive
dataset_treso["CASH_OUT"] = dataset_treso["CASH_OUT"].abs()
# Display the data model
dataset_treso
###Output
_____no_output_____
###Markdown
BALANCE SHEET (BILAN)
###Code
# Build the balance sheet dataset
dataset_bilan = db_cat.copy()
# Filter RUBRIQUE_N0 = ACTIF & PASSIF
dataset_bilan = dataset_bilan[(dataset_bilan["RUBRIQUE_N0"].isin(["ACTIF", "PASSIF"]))]
# Group by N0/N1/N2
to_group = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2"]
to_agg = {"VALUE": "sum"}
dataset_bilan = dataset_bilan.groupby(to_group, as_index = False).agg(to_agg).fillna(0)
# Make VALUE positive
dataset_bilan["VALUE"] = dataset_bilan["VALUE"].abs()
# Select the columns
to_select = ["ENTITY",
"PERIOD",
"RUBRIQUE_N0",
"RUBRIQUE_N1",
"RUBRIQUE_N2",
"VALUE"]
dataset_bilan = dataset_bilan[to_select]
# Display the data model
dataset_bilan
###Output
_____no_output_____
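###Markdown
Optional sanity check (illustrative sketch, not part of the original flow): for each entity and period, total ACTIF should equal total PASSIF.
###Code
check_bilan = dataset_bilan.groupby(["ENTITY", "PERIOD", "RUBRIQUE_N0"], as_index=False).agg({"VALUE": "sum"})
check_bilan.pivot_table(index=["ENTITY", "PERIOD"], columns="RUBRIQUE_N0", values="VALUE")
###Output
_____no_output_____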
###Markdown
Output Saving the files as CSV
###Code
def df_to_csv(df, filename):
    # Save as CSV
df.to_csv(filename,
sep=";",
decimal=",",
index=False)
    # Create the asset URL
naas_link = naas.asset.add(filename)
    # Build the output row
data = {
"OBJET": filename,
"URL": naas_link,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
return pd.DataFrame([data])
dataset_logo = {
"OBJET": "Logo",
"URL": LOGO,
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
logo = pd.DataFrame([dataset_logo])
logo
import json
color = {"name":"Color",
"dataColors":[COLOR_1, COLOR_2]}
with open("color.json", "w") as write_file:
json.dump(color, write_file)
dataset_color = {
"OBJET": "Color",
"URL": naas.asset.add("color.json"),
"DATE_EXTRACT": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
}
pbi_color = pd.DataFrame([dataset_color])
pbi_color
entite = df_to_csv(dataset_entite, "dataset_entite.csv")
entite
scenario = df_to_csv(dataset_scenario, "dataset_scenario.csv")
scenario
kpis = df_to_csv(dataset_kpis_final, "dataset_kpis_final.csv")
kpis
evol_ca = df_to_csv(dataset_evol_ca, "dataset_evol_ca.csv")
evol_ca
charges = df_to_csv(dataset_charges, "dataset_charges.csv")
charges
treso = df_to_csv(dataset_treso, "dataset_treso.csv")
treso
bilan = df_to_csv(dataset_bilan, "dataset_bilan.csv")
bilan
###Output
_____no_output_____
###Markdown
Create the file to load into Power BI
###Code
db_powerbi = pd.concat([logo, pbi_color, entite, scenario, kpis, evol_ca, charges, treso, bilan], axis=0)
db_powerbi
df_to_csv(db_powerbi, "powerbi.csv")
###Output
_____no_output_____
|
Introduction-of-Statistic-Learning/exercises/Exercise04.ipynb
|
###Markdown
**Chapter 04**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
10 10(a)
###Code
weekly_file_path = '../data/Weekly.csv'
weekly = pd.read_csv(weekly_file_path, index_col=0)
weekly.head()
weekly.describe()
from pandas.tools.plotting import scatter_matrix
weekly_refine = weekly[['Lag1','Lag2','Lag3','Lag4','Lag5','Volume','Today','Direction']]
weekly_refine=weekly_refine.replace('Up',1)
weekly_refine=weekly_refine.replace('Down',0)
fig, ax = plt.subplots(figsize=(15, 15))
scatter_matrix(weekly_refine,alpha=0.5,diagonal='kde', ax=ax);
weekly_refine.corr()
###Output
_____no_output_____
###Markdown
10(b)
###Code
from sklearn.linear_model import LogisticRegression
X = weekly_refine[['Lag1','Lag2','Lag3','Lag4','Lag5','Volume']].values
y = weekly_refine['Direction'].values
y = y.reshape((len(y),1))
lg = LogisticRegression()
lg.fit(X,y)
print(lg.coef_,lg.intercept_)
###Output
[[-0.04117292 0.05846974 -0.01599122 -0.02769998 -0.01440289 -0.02212844]] [ 0.26484745]
###Markdown
10(c)
###Code
from sklearn.metrics import confusion_matrix,accuracy_score
pred = lg.predict(X)
confusion_matrix(y,pred)
print(accuracy_score(y,pred))
###Output
0.56290174472
###Markdown
10(d)
###Code
df_train = weekly[weekly['Year'].isin(range(1990,2009))]
df_test = weekly[weekly['Year'].isin(range(2009,2011))]
# training data
X_train = df_train['Lag2'].values
X_train = X_train.reshape((len(X_train),1))
y_train = df_train['Direction'].values
# test data
X_test = df_test['Lag2'].values
X_test = X_test.reshape((len(X_test),1))
y_test = df_test['Direction'].values
# lg
lg = LogisticRegression()
lg.fit(X_train, y_train)
pred_test = lg.predict(X_test)
print(confusion_matrix(y_test, pred_test))
print(accuracy_score(y_test,pred_test))
###Output
0.625
###Markdown
10(e)
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
lda.fit(X_train,y_train)
pred_test = lda.predict(X_test)
print(confusion_matrix(y_test, pred_test))
print(accuracy_score(y_test,pred_test))
###Output
0.625
###Markdown
10(f)
###Code
qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
pred_test = qda.predict(X_test)
print(confusion_matrix(y_test, pred_test))
print(accuracy_score(y_test, pred_test))
###Output
0.586538461538
###Markdown
10(g)
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
pred_test = knn.predict(X_test)
print(confusion_matrix(y_test, pred_test))
print(accuracy_score(y_test, pred_test))
###Output
0.528846153846
###Markdown
11
###Code
auto_file_path = '../data/Auto'
autos = pd.read_table(auto_file_path, sep=r'\s+')
autos=autos.replace('?',np.NAN).dropna()
autos['horsepower']=autos['horsepower'].astype('float')
autos.head()
###Output
_____no_output_____
###Markdown
11(a)
###Code
mpgs = autos['mpg'].values
mpg_med = np.median(mpgs)
mpg0 = [1 if mpg > mpg_med else 0 for mpg in mpgs]
autos['mpg0'] = mpg0
autos.head()
###Output
_____no_output_____
###Markdown
11(b)
###Code
fig, ax = plt.subplots(figsize=(15, 15))
scatter_matrix(autos[['cylinders','displacement','horsepower','weight','acceleration','mpg0']],ax=ax);
autos[['cylinders','displacement','horsepower','weight','acceleration','mpg0']].boxplot(by='mpg0');
###Output
_____no_output_____
###Markdown
11(c)
###Code
autos_train = autos[autos.apply(lambda x: x['year'] %2 ==0, axis=1)]
autos_test = autos[autos.apply(lambda x:x['year'] % 2 != 0, axis=1)]
variables = ['cylinders','weight','displacement','horsepower']
response = ['mpg0']
X_train = autos_train[variables].values
y_train = autos_train[response].values
y_train = y_train.reshape((len(y_train)))
X_test = autos_test[variables].values
y_test = autos_test[response].values
y_test = y_test.reshape((len(y_test)))
###Output
_____no_output_____
###Markdown
11(d)
###Code
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
pred = lda.predict(X_test)
print(accuracy_score(y_test, pred))
###Output
0.873626373626
###Markdown
11(e)
###Code
qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
pred = qda.predict(X_test)
print(accuracy_score(y_test, pred))
###Output
0.868131868132
###Markdown
11(f)
###Code
lg = LogisticRegression()
lg.fit(X_train, y_train)
pred = lg.predict(X_test)
print(accuracy_score(y_test, pred))
###Output
0.873626373626
###Markdown
11(g)
###Code
ks=[1,3,5,7,9,11,13,15]
accur={}
for k in ks:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
accur[k]=accuracy_score(y_test,pred)
for k,v in accur.items():
print('k is %d :'%k, 'accuracy is %f'%v)
###Output
k is 1 : accuracy is 0.846154
k is 3 : accuracy is 0.862637
k is 5 : accuracy is 0.851648
k is 7 : accuracy is 0.851648
k is 9 : accuracy is 0.840659
k is 11 : accuracy is 0.846154
k is 13 : accuracy is 0.846154
k is 15 : accuracy is 0.840659
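###Markdown
A quick illustrative plot of the accuracies computed above (an optional addition):
###Code
plt.plot(list(accur.keys()), list(accur.values()), 'o-')
plt.xlabel('k')
plt.ylabel('test accuracy')
plt.show()
###Output
_____no_output_____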
###Markdown
When $k$ equals $3$, the KNN classifier reaches its highest accuracy, $0.8626$. 12 12(a)
###Code
def Power():
return 2*2*2
Power()
###Output
_____no_output_____
###Markdown
12(b)
###Code
def Power2(x,a):
res = 1
while a>=1:
res *= x
a -= 1
return res
Power2(3,8)
###Output
_____no_output_____
###Markdown
12(c,d)
###Code
Power2(10,3)
Power2(8,17)
Power2(131,3)
###Output
_____no_output_____
###Markdown
12(e)
###Code
x = range(1,11,1)
y = [Power2(item,2) for item in x]
plt.plot(x,y)
plt.show()
###Output
_____no_output_____
###Markdown
13
###Code
boston_file_name = '../data/Boston.csv'
bostons = pd.read_csv(boston_file_name, index_col=0)
bostons.head()
crims = bostons['crim'].values
crims_med = np.median(crims)
cime_statue = [1 if crim > crims_med else 0 for crim in crims]
bostons['crim_statue'] = cime_statue
X = bostons[['dis']].values
y = bostons['crim_statue'].values
lg = LogisticRegression()
lg.fit(X,y)
print(lg.coef_,lg.intercept_)
###Output
[[-0.95466145]] [ 3.31112177]
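###Markdown
As an illustrative extension (the extra predictors below are an arbitrary choice, not prescribed by the exercise text), the single-predictor fit can be scored and compared with a multi-predictor model:
###Code
pred = lg.predict(X)
print(accuracy_score(y, pred))
# Illustrative multi-predictor variant
X_multi = bostons[['dis', 'nox', 'age', 'tax']].values
lg_multi = LogisticRegression().fit(X_multi, y)
print(accuracy_score(y, lg_multi.predict(X_multi)))
###Output
_____no_output_____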
|
05-ONNX.ipynb
|
###Markdown
ONNX (Open Neural Network eXchange)Originally created by Facebook and Microsoft as an industry collaboration for import/export of neural networks* ONNX has grown to include support for "traditional" ML models* interop with many software libraries* has both software (CPU, optional GPU accelerated) and hardware (Intel, Qualcomm, etc.) runtimes.https://onnx.ai/* DAG-based model* Built-in operators, data types* Extensible -- e.g., ONNX-ML* Goal is to allow tools to share a single model format*Of the "standard/open" formats, ONNX clearly has the most momentum in the past year or two.* Viewing a ModelONNX models are not directly (as raw data) human-readable, but, as they represent a graph, can easily be converted into textual or graphical representations.Here is a snippet of the [SqueezeNet](https://arxiv.org/abs/1602.07360) image-recognition model, as rendered in the ONNX visualization tutorial at https://github.com/onnx/tutorials/blob/master/tutorials/VisualizingAModel.md. > The ONNX codebase comes with the visualization converter used in this example -- it's a simple script currently located at https://github.com/onnx/onnx/blob/master/onnx/tools/net_drawer.py Let's Build a Model and Convert it to ONNX
###Code
import pandas as pd
from sklearn.linear_model import LinearRegression
data = pd.read_csv('data/diamonds.csv')
X = data.carat
y = data.price
model = LinearRegression().fit(X.values.reshape(-1,1), y)
###Output
_____no_output_____
###Markdown
ONNX can be generated from many modeling tools. A partial list is here: https://github.com/onnx/tutorialsconverting-to-onnx-formatMicrosoft has contributed a lot of resources toward open-source ONNX capabilities, including, in early 2019, support for Apache Spark ML Pipelines: https://github.com/onnx/onnxmltools/blob/master/onnxmltools/convert/sparkml/README.md__Convert to ONNX__Note that we can print a string representation of the converted graph.
###Code
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
initial_type = [('float_input', FloatTensorType([1, 1]))]
onx = convert_sklearn(model, initial_types=initial_type)
print(onx)
###Output
_____no_output_____
###Markdown
Let's save it as a file:
###Code
with open("diamonds.onnx", "wb") as f:
f.write(onx.SerializeToString())
###Output
_____no_output_____
###Markdown
The file itself is binary:
###Code
! head diamonds.onnx
###Output
_____no_output_____
###Markdown
How Do We Consume ONNX and Make Predictions?One of the things that makes ONNX a compelling solution in 2019 is the wide industry support not just for model creation, but also for performant model inference.Here is a partial list of tools that consume ONNX: https://github.com/onnx/tutorialsscoring-onnx-modelsOf particular interest for productionizing models are* Apple CoreML* Microsoft `onnxruntime` and `onnxruntime-gpu` for CPU & GPU-accelerated inference* TensorRT for NVIDIA GPU* Conversion for Qualcomm Snapdragon hardware: https://developer.qualcomm.com/docs/snpe/model_conv_onnx.htmlToday, we'll look at "regular" server-based inference with a sample REST server, using `onnxruntime` We'll start by loading the `onnxruntime` library, and seeing how we make predictions
###Code
import onnxruntime as rt
sess = rt.InferenceSession("diamonds.onnx")
print("In", [(i.name, i.type, i.shape) for i in sess.get_inputs()])
print("Out", [(i.name, i.type, i.shape) for i in sess.get_outputs()])
###Output
_____no_output_____
###Markdown
We've skipped some metadata annotation in the model creation for this quick example -- that's why our input field name is "float_input" and the output is called "variable"
###Code
import numpy as np
sample_to_score = np.array([[1.0]], dtype=np.float32)
output = sess.run(['variable'], {'float_input': sample_to_score})
output
output[0][0][0]
###Output
_____no_output_____
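###Markdown
The text above mentions serving this model behind a REST endpoint; a minimal sketch of such a server follows (Flask is an illustrative choice here, and the route and field names are assumptions, not something this notebook prescribes):
###Code
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route("/predict", methods=["POST"])
def predict():
    # expects a JSON body such as {"carat": 1.0}
    carat = float(request.json["carat"])
    features = np.array([[carat]], dtype=np.float32)
    price = sess.run(['variable'], {'float_input': features})[0][0][0]
    return jsonify({"price": float(price)})
# app.run(port=5000)  # uncomment to serve locally
###Output
_____no_output_____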
|
Chapter 8/08_Gaussian_processes.ipynb
|
###Markdown
Kernelized Regression
###Code
# Imports assumed by this chapter's code (the original import cell is not included in this extract)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
import pymc3 as pm
import theano.tensor as tt
np.random.seed(1)
x = np.random.uniform(0, 10, size=20)
y = np.random.normal(np.sin(x), 0.2)
plt.plot(x, y, 'o')
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_01.png', dpi=300, figsize=[5.5, 5.5])
def gauss_kernel(x, n_knots):
"""
Simple Gaussian radial kernel
"""
knots = np.linspace(x.min(), x.max(), n_knots)
w = 2
return np.array([np.exp(-(x-k)**2/w) for k in knots])
n_knots = 5
with pm.Model() as kernel_model:
gamma = pm.Cauchy('gamma', alpha=0, beta=1, shape=n_knots)
sd = pm.Uniform('sd',0, 10)
mu = pm.math.dot(gamma, gauss_kernel(x, n_knots))
yl = pm.Normal('yl', mu=mu, sd=sd, observed=y)
kernel_trace = pm.sample(10000, step=pm.Metropolis())
chain = kernel_trace[5000:]
pm.traceplot(chain);
plt.savefig('B04958_08_02.png', dpi=300, figsize=[5.5, 5.5])
pm.df_summary(chain)
ppc = pm.sample_ppc(chain, model=kernel_model, samples=100)
plt.plot(x, ppc['yl'].T, 'ro', alpha=0.1)
plt.plot(x, y, 'bo')
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_03.png', dpi=300, figsize=[5.5, 5.5])
new_x = np.linspace(x.min(), x.max(), 100)
k = gauss_kernel(new_x, n_knots)
gamma_pred = chain['gamma']
for i in range(100):
idx = np.random.randint(0, len(gamma_pred))
y_pred = np.dot(gamma_pred[idx], k)
plt.plot(new_x, y_pred, 'r-', alpha=0.1)
plt.plot(x, y, 'bo')
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_04.png', dpi=300, figsize=[5.5, 5.5])
###Output
_____no_output_____
###Markdown
Gaussian Processes
###Code
squared_distance = lambda x, y: np.array([[(x[i] - y[j])**2 for i in range(len(x))] for j in range(len(y))])
np.random.seed(1)
test_points = np.linspace(0, 10, 100)
cov = np.exp(-squared_distance(test_points, test_points))
plt.plot(test_points, stats.multivariate_normal.rvs(cov=cov, size=6).T)
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_05.png', dpi=300, figsize=[5.5, 5.5]);
np.random.seed(1)
eta = 1
rho = 0.5
sigma = 0.03
D = squared_distance(test_points, test_points)
cov = eta * np.exp(-rho * D)
diag = eta * sigma
np.fill_diagonal(cov, diag)
for i in range(6):
plt.plot(test_points, stats.multivariate_normal.rvs(cov=cov))
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_06.png', dpi=300, figsize=[5.5, 5.5]);
np.random.seed(1)
K_oo = eta * np.exp(-rho * D)
D_x = squared_distance(x, x)
K = eta * np.exp(-rho * D_x)
diag_x = eta + sigma
np.fill_diagonal(K, diag_x)
D_off_diag = squared_distance(x, test_points)
K_o = eta * np.exp(-rho * D_off_diag)
# Posterior mean
mu_post = np.dot(np.dot(K_o, np.linalg.inv(K)), y)
# Posterior covariance
SIGMA_post = K_oo - np.dot(np.dot(K_o, np.linalg.inv(K)), K_o.T)
for i in range(100):
fx = stats.multivariate_normal.rvs(mean=mu_post, cov=SIGMA_post)
plt.plot(test_points, fx, 'r-', alpha=0.1)
plt.plot(x, y, 'o')
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_07.png', dpi=300, figsize=[5.5, 5.5]);
np.random.seed(1)
eta = 1
rho = 0.5
sigma = 0.03
# This is the true unknown function we are trying to approximate
f = lambda x: np.sin(x).flatten()
# Define the kernel
def kernel(a, b):
""" GP squared exponential kernel """
D = np.sum(a**2,1).reshape(-1,1) + np.sum(b**2,1) - 2*np.dot(a, b.T)
return eta * np.exp(- rho * D)
N = 20 # number of training points.
n = 100 # number of test points.
# Sample some input points and noisy versions of the function evaluated at
# these points.
X = np.random.uniform(0, 10, size=(N,1))
y = f(X) + sigma * np.random.randn(N)
K = kernel(X, X)
L = np.linalg.cholesky(K + sigma * np.eye(N))
# points we're going to make predictions at.
Xtest = np.linspace(0, 10, n).reshape(-1,1)
# compute the mean at our test points.
Lk = np.linalg.solve(L, kernel(X, Xtest))
mu = np.dot(Lk.T, np.linalg.solve(L, y))
# compute the variance at our test points.
K_ = kernel(Xtest, Xtest)
sd_pred = (np.diag(K_) - np.sum(Lk**2, axis=0))**0.5
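# (Added note) mu and sd_pred are the analytic GP posterior mean and standard deviation at
# the test points, computed through the Cholesky factor L rather than an explicit inverse.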
plt.fill_between(Xtest.flat, mu - 2 * sd_pred, mu + 2 * sd_pred, color="r", alpha=0.2)
plt.plot(Xtest, mu, 'r', lw=2)
plt.plot(x, y, 'o')
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_08.png', dpi=300, figsize=[5.5, 5.5]);
with pm.Model() as GP:
mu = np.zeros(N)
eta = pm.HalfCauchy('eta', 5)
rho = pm.HalfCauchy('rho', 5)
sigma = pm.HalfCauchy('sigma', 5)
D = squared_distance(x, x)
K = tt.fill_diagonal(eta * pm.math.exp(-rho * D), eta + sigma)
obs = pm.MvNormal('obs', mu, cov=K, observed=y)
test_points = np.linspace(0, 10, 100)
D_pred = squared_distance(test_points, test_points)
D_off_diag = squared_distance(x, test_points)
K_oo = eta * pm.math.exp(-rho * D_pred)
K_o = eta * pm.math.exp(-rho * D_off_diag)
mu_post = pm.Deterministic('mu_post', pm.math.dot(pm.math.dot(K_o, tt.nlinalg.matrix_inverse(K)), y))
SIGMA_post = pm.Deterministic('SIGMA_post', K_oo - pm.math.dot(pm.math.dot(K_o, tt.nlinalg.matrix_inverse(K)), K_o.T))
start = pm.find_MAP()
trace = pm.sample(1000, start=start)
varnames = ['eta', 'rho', 'sigma']
chain = trace[100:]
pm.traceplot(chain, varnames)
plt.savefig('B04958_08_09.png', dpi=300, figsize=[5.5, 5.5]);
pm.df_summary(chain, varnames).round(4)
y_pred = [np.random.multivariate_normal(m, S) for m,S in zip(chain['mu_post'][::5], chain['SIGMA_post'][::5])]
for yp in y_pred:
plt.plot(test_points, yp, 'r-', alpha=0.1)
plt.plot(x, y, 'bo')
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_10.png', dpi=300, figsize=[5.5, 5.5]);
###Output
/home/osvaldo/anaconda3/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: covariance is not positive-semidefinite.
if __name__ == '__main__':
###Markdown
Periodic Kernel
###Code
periodic = lambda x, y: np.array([[np.sin((x[i] - y[j])/2)**2 for i in range(len(x))] for j in range(len(y))])
with pm.Model() as GP_periodic:
mu = np.zeros(N)
eta = pm.HalfCauchy('eta', 5)
rho = pm.HalfCauchy('rho', 5)
sigma = pm.HalfCauchy('sigma', 5)
P = periodic(x, x)
K = tt.fill_diagonal(eta * pm.math.exp(-rho * P), eta + sigma)
obs = pm.MvNormal('obs', mu, cov=K, observed=y)
test_points = np.linspace(0, 10, 100)
D_pred = periodic(test_points, test_points)
D_off_diag = periodic(x, test_points)
K_oo = eta * pm.math.exp(-rho * D_pred)
K_o = eta * pm.math.exp(-rho * D_off_diag)
mu_post = pm.Deterministic('mu_post', pm.math.dot(pm.math.dot(K_o, tt.nlinalg.matrix_inverse(K)), y))
SIGMA_post = pm.Deterministic('SIGMA_post', K_oo - pm.math.dot(pm.math.dot(K_o, tt.nlinalg.matrix_inverse(K)), K_o.T))
start = pm.find_MAP()
trace = pm.sample(1000, start=start)
varnames = ['eta', 'rho', 'sigma']
chain = trace[100:]
pm.traceplot(chain, varnames);
y_pred = [np.random.multivariate_normal(m, S) for m,S in zip(chain['mu_post'][::5], chain['SIGMA_post'][::5])]
for yp in y_pred:
plt.plot(test_points, yp, 'r-', alpha=0.1)
plt.plot(x, y, 'bo')
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$f(x)$', fontsize=16, rotation=0)
plt.savefig('B04958_08_11.png', dpi=300, figsize=[5.5, 5.5]);
import sys, IPython, scipy, matplotlib, platform
print("This notebook was created on a computer %s running %s and using:\nPython %s\nIPython %s\nPyMC3 %s\nNumPy %s\nSciPy %s\nPandas %s\nMatplotlib %s\nSeaborn %s\n" % (platform.machine(), ' '.join(platform.linux_distribution()[:2]), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, scipy.__version__, pd.__version__, matplotlib.__version__, sns.__version__))
###Output
This notebook was created on a computer x86_64 running debian stretch/sid and using:
Python 3.5.2
IPython 5.0.0
PyMC3 3.0.rc2
NumPy 1.11.2
SciPy 0.18.1
Pandas 0.19.1
Matplotlib 1.5.3
Seaborn 0.7.1
|
N_additional_information/timeseries/TimeSeries.ipynb
|
###Markdown
Attention: exercise needs to be renewed. Replace the google finance package api Working with Time Series Pandas was developed in the context of financial modeling, so as you might expect, it contains a fairly extensive set of tools for working with dates, times, and time-indexed data.Date and time data comes in a few flavors, which we will discuss here:- *Time stamps* reference particular moments in time (e.g., July 4th, 2015 at 7:00am).- *Time intervals* and *periods* reference a length of time between a particular beginning and end point; for example, the year 2015. Periods usually reference a special case of time intervals in which each interval is of uniform length and does not overlap (e.g., 24 hour-long periods comprising days).- *Time deltas* or *durations* reference an exact length of time (e.g., a duration of 22.56 seconds).In this section, we will introduce how to work with each of these types of date/time data in Pandas.This short section is by no means a complete guide to the time series tools available in Python or Pandas, but instead is intended as a broad overview of how you as a user should approach working with time series.We will start with a brief discussion of tools for dealing with dates and times in Python, before moving more specifically to a discussion of the tools provided by Pandas.After listing some resources that go into more depth, we will review some short examples of working with time series data in Pandas. Dates and Times in PythonThe Python world has a number of available representations of dates, times, deltas, and timespans.While the time series tools provided by Pandas tend to be the most useful for data science applications, it is helpful to see their relationship to other packages used in Python. Native Python dates and times: ``datetime`` and ``dateutil``Python's basic objects for working with dates and times reside in the built-in ``datetime`` module.Along with the third-party ``dateutil`` module, you can use it to quickly perform a host of useful functionalities on dates and times.For example, you can manually build a date using the ``datetime`` type:
###Code
from datetime import datetime
datetime(year=2015, month=7, day=4)
###Output
_____no_output_____
###Markdown
Or, using the ``dateutil`` module, you can parse dates from a variety of string formats:
###Code
from dateutil import parser
date = parser.parse("4th of July, 2015")
date
###Output
_____no_output_____
###Markdown
Once you have a ``datetime`` object, you can do things like printing the day of the week:
###Code
date.strftime('%A')
###Output
_____no_output_____
###Markdown
Dates and times in pandas: best of both worldsPandas builds upon all the tools just discussed to provide a ``Timestamp`` object, which combines the ease-of-use of ``datetime`` and ``dateutil`` with the efficient storage and vectorized interface of ``numpy.datetime64``.From a group of these ``Timestamp`` objects, Pandas can construct a ``DatetimeIndex`` that can be used to index data in a ``Series`` or ``DataFrame``; we'll see many examples of this below.For example, we can use Pandas tools to repeat the demonstration from above.We can parse a flexibly formatted string date, and use format codes to output the day of the week:
###Code
import pandas as pd
date = pd.to_datetime("4th of July, 2015")
date
date.strftime('%A')
###Output
_____no_output_____
###Markdown
Additionally, we can do NumPy-style vectorized operations directly on this same object:
###Code
import numpy as np
date + pd.to_timedelta(np.arange(12), 'D')
###Output
_____no_output_____
###Markdown
In the next section, we will take a closer look at manipulating time series data with the tools provided by Pandas. Pandas Time Series: Indexing by TimeWhere the Pandas time series tools really become useful is when you begin to *index data by timestamps*.For example, we can construct a ``Series`` object that has time indexed data:
###Code
index = pd.DatetimeIndex(['2014-07-04', '2014-08-04',
'2015-07-04', '2015-08-04'])
data = pd.Series([0, 1, 2, 3], index=index)
data
###Output
_____no_output_____
###Markdown
Now that we have this data in a ``Series``, we can make use of any of the ``Series`` indexing patterns we discussed in previous sections, passing values that can be coerced into dates:
###Code
data['2014-07-04':'2015-07-04']
###Output
_____no_output_____
###Markdown
There are additional special date-only indexing operations, such as passing a year to obtain a slice of all data from that year:
###Code
data['2015']
###Output
_____no_output_____
###Markdown
Later, we will see additional examples of the convenience of dates-as-indices.But first, a closer look at the available time series data structures. Pandas Time Series Data StructuresThis section will introduce the fundamental Pandas data structures for working with time series data:- For *time stamps*, Pandas provides the ``Timestamp`` type. As mentioned before, it is essentially a replacement for Python's native ``datetime``, but is based on the more efficient ``numpy.datetime64`` data type. The associated Index structure is ``DatetimeIndex``.- For *time Periods*, Pandas provides the ``Period`` type. This encodes a fixed-frequency interval based on ``numpy.datetime64``. The associated index structure is ``PeriodIndex``.- For *time deltas* or *durations*, Pandas provides the ``Timedelta`` type. ``Timedelta`` is a more efficient replacement for Python's native ``datetime.timedelta`` type, and is based on ``numpy.timedelta64``. The associated index structure is ``TimedeltaIndex``. The most fundamental of these date/time objects are the ``Timestamp`` and ``DatetimeIndex`` objects.While these class objects can be invoked directly, it is more common to use the ``pd.to_datetime()`` function, which can parse a wide variety of formats.Passing a single date to ``pd.to_datetime()`` yields a ``Timestamp``; passing a series of dates by default yields a ``DatetimeIndex``:
###Code
dates = pd.to_datetime([datetime(2015, 7, 3), '4th of July, 2015',
'2015-Jul-6', '07-07-2015', '20150708'])
dates
###Output
_____no_output_____
###Markdown
Any ``DatetimeIndex`` can be converted to a ``PeriodIndex`` with the ``to_period()`` function with the addition of a frequency code; here we'll use ``'D'`` to indicate daily frequency:
###Code
dates.to_period('D')
###Output
_____no_output_____
###Markdown
A ``TimedeltaIndex`` is created, for example, when a date is subtracted from another:
###Code
dates - dates[0]
###Output
_____no_output_____
###Markdown
Regular sequences: ``pd.date_range()``To make the creation of regular date sequences more convenient, Pandas offers a few functions for this purpose: ``pd.date_range()`` for timestamps, ``pd.period_range()`` for periods, and ``pd.timedelta_range()`` for time deltas.We've seen that Python's ``range()`` and NumPy's ``np.arange()`` turn a startpoint, endpoint, and optional stepsize into a sequence.Similarly, ``pd.date_range()`` accepts a start date, an end date, and an optional frequency code to create a regular sequence of dates.By default, the frequency is one day:
###Code
pd.date_range('2015-07-03', '2015-07-10')
###Output
_____no_output_____
###Markdown
Alternatively, the date range can be specified not with a start and endpoint, but with a startpoint and a number of periods:
###Code
pd.date_range('2015-07-03', periods=8)
###Output
_____no_output_____
###Markdown
The spacing can be modified by altering the ``freq`` argument, which defaults to ``D``.For example, here we will construct a range of hourly timestamps:
###Code
pd.date_range('2015-07-03', periods=8, freq='H')
###Output
_____no_output_____
###Markdown
To create regular sequences of ``Period`` or ``Timedelta`` values, the very similar ``pd.period_range()`` and ``pd.timedelta_range()`` functions are useful.Here are some monthly periods:
###Code
pd.period_range('2015-07', periods=8, freq='M')
###Output
_____no_output_____
###Markdown
And a sequence of durations increasing by an hour:
###Code
pd.timedelta_range(0, periods=10, freq='H')
###Output
_____no_output_____
###Markdown
All of these require an understanding of Pandas frequency codes, which we'll summarize in the next section. Frequencies and Offsets Fundamental to these Pandas time series tools is the concept of a frequency or date offset. Just as we saw the ``D`` (day) and ``H`` (hour) codes above, we can use such codes to specify any desired frequency spacing. The following table summarizes the main codes available:
| Code  | Description  | Code   | Description          |
|-------|--------------|--------|----------------------|
| ``D`` | Calendar day | ``B``  | Business day         |
| ``W`` | Weekly       |        |                      |
| ``M`` | Month end    | ``BM`` | Business month end   |
| ``Q`` | Quarter end  | ``BQ`` | Business quarter end |
| ``A`` | Year end     | ``BA`` | Business year end    |
| ``H`` | Hours        | ``BH`` | Business hours       |
| ``T`` | Minutes      |        |                      |
| ``S`` | Seconds      |        |                      |
| ``L`` | Milliseconds |        |                      |
| ``U`` | Microseconds |        |                      |
| ``N`` | Nanoseconds  |        |                      |
The monthly, quarterly, and annual frequencies are all marked at the end of the specified period. By adding an ``S`` suffix to any of these, they instead will be marked at the beginning:
| Code   | Description   | Code    | Description            |
|--------|---------------|---------|------------------------|
| ``MS`` | Month start   | ``BMS`` | Business month start   |
| ``QS`` | Quarter start | ``BQS`` | Business quarter start |
| ``AS`` | Year start    | ``BAS`` | Business year start    |
Additionally, you can change the month used to mark any quarterly or annual code by adding a three-letter month code as a suffix:
- ``Q-JAN``, ``BQ-FEB``, ``QS-MAR``, ``BQS-APR``, etc.
- ``A-JAN``, ``BA-FEB``, ``AS-MAR``, ``BAS-APR``, etc.
In the same way, the split-point of the weekly frequency can be modified by adding a three-letter weekday code:
- ``W-SUN``, ``W-MON``, ``W-TUE``, ``W-WED``, etc.
On top of this, codes can be combined with numbers to specify other frequencies. For example, for a frequency of 2 hours 30 minutes, we can combine the hour (``H``) and minute (``T``) codes as follows:
###Code
pd.timedelta_range(0, periods=9, freq="2H30T")
###Output
_____no_output_____
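###Markdown
The anchored variants described above work the same way; for instance (an illustrative addition), a weekly frequency split on Sundays:
###Code
pd.date_range('2015-07-03', periods=4, freq='W-SUN')
###Output
_____no_output_____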
###Markdown
All of these short codes refer to specific instances of Pandas time series offsets, which can be found in the ``pd.tseries.offsets`` module.For example, we can create a business day offset directly as follows:
###Code
from pandas.tseries.offsets import BDay
pd.date_range('2015-07-01', periods=5, freq=BDay())
###Output
_____no_output_____
###Markdown
For more discussion of the use of frequencies and offsets, see the ["DateOffset" section](http://pandas.pydata.org/pandas-docs/stable/timeseries.htmldateoffset-objects) of the Pandas documentation. Resampling, Shifting, and WindowingThe ability to use dates and times as indices to intuitively organize and access data is an important piece of the Pandas time series tools.The benefits of indexed data in general (automatic alignment during operations, intuitive data slicing and access, etc.) still apply, and Pandas provides several additional time series-specific operations.We will take a look at a few of those here, using some stock price data as an example.Because Pandas was developed largely in a finance context, it includes some very specific tools for financial data.For example, the accompanying ``pandas-datareader`` package (installable via ``conda install pandas-datareader``), knows how to import financial data from a number of available sources, including Yahoo finance, Google Finance, and others.Here we will load Google's closing price history:
###Code
from pandas_datareader import data
goog = data.DataReader('GOOG', start='2004', end='2016',
data_source='google')
goog.head()
###Output
_____no_output_____
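###Markdown
As flagged at the top of this notebook, the Google Finance source used above has been discontinued. The commented cell below sketches one possible replacement (the 'stooq' source bundled with pandas-datareader is an assumption; its column layout and date ordering may differ):
###Code
# goog = data.DataReader('GOOG', data_source='stooq').sort_index()
###Output
_____no_output_____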
###Markdown
For simplicity, we'll use just the closing price:
###Code
goog = goog['Close']
###Output
_____no_output_____
###Markdown
We can visualize this using the ``plot()`` method, after the normal Matplotlib setup boilerplate (see [Chapter 4](04.00-Introduction-To-Matplotlib.ipynb)):
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set()
goog.plot();
###Output
_____no_output_____
###Markdown
Resampling and converting frequenciesOne common need for time series data is resampling at a higher or lower frequency.This can be done using the ``resample()`` method, or the much simpler ``asfreq()`` method.The primary difference between the two is that ``resample()`` is fundamentally a *data aggregation*, while ``asfreq()`` is fundamentally a *data selection*.Taking a look at the Google closing price, let's compare what the two return when we down-sample the data.Here we will resample the data at the end of business year:
###Code
goog.plot(alpha=0.5, style='-')
goog.resample('BA').mean().plot(style=':')
goog.asfreq('BA').plot(style='--');
plt.legend(['input', 'resample', 'asfreq'],
loc='upper left');
###Output
_____no_output_____
###Markdown
Notice the difference: at each point, ``resample`` reports the *average of the previous year*, while ``asfreq`` reports the *value at the end of the year*. For up-sampling, ``resample()`` and ``asfreq()`` are largely equivalent, though resample has many more options available.In this case, the default for both methods is to leave the up-sampled points empty, that is, filled with NA values.Just as with the ``pd.fillna()`` function discussed previously, ``asfreq()`` accepts a ``method`` argument to specify how values are imputed.Here, we will resample the business day data at a daily frequency (i.e., including weekends):
###Code
fig, ax = plt.subplots(2, sharex=True)
data = goog.iloc[:10]
data.asfreq('D').plot(ax=ax[0], marker='o')
data.asfreq('D', method='bfill').plot(ax=ax[1], style='-o')
data.asfreq('D', method='ffill').plot(ax=ax[1], style='--o')
ax[1].legend(["back-fill", "forward-fill"]);
###Output
_____no_output_____
###Markdown
The top panel is the default: non-business days are left as NA values and do not appear on the plot.The bottom panel shows the differences between two strategies for filling the gaps: forward-filling and backward-filling. Time-shiftsAnother common time series-specific operation is shifting of data in time.Pandas has two closely related methods for computing this: ``shift()`` and ``tshift()``.In short, the difference between them is that ``shift()`` *shifts the data*, while ``tshift()`` *shifts the index*.In both cases, the shift is specified in multiples of the frequency.Here we will both ``shift()`` and ``tshift()`` by 900 days:
###Code
fig, ax = plt.subplots(3, sharey=True)
# apply a frequency to the data
goog = goog.asfreq('D', method='pad')
goog.plot(ax=ax[0])
goog.shift(900).plot(ax=ax[1])
goog.tshift(900).plot(ax=ax[2])
# legends and annotations
local_max = pd.to_datetime('2007-11-05')
offset = pd.Timedelta(900, 'D')
ax[0].legend(['input'], loc=2)
ax[0].get_xticklabels()[2].set(weight='heavy', color='red')
ax[0].axvline(local_max, alpha=0.3, color='red')
ax[1].legend(['shift(900)'], loc=2)
ax[1].get_xticklabels()[2].set(weight='heavy', color='red')
ax[1].axvline(local_max + offset, alpha=0.3, color='red')
ax[2].legend(['tshift(900)'], loc=2)
ax[2].get_xticklabels()[1].set(weight='heavy', color='red')
ax[2].axvline(local_max + offset, alpha=0.3, color='red');
###Output
_____no_output_____
###Markdown
We see here that ``shift(900)`` shifts the *data* by 900 days, pushing some of it off the end of the graph (and leaving NA values at the other end), while ``tshift(900)`` shifts the *index values* by 900 days.A common context for this type of shift is in computing differences over time. For example, we use shifted values to compute the one-year return on investment for Google stock over the course of the dataset:
###Code
ROI = 100 * (goog.tshift(-365) / goog - 1)
ROI.plot()
plt.ylabel('% Return on Investment');
###Output
_____no_output_____
###Markdown
This helps us to see the overall trend in Google stock: thus far, the most profitable times to invest in Google have been (unsurprisingly, in retrospect) shortly after its IPO, and in the middle of the 2009 recession. Rolling windowsRolling statistics are a third type of time series-specific operation implemented by Pandas.These can be accomplished via the ``rolling()`` attribute of ``Series`` and ``DataFrame`` objects, which returns a view similar to what we saw with the ``groupby`` operation (see [Aggregation and Grouping](03.08-Aggregation-and-Grouping.ipynb)).This rolling view makes available a number of aggregation operations by default.For example, here is the one-year centered rolling mean and standard deviation of the Google stock prices:
###Code
rolling = goog.rolling(365, center=True)
data = pd.DataFrame({'input': goog,
'one-year rolling_mean': rolling.mean(),
'one-year rolling_std': rolling.std()})
ax = data.plot(style=['-', '--', ':'])
ax.lines[0].set_alpha(0.3)
###Output
_____no_output_____
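###Markdown
Rolling objects also accept arbitrary functions through ``apply()``; the following is a minimal sketch (the peak-to-peak statistic and the 250-day window are just illustrative choices):
###Code
# custom rolling computation: 250-day rolling peak-to-peak range of the closing price
rolling_range = goog.rolling(250, center=True).apply(lambda x: x.max() - x.min())
rolling_range.plot();
###Output
_____no_output_____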
###Markdown
As with group-by operations, the ``aggregate()`` and ``apply()`` methods can be used for custom rolling computations. Where to Learn MoreThis section has provided only a brief summary of some of the most essential features of time series tools provided by Pandas; for a more complete discussion, you can refer to the ["Time Series/Date" section](http://pandas.pydata.org/pandas-docs/stable/timeseries.html) of the Pandas online documentation.Another excellent resource is the textbook [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do) by Wes McKinney (O'Reilly, 2012).Although it is now a few years old, it is an invaluable resource on the use of Pandas.In particular, this book emphasizes time series tools in the context of business and finance, and focuses much more on particular details of business calendars, time zones, and related topics.As always, you can also use the IPython help functionality to explore and try further options available to the functions and methods discussed here. I find this often is the best way to learn a new Python tool. Example: Visualizing Seattle Bicycle CountsAs a more involved example of working with some time series data, let's take a look at bicycle counts on Seattle's [Fremont Bridge](http://www.openstreetmap.org/#map=17/47.64813/-122.34965).This data comes from an automated bicycle counter, installed in late 2012, which has inductive sensors on the east and west sidewalks of the bridge.The hourly bicycle counts can be downloaded from http://data.seattle.gov/; here is the [direct link to the dataset](https://data.seattle.gov/Transportation/Fremont-Bridge-Hourly-Bicycle-Counts-by-Month-Octo/65db-xm6k).As of summer 2016, the CSV can be downloaded as follows:
###Code
# !curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
###Output
_____no_output_____
###Markdown
Once this dataset is downloaded, we can use Pandas to read the CSV output into a ``DataFrame``.We will specify that we want the Date as an index, and we want these dates to be automatically parsed:
###Code
data = pd.read_csv('FremontBridge.csv', index_col='Date', parse_dates=True)
data.head()
###Output
_____no_output_____
###Markdown
For convenience, we'll further process this dataset by shortening the column names and adding a "Total" column:
###Code
data.columns = ['West', 'East']
data['Total'] = data.eval('West + East')
###Output
_____no_output_____
###Markdown
Now let's take a look at the summary statistics for this data:
###Code
data.dropna().describe()
###Output
_____no_output_____
###Markdown
Visualizing the dataWe can gain some insight into the dataset by visualizing it.Let's start by plotting the raw data:
###Code
%matplotlib inline
import seaborn; seaborn.set()
data.plot()
plt.ylabel('Hourly Bicycle Count');
###Output
_____no_output_____
###Markdown
The ~25,000 hourly samples are far too dense for us to make much sense of.We can gain more insight by resampling the data to a coarser grid.Let's resample by week:
###Code
weekly = data.resample('W').sum()
weekly.plot(style=[':', '--', '-'])
plt.ylabel('Weekly bicycle count');
###Output
_____no_output_____
###Markdown
This shows us some interesting seasonal trends: as you might expect, people bicycle more in the summer than in the winter, and even within a particular season the bicycle use varies from week to week (likely dependent on weather; see [In Depth: Linear Regression](05.06-Linear-Regression.ipynb) where we explore this further).Another way that comes in handy for aggregating the data is to use a rolling window, utilizing the ``rolling()`` method introduced earlier.Here we'll take a 30 day centered rolling sum of the daily data:
###Code
daily = data.resample('D').sum()
daily.rolling(30, center=True).sum().plot(style=[':', '--', '-'])
plt.ylabel('30-day rolling count');
###Output
_____no_output_____
###Markdown
The jaggedness of the result is due to the hard cutoff of the window.We can get a smoother version of a rolling mean using a window function–for example, a Gaussian window.The following code specifies both the width of the window (we chose 50 days) and the width of the Gaussian within the window (we chose 10 days):
###Code
daily.rolling(50, center=True,
win_type='gaussian').sum(std=10).plot(style=[':', '--', '-']);
###Output
_____no_output_____
###Markdown
Digging into the dataWhile these smoothed data views are useful to get an idea of the general trend in the data, they hide much of the interesting structure.For example, we might want to look at the average traffic as a function of the time of day.We can do this using the GroupBy functionality discussed in [Aggregation and Grouping](03.08-Aggregation-and-Grouping.ipynb):
###Code
by_time = data.groupby(data.index.time).mean()
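# tick positions every 4 hours, expressed in seconds since midnight (to match the time-of-day index)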
hourly_ticks = 4 * 60 * 60 * np.arange(6)
by_time.plot(xticks=hourly_ticks, style=[':', '--', '-']);
###Output
_____no_output_____
###Markdown
The hourly traffic is a strongly bimodal distribution, with peaks around 8:00 in the morning and 5:00 in the evening.This is likely evidence of a strong component of commuter traffic crossing the bridge.This is further evidenced by the differences between the western sidewalk (generally used going toward downtown Seattle), which peaks more strongly in the morning, and the eastern sidewalk (generally used going away from downtown Seattle), which peaks more strongly in the evening.We also might be curious about how things change based on the day of the week. Again, we can do this with a simple groupby:
###Code
by_weekday = data.groupby(data.index.dayofweek).mean()
by_weekday.index = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun']
by_weekday.plot(style=[':', '--', '-']);
###Output
_____no_output_____
###Markdown
This shows a strong distinction between weekday and weekend totals, with around twice as many average riders crossing the bridge on Monday through Friday than on Saturday and Sunday.With this in mind, let's do a compound GroupBy and look at the hourly trend on weekdays versus weekends.We'll start by grouping by both a flag marking the weekend, and the time of day:
###Code
weekend = np.where(data.index.weekday < 5, 'Weekday', 'Weekend')
by_time = data.groupby([weekend, data.index.time]).mean()
###Output
_____no_output_____
###Markdown
Now we'll use some of the Matplotlib tools described in [Multiple Subplots](04.08-Multiple-Subplots.ipynb) to plot two panels side by side:
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 5))
by_time.loc['Weekday'].plot(ax=ax[0], title='Weekdays',
                            xticks=hourly_ticks, style=[':', '--', '-'])
by_time.loc['Weekend'].plot(ax=ax[1], title='Weekends',
                            xticks=hourly_ticks, style=[':', '--', '-']);
###Output
_____no_output_____
|
jupyter_notebooks/notebooks/NB20_CXVII-Keras_VAE_ising.ipynb
|
###Markdown
Notebook 20: Variational autoencoder for the Ising Model with Keras Learning GoalThe goal of this notebook is to implement a VAE to learn a generative model for the 2D Ising model. The goal will be to understand how latent variables can capture physical quantities (such as the order parameter) and the effect of hyperparameters on VAE results. OverviewIn this notebook, we will write a variational autoencoder (VAE) in Keras for the 2D Ising model dataset. The code in this notebook is adapted from (https://blog.keras.io/building-autoencoders-in-keras.html) and reproduces some of the results found in (https://arxiv.org/pdf/1703.02435.pdf). The goal of the notebook is to show how to implement a variational autoencoder in Keras in order to learn effective low-dimensional representations of equilibrium samples drawn from the 2D ferromagnetic Ising model with periodic boundary conditions. Structure of the notebookThe notebook is structured as follows. 1. We load in the Ising dataset 2. We construct the variational auto encoder model using Keras 3. We train the model on a training set and then visualize the latent representation 4. We compare the learned representation with the magnetization order parameter Load the dataset
###Code
import pickle
from sklearn.model_selection import train_test_split
import collections
def load_data_set(root="IsingMC/", train_size = 0.5):
"""Loads the Ising dataset in the format required for training the tensorflow VAE
Parameters
-------
root: str, default = "IsingMC/"
Location of the directory containing the Ising dataset
train_size: float, default = 0.5
Size ratio of the training set. 1-train_size corresponds to the test set size ratio.
"""
# The Ising dataset contains 16*10000 samples taken in T=np.arange(0.25,4.0001,0.25)
data = pickle.load(open(root+'Ising2DFM_reSample_L40_T=All.pkl','rb'))
data = np.unpackbits(data).astype(int).reshape(-1,1600) # decompression of data and casting to int.
    Y = np.hstack([[t]*10000 for t in np.arange(0.25,4.01,0.25)]) # labels
    # Shuffle the sample order within each temperature (all 10000 samples per temperature are kept)
    tmp = np.arange(10000)
    np.random.shuffle(tmp)
    rand_idx=tmp[:10000]
    X = np.vstack([data[i*10000:(i+1)*10000][rand_idx] for i, _ in enumerate(np.arange(0.25,4.01,0.25))])
    Y = np.hstack([Y[i*10000:(i+1)*10000][rand_idx] for i, _ in enumerate(np.arange(0.25,4.01,0.25))])
# Note that data is not currently shuffled
return X, Y
###Output
_____no_output_____
###Markdown
Constructing a VAE classHere, we implement the VAE in a slightly different way than we did for the MNIST dataset. We have chosen to create a new VAE class so that the parameters can be easily changed for new data.
###Code
from __future__ import print_function
import os
import numpy as np
from scipy.stats import norm
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import metrics, losses
from keras.datasets import mnist
class VAE:
def __init__(self, batch_size=100, original_dim =1600, latent_dim = 100, epochs=50, root="IsingMC/", epsilon=0.5):
'''
#Reference
- Auto-Encoding Variational Bayes
https://arxiv.org/abs/1312.6114
This code is taken from Keras VAE tutorial available at https://blog.keras.io/building-autoencoders-in-keras.html
Parameters
----------
batch_size : int, default=100
Size of batches for gradient descent
original_dim : int, default =1600
Number of features
latent_dim: int, default = 100
Dimensionality of the latent space
epochs: int, default = 50
Number of epochs for training
'''
self.batch_size = batch_size
self.original_dim = original_dim
self.latent_dim = latent_dim
self.intermediate_dim = 256
self.epochs = epochs
self.epsilon_std = epsilon
def sampling(self, args):
''' Sampling from the latent variables using the means and log-variances'''
z_mean, z_log_var = args
epsilon = K.random_normal(shape=(K.shape(z_mean)[0], self.latent_dim), mean=0.,
stddev=self.epsilon_std)
return z_mean + K.exp(z_log_var / 2) * epsilon
def build(self):
""" This class method constructs the VAE model
"""
original_dim = self.original_dim
latent_dim = self.latent_dim
intermediate_dim = self.intermediate_dim
# encoder
self.x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(self.x)
self.z_mean = Dense(latent_dim)(h)
self.z_log_var = Dense(latent_dim)(h)
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(self.sampling, output_shape=(latent_dim,))([self.z_mean, self.z_log_var])
# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)
#decoder
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
self.generator = Model(decoder_input, _x_decoded_mean)
# end-to-end VAE model
self.vae = Model(self.x, x_decoded_mean)
# encoder, from inputs to latent space
self.encoder = Model(self.x, self.z_mean)
# decoder
#self.decoder = Model(decoder_input, _x_decoded_mean)
# Compute VAE loss
self.vae.compile(optimizer='rmsprop', loss=self.vae_loss)
# Prints a summary of the architecture used
self.vae.summary()
def vae_loss(self, x, x_decoded_mean):
xent_loss = losses.binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + self.z_log_var - K.square(self.z_mean) - K.exp(self.z_log_var), axis=-1)
return xent_loss + kl_loss
def train(self, x_train, x_test):
from sklearn.preprocessing import minmax_scale
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) # flatten each sample out
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
x_train = minmax_scale(x_train) # this step is required in order to use cross-entropy loss for reconstruction
        x_test = minmax_scale(x_test) # scaling features in 0,1 interval
self.vae.fit(x_train, x_train,
shuffle=True,
epochs=self.epochs,
batch_size=self.batch_size,
validation_data=(x_test, x_test)
)
# build a model to project inputs on the latent space
#encoder = Model(self.x, self.z_mean)
def predict_latent(self, xnew):
# build a model to project inputs on the latent space
return self.encoder.predict(xnew)
def generate_decoding(self, znew):
# Generate new fantasy particles
return self.generator.predict(znew)
###Output
/Users/mbukov/anaconda3/envs/DNN/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Load the 2D Ising dataset
###Code
# The y labels are the temperatures in np.arange(0.25,4.01,0.25) at which X was drawn
#Directory where data is stored
root=path_to_data=os.path.expanduser('~')+'/Dropbox/MachineLearningReview/Datasets/isingMC/'
X, Y = load_data_set(root= root)
###Output
_____no_output_____
###Markdown
Construct a training and a validation set
###Code
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.8)
print(xtrain.shape)
###Output
(32000, 1600)
###Markdown
Construct and train the variational autoencoder model
###Code
model = VAE(epochs=5, latent_dim=2, epsilon=0.2) # Choose model parameters
model.build() # Construct VAE model using Keras
model.train(xtrain, xtest) # Trains VAE model based on custom loss function
###Output
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 1600) 0
__________________________________________________________________________________________________
dense_1 (Dense) (None, 256) 409856 input_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 2) 514 dense_1[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 2) 514 dense_1[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 2) 0 dense_2[0][0]
dense_3[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 256) 768 lambda_1[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 1600) 411200 dense_4[0][0]
==================================================================================================
Total params: 822,852
Trainable params: 822,852
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Encoding samples to latent space:We predict the latent variable coordinates for the test set:
###Code
zpred = model.predict_latent(xtest)
print(zpred.shape)
###Output
(128000, 2)
###Markdown
Let's visualize this 2-dimensional space. We also color each sample according to the temperature at which it was drawn. The largest temperature is red ($T=4.0$) and lowest is blue ($T=0.25$).
###Code
# To make plots pretty
golden_size = lambda width: (width, 2. * width / (1 + np.sqrt(5)))
%matplotlib inline
import matplotlib.pyplot as plt
plt.rc('font',**{'size':16})
fig, ax = plt.subplots(1,figsize=golden_size(8))
sc = ax.scatter(zpred[:,0], zpred[:,1], c=ytest/4.0, s=4, cmap="coolwarm")
ax.set_xlabel('First latent dimension of the VAE')
ax.set_ylabel('Second latent dimension of the VAE')
plt.colorbar(sc, label='$0.25\\times$Temperature')
plt.savefig('VAE_ISING_latent.png')
plt.show()
###Output
_____no_output_____
###Markdown
Understanding the latent space embeddingsTo better understand the latent space, we can plot each of the latent dimension coordinates against the corresponding magnetization of each sample.
###Code
plt.rc('font',**{'size':16})
fig, ax = plt.subplots(1,2,figsize=(15,8))
ax[0].scatter(zpred[:,0], np.mean(xtest, axis=1), c=ytest/4.0, s=2, cmap="coolwarm")
ax[0].set_xlabel('First latent dimension of the VAE')
ax[0].set_ylabel('Magnetization')
sc = ax[1].scatter(zpred[:,1], np.mean(xtest, axis=1), c=ytest/4.0, s=2, cmap="coolwarm")
ax[1].set_xlabel('Second latent dimension of the VAE')
ax[1].set_ylabel('Magnetization')
plt.colorbar(sc, label='$0.25\\times$Temperature')
plt.savefig('VAE_ISING_latent_magnetization.png')
plt.show()
###Output
_____no_output_____
###Markdown
It appears that these dimensions are strongly correlated, meaning that the learned representation is effectively one-dimensional. This can be understood by the fact that in order to draw samples at high and low temperatures, we only require the information about the magnetization order parameter (we only have to draw samples from a factorized mean-field distribution):\begin{equation}p(s_i=\pm) = \frac{1\pm m}{2},\end{equation}where $p(s_i=\pm)$ is the probability that spin $i$ is up ($+$) or down ($-$), given that the magnetization sector is fixed.Note that this is not true in the vicinity of the critical point, where mean-field theory fails as the system develops long-range correlations.We see that the VAE correctly captures the structure of the data. The high-temperature samples cluster at intermediate values and the ordered samples with positive and negative magnetization cluster in opposite regions. This can be more effectively visualized using a 1-D histogram:
###Code
# Histogram of the first latent dimension
plt.hist(zpred[:,0],bins=50)
plt.show()
###Output
_____no_output_____
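###Markdown
As an aside, the factorized mean-field distribution mentioned above is easy to sample directly; the sketch below is plain NumPy, and the magnetization value used is arbitrary:
###Code
def sample_mean_field(m, n_spins=1600, n_samples=5):
    """Draw spins s_i = +/-1 independently with p(s_i = +1) = (1 + m)/2."""
    p_up = (1.0 + m) / 2.0
    return np.where(np.random.rand(n_samples, n_spins) < p_up, 1, -1)
samples = sample_mean_field(m=0.9)
print(samples.mean(axis=1))  # per-sample magnetization, should be close to 0.9
###Output
_____no_output_____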
###Markdown
Generating New ExamplesSo far in this notebook, we have shown that the latent structure of VAEs can automatically identify order parameters. This is not surprising since even the first principal component in a PCA is essentially the magnetization. The interesting feature of VAEs is that they are also a generative model. We now ask how well the VAE can generate new examples. Our decoder returns probabilities for each pixel being 1. We then can draw random numbers to generate samples. This is done in the short function below.Once again, as in the VAE MNIST notebook, we will sample our latent space to generate the particles in two different ways:* Sampling uniformly in the latent space * Sampling accounting for the fact that the latent space is Gaussian so that we expect most of the data points to be centered around (0,0) and fall off exponentially in all directions. This is done by transforming the uniform grid using the inverse Cumulative Distribution Function (CDF) for the Gaussian.
###Code
# Generate fantasy particles
def generate_samples(model, z_input):
temp=model.generate_decoding(z_input).reshape(n*n,1600)
draws=np.random.uniform(size=temp.shape)
samples=np.array(draws<temp).astype(int)
return samples
# display a 2D manifold of the images
n = 5 # figure with an n x n (here 5x5) grid of generated images
quantile_min = 0.01
quantile_max = 0.99
latent_dim=2
img_rows=40
img_cols=40
# Linear Sampling
# we will sample n points within [-5, 5] in each latent dimension
z1_u = np.linspace(5, -5, n)
z2_u = np.linspace(5, -5, n)
z_grid = np.dstack(np.meshgrid(z1_u, z2_u))
z_input=np.array(z_grid.reshape(n*n, latent_dim))
print(z_input.shape)
x_pred_grid = generate_samples(model,z_input) \
.reshape(n, n, img_rows, img_cols)
print(x_pred_grid.shape)
# Inverse CDF sampling
z1 = norm.ppf(np.linspace(quantile_min, quantile_max, n))
z2 = norm.ppf(np.linspace(quantile_max, quantile_min, n))
z_grid2 = np.dstack(np.meshgrid(z1, z2))
z_input=np.array(z_grid2.reshape(n*n, latent_dim))
x_pred_grid2 = generate_samples(model,z_input) \
.reshape(n, n, img_rows, img_cols)
# Plot figure
fig, ax = plt.subplots(figsize=golden_size(10))
ax.imshow(np.block(list(map(list, x_pred_grid))), cmap='gray')
ax.set_xticks(np.arange(0, n*img_rows, img_rows) + .5 * img_rows)
ax.set_xticklabels(map('{:.2f}'.format, z1), rotation=90)
ax.set_yticks(np.arange(0, n*img_cols, img_cols) + .5 * img_cols)
ax.set_yticklabels(map('{:.2f}'.format, z2))
ax.set_xlabel('$z_1$')
ax.set_ylabel('$z_2$')
ax.set_title('Uniform')
ax.grid(False)
plt.savefig('VAE_ISING_fantasy_uniform.pdf')
plt.show()
# Plot figure Inverse CDF sampling
fig, ax = plt.subplots(figsize=golden_size(10))
ax.imshow(np.block(list(map(list, x_pred_grid2))), cmap='gray_r', vmin=0, vmax=1)
ax.set_xticks(np.arange(0, n*img_rows, img_rows) + .5 * img_rows)
ax.set_xticklabels(map('{:.2f}'.format, z1), rotation=90)
ax.set_yticks(np.arange(0, n*img_cols, img_cols) + .5 * img_cols)
ax.set_yticklabels(map('{:.2f}'.format, z2))
ax.set_xlabel('$z_1$')
ax.set_ylabel('$z_2$')
ax.set_title('Inverse CDF')
ax.grid(False)
plt.savefig('VAE_ISING_fantasy_invCDF.pdf')
plt.show()
###Output
(25, 2)
(5, 5, 40, 40)
|
3.1.1+K+Nearest+Neighbors+Classifiers.ipynb
|
###Markdown
K Nearest Neighbors ClassifiersSo far we've covered learning via probability (naive Bayes) and learning via errors (regression). Here we'll cover learning via similarity. This means we look for the datapoints that are most similar to the observation we are trying to predict.Let's start with the simplest example: **Nearest Neighbor**. Nearest NeighborLet's use this example: classifying a song as either "rock" or "jazz". For this data we have measures of duration in seconds and loudness in loudness units (we're not going to be using decibels since that isn't a linear measure, which would create some problems we'll get into later).
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
music = pd.DataFrame()
# Some data to play with.
music['duration'] = [184, 134, 243, 186, 122, 197, 294, 382, 102, 264,
205, 110, 307, 110, 397, 153, 190, 192, 210, 403,
164, 198, 204, 253, 234, 190, 182, 401, 376, 102]
music['loudness'] = [18, 34, 43, 36, 22, 9, 29, 22, 10, 24,
20, 10, 17, 51, 7, 13, 19, 12, 21, 22,
16, 18, 4, 23, 34, 19, 14, 11, 37, 42]
# We know whether the songs in our training data are jazz or not.
music['jazz'] = [ 1, 0, 0, 0, 1, 1, 0, 1, 1, 0,
0, 1, 1, 0, 1, 1, 0, 1, 1, 1,
1, 1, 1, 1, 0, 0, 1, 1, 0, 0]
# Look at our data.
plt.scatter(
music[music['jazz'] == 1].duration,
music[music['jazz'] == 1].loudness,
color='red'
)
plt.scatter(
music[music['jazz'] == 0].duration,
music[music['jazz'] == 0].loudness,
color='blue'
)
plt.legend(['Jazz', 'Rock'])
plt.title('Jazz and Rock Characteristics')
plt.xlabel('Duration')
plt.ylabel('Loudness')
plt.show()
###Output
_____no_output_____
###Markdown
The simplest form of a similarity model is the Nearest Neighbor model. This works quite simply: when trying to predict an observation, we find the closest (or _nearest_) known observation in our training data and use that value to make our prediction. Here we'll use the model as a classifier: the outcome of interest will be a category.To find which observation is "nearest" we need some kind of way to measure distance. Typically we use _Euclidean distance_, the standard distance measure that you're familiar with from geometry. With one observation in n-dimensions $(x_1, x_2, ...,x_n)$ and the other $(w_1, w_2,...,w_n)$:$$ \sqrt{(x_1-w_1)^2 + (x_2-w_2)^2+...+(x_n-w_n)^2} $$You might recognize this formula (taking distances, squaring them, adding the squares together, and taking the root) as a generalization of the [Pythagorean theorem](https://en.wikipedia.org/wiki/Pythagorean_theorem) into n-dimensions. You can technically define any distance measure you want, and there are times where this customization may be valuable. As a general standard, however, we'll use Euclidean distance.Now that we have a distance measure from each point in our training data to the point we're trying to predict, the model can find the datapoint with the smallest distance and then apply that category to our prediction.Let's try running this model, using the SKLearn package.
###Code
from sklearn.neighbors import KNeighborsClassifier
neighbors = KNeighborsClassifier(n_neighbors=1)
X = music[['loudness', 'duration']]
Y = music.jazz
neighbors.fit(X,Y)
## Predict for a song with 24 loudness that's 190 seconds long.
neighbors.predict([[24, 190]])
###Output
_____no_output_____
###Markdown
It's as simple as that. Looks like our model is predicting that 24 loudness, 190 second long song is _not_ jazz. All it takes to train the model is a dataframe of independent variables and a dataframe of dependent outcomes. You'll note that for this example, we used the `KNeighborsClassifier` method from SKLearn. This is because Nearest Neighbor is a simplification of K-Nearest Neighbors. The jump, however, isn't that far. K-Nearest Neighbors**K-Nearest Neighbors** (or "**KNN**") is the logical extension of Nearest Neighbor. Instead of looking at just the single nearest datapoint to predict an outcome, we look at several of the nearest neighbors, with $k$ representing the number of neighbors we choose to look at. Each of the $k$ neighbors gets to vote on what the predicted outcome should be.This does a couple of valuable things. Firstly, it smooths out the predictions. If only one neighbor gets to influence the outcome, the model explicitly overfits to the training data. Any single outlier can create pockets of one category prediction surrounded by a sea of the other category.This also means instead of just predicting classes, we get implicit probabilities. If each of the $k$ neighbors gets a vote on the outcome, then the probability of the test example being from any given class $i$ is:$$ \frac{votes_i}{k} $$And this applies for all classes present in the training set. Our example only has two classes, but this model can accommodate as many classes as the data set necessitates. To come up with a classifier prediction it simply takes the class for which that fraction is maximized.Let's expand our initial nearest neighbors model from above to a KNN with a $k$ of 5.
###Code
neighbors = KNeighborsClassifier(n_neighbors=5)
X = music[['loudness', 'duration']]
Y = music.jazz
neighbors.fit(X,Y)
## Predict for a 24 loudness, 190 seconds long song.
print(neighbors.predict([[24, 190]]))
print(neighbors.predict_proba([[24, 190]]))
###Output
[1]
[[0.4 0.6]]
###Markdown
Now our test prediction has changed. In using the five nearest neighbors it appears that there were two votes for rock and three for jazz, so it was classified as a jazz song. This is different than our simpler Nearest Neighbors model. While the closest observation was in fact rock, there are more jazz songs in the nearest $k$ neighbors than rock.We can visualize our decision bounds with something called a _mesh_. This allows us to generate a prediction over the whole space. Read the code below and make sure you can pull out what the individual lines do, consulting the documentation for unfamiliar methods if necessary.
###Code
# Our data. Converting from data frames to arrays for the mesh.
X = np.array(X)
Y = np.array(Y)
# Mesh size.
h = 4.0
# Plot the decision boundary. We assign a color to each point in the mesh.
x_min = X[:, 0].min() - .5
x_max = X[:, 0].max() + .5
y_min = X[:, 1].min() - .5
y_max = X[:, 1].max() + .5
xx, yy = np.meshgrid(
np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h)
)
Z = neighbors.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot.
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(6, 4))
plt.set_cmap(plt.cm.Paired)
plt.pcolormesh(xx, yy, Z)
# Add the training points to the plot.
plt.scatter(X[:, 0], X[:, 1], c=Y)
plt.xlabel('Loudness')
plt.ylabel('Duration')
plt.title('Mesh visualization')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
# Our data. Converting from data frames to arrays for the mesh.
X = np.array(X)
Y = np.array(Y)
# Mesh size.
h = 0.05
# Plot the decision boundary. We assign a color to each point in the mesh.
x_min = X[:, 0].min() - .5
x_max = X[:, 0].max() + .5
y_min = X[:, 1].min() - .5
y_max = X[:, 1].max() + .5
xx, yy = np.meshgrid(
np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h)
)
Z = neighbors.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot.
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(6, 4))
plt.set_cmap(plt.cm.Paired)
plt.pcolormesh(xx, yy, Z)
# Add the training points to the plot.
plt.scatter(X[:, 0], X[:, 1], c=Y)
plt.xlabel('Loudness')
plt.ylabel('Duration')
plt.title('Mesh visualization')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the visualization above, any new point that fell within a blue area would be predicted to be jazz, and any point that fell within a brown area would be predicted to be rock.The boundaries above are strangely jagged here, and we'll get into that in more detail in the next lesson.Also note that the visualization isn't completely continuous. There are an infinite number of points in this space, and we can't calculate the value for each one. That's where the mesh comes in. We set our mesh size to `h = 4.0` above, which means we calculate the value for each point in a grid where the points are spaced 4.0 away from each other.You can make the mesh size smaller to get a more continuous visualization, but at the cost of a more computationally demanding calculation. In the cell below, recreate the plot above with a mesh size of `10.0`. Then reduce the mesh size until you get a plot that looks good but still renders in a reasonable amount of time. When do you get a visualization that looks acceptably continuous? When do you start to get a noticeable delay?
###Code
# Play with different mesh sizes here.
###Output
_____no_output_____
###Markdown
Now you've built a KNN model! Challenge: Implement the Nearest Neighbor algorithm The Nearest Neighbor algorithm is extremely simple. So simple, in fact, that you should be able to build it yourself from scratch using the Python you already know. Code a Nearest Neighbors algorithm that works for two dimensional data. You can use either arrays or dataframes to do this. Test it against the SKLearn package on the music dataset from above to ensure that it's correct. The goal here is to confirm your understanding of the model and continue to practice your Python skills. We're just expecting a brute force method here. After doing this, look up "ball tree" methods to see a more performant algorithm design.
###Code
# Your nearest neighbor algorithm here.
music.head()
def distance(a,b,x,y):
return np.sqrt((a-x)**2+(b-y)**2)
k_order = 1
def the_closest_points(a,b,df,k_order):
distance_serie = distance(a,b,df.duration,df.loudness)
the_limit = distance_serie.sort_values(ascending=True)[:k_order].max()
return distance_serie[distance_serie<=the_limit]
def manual_knn(a,b,df,k_order):
    return int(df[df.index.isin(the_closest_points(a,b,df,k_order).index)].jazz.mean()>=0.5)
for _ in range(500):
    duration, loudness = np.random.randint(100,400), np.random.randint(1,50)
    manual = manual_knn(duration,loudness,music,5)
    predicted = neighbors.predict(np.c_[loudness,duration])[0]
    if manual != predicted:
        print(duration,loudness,manual,predicted)
###Output
106 40 1 0
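###Markdown
For the "ball tree" follow-up mentioned above, note that scikit-learn's ``KNeighborsClassifier`` accepts an ``algorithm`` parameter; a minimal sketch on the same music data (reusing the earlier prediction point) looks like this:
###Code
from sklearn.neighbors import KNeighborsClassifier
ball_tree_neighbors = KNeighborsClassifier(n_neighbors=5, algorithm='ball_tree')
ball_tree_neighbors.fit(music[['loudness', 'duration']], music.jazz)
print(ball_tree_neighbors.predict([[24, 190]]))
###Output
_____no_output_____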
|
07_NMT_Project/bi-pt-en/02_English_Portuguese.ipynb
|
###Markdown
NMT (Neural Machine Translation)In this series of notebooks we are going to create a bidirectional NMT model for our application. We are going to use the following notebooks as references for this notebook.1. [17_Custom_Dataset_and_Translation.ipynb](https://github.com/CrispenGari/pytorch-python/blob/main/09_NLP/03_Sequence_To_Sequence/17_Custom_Dataset_and_Translation.ipynb)2. [16_Data_Preparation_Translation_Dataset.ipynb](https://github.com/CrispenGari/pytorch-python/blob/main/09_NLP/03_Sequence_To_Sequence/16_Data_Preparation_Translation_Dataset.ipynb)3. [07_Attention_is_all_you_need](https://github.com/CrispenGari/pytorch-python/blob/main/09_NLP/03_Sequence_To_Sequence/07_Attention_is_all_you_need.ipynb)I will be loading the data from my Google Drive.
###Code
from google.colab import drive
from google.colab import files
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Imports
###Code
import torch
from torch import nn
from torch.nn import functional as F
import spacy, math, random
import numpy as np
from torchtext.legacy import datasets, data
import time, os, json
from prettytable import PrettyTable
from matplotlib import pyplot as plt
SEED = 42
np.random.seed(SEED)
torch.manual_seed(SEED)
random.seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deteministic = True
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
base_path = '/content/drive/My Drive/NLP Data/seq2seq/manythings'
path_to_files = os.path.join(base_path, "Portuguese - English")
os.listdir(path_to_files)
###Output
_____no_output_____
###Markdown
File extensions
###Code
exts = (".en", ".po")
###Output
_____no_output_____
###Markdown
Tokenizer modelsAll the tokenization models that we are going to use can be found [here](https://spacy.io/usage/models), but for languages that don't have tokenization models we will create our own tokenizers.
###Code
import spacy
spacy.cli.download("pt_core_news_sm")
spacy_en = spacy.load('en_core_web_sm')
spacy_pt = spacy.load('pt_core_news_sm')
def tokenize_pt(sent):
return [tok.text for tok in spacy_pt.tokenizer(sent)]
def tokenize_en(sent):
return [tok.text for tok in spacy_en.tokenizer(sent)]
###Output
_____no_output_____
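###Markdown
A quick sanity check of the two tokenizers, using a short example sentence pair:
###Code
print(tokenize_en("Tom looks relieved."))
print(tokenize_pt("Tom parece aliviado."))
###Output
_____no_output_____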
###Markdown
Fields
###Code
SRC = data.Field(
tokenize = tokenize_en,
lower= True,
init_token = "<sos>",
eos_token = "<eos>",
include_lengths =True
)
TRG = data.Field(
tokenize = tokenize_pt,
lower= True,
init_token = "<sos>",
eos_token = "<eos>"
)
###Output
_____no_output_____
###Markdown
Creating dataset
###Code
train_data, valid_data, test_data = datasets.TranslationDataset.splits(
exts= exts,
path=path_to_files,
train='train', validation='valid', test='test',
fields = (SRC, TRG)
)
print(vars(train_data.examples[0]))
print(vars(valid_data.examples[0]))
print(vars(test_data.examples[0]))
###Output
{'src': ['tom', 'looks', 'relieved', '.'], 'trg': ['tom', 'parece', 'aliviado', '.']}
###Markdown
Counting examples
###Code
from prettytable import PrettyTable
def tabulate(column_names, data):
table = PrettyTable(column_names)
table.title= "VISUALIZING SETS EXAMPLES"
table.align[column_names[0]] = 'l'
table.align[column_names[1]] = 'r'
for row in data:
table.add_row(row)
print(table)
column_names = ["SUBSET", "EXAMPLE(s)"]
row_data = [
["training", len(train_data)],
['validation', len(valid_data)],
['test', len(test_data)]
]
tabulate(column_names, row_data)
###Output
+-----------------------------+
| VISUALIZING SETS EXAMPLES |
+--------------+--------------+
| SUBSET | EXAMPLE(s) |
+--------------+--------------+
| training | 174203 |
| validation | 1760 |
| test | 1778 |
+--------------+--------------+
###Markdown
Our dataset is fairly small, so we keep the `min_freq` low (here 2) when building the vocabulary.
###Code
SRC.build_vocab(train_data, min_freq=2)
TRG.build_vocab(train_data, min_freq=2)
###Output
_____no_output_____
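###Markdown
Before moving on, we can sanity-check the vocabularies by inspecting the most frequent tokens (``vocab.freqs`` is a ``collections.Counter``):
###Code
print(SRC.vocab.freqs.most_common(10))
print(TRG.vocab.freqs.most_common(10))
###Output
_____no_output_____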
###Markdown
Saving the dictionary mapping of our SRC and TRG to a json file.
###Code
len(SRC.vocab.stoi), len(TRG.vocab.stoi)
# src = dict(SRC.vocab.stoi)
# trg = dict(TRG.vocab.stoi)
# src_vocab_path = "src_vocab.json"
# trg_vocab_path = "trg_vocab.json"
# with open(src_vocab_path, "w") as f:
# json.dump(src, f, indent=2)
# with open(trg_vocab_path, "w") as f:
# json.dump(trg, f, indent=2)
# print("Done")
# files.download(src_vocab_path)
# files.download(trg_vocab_path)
###Output
_____no_output_____
###Markdown
Iterators
###Code
BATCH_SIZE = 128 # 128 for languages with good vocab corpus
sort_key = lambda x: len(x.src)
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
sort_key= sort_key,
sort_within_batch = True
)
###Output
_____no_output_____
###Markdown
Encoder
###Code
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout):
super(Encoder, self).__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim=emb_dim)
self.gru = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)
self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, src, src_len):
embedded = self.dropout(self.embedding(src)) # embedded = [src len, batch size, emb dim]
# need to explicitly put lengths on cpu!
packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, src_len.to('cpu'))
packed_outputs, hidden = self.gru(packed_embedded)
outputs, _ = nn.utils.rnn.pad_packed_sequence(packed_outputs)
hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)))
return outputs, hidden
###Output
_____no_output_____
###Markdown
Attention layer
###Code
class Attention(nn.Module):
def __init__(self, enc_hid_dim, dec_hid_dim):
super(Attention, self).__init__()
self.attn = nn.Linear((enc_hid_dim * 2) + dec_hid_dim, dec_hid_dim)
self.v = nn.Linear(dec_hid_dim, 1, bias = False)
def forward(self, hidden, encoder_outputs, mask):
batch_size = encoder_outputs.shape[1]
src_len = encoder_outputs.shape[0]
# repeat decoder hidden state src_len times
hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
encoder_outputs = encoder_outputs.permute(1, 0, 2)
energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim = 2))) # energy = [batch size, src len, dec hid dim]
attention = self.v(energy).squeeze(2) # attention= [batch size, src len]
attention = attention.masked_fill(mask == 0, -1e10)
return F.softmax(attention, dim=1)
###Output
_____no_output_____
###Markdown
Decoder
###Code
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout, attention):
super(Decoder, self).__init__()
self.output_dim = output_dim
self.attention = attention
self.embedding = nn.Embedding(output_dim, emb_dim)
self.gru = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)
self.fc = nn.Linear((enc_hid_dim * 2) + dec_hid_dim + emb_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, encoder_outputs, mask):
input = input.unsqueeze(0) # input = [1, batch size]
embedded = self.dropout(self.embedding(input)) # embedded = [1, batch size, emb dim]
a = self.attention(hidden, encoder_outputs, mask)# a = [batch size, src len]
a = a.unsqueeze(1) # a = [batch size, 1, src len]
encoder_outputs = encoder_outputs.permute(1, 0, 2) # encoder_outputs = [batch size, src len, enc hid dim * 2]
weighted = torch.bmm(a, encoder_outputs) # weighted = [batch size, 1, enc hid dim * 2]
weighted = weighted.permute(1, 0, 2) # weighted = [1, batch size, enc hid dim * 2]
rnn_input = torch.cat((embedded, weighted), dim = 2) # rnn_input = [1, batch size, (enc hid dim * 2) + emb dim]
output, hidden = self.gru(rnn_input, hidden.unsqueeze(0))
assert (output == hidden).all()
embedded = embedded.squeeze(0)
output = output.squeeze(0)
weighted = weighted.squeeze(0)
prediction = self.fc(torch.cat((output, weighted, embedded), dim = 1)) # prediction = [batch size, output dim]
return prediction, hidden.squeeze(0), a.squeeze(1)
###Output
_____no_output_____
###Markdown
Seq2Seq
###Code
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, src_pad_idx, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
self.src_pad_idx = src_pad_idx
def create_mask(self, src):
mask = (src != self.src_pad_idx).permute(1, 0)
return mask
def forward(self, src, src_len, trg, teacher_forcing_ratio = 0.5):
"""
src = [src len, batch size]
src_len = [batch size]
trg = [trg len, batch size]
teacher_forcing_ratio is probability to use teacher forcing
e.g. if teacher_forcing_ratio is 0.75 we use teacher forcing 75% of the time
"""
trg_len, batch_size = trg.shape
trg_vocab_size = self.decoder.output_dim
# tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
"""
encoder_outputs is all hidden states of the input sequence, back and forwards
hidden is the final forward and backward hidden states, passed through a linear layer
"""
encoder_outputs, hidden = self.encoder(src, src_len)
# first input to the decoder is the <sos> tokens
input = trg[0,:]
mask = self.create_mask(src) # mask = [batch size, src len]
for t in range(1, trg_len):
# insert input token embedding, previous hidden state and all encoder hidden states and mask
# receive output tensor (predictions) and new hidden state
output, hidden, _ = self.decoder(input, hidden, encoder_outputs, mask)
# place predictions in a tensor holding predictions for each token
outputs[t] = output
# decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
# get the highest predicted token from our predictions
top1 = output.argmax(1)
# if teacher forcing, use actual next token as next input
# if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
###Output
_____no_output_____
###Markdown
Seq2Seq model instance
###Code
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = DEC_EMB_DIM = 256
ENC_HID_DIM = DEC_HID_DIM = 128
ENC_DROPOUT = DEC_DROPOUT = 0.5
SRC_PAD_IDX = SRC.vocab.stoi[SRC.pad_token]
attn = Attention(ENC_HID_DIM, DEC_HID_DIM)
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)
model = Seq2Seq(enc, dec, SRC_PAD_IDX, device).to(device)
model
###Output
_____no_output_____
###Markdown
Model parameters
###Code
def count_trainable_params(model):
return sum(p.numel() for p in model.parameters()), sum(p.numel() for p in model.parameters() if p.requires_grad)
n_params, trainable_params = count_trainable_params(model)
print(f"Total number of paramaters: {n_params:,}\nTotal tainable parameters: {trainable_params:,}")
###Output
Total number of paramaters: 15,501,305
Total tainable parameters: 15,501,305
###Markdown
Initialize model weights
###Code
def init_weights(m):
for name, param in m.named_parameters():
if 'weight' in name:
nn.init.normal_(param.data, mean=0, std=0.01)
else:
nn.init.constant_(param.data, 0)
model.apply(init_weights)
###Output
_____no_output_____
###Markdown
Optimizer and Criterion
###Code
optimizer = torch.optim.Adam(model.parameters())
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX).to(device)
###Output
_____no_output_____
###Markdown
Train and evaluation functions
###Code
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src, src_len = batch.src
src = src.to(device)
src_len = src_len.to(device)
trg = batch.trg
trg = trg.to(device)
optimizer.zero_grad()
output = model(src, src_len, trg)
"""
trg = [trg len, batch size]
output = [trg len, batch size, output dim]
"""
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
"""
trg = [(trg len - 1) * batch size]
output = [(trg len - 1) * batch size, output dim]
"""
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src, src_len = batch.src
src = src.to(device)
src_len = src_len.to(device)
trg = batch.trg
trg = trg.to(device)
optimizer.zero_grad()
output = model(src, src_len, trg, 0) ## Turn off the teacher forcing ratio.
"""
trg = [trg len, batch size]
output = [trg len, batch size, output dim]
"""
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
"""
trg = [(trg len - 1) * batch size]
output = [(trg len - 1) * batch size, output dim]
"""
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
###Output
_____no_output_____
###Markdown
Training the model
###Code
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
def tabulate_training(column_names, data, title):
table = PrettyTable(column_names)
table.title= title
table.align[column_names[0]] = 'l'
table.align[column_names[1]] = 'r'
table.align[column_names[2]] = 'r'
table.align[column_names[3]] = 'r'
for row in data:
table.add_row(row)
print(table)
###Output
_____no_output_____
###Markdown
Model Name
###Code
MODEL_NAME = "eng-po.pt"
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
column_names = ["SET", "LOSS", "PPL", "ETA"]
print("TRAINING START....")
for epoch in range(N_EPOCHS):
start = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end = time.time()
title = f"EPOCH: {epoch+1:02}/{N_EPOCHS:02} | {'saving model...' if valid_loss < best_valid_loss else 'not saving...'}"
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), MODEL_NAME)
rows_data =[
["train", f"{train_loss:.3f}", f"{math.exp(train_loss):7.3f}", hms_string(end - start) ],
["val", f"{valid_loss:.3f}", f"{math.exp(valid_loss):7.3f}", '' ]
]
tabulate_training(column_names, rows_data, title)
print("TRAINING ENDS....")
model.load_state_dict(torch.load(MODEL_NAME))
test_loss = evaluate(model, test_iterator, criterion)
title = "Model Evaluation Summary"
data_rows = [["Test", f'{test_loss:.3f}', f'{math.exp(test_loss):7.3f}', ""]]
tabulate_training(["SET", "LOSS", "PPL", "ETA"], data_rows, title)
###Output
+------------------------------+
| Model Evaluation Summary |
+------+-------+---------+-----+
| SET | LOSS | PPL | ETA |
+------+-------+---------+-----+
| Test | 2.044 | 7.721 | |
+------+-------+---------+-----+
###Markdown
Model inference
###Code
import en_core_web_sm
nlp = en_core_web_sm.load()
def translate_sentence(sent, src_field, trg_field, model, device, max_len=50):
model.eval()
if isinstance(sent, str):
tokens = [token.text.lower() for token in nlp(sent)]
else:
tokens = [token.lower() for token in sent]
tokens = [src_field.init_token] + tokens + [src_field.eos_token]
src_indexes = [src_field.vocab.stoi[token] for token in tokens]
src_tensor = torch.LongTensor(src_indexes).unsqueeze(1).to(device)
src_len = torch.LongTensor([len(src_indexes)])
with torch.no_grad():
encoder_outputs, hidden = model.encoder(src_tensor, src_len)
mask = model.create_mask(src_tensor)
trg_indexes = [trg_field.vocab.stoi[trg_field.init_token]]
attentions = torch.zeros(max_len, 1, len(src_indexes)).to(device)
for i in range(max_len):
trg_tensor = torch.LongTensor([trg_indexes[-1]]).to(device)
with torch.no_grad():
output, hidden, attention = model.decoder(trg_tensor, hidden, encoder_outputs, mask)
attentions[i] = attention
pred_token = output.argmax(1).item()
trg_indexes.append(pred_token)
if pred_token == trg_field.vocab.stoi[trg_field.eos_token]:
break
trg_tokens = [trg_field.vocab.itos[i] for i in trg_indexes]
return trg_tokens[1:], attentions[:len(trg_tokens)-1]
example_idx = 6
src = vars(test_data.examples[example_idx])['src']
trg = vars(test_data.examples[example_idx])['trg']
translation, attention = translate_sentence(src, SRC, TRG, model, device)
print(f'src = {src}')
print(f'trg = {trg}')
print(f'predicted trg = {translation}')
example_idx = 0
src = vars(train_data.examples[example_idx])['src']
trg = vars(train_data.examples[example_idx])['trg']
print(f'src = {src}')
print(f'trg = {trg}')
tokens, attention = translate_sentence(src, SRC, TRG, model, device)
print(f'pred = {tokens}')
example_idx = 0
src = vars(test_data.examples[example_idx])['src']
trg = vars(test_data.examples[example_idx])['trg']
print(f'src = {src}')
print(f'trg = {trg}')
tokens, attention = translate_sentence(src, SRC, TRG, model, device)
print(f'pred = {tokens}')
###Output
src = ['tom', 'looks', 'relieved', '.']
trg = ['tom', 'parece', 'aliviado', '.']
pred = ['tom', 'parece', 'aliviado', '.', '<eos>']
###Markdown
Downloading the trained model file
###Code
files.download(MODEL_NAME)
###Output
_____no_output_____
###Markdown
BLEU SCORE
###Code
from torchtext.data.metrics import bleu_score
def calculate_bleu(data, src_field, trg_field, model, device, max_len = 50):
trgs = []
pred_trgs = []
for datum in data:
src = vars(datum)['src']
trg = vars(datum)['trg']
pred_trg, _ = translate_sentence(src, src_field, trg_field, model, device, max_len)
# cut off <eos> token
pred_trg = pred_trg[:-1]
pred_trgs.append(pred_trg)
trgs.append([trg])
return bleu_score(pred_trgs, trgs)
bleu_score = calculate_bleu(test_data, SRC, TRG, model, device)
print(f'BLEU score = {bleu_score*100:.2f}')
###Output
BLEU score = 45.92
|
Drawing/OldDrawing/Bricks-Copy7.ipynb
|
###Markdown
Draw a Brick PatternAttempt at programmatically making a brick pattern.All units in mm. ```1``` = ```1 mm```. "Napkin" scratches.Drawn by hand. ~18mm brick height. > Standard bricks. The standard co-ordinating size for brickwork is 225 mm x 112.5 mm x 75 mm (length x depth x height). This includes 10 mm mortar joints, and so the standard size for a brick itself is 215 mm x 102.5 mm x 65 mm (length x depth x height).
###Code
import numpy as np
from matplotlib import pyplot as plt
# Standard brick dimensions.
BrickHeight = 65 # [mm]
BrickLength = 225 # [mm]
BrickDepth = 12.5 # [mm]
BrickRatio = 215 / 65 # [dimensionless]
# Poplar 1x4". Cut
BlockHeight = 89.0 # mm
BlockLength = 2 * BlockHeight # mm
# Drawing configuration.
# How many rows of bricks to draw on the block.
N_BrickRows = 5 # [dimensionless]
# Dimensions of a 'brick' projected onto the block of wood.
H_Block_Brick = BlockHeight / N_BrickRows # [mm]
L_Block_Brick = H_Block_Brick * BrickRatio # [mm]
###Output
_____no_output_____
###Markdown
Code:
###Code
flip = np.array([[1, 1], [1, 0]])
transform_tuple = (
np.eye(2), # Identity matrix, do nothing.
np.eye(2), # Do nothing, for debugging.
flip, # Flip the matrix, reduces travel time.
)
vertical_brick_lines_tuple = (
np.round(
np.arange(L_Block_Brick, BlockLength, L_Block_Brick), 4
), # Odd rows.
np.round(
np.arange(L_Block_Brick / 2, BlockLength, L_Block_Brick), 4
), # Even rows.
)
horizontal_brick_lines = np.round(
np.linspace(0, BlockHeight, N_BrickRows, endpoint=False), 4
)
###Output
_____no_output_____
###Markdown
Lines parallel to the X-axis.Separates rows of bricks.
###Code
horizontal_brick_lines
###Output
_____no_output_____
###Markdown
Lines parallel to the Y-axis.
###Code
vertical_brick_lines_tuple
points = list()
lines = list()
for idx in range(1, len(horizontal_brick_lines)):
# Top horizontal line that defines each 'brick'
horizontal_brick_line = horizontal_brick_lines[idx]
row_line_points = np.array(
[[0, horizontal_brick_line], [BlockLength, horizontal_brick_line]]
)
# Transform to perform on the row points.
transform = transform_tuple[np.mod(idx, 2)]
row_line_points = np.round(np.matmul(transform, row_line_points), 4)
lines.append(GCode.Line(row_line_points))
# Vertical brick line.
vertical_brick_lines = vertical_brick_lines_tuple[np.mod(idx, 2)]
start_point_y = horizontal_brick_lines[idx - 1]
end_point_y = horizontal_brick_lines[idx]
for idx2, vertical_brick_line in enumerate(vertical_brick_lines):
transform = np.round(transform_tuple[np.mod(idx2, 2)], 4)
        column_line_points = np.array(
            [
                [start_point_y, vertical_brick_line],
                [end_point_y, vertical_brick_line],
            ]
        )
        column_line_points = np.matmul(transform, column_line_points)
        lines.append(GCode.Line(column_line_points))
lines
for line in lines:
break
line.generate_gcode()
lines
list(map(lambda line: line.generate_gcode(), lines))
line
line.points
current_pos = np.array([[0], [0]])
X = np.array([[current_pos[0]], [line.points[0][0]]])
Y = np.array([[current_pos[1]], [line.points[0][1]]])
plt.xkcd()
plt.plot(X, Y)
for idx in range(1, len(line.points)):
print(idx)
(line.points[idx - 1][0], line.points[idx][0])
(line.points[idx - 1][1], line.points[idx][1])
for idx in range(1, len(line.points)):
X = (line.points[idx - 1][0], line.points[idx][0])
Y = (line.points[idx - 1][1], line.points[idx][1])
plt.plot(X, Y)
default_triangle = GCode.Line()
default_triangle.points
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_xlabel("X Position (mm)")
ax.set_ylabel("Y Position (mm)")
current_pos = np.array([[0], [0]])
X = np.array([[current_pos[0]], [line.points[0][0]]])
Y = np.array([[current_pos[1]], [line.points[0][1]]])
plt.plot(X, Y, color="red", linestyle="dashed")
for idx in range(1, len(line.points)):
X = (line.points[idx - 1][0], line.points[idx][0])
Y = (line.points[idx - 1][1], line.points[idx][1])
plt.plot(X, Y, color="blue")
plt.title("Brick pattern")  # plt.title requires a label argument
default_triangle.points
self = default_triangle
self.points[0]
self.points[0][0]
###Output
_____no_output_____
|
00-2-shipman-times/00-2-shipman-times-x.ipynb
|
###Markdown
Art of Statistics: 00-2-shipman-times-x. Altair prefers long-form (a.k.a. tidy-form) over wide-form; see: https://altair-viz.github.io/user_guide/data.html#long-form-vs-wide-form-data
###Code
import altair as alt
import pandas as pd
df = pd.read_csv("00-2-shipman-times-x.csv")
df.head()
###Output
_____no_output_____
###Markdown
Pure Altair implementation transform_fold()
###Code
variable_domain = ["Shipman", "Comparison"]
variable_range = ['blue', 'red']
alt.Chart(df).transform_fold(
['Shipman', 'Comparison'],
as_=['entity', 'percentage']
).mark_line().encode(
alt.X("Hour",
title="Hour of Day"),
alt.Y("percentage:Q",
title="% of Deaths"),
color=alt.Color("entity:N",
scale=alt.Scale(domain=variable_domain, range=variable_range),
title=None)
)
###Output
_____no_output_____
###Markdown
Tidy-form implementation; this is the preferred way to do it.
###Code
# rename column Comparison
renamed_df = df.rename(columns={"Comparison": "Comparison GP's"})
tidy_df = renamed_df.melt('Hour', var_name='entity', value_name='percentage')
tidy_df.head()
variable_domain = ["Shipman", "Comparison GP's"]
variable_range = ['blue', 'red']
alt.Chart(tidy_df).mark_line().encode(
alt.X("Hour",
title="Hour of Day"),
alt.Y("percentage",
title="% of Deaths"),
color=alt.Color("entity",
scale=alt.Scale(domain=variable_domain, range=variable_range),
title=None)
)
###Output
_____no_output_____
###Markdown
Data
###Code
df = pd.read_csv("00-2-shipman-times-x.csv")
df.head()
# transform data from wide to long format
df_melt = pd.melt(df, id_vars=["Hour"], var_name="dataset", value_name="death_percentage")
df_melt.head()
###Output
_____no_output_____
###Markdown
With Plotly
###Code
import plotly.graph_objects as go
import plotly.express as px
fig = px.line(df_melt, x="Hour", y="death_percentage", color="dataset")
fig.update_layout(
xaxis_title="Hour of Day",
yaxis_title="% of Death",
title=go.layout.Title(text="Deaths by Hour of Day", xref="paper", x=0),
annotations=[
go.layout.Annotation(
showarrow=False,
text='From Shipman dataset',
xref='paper',
x=0,
#xshift=275,
yref='paper',
y=1.08,
font=dict(
#family="Courier New, monospace",
size=14,
#color="#0000FF"
)
)
],
yaxis = dict(
tickmode='array',
tickvals=list(range(0,16,5))
)
)
fig.update_xaxes(
showgrid=True,
ticks="outside",
tickson="boundaries",
ticklen=5
)
fig.update_yaxes(
showgrid=True,
ticks="outside",
tickson="boundaries",
ticklen=5,
range=[0,16]
)
fig.update_traces(line_width=2)
fig
###Output
_____no_output_____
###Markdown
With Plotnine
###Code
from plotnine import *
p = ggplot(df, aes(x="Hour")) + ylim(0, 15) # constructs initial plot object, p
p += geom_line(aes(y="Comparison"), size=1.5, color="#e41a1c") # adds a y-series
p += geom_line(aes(y="Shipman"), size=1.5, color="#377eb8") # adds a y-series
p += scale_color_brewer(type="qual", palette="Set1")
p += labs(title="Deaths by Hour of Day", subtitle="From Shipman dataset", y="% of Deaths", x="Hour of Day") # Adds title, subtitle
p += theme(legend_position="none")#, legend.box = "horizontal") # removes the legend
p += annotate('text', x=11, y=12, label='Shipman', fontweight='normal', ha='right', size=12, color="#377eb8")
p += annotate('text', x=8, y=6, label="Comparison GP's", fontweight='normal', ha='right', size=12, color="#e41a1c")
p += theme(figure_size=(10,6))
p
###Output
_____no_output_____
|
analyses/SERV-1/Tc-5min-RNN.ipynb
|
###Markdown
Time series forecasting using recurrent neural networks Import necessary libraries
###Code
%matplotlib notebook
import numpy
import pandas
import math
import time
import sys
import datetime
import matplotlib.pyplot as ma
import keras.models as km
import keras.layers as kl
import sklearn.preprocessing as sp
###Output
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
Using TensorFlow backend.
###Markdown
Initialize random seed for constant neural network initialization
###Code
numpy.random.seed(42)
###Output
_____no_output_____
###Markdown
Load necessary CSV file
###Code
try:
ts = pandas.read_csv('../../datasets/srv-1-tc-5m.csv')
except:
print("I am unable to connect to read .csv file", sep=',', header=1)
ts.index = pandas.to_datetime(ts['ts'])
# delete unnecessary columns
del ts['id']
del ts['ts']
del ts['l1']
del ts['l2']
del ts['l3']
del ts['l4']
del ts['apmi']
# print table info
ts.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 23727 entries, 2018-04-11 20:05:00 to 2018-07-16 13:25:00
Data columns (total 1 columns):
cnt 23727 non-null int64
dtypes: int64(1)
memory usage: 370.7 KB
###Markdown
Get values from specified range
###Code
ts = ts['2018-06-16':'2018-07-15']
###Output
_____no_output_____
###Markdown
Remove possible zero and NA values (by interpolation). We use the MAPE formula to compute the final score, so no zero values may occur in the time series; replace them with NA values. NA values are later explicitly removed by linear interpolation.
###Code
def print_values_stats():
print("Zero Values:\n",sum([(1 if x == 0 else 0) for x in ts.values]),"\n\nMissing Values:\n",ts.isnull().sum(),"\n\nFilled in Values:\n",ts.notnull().sum(), "\n")
idx = pandas.date_range(ts.index.min(), ts.index.max(), freq="5min")
ts = ts.reindex(idx, fill_value=None)
print("Before interpolation:\n")
print_values_stats()
ts = ts.replace(0, numpy.nan)
ts = ts.interpolate(limit_direction="both")
print("After interpolation:\n")
print_values_stats()
###Output
Before interpolation:
Zero Values:
0
Missing Values:
cnt 99
dtype: int64
Filled in Values:
cnt 8541
dtype: int64
After interpolation:
Zero Values:
0
Missing Values:
cnt 0
dtype: int64
Filled in Values:
cnt 8640
dtype: int64
###Markdown
Plot values
###Code
# Idea: Plot figure now and do not wait on ma.show() at the end of the notebook
def plot_without_waiting(ts_to_plot):
ma.ion()
ma.show()
fig = ma.figure(plot_without_waiting.figure_counter)
plot_without_waiting.figure_counter += 1
ma.plot(ts_to_plot, color="blue")
ma.draw()
try:
ma.pause(0.001) # throws NotImplementedError, ignore it
except:
pass
plot_without_waiting.figure_counter = 1
plot_without_waiting(ts)
###Output
_____no_output_____
###Markdown
Normalize the time series for the neural network. LSTM cells are very sensitive to large-scale values; it's generally better to normalize them into the [0, 1] interval.
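For reference (an added note), `MinMaxScaler` with `feature_range=(0, 1)` applies $x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}$, so every value ends up in $[0, 1]$.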
###Code
dates = ts.index # save dates for further use
scaler = sp.MinMaxScaler(feature_range=(0,1))
ts = scaler.fit_transform(ts)
###Output
_____no_output_____
###Markdown
Split the time series into train and test series. The first week of data (12 samples per hour × 24 hours × 7 days = 2016 samples) is used for training, and the remainder for testing.
###Code
train_data_length = 12*24*7
ts_train = ts[:train_data_length]
ts_test = ts[train_data_length+1:]
###Output
_____no_output_____
###Markdown
Create train and test datasets for the neural network. The neural network takes input from the TS at time *t* and returns the predicted output at time *t+1*. Generally, we could create a neural network that returns the predicted output at time *t+n* just by adjusting the *loop_samples* parameter; a small illustration follows the next cell.
###Code
def dataset_create(ts, loop_samples):
x = []
y = []
for i in range(len(ts)-loop_samples-1):
x.append(ts[i:(i+loop_samples), 0])
y.append(ts[i+loop_samples, 0])
return numpy.array(x), numpy.array(y)
train_dataset_x, train_dataset_y = dataset_create(ts_train, 1)
test_dataset_x, test_dataset_y = dataset_create(ts_test, 1)
###Output
_____no_output_____
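###Markdown
A small illustration (an added sketch, not part of the original notebook): running `dataset_create` on a toy column vector makes the sliding-window pairing explicit.
###Code
# Added example: for the toy series [0, 1, 2, 3, 4, 5] and loop_samples=1,
# dataset_create returns inputs [[0], [1], [2], [3]] and targets [1, 2, 3, 4]
# (the last sample is skipped because of the -1 in the loop bound).
toy_x, toy_y = dataset_create(numpy.arange(6).reshape(-1, 1), 1)
assert toy_x.tolist() == [[0], [1], [2], [3]]
assert toy_y.tolist() == [1, 2, 3, 4]
###Output
_____no_output_____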
###Markdown
Reshape datasets for the NN into the [batch size; timesteps; input dimensionality] format. The Keras library has specific requirements for the input format; see https://keras.io/layers/recurrent/ for more details.
###Code
def dataset_reshape_for_nn(dataset):
return dataset.reshape((dataset.shape[0], 1, dataset.shape[1]))
train_dataset_x = dataset_reshape_for_nn(train_dataset_x)
test_dataset_x = dataset_reshape_for_nn(test_dataset_x)
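# Added note: each sample is now a single timestep with one feature, i.e.
# train_dataset_x.shape == (number of training samples, 1, 1), which matches the
# [batch size; timesteps; input dimensionality] layout described above.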
###Output
_____no_output_____
###Markdown
Create the recurrent neural network. This recurrent neural network (RNN) consists of three layers (*input*, *hidden* and *output*). The input layer is implicitly specified by the hidden layer (the *input_shape* parameter). Logically, we need exactly one input and one output node for one-step prediction. The number of hidden neurons is specified by the *number_lstm_cells* variable. In this RNN we use LSTM cells with the sigmoid (http://mathworld.wolfram.com/SigmoidFunction.html) activation function. The network is configured to use *mean squared error* (MSE) as the loss function that is minimized during backpropagation, and the *stochastic gradient descent* (SGD) optimizer with default parameters (https://keras.io/optimizers/).
###Code
number_lstm_cells = 2
# Layer based network
network = km.Sequential()
# Hidden layer is made from LSTM nodes
network.add(kl.LSTM(number_lstm_cells, activation="sigmoid", input_shape=(1,1)))
# Output layer with one output (one step prediction)
network.add(kl.Dense(1))
network.compile(loss="mse", optimizer="sgd", metrics=['mean_squared_error'])
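# Added note: with 2 LSTM units and a 1-dimensional input, the LSTM layer has
# 4*(2*(2+1)+2) = 32 trainable parameters and the Dense layer has 3, so this is a
# deliberately tiny network; network.summary() can be used to verify the counts.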
###Output
_____no_output_____
###Markdown
Train the neural network. Train the neural network on the training data and plot the MSE metric for each iteration. The results and the duration of the training process depend on the *train_iterations* value.
###Code
train_iterations = 100
start_time = time.time()
print("Network fit started...\n")
network_history = network.fit(train_dataset_x, train_dataset_y, epochs=train_iterations, batch_size=1, verbose=0)
print("Network fit finished. Time elapsed: ", time.time() - start_time, "\n")
plot_without_waiting(network_history.history['mean_squared_error'])
###Output
Network fit started...
Network fit finished. Time elapsed: 500.774316072464
###Markdown
Predict new values. The array *test_dataset_x* is used as the input for the network.
###Code
predicted_values_unscaled = network.predict(test_dataset_x)
# Scale the predicted values back using MinMaxScaler
predicted_values_scaled = scaler.inverse_transform(predicted_values_unscaled)
# Scale test values back so we can compare the result
test_values_scaled = scaler.inverse_transform(ts_test)
###Output
_____no_output_____
###Markdown
Compute the mean absolute percentage error. We use MAPE (https://www.forecastpro.com/Trends/forecasting101August2011.html) instead of MSE because the MAPE result does not depend on the scale of the values.
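For reference (an added note), the quantity computed below is $\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{A_t - P_t}{A_t}\right|$, where $A_t$ are the actual and $P_t$ the predicted values.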
###Code
values_sum = 0
for value in zip(test_values_scaled, predicted_values_scaled):
actual = value[0][0]
predicted = value[1][0]
values_sum += abs((actual - predicted) / actual)
values_sum *= 100/len(test_values_scaled)
print("MAPE: ", values_sum, "%\n")
###Output
MAPE: 252.6680973825517 %
###Markdown
Plot predicted values
###Code
fig = ma.figure(plot_without_waiting.figure_counter)
ma.plot(test_values_scaled, color="blue")
ma.plot(predicted_values_scaled, color="red")
ts_len = len(ts)
date_offset_indices = ts_len // 6
ma.xticks(range(0, ts_len-train_data_length, date_offset_indices), [x.date().strftime('%Y-%m-%d') for x in dates[train_data_length::date_offset_indices]])
fig.show()
###Output
_____no_output_____
|
ML project based on different algorithm/ML_project_getting_started.ipynb
|
###Markdown
###Code
#checking the versions
import sys
print('python: {}'.format(sys.version))
import scipy
print('Scipy: {}'.format(scipy.__version__))
import numpy
print('numpy: {}'.format(numpy.__version__))
import pandas
print('pandas: {}'.format(pandas.__version__))
import matplotlib
print('Matplotlib: {}'.format(matplotlib.__version__))
import sklearn
print('Sklearn: {}'.format(sklearn.__version__))
#import dependencies
import pandas
from pandas.plotting import scatter_matrix
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn import model_selection
from sklearn.ensemble import VotingClassifier
#loading the data
url = 'https://raw.githubusercontent.com/yusufkhan004/dataforpractice/master/iris.csv'
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width','species']
dataset = pandas.read_csv(url, names=names)
#dimensions of the datasets
print(dataset.shape)
#take a peek at the data
print(dataset.head(20))
#statistical summary
print(dataset.describe())
#species distribution
print(dataset.groupby('species').size())
#univariate plots -box and whisker plots
dataset.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
pyplot.show()
dataset.hist()
pyplot.show()
#multivariate plots
scatter_matrix(dataset)
pyplot.show()
#google it--> 10 fold cross validation for detail info
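# Added note: 10-fold cross-validation splits the training data into 10 parts,
# trains on 9 of them and validates on the remaining one, rotating so every part
# is used for validation exactly once; the reported score is the mean over the 10 folds.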
#creating a validation dataset
#splitting dataset
array = dataset.values
x = array[:, 0:4]
y = array[:, 4]
x_train, x_validation, y_train, y_validation = train_test_split(x, y, test_size=0.2, random_state =1)
#accuracy = (number of correctly predicted instances)/(total number of instances in the dataset)*100
#as we don't know which algorithm is good for this problem or what configuration to use,
#we will try all 6 algorithms:
#1. Logistic Regression (linear)
#2. Linear Discriminant Analysis (linear)
#3. K-Nearest Neighbors (non-linear)
#4. Classification and Regression Trees (non-linear)
#5. Gaussian Naive Bayes (non-linear)
#6. Support Vector Machines (non-linear)
#building models
models = []
models.append(('LR', LogisticRegression(solver='liblinear', multi_class='ovr')))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC(gamma='auto')))
# so we have created all our models, now it's time to evaluate them
#evaluate the created models
results = []
names = []
for name, model in models:
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
cv_results = cross_val_score(model, x_train, y_train, cv=kfold, scoring='accuracy')
results.append(cv_results)
names.append(name)
print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))
# we are done evaluating each model and move on to comparing accuracy; the last output says the largest estimated accuracy is for the support vector machine
# we can also create a plot of the model evaluation results and compare the mean accuracy of each model visually
# each model is evaluated 10 times (10-fold cross validation)
# a useful way to compare the sample results is to create a box-and-whisker plot for each distribution
#compare each model and select the most accurate one
pyplot.boxplot(results, labels=names)
pyplot.title('Algorithms Comparison')
pyplot.show()
#So now we know SVM fared the best of all the models
#let's make a prediction with the best-fitting model
# make a prediction
model = SVC(gamma='auto')
model.fit(x_train, y_train)
predictions = model.predict(x_validation)
#evaluate our predictions
print(accuracy_score(y_validation, predictions)) #
print(confusion_matrix(y_validation, predictions)) #provides an indication of the errors that could have been made
print(classification_report(y_validation, predictions)) #the classification report provides a breakdown of each species by precision, recall, f1-score and support
#so SVM fared the best and made pretty good predictions compared to the others
###Output
0.9666666666666667
[[11 0 0]
[ 0 12 1]
[ 0 0 6]]
precision recall f1-score support
Iris-setosa 1.00 1.00 1.00 11
Iris-versicolor 1.00 0.92 0.96 13
Iris-virginica 0.86 1.00 0.92 6
accuracy 0.97 30
macro avg 0.95 0.97 0.96 30
weighted avg 0.97 0.97 0.97 30
|
Data_Cleaning_Drafts/Data_Cleaning_and_EDA.ipynb
|
###Markdown
Data Cleaning and EDA. Objectives: in this Jupyter Notebook we will be focusing on taking a look at each table, dealing with null values and missing data, and EDA to get more insight into the data. Import Libraries
###Code
# Import all necessary libraries
import sqlite3
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Connect to Database
###Code
# Connect to database and create cursor
conn = sqlite3.connect('movies_db.sqlite')
cur = conn.cursor()
# Print out table names
cur.execute('''SELECT name
FROM sqlite_master
WHERE type='table';
''').fetchall()
# One off example to ensure the data loads correctly
conn.execute('''SELECT *
FROM title_akas''').fetchall()
###Output
_____no_output_____
###Markdown
Convert SQL tables to Pandas Dataframes
###Code
# I'm choosing to convert all my tables to Dataframes because I feel it will be easier to work with
tmdb_movies_df = pd.read_sql('''SELECT *
FROM tmdb_movies''', conn)
tmdb_movies_df.head()
budgets_df = pd.read_sql('''SELECT *
FROM tn_movie_budgets''', conn)
budgets_df.head()
# Irrelevant
name_basics_df = pd.read_sql('''SELECT *
FROM imdb_name_basics''', conn)
name_basics_df.head()
# Irrelevant
title_akas_df = pd.read_sql('''SELECT *
FROM title_akas''', conn)
title_akas_df.head()
movie_gross_df = pd.read_sql('''SELECT *
FROM bom_movie_gross''', conn)
movie_gross_df.head()
title_basics_df = pd.read_sql('''SELECT *
FROM imdb_title_basics''', conn)
title_basics_df.head()
# Irrelevant
title_ratings_df = pd.read_sql('''SELECT *
FROM title_ratings''', conn)
print(title_ratings_df.head())
title_ratings_df.info()
rt_reviews_df = pd.read_sql('''SELECT *
FROM rt_reviews''', conn)
rt_reviews_df.head()
rt_movie_info_df = pd.read_sql('''SELECT *
FROM rt_movie_info''', conn)
rt_movie_info_df.head()
###Output
_____no_output_____
###Markdown
Data Cleaning. In the next couple of cells I will be: checking each DataFrame for null values, deciding what the next course of action for the null values will be (i.e. deleting/filling in null values), and reformatting dtypes if needed. Cleaning - tmdb_movies
###Code
# View Dataframe
tmdb_movies_df.head()
# Set 'Index' as the datasets index
tmdb_movies_df.set_index('index', inplace=True)
tmdb_movies_df.head()
# Check df shape to see how many rows and columns we have
print(tmdb_movies_df.shape)
# View info for each column
tmdb_movies_df.info()
# Check for null values if any
print(tmdb_movies_df.isnull().sum())
# It appears we don't have any null value -not any we can see right now at least-
# We'll explore the dataset more by checking out the Unique values
print('Number of Unique values:\n', tmdb_movies_df.nunique())
# Going off the original_title and title columns, it's safe to assume there are some duplicates
tmdb_movies_df[tmdb_movies_df.duplicated(keep=False)].sort_values(by='title')
# We'll delete the duplicates as part of our data cleaning process
tmdb_movies_df.drop_duplicates(subset=['title'], inplace=True)
# Check to make sure changes were made to the dataset
tmdb_movies_df[tmdb_movies_df.duplicated(keep=False)].sort_values(by='title')
# Going back to the columns datatype we can see that release_date is stored as an object
print(tmdb_movies_df.dtypes)
# That can be fixed by changing it to a datetime dtype
# We can clearly see the before and after below
tmdb_movies_df['release_date'] = pd.to_datetime(tmdb_movies_df['release_date'])
tmdb_movies_df.info()
###Output
genre_ids object
id int64
original_language object
original_title object
popularity float64
release_date object
title object
vote_average float64
vote_count int64
dtype: object
<class 'pandas.core.frame.DataFrame'>
Int64Index: 24688 entries, 0 to 26516
Data columns (total 9 columns):
genre_ids 24688 non-null object
id 24688 non-null int64
original_language 24688 non-null object
original_title 24688 non-null object
popularity 24688 non-null float64
release_date 24688 non-null datetime64[ns]
title 24688 non-null object
vote_average 24688 non-null float64
vote_count 24688 non-null int64
dtypes: datetime64[ns](1), float64(2), int64(2), object(4)
memory usage: 1.9+ MB
###Markdown
Cleaning - budgets_df
###Code
# View dataset
budgets_df.head()
# Check for null values
budgets_df.isnull().sum()
# Get blueprints of dataset
budgets_df.info()
# Set 'id' as index
budgets_df.set_index('id', inplace=True)
# Check for duplicates
budgets_df[budgets_df.duplicated(keep=False)].sort_values(by='movie')
# Looking at the datatypes the last 3 columns are stored as an object datatype
# We'll be converting the last 3 columns to Integer datatypes
budgets_df['production_budget'] = budgets_df['production_budget'].str.replace('$', '', regex=False).str.replace(',', '', regex=False).astype('int')
budgets_df['domestic_gross'] = budgets_df['domestic_gross'].str.replace('$', '', regex=False).str.replace(',', '', regex=False).astype('int')
budgets_df['worldwide_gross'] = budgets_df['worldwide_gross'].str.replace('$', '', regex=False).str.replace(',', '', regex=False).astype('int')
budgets_df.head()
budgets_df.dtypes
###Output
_____no_output_____
###Markdown
Cleaning - movie_gross_df
###Code
# View data
movie_gross_df.head()
# Check dtype
movie_gross_df.dtypes
# Check for null values
movie_gross_df.isna().sum()
movie_gross_df.foreign_gross.dropna()
movie_gross_df = movie_gross_df.dropna()
movie_gross_df.isna().sum()
###Output
_____no_output_____
|
content/section-03/2/propiedades-de-un-grafico.ipynb
|
###Markdown
Properties of a Chart. We use `.mark_line()` to specify the properties of the marker and `.encode()` to encode our data; in the same way, we can use `.properties()` to specify certain attributes of our chart. Later on we will learn how to fine-tune every detail of each part of the chart; for now, let's add a title and specify the dimensions of our visualization. As always, we start by importing our `python` packages and assigning our data, in this case in `CSV` format, to a `pandas` __DataFrame__.
###Code
import altair as alt
import pandas as pd
datos = pd.read_csv("../../datos/norteamerica_CO2.csv")
datos.head()
###Output
_____no_output_____
###Markdown
Visualization. In the previous section we learned about `alt.X()` and `alt.Y()` to represent the X and Y values in our chart and to specify more complex details about them. From now on we will use them by default. We also learned how `altair` and `pandas` work with `datetime` objects and dates. In this exercise we carry out the same process to transform our `"periodo"` column from an __array__ of numbers into one of dates.
###Code
datos['periodo'] = pd.to_datetime(datos['periodo'], format = '%Y')
###Output
_____no_output_____
###Markdown
Our base chart is fairly simple and something we have already learned to build.
###Code
alt.Chart(datos).mark_line().encode(
x = alt.X('periodo:T', title = 'Año'),
y = alt.Y('CO2:Q'),
color = alt.Color('codigo:N')
)
###Output
_____no_output_____
###Markdown
To add a title to the chart, we only have to specify it in the `.properties()` method.
###Code
alt.Chart(datos).mark_line().encode(
x = alt.X('periodo:T', title = 'Año'),
y = alt.Y('CO2:Q'),
color = alt.Color('codigo:N')
).properties(
title = "Emisiones de CO2 (toneladas metricas per capita)"
)
###Output
_____no_output_____
###Markdown
You can also specify the dimensions of your chart in `.properties()` via the `width` and `height` arguments (the width and the height, respectively).
###Code
alt.Chart(datos).mark_line().encode(
x = alt.X('periodo:T', title = 'Año'),
y = alt.Y('CO2:Q'),
color = alt.Color('codigo:N')
).properties(
title = "Emisiones de CO2 (toneladas metricas per capita)",
width = 800,
height = 300,
)
###Output
_____no_output_____
|
examples/affine_transform.ipynb
|
###Markdown
Make the volume twice as big and shear it
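A note on the semantics (an added explanation based on the SciPy documentation): `scipy.ndimage.affine_transform` maps each output coordinate `o` to the input position `matrix @ o + offset`, so the 0.5 diagonal entries sample the input at half the spacing and make the result appear twice as large along those axes, while the 0.1 off-diagonal entries add a shear. For example, output voxel (z, y, x) = (0, 10, 0) is sampled from input position (0, 0.5*10 + 0.1*0, 0.1*10 + 0.5*0) = (0, 5, 1). The names `data3d`, `sed3`, `np`, `scipy` and `plt` are assumed to come from earlier cells that are not shown here.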
###Code
tmatrix = np.array([
[1, 0, 0, 0],
[0, 0.5, 0.1, 0],
[0, 0.1, 0.5, 0],
[0, 0, 0, 1],
])
data3d_2 = scipy.ndimage.affine_transform(
data3d, tmatrix, output_shape=[20,70,70]
)
sed3.show_slices(data3d_2, slice_number=12)
plt.show()
###Output
_____no_output_____
|
mammography/Extract_metadata_to_JSON.ipynb
|
###Markdown
Use this to extract CBIS-DDSM metadata from the provided description CSV file and save the annotations we need in a JSON file
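The structure written out (illustrated here with the first entry that appears in the output below) is roughly: `{"type": "instances", "images": [{"id": "P_00001_LEFT_CC", "pathology": ["MALIGNANT"], "catID": [1]}, ...], "categories": [{"id": 1, "name": "MALIGNANT"}, {"id": 2, "name": "BENIGN"}]}`.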
###Code
import os
import csv
import pandas as pd
# import xlrd
import json
# Load lesion data
# Root directory of the project
ROOT_DIR = os.getcwd()
print(ROOT_DIR)
# if ROOT_DIR.endswith("samples/nucleus"):
if ROOT_DIR.endswith("mammography"):
# Go up two levels to the repo root
ROOT_DIR = os.path.dirname(ROOT_DIR)
print(ROOT_DIR)
DATASET_DIR = os.path.join(ROOT_DIR, "datasets/mammo")
IMAGES_DIR = "/home/chevy/Mammo/"
file_names = ["mass_case_description_train_set.csv", "mass_case_description_test_set.csv",
"calc_case_description_train_set.csv", "calc_case_description_test_set.csv"]
file_name = file_names[0]
sheet_name="LesionMarkAttributesWithImageDim"  # not used below
file_path = os.path.join(DATASET_DIR, file_name)
print("Loading:", file_path)
annotations = pd.read_csv(file_path)
# Initialise
xml_annotations = {'type': 'instances',
'images': [],
'categories': [{'id': 1, 'name': 'MALIGNANT'}, {'id': 2, 'name': 'BENIGN'}]
}
names = []
for i in range(0, len(annotations)):
row = annotations.loc[i, :]
name = row['patient_id'] + '_' + row['left or right breast'] + '_' + row['image view']
names.append(name)
unique_names = set(names)
unique_names_pathology = {k:[] for k in unique_names}
unique_names_catID = {k:[] for k in unique_names}
for i in range(0, len(annotations)):
row = annotations.loc[i, :]
name = row['patient_id'] + '_' + row['left or right breast'] + '_' + row['image view']
assert row['pathology'] in ['MALIGNANT', 'BENIGN', 'BENIGN_WITHOUT_CALLBACK']
pathology = row['pathology']
if pathology in 'BENIGN_WITHOUT_CALLBACK':
pathology = 'BENIGN'
if pathology == 'BENIGN':
catID = 2
else:
catID = 1
print(name, ":\t", pathology, "\t", catID)
unique_names_catID[name].append(catID)
unique_names_pathology[name].append(pathology)
for name in unique_names:
xml_annotations['images'] += [
{
'id': name,
'pathology': unique_names_pathology[name],
'catID': unique_names_catID[name]
}
]
with open(DATASET_DIR + '_ddsm_mass_train.json', 'w') as fp:
json.dump(xml_annotations, fp, indent=4)
###Output
P_00001_LEFT_CC : MALIGNANT 1
P_00001_LEFT_MLO : MALIGNANT 1
P_00004_LEFT_CC : BENIGN 2
P_00004_LEFT_MLO : BENIGN 2
P_00004_RIGHT_MLO : BENIGN 2
P_00009_RIGHT_CC : MALIGNANT 1
P_00009_RIGHT_MLO : MALIGNANT 1
P_00015_LEFT_MLO : MALIGNANT 1
P_00018_RIGHT_CC : BENIGN 2
P_00018_RIGHT_MLO : BENIGN 2
P_00021_LEFT_CC : BENIGN 2
P_00021_LEFT_MLO : BENIGN 2
P_00021_RIGHT_CC : BENIGN 2
P_00021_RIGHT_MLO : BENIGN 2
P_00023_RIGHT_CC : MALIGNANT 1
P_00023_RIGHT_MLO : MALIGNANT 1
P_00026_LEFT_CC : BENIGN 2
P_00026_LEFT_MLO : BENIGN 2
P_00027_RIGHT_CC : BENIGN 2
P_00027_RIGHT_MLO : BENIGN 2
P_00034_RIGHT_CC : MALIGNANT 1
P_00034_RIGHT_MLO : MALIGNANT 1
P_00039_RIGHT_CC : MALIGNANT 1
P_00039_RIGHT_MLO : MALIGNANT 1
P_00041_LEFT_CC : BENIGN 2
P_00041_LEFT_MLO : BENIGN 2
P_00044_RIGHT_CC : BENIGN 2
P_00044_RIGHT_CC : BENIGN 2
P_00044_RIGHT_CC : BENIGN 2
P_00044_RIGHT_CC : BENIGN 2
P_00044_RIGHT_MLO : BENIGN 2
P_00044_RIGHT_MLO : BENIGN 2
P_00045_LEFT_CC : MALIGNANT 1
P_00045_LEFT_MLO : MALIGNANT 1
P_00046_RIGHT_MLO : MALIGNANT 1
P_00051_LEFT_CC : MALIGNANT 1
P_00051_LEFT_MLO : MALIGNANT 1
P_00054_RIGHT_MLO : BENIGN 2
P_00055_LEFT_CC : BENIGN 2
P_00057_RIGHT_CC : MALIGNANT 1
P_00057_RIGHT_MLO : MALIGNANT 1
P_00058_RIGHT_CC : MALIGNANT 1
P_00059_LEFT_CC : MALIGNANT 1
P_00059_LEFT_MLO : MALIGNANT 1
P_00061_RIGHT_CC : BENIGN 2
P_00061_RIGHT_MLO : BENIGN 2
P_00064_RIGHT_MLO : BENIGN 2
P_00065_LEFT_CC : BENIGN 2
P_00065_LEFT_MLO : BENIGN 2
P_00068_RIGHT_CC : MALIGNANT 1
P_00068_RIGHT_MLO : MALIGNANT 1
P_00074_LEFT_MLO : MALIGNANT 1
P_00074_RIGHT_CC : MALIGNANT 1
P_00074_RIGHT_MLO : MALIGNANT 1
P_00076_LEFT_CC : BENIGN 2
P_00076_LEFT_MLO : BENIGN 2
P_00079_RIGHT_CC : MALIGNANT 1
P_00079_RIGHT_MLO : MALIGNANT 1
P_00080_RIGHT_CC : MALIGNANT 1
P_00080_RIGHT_MLO : MALIGNANT 1
P_00081_RIGHT_CC : BENIGN 2
P_00081_RIGHT_MLO : BENIGN 2
P_00086_RIGHT_CC : MALIGNANT 1
P_00086_RIGHT_MLO : MALIGNANT 1
P_00090_LEFT_CC : BENIGN 2
P_00090_LEFT_MLO : BENIGN 2
P_00092_LEFT_CC : MALIGNANT 1
P_00092_LEFT_MLO : MALIGNANT 1
P_00092_LEFT_MLO : MALIGNANT 1
P_00092_RIGHT_CC : MALIGNANT 1
P_00092_RIGHT_MLO : MALIGNANT 1
P_00092_RIGHT_MLO : MALIGNANT 1
P_00094_RIGHT_CC : BENIGN 2
P_00094_RIGHT_MLO : BENIGN 2
P_00095_LEFT_CC : MALIGNANT 1
P_00095_LEFT_MLO : MALIGNANT 1
P_00096_RIGHT_CC : BENIGN 2
P_00096_RIGHT_MLO : BENIGN 2
P_00106_LEFT_CC : BENIGN 2
P_00106_LEFT_CC : BENIGN 2
P_00106_LEFT_CC : BENIGN 2
P_00106_LEFT_MLO : BENIGN 2
P_00106_LEFT_MLO : BENIGN 2
P_00106_LEFT_MLO : BENIGN 2
P_00106_LEFT_MLO : BENIGN 2
P_00106_LEFT_MLO : BENIGN 2
P_00106_RIGHT_CC : BENIGN 2
P_00106_RIGHT_CC : BENIGN 2
P_00106_RIGHT_CC : BENIGN 2
P_00106_RIGHT_MLO : BENIGN 2
P_00106_RIGHT_MLO : BENIGN 2
P_00106_RIGHT_MLO : BENIGN 2
P_00107_RIGHT_MLO : BENIGN 2
P_00108_LEFT_CC : BENIGN 2
P_00108_LEFT_MLO : BENIGN 2
P_00109_LEFT_CC : BENIGN 2
P_00110_LEFT_CC : MALIGNANT 1
P_00110_LEFT_MLO : MALIGNANT 1
P_00110_RIGHT_CC : MALIGNANT 1
P_00117_LEFT_MLO : BENIGN 2
P_00119_LEFT_CC : BENIGN 2
P_00119_LEFT_MLO : BENIGN 2
P_00120_LEFT_CC : BENIGN 2
P_00120_LEFT_MLO : BENIGN 2
P_00122_RIGHT_MLO : BENIGN 2
P_00128_LEFT_CC : MALIGNANT 1
P_00133_LEFT_CC : MALIGNANT 1
P_00133_LEFT_MLO : MALIGNANT 1
P_00134_LEFT_CC : MALIGNANT 1
P_00134_LEFT_MLO : MALIGNANT 1
P_00137_LEFT_CC : BENIGN 2
P_00137_LEFT_MLO : BENIGN 2
P_00146_RIGHT_CC : MALIGNANT 1
P_00146_RIGHT_MLO : MALIGNANT 1
P_00148_RIGHT_CC : MALIGNANT 1
P_00148_RIGHT_MLO : MALIGNANT 1
P_00149_LEFT_CC : MALIGNANT 1
P_00149_LEFT_MLO : MALIGNANT 1
P_00160_LEFT_CC : MALIGNANT 1
P_00160_LEFT_MLO : MALIGNANT 1
P_00160_RIGHT_CC : BENIGN 2
P_00160_RIGHT_MLO : BENIGN 2
P_00166_RIGHT_CC : BENIGN 2
P_00166_RIGHT_MLO : BENIGN 2
P_00169_RIGHT_MLO : BENIGN 2
P_00172_LEFT_CC : MALIGNANT 1
P_00172_LEFT_MLO : MALIGNANT 1
P_00174_RIGHT_CC : MALIGNANT 1
P_00174_RIGHT_MLO : MALIGNANT 1
P_00175_RIGHT_CC : MALIGNANT 1
P_00175_RIGHT_MLO : MALIGNANT 1
P_00187_LEFT_CC : BENIGN 2
P_00187_LEFT_MLO : BENIGN 2
P_00190_LEFT_MLO : MALIGNANT 1
P_00199_LEFT_CC : MALIGNANT 1
P_00199_LEFT_MLO : MALIGNANT 1
P_00205_RIGHT_CC : BENIGN 2
P_00205_RIGHT_MLO : BENIGN 2
P_00206_RIGHT_MLO : BENIGN 2
P_00207_LEFT_CC : MALIGNANT 1
P_00207_LEFT_CC : MALIGNANT 1
P_00207_LEFT_CC : MALIGNANT 1
P_00207_LEFT_MLO : MALIGNANT 1
P_00207_LEFT_MLO : MALIGNANT 1
P_00207_LEFT_MLO : MALIGNANT 1
P_00208_RIGHT_MLO : BENIGN 2
P_00215_RIGHT_CC : BENIGN 2
P_00217_LEFT_CC : MALIGNANT 1
P_00218_LEFT_CC : MALIGNANT 1
P_00218_LEFT_MLO : MALIGNANT 1
P_00219_RIGHT_CC : BENIGN 2
P_00221_RIGHT_MLO : BENIGN 2
P_00224_LEFT_CC : BENIGN 2
P_00224_LEFT_MLO : BENIGN 2
P_00224_RIGHT_CC : BENIGN 2
P_00224_RIGHT_MLO : BENIGN 2
P_00225_RIGHT_CC : BENIGN 2
P_00225_RIGHT_MLO : BENIGN 2
P_00226_LEFT_CC : MALIGNANT 1
P_00226_LEFT_MLO : MALIGNANT 1
P_00229_LEFT_CC : BENIGN 2
P_00229_LEFT_MLO : BENIGN 2
P_00231_LEFT_CC : BENIGN 2
P_00231_LEFT_MLO : BENIGN 2
P_00234_LEFT_CC : BENIGN 2
P_00235_RIGHT_CC : MALIGNANT 1
P_00235_RIGHT_MLO : MALIGNANT 1
P_00236_RIGHT_MLO : MALIGNANT 1
P_00239_RIGHT_CC : BENIGN 2
P_00239_RIGHT_MLO : BENIGN 2
P_00240_RIGHT_CC : MALIGNANT 1
P_00240_RIGHT_MLO : MALIGNANT 1
P_00241_RIGHT_CC : MALIGNANT 1
P_00241_RIGHT_MLO : MALIGNANT 1
P_00242_RIGHT_CC : BENIGN 2
P_00242_RIGHT_MLO : BENIGN 2
P_00247_RIGHT_CC : BENIGN 2
P_00247_RIGHT_MLO : BENIGN 2
P_00248_LEFT_MLO : MALIGNANT 1
P_00254_LEFT_CC : MALIGNANT 1
P_00254_LEFT_MLO : MALIGNANT 1
P_00259_RIGHT_CC : MALIGNANT 1
P_00259_RIGHT_MLO : MALIGNANT 1
P_00264_LEFT_CC : MALIGNANT 1
P_00264_LEFT_MLO : MALIGNANT 1
P_00265_RIGHT_CC : MALIGNANT 1
P_00265_RIGHT_MLO : MALIGNANT 1
P_00273_LEFT_CC : BENIGN 2
P_00279_LEFT_CC : BENIGN 2
P_00281_LEFT_MLO : BENIGN 2
P_00283_RIGHT_CC : MALIGNANT 1
P_00283_RIGHT_MLO : MALIGNANT 1
P_00287_RIGHT_CC : MALIGNANT 1
P_00287_RIGHT_MLO : MALIGNANT 1
P_00289_LEFT_CC : BENIGN 2
P_00289_LEFT_MLO : BENIGN 2
P_00294_LEFT_CC : BENIGN 2
P_00294_LEFT_MLO : BENIGN 2
P_00298_LEFT_CC : BENIGN 2
P_00298_LEFT_MLO : BENIGN 2
P_00303_LEFT_CC : BENIGN 2
P_00303_LEFT_MLO : BENIGN 2
P_00303_RIGHT_CC : BENIGN 2
P_00303_RIGHT_MLO : BENIGN 2
P_00304_LEFT_MLO : BENIGN 2
P_00309_LEFT_CC : BENIGN 2
P_00309_LEFT_CC : BENIGN 2
P_00309_LEFT_MLO : BENIGN 2
P_00309_LEFT_MLO : BENIGN 2
P_00313_RIGHT_CC : MALIGNANT 1
P_00313_RIGHT_MLO : MALIGNANT 1
P_00314_RIGHT_CC : MALIGNANT 1
P_00314_RIGHT_MLO : MALIGNANT 1
P_00317_RIGHT_MLO : MALIGNANT 1
P_00319_LEFT_CC : MALIGNANT 1
P_00319_LEFT_MLO : MALIGNANT 1
P_00320_LEFT_CC : BENIGN 2
P_00320_LEFT_MLO : BENIGN 2
P_00328_LEFT_MLO : BENIGN 2
P_00328_RIGHT_CC : MALIGNANT 1
P_00328_RIGHT_CC : MALIGNANT 1
P_00328_RIGHT_MLO : MALIGNANT 1
P_00328_RIGHT_MLO : MALIGNANT 1
P_00330_LEFT_CC : BENIGN 2
P_00330_LEFT_MLO : BENIGN 2
P_00332_LEFT_CC : BENIGN 2
P_00332_LEFT_MLO : BENIGN 2
P_00332_RIGHT_CC : BENIGN 2
P_00332_RIGHT_MLO : BENIGN 2
P_00333_RIGHT_CC : BENIGN 2
P_00333_RIGHT_MLO : BENIGN 2
P_00334_LEFT_MLO : BENIGN 2
P_00335_LEFT_MLO : MALIGNANT 1
P_00342_RIGHT_CC : BENIGN 2
P_00342_RIGHT_MLO : BENIGN 2
P_00342_RIGHT_MLO : BENIGN 2
P_00342_RIGHT_MLO : BENIGN 2
P_00348_LEFT_CC : BENIGN 2
P_00348_LEFT_MLO : BENIGN 2
P_00351_LEFT_CC : BENIGN 2
P_00351_LEFT_MLO : BENIGN 2
P_00354_LEFT_CC : MALIGNANT 1
P_00354_LEFT_MLO : MALIGNANT 1
P_00356_LEFT_CC : MALIGNANT 1
P_00361_RIGHT_MLO : MALIGNANT 1
P_00363_LEFT_CC : MALIGNANT 1
P_00363_LEFT_MLO : MALIGNANT 1
P_00366_RIGHT_CC : BENIGN 2
P_00366_RIGHT_MLO : BENIGN 2
P_00370_RIGHT_CC : MALIGNANT 1
P_00370_RIGHT_MLO : MALIGNANT 1
P_00376_RIGHT_CC : BENIGN 2
P_00376_RIGHT_MLO : BENIGN 2
P_00376_RIGHT_MLO : BENIGN 2
P_00376_RIGHT_MLO : BENIGN 2
P_00376_RIGHT_MLO : BENIGN 2
P_00383_LEFT_MLO : MALIGNANT 1
P_00384_RIGHT_CC : BENIGN 2
P_00384_RIGHT_MLO : BENIGN 2
P_00385_RIGHT_CC : BENIGN 2
P_00385_RIGHT_MLO : BENIGN 2
P_00386_LEFT_CC : MALIGNANT 1
P_00386_LEFT_MLO : MALIGNANT 1
P_00389_LEFT_MLO : BENIGN 2
P_00396_LEFT_CC : MALIGNANT 1
P_00396_LEFT_MLO : MALIGNANT 1
P_00399_RIGHT_CC : MALIGNANT 1
P_00399_RIGHT_MLO : MALIGNANT 1
P_00401_LEFT_CC : BENIGN 2
P_00401_LEFT_MLO : BENIGN 2
P_00406_RIGHT_CC : MALIGNANT 1
P_00406_RIGHT_MLO : MALIGNANT 1
P_00408_RIGHT_CC : MALIGNANT 1
P_00408_RIGHT_MLO : MALIGNANT 1
P_00411_RIGHT_CC : BENIGN 2
P_00411_RIGHT_MLO : BENIGN 2
P_00412_RIGHT_CC : BENIGN 2
P_00412_RIGHT_MLO : BENIGN 2
P_00413_LEFT_CC : MALIGNANT 1
P_00413_LEFT_MLO : MALIGNANT 1
P_00414_LEFT_CC : MALIGNANT 1
P_00414_LEFT_MLO : MALIGNANT 1
P_00415_RIGHT_CC : MALIGNANT 1
P_00415_RIGHT_MLO : MALIGNANT 1
P_00417_RIGHT_CC : BENIGN 2
P_00417_RIGHT_MLO : BENIGN 2
P_00419_LEFT_CC : BENIGN 2
P_00419_LEFT_CC : MALIGNANT 1
P_00419_LEFT_MLO : MALIGNANT 1
P_00419_LEFT_MLO : BENIGN 2
P_00419_RIGHT_CC : BENIGN 2
P_00419_RIGHT_MLO : BENIGN 2
P_00420_RIGHT_CC : MALIGNANT 1
P_00420_RIGHT_CC : MALIGNANT 1
P_00420_RIGHT_MLO : MALIGNANT 1
P_00420_RIGHT_MLO : MALIGNANT 1
P_00421_LEFT_CC : BENIGN 2
P_00421_LEFT_MLO : BENIGN 2
P_00423_RIGHT_CC : MALIGNANT 1
P_00426_RIGHT_CC : BENIGN 2
P_00426_RIGHT_MLO : BENIGN 2
P_00427_RIGHT_MLO : BENIGN 2
P_00428_LEFT_CC : BENIGN 2
P_00430_LEFT_CC : BENIGN 2
P_00430_LEFT_MLO : BENIGN 2
P_00431_RIGHT_CC : BENIGN 2
P_00431_RIGHT_MLO : BENIGN 2
P_00432_LEFT_CC : MALIGNANT 1
P_00432_LEFT_CC : MALIGNANT 1
P_00432_LEFT_MLO : MALIGNANT 1
P_00432_LEFT_MLO : MALIGNANT 1
P_00435_RIGHT_MLO : MALIGNANT 1
P_00436_LEFT_CC : BENIGN 2
P_00436_LEFT_MLO : BENIGN 2
P_00437_LEFT_CC : BENIGN 2
P_00437_LEFT_MLO : BENIGN 2
P_00439_RIGHT_CC : MALIGNANT 1
P_00439_RIGHT_MLO : MALIGNANT 1
P_00440_RIGHT_MLO : MALIGNANT 1
P_00441_RIGHT_MLO : BENIGN 2
P_00442_RIGHT_CC : BENIGN 2
P_00442_RIGHT_MLO : BENIGN 2
P_00444_LEFT_CC : MALIGNANT 1
P_00444_LEFT_MLO : MALIGNANT 1
P_00450_LEFT_CC : MALIGNANT 1
P_00450_LEFT_MLO : MALIGNANT 1
P_00451_LEFT_CC : BENIGN 2
P_00451_LEFT_MLO : BENIGN 2
P_00453_LEFT_CC : BENIGN 2
P_00453_LEFT_MLO : BENIGN 2
P_00454_RIGHT_CC : BENIGN 2
P_00454_RIGHT_MLO : BENIGN 2
P_00462_LEFT_CC : BENIGN 2
P_00462_LEFT_MLO : BENIGN 2
P_00465_LEFT_CC : MALIGNANT 1
P_00465_LEFT_MLO : MALIGNANT 1
P_00468_LEFT_CC : BENIGN 2
P_00468_LEFT_MLO : BENIGN 2
P_00471_RIGHT_CC : MALIGNANT 1
P_00471_RIGHT_MLO : MALIGNANT 1
P_00475_LEFT_MLO : BENIGN 2
P_00475_RIGHT_MLO : BENIGN 2
P_00484_RIGHT_CC : MALIGNANT 1
P_00487_RIGHT_CC : BENIGN 2
P_00487_RIGHT_MLO : BENIGN 2
P_00488_LEFT_CC : BENIGN 2
P_00488_LEFT_MLO : BENIGN 2
P_00492_RIGHT_MLO : BENIGN 2
P_00495_RIGHT_CC : BENIGN 2
P_00495_RIGHT_MLO : BENIGN 2
P_00496_LEFT_CC : BENIGN 2
P_00496_LEFT_MLO : BENIGN 2
P_00499_RIGHT_CC : BENIGN 2
P_00499_RIGHT_MLO : BENIGN 2
P_00504_RIGHT_MLO : MALIGNANT 1
P_00506_LEFT_MLO : BENIGN 2
P_00509_RIGHT_CC : BENIGN 2
P_00509_RIGHT_MLO : BENIGN 2
P_00512_RIGHT_CC : BENIGN 2
P_00512_RIGHT_MLO : BENIGN 2
P_00515_LEFT_CC : BENIGN 2
P_00515_LEFT_MLO : BENIGN 2
P_00515_RIGHT_CC : BENIGN 2
P_00517_LEFT_CC : BENIGN 2
P_00517_LEFT_MLO : BENIGN 2
P_00518_LEFT_CC : BENIGN 2
P_00518_LEFT_MLO : BENIGN 2
P_00519_RIGHT_CC : MALIGNANT 1
P_00519_RIGHT_MLO : MALIGNANT 1
P_00522_RIGHT_MLO : BENIGN 2
P_00526_RIGHT_CC : MALIGNANT 1
P_00526_RIGHT_MLO : MALIGNANT 1
P_00528_LEFT_MLO : BENIGN 2
P_00528_LEFT_MLO : BENIGN 2
P_00528_RIGHT_CC : BENIGN 2
P_00528_RIGHT_MLO : BENIGN 2
P_00532_LEFT_CC : BENIGN 2
P_00534_LEFT_CC : BENIGN 2
P_00534_LEFT_MLO : BENIGN 2
P_00535_LEFT_CC : BENIGN 2
P_00539_RIGHT_CC : MALIGNANT 1
P_00539_RIGHT_MLO : MALIGNANT 1
P_00540_LEFT_CC : MALIGNANT 1
P_00540_LEFT_MLO : MALIGNANT 1
P_00543_RIGHT_CC : MALIGNANT 1
P_00543_RIGHT_MLO : MALIGNANT 1
P_00545_LEFT_MLO : MALIGNANT 1
P_00549_LEFT_CC : BENIGN 2
P_00549_LEFT_MLO : BENIGN 2
P_00550_RIGHT_MLO : BENIGN 2
P_00553_LEFT_CC : MALIGNANT 1
P_00553_LEFT_MLO : MALIGNANT 1
P_00554_LEFT_CC : MALIGNANT 1
P_00554_LEFT_MLO : MALIGNANT 1
P_00559_LEFT_CC : BENIGN 2
P_00559_LEFT_MLO : BENIGN 2
P_00560_RIGHT_MLO : BENIGN 2
P_00564_RIGHT_CC : MALIGNANT 1
P_00566_RIGHT_CC : MALIGNANT 1
P_00566_RIGHT_MLO : MALIGNANT 1
P_00568_LEFT_CC : MALIGNANT 1
P_00568_LEFT_MLO : MALIGNANT 1
P_00569_RIGHT_CC : BENIGN 2
P_00569_RIGHT_MLO : BENIGN 2
P_00572_RIGHT_CC : BENIGN 2
P_00572_RIGHT_MLO : BENIGN 2
P_00573_RIGHT_MLO : MALIGNANT 1
P_00573_RIGHT_MLO : MALIGNANT 1
P_00575_RIGHT_MLO : MALIGNANT 1
P_00577_RIGHT_CC : MALIGNANT 1
P_00577_RIGHT_MLO : MALIGNANT 1
P_00581_LEFT_MLO : MALIGNANT 1
P_00584_LEFT_MLO : MALIGNANT 1
P_00586_LEFT_CC : MALIGNANT 1
P_00586_LEFT_MLO : MALIGNANT 1
P_00586_LEFT_MLO : MALIGNANT 1
P_00592_LEFT_CC : BENIGN 2
P_00592_LEFT_MLO : BENIGN 2
P_00596_LEFT_CC : BENIGN 2
P_00596_LEFT_MLO : BENIGN 2
P_00596_RIGHT_CC : MALIGNANT 1
P_00596_RIGHT_MLO : MALIGNANT 1
P_00597_RIGHT_MLO : BENIGN 2
P_00604_LEFT_CC : MALIGNANT 1
P_00604_LEFT_MLO : MALIGNANT 1
P_00605_RIGHT_MLO : MALIGNANT 1
P_00607_RIGHT_CC : MALIGNANT 1
P_00607_RIGHT_MLO : MALIGNANT 1
P_00611_RIGHT_CC : BENIGN 2
P_00611_RIGHT_MLO : BENIGN 2
P_00616_LEFT_MLO : MALIGNANT 1
P_00617_RIGHT_MLO : BENIGN 2
P_00622_LEFT_CC : BENIGN 2
P_00626_LEFT_CC : BENIGN 2
P_00626_LEFT_MLO : BENIGN 2
P_00630_LEFT_CC : BENIGN 2
P_00630_LEFT_MLO : BENIGN 2
P_00634_LEFT_CC : BENIGN 2
P_00634_LEFT_CC : BENIGN 2
P_00634_LEFT_MLO : BENIGN 2
P_00634_LEFT_MLO : BENIGN 2
P_00634_RIGHT_MLO : MALIGNANT 1
P_00637_RIGHT_CC : MALIGNANT 1
P_00640_RIGHT_CC : BENIGN 2
P_00640_RIGHT_MLO : BENIGN 2
P_00644_LEFT_CC : MALIGNANT 1
P_00644_LEFT_MLO : MALIGNANT 1
P_00648_LEFT_CC : MALIGNANT 1
P_00648_LEFT_MLO : MALIGNANT 1
P_00651_RIGHT_MLO : BENIGN 2
P_00653_LEFT_CC : MALIGNANT 1
P_00653_LEFT_MLO : MALIGNANT 1
P_00659_RIGHT_MLO : BENIGN 2
P_00660_LEFT_CC : MALIGNANT 1
P_00660_LEFT_MLO : MALIGNANT 1
P_00661_LEFT_CC : MALIGNANT 1
P_00661_LEFT_MLO : MALIGNANT 1
P_00665_LEFT_MLO : MALIGNANT 1
P_00666_RIGHT_CC : BENIGN 2
P_00666_RIGHT_MLO : BENIGN 2
P_00670_RIGHT_CC : BENIGN 2
P_00670_RIGHT_MLO : BENIGN 2
P_00673_RIGHT_MLO : BENIGN 2
P_00673_RIGHT_MLO : BENIGN 2
P_00673_RIGHT_MLO : BENIGN 2
P_00675_LEFT_CC : BENIGN 2
P_00675_LEFT_MLO : BENIGN 2
P_00678_LEFT_CC : MALIGNANT 1
P_00678_LEFT_MLO : MALIGNANT 1
P_00687_LEFT_CC : BENIGN 2
P_00687_LEFT_MLO : BENIGN 2
P_00690_LEFT_MLO : MALIGNANT 1
P_00692_LEFT_MLO : BENIGN 2
P_00694_RIGHT_CC : BENIGN 2
P_00694_RIGHT_MLO : BENIGN 2
P_00695_RIGHT_CC : MALIGNANT 1
P_00695_RIGHT_MLO : MALIGNANT 1
P_00698_RIGHT_CC : MALIGNANT 1
P_00698_RIGHT_MLO : MALIGNANT 1
P_00700_RIGHT_CC : BENIGN 2
P_00700_RIGHT_MLO : BENIGN 2
P_00702_RIGHT_CC : MALIGNANT 1
P_00702_RIGHT_MLO : MALIGNANT 1
P_00703_LEFT_CC : BENIGN 2
P_00703_LEFT_MLO : BENIGN 2
P_00706_RIGHT_CC : MALIGNANT 1
P_00706_RIGHT_MLO : MALIGNANT 1
P_00708_RIGHT_CC : BENIGN 2
P_00708_RIGHT_MLO : BENIGN 2
P_00710_LEFT_CC : BENIGN 2
P_00710_LEFT_CC : BENIGN 2
P_00710_LEFT_MLO : BENIGN 2
P_00710_RIGHT_CC : BENIGN 2
P_00710_RIGHT_MLO : BENIGN 2
P_00711_LEFT_CC : MALIGNANT 1
P_00711_LEFT_MLO : MALIGNANT 1
P_00711_RIGHT_CC : BENIGN 2
P_00711_RIGHT_MLO : BENIGN 2
P_00715_RIGHT_CC : BENIGN 2
P_00715_RIGHT_MLO : BENIGN 2
P_00716_LEFT_MLO : MALIGNANT 1
P_00717_RIGHT_CC : MALIGNANT 1
P_00717_RIGHT_MLO : MALIGNANT 1
P_00719_LEFT_CC : MALIGNANT 1
P_00719_LEFT_MLO : MALIGNANT 1
P_00720_RIGHT_MLO : MALIGNANT 1
P_00723_LEFT_MLO : BENIGN 2
P_00726_RIGHT_CC : BENIGN 2
P_00726_RIGHT_MLO : BENIGN 2
P_00728_RIGHT_CC : MALIGNANT 1
P_00728_RIGHT_MLO : MALIGNANT 1
P_00730_RIGHT_CC : MALIGNANT 1
P_00731_RIGHT_CC : BENIGN 2
P_00731_RIGHT_MLO : BENIGN 2
P_00732_LEFT_CC : MALIGNANT 1
P_00732_LEFT_MLO : MALIGNANT 1
P_00733_RIGHT_CC : BENIGN 2
P_00733_RIGHT_MLO : BENIGN 2
P_00734_RIGHT_MLO : MALIGNANT 1
P_00737_LEFT_CC : MALIGNANT 1
P_00737_LEFT_MLO : MALIGNANT 1
P_00739_LEFT_CC : BENIGN 2
P_00739_LEFT_MLO : BENIGN 2
P_00742_LEFT_CC : BENIGN 2
P_00742_LEFT_MLO : BENIGN 2
P_00746_LEFT_CC : MALIGNANT 1
P_00746_LEFT_MLO : MALIGNANT 1
P_00747_LEFT_CC : BENIGN 2
P_00747_LEFT_MLO : BENIGN 2
P_00753_RIGHT_CC : MALIGNANT 1
P_00753_RIGHT_MLO : MALIGNANT 1
P_00754_LEFT_CC : BENIGN 2
P_00754_LEFT_MLO : BENIGN 2
P_00764_RIGHT_CC : BENIGN 2
P_00764_RIGHT_MLO : BENIGN 2
P_00765_RIGHT_CC : BENIGN 2
P_00765_RIGHT_MLO : BENIGN 2
P_00770_RIGHT_CC : MALIGNANT 1
P_00775_LEFT_CC : BENIGN 2
P_00775_LEFT_MLO : BENIGN 2
P_00776_RIGHT_CC : MALIGNANT 1
P_00776_RIGHT_MLO : MALIGNANT 1
P_00778_RIGHT_CC : BENIGN 2
P_00778_RIGHT_CC : BENIGN 2
P_00778_RIGHT_MLO : BENIGN 2
P_00778_RIGHT_MLO : BENIGN 2
P_00779_LEFT_CC : MALIGNANT 1
P_00779_LEFT_MLO : MALIGNANT 1
P_00781_RIGHT_CC : BENIGN 2
P_00781_RIGHT_MLO : BENIGN 2
P_00782_RIGHT_CC : MALIGNANT 1
P_00794_LEFT_CC : BENIGN 2
P_00794_LEFT_MLO : BENIGN 2
P_00797_LEFT_CC : BENIGN 2
P_00797_LEFT_CC : MALIGNANT 1
P_00797_LEFT_MLO : BENIGN 2
P_00797_LEFT_MLO : MALIGNANT 1
P_00797_RIGHT_CC : BENIGN 2
P_00797_RIGHT_MLO : BENIGN 2
P_00798_RIGHT_CC : BENIGN 2
P_00798_RIGHT_MLO : BENIGN 2
P_00801_LEFT_CC : MALIGNANT 1
P_00801_LEFT_MLO : MALIGNANT 1
P_00802_LEFT_CC : MALIGNANT 1
P_00802_LEFT_MLO : MALIGNANT 1
P_00802_LEFT_MLO : MALIGNANT 1
P_00802_LEFT_MLO : MALIGNANT 1
P_00803_LEFT_CC : MALIGNANT 1
P_00810_RIGHT_MLO : MALIGNANT 1
P_00814_LEFT_CC : BENIGN 2
P_00814_LEFT_MLO : BENIGN 2
P_00815_LEFT_CC : MALIGNANT 1
P_00815_LEFT_MLO : MALIGNANT 1
P_00816_RIGHT_CC : BENIGN 2
P_00816_RIGHT_MLO : BENIGN 2
P_00818_RIGHT_CC : MALIGNANT 1
P_00818_RIGHT_MLO : MALIGNANT 1
P_00823_RIGHT_CC : BENIGN 2
P_00823_RIGHT_MLO : BENIGN 2
P_00826_LEFT_CC : BENIGN 2
P_00826_LEFT_MLO : BENIGN 2
P_00828_LEFT_CC : MALIGNANT 1
P_00829_LEFT_CC : BENIGN 2
P_00829_LEFT_MLO : BENIGN 2
P_00831_RIGHT_MLO : MALIGNANT 1
P_00836_LEFT_CC : MALIGNANT 1
P_00836_LEFT_MLO : MALIGNANT 1
P_00841_RIGHT_CC : BENIGN 2
P_00841_RIGHT_MLO : BENIGN 2
P_00844_RIGHT_CC : BENIGN 2
P_00844_RIGHT_MLO : BENIGN 2
P_00845_RIGHT_CC : MALIGNANT 1
P_00847_LEFT_MLO : BENIGN 2
P_00848_LEFT_CC : MALIGNANT 1
P_00848_LEFT_MLO : MALIGNANT 1
P_00849_LEFT_CC : MALIGNANT 1
P_00849_LEFT_MLO : MALIGNANT 1
P_00851_LEFT_CC : MALIGNANT 1
P_00851_LEFT_MLO : MALIGNANT 1
P_00853_RIGHT_CC : MALIGNANT 1
P_00859_LEFT_CC : BENIGN 2
P_00859_LEFT_MLO : BENIGN 2
P_00863_RIGHT_CC : BENIGN 2
P_00863_RIGHT_MLO : BENIGN 2
P_00865_RIGHT_MLO : MALIGNANT 1
P_00865_RIGHT_MLO : MALIGNANT 1
P_00869_LEFT_CC : BENIGN 2
P_00869_LEFT_MLO : BENIGN 2
P_00870_LEFT_CC : MALIGNANT 1
P_00870_LEFT_MLO : MALIGNANT 1
P_00871_LEFT_CC : MALIGNANT 1
P_00871_LEFT_MLO : MALIGNANT 1
P_00881_LEFT_CC : MALIGNANT 1
P_00881_LEFT_MLO : MALIGNANT 1
P_00884_LEFT_MLO : BENIGN 2
P_00886_LEFT_CC : MALIGNANT 1
P_00886_LEFT_MLO : MALIGNANT 1
P_00889_LEFT_CC : MALIGNANT 1
P_00889_LEFT_MLO : MALIGNANT 1
P_00891_RIGHT_CC : MALIGNANT 1
P_00891_RIGHT_MLO : MALIGNANT 1
P_00892_LEFT_CC : MALIGNANT 1
P_00892_LEFT_MLO : MALIGNANT 1
P_00894_RIGHT_MLO : BENIGN 2
P_00896_LEFT_CC : BENIGN 2
P_00898_LEFT_CC : MALIGNANT 1
P_00900_LEFT_MLO : MALIGNANT 1
P_00901_RIGHT_CC : MALIGNANT 1
P_00901_RIGHT_MLO : MALIGNANT 1
P_00902_LEFT_CC : MALIGNANT 1
P_00902_LEFT_MLO : MALIGNANT 1
P_00903_LEFT_MLO : MALIGNANT 1
P_00906_LEFT_CC : MALIGNANT 1
P_00906_LEFT_MLO : MALIGNANT 1
P_00911_LEFT_CC : BENIGN 2
P_00913_LEFT_CC : MALIGNANT 1
P_00913_LEFT_MLO : MALIGNANT 1
P_00914_LEFT_CC : MALIGNANT 1
P_00914_LEFT_CC : MALIGNANT 1
P_00914_LEFT_CC : MALIGNANT 1
P_00914_LEFT_MLO : MALIGNANT 1
P_00914_LEFT_MLO : MALIGNANT 1
P_00914_LEFT_MLO : MALIGNANT 1
P_00915_RIGHT_CC : MALIGNANT 1
P_00915_RIGHT_MLO : MALIGNANT 1
P_00917_RIGHT_MLO : BENIGN 2
P_00920_RIGHT_CC : BENIGN 2
P_00920_RIGHT_MLO : BENIGN 2
P_00921_RIGHT_CC : MALIGNANT 1
P_00927_LEFT_CC : BENIGN 2
P_00927_LEFT_MLO : BENIGN 2
P_00929_LEFT_CC : MALIGNANT 1
P_00929_LEFT_MLO : MALIGNANT 1
P_00931_LEFT_CC : MALIGNANT 1
P_00931_LEFT_MLO : MALIGNANT 1
P_00935_LEFT_CC : MALIGNANT 1
P_00935_LEFT_MLO : MALIGNANT 1
P_00939_RIGHT_CC : BENIGN 2
P_00939_RIGHT_MLO : BENIGN 2
P_00941_LEFT_MLO : BENIGN 2
P_00949_LEFT_CC : BENIGN 2
P_00949_LEFT_MLO : BENIGN 2
P_00950_LEFT_CC : MALIGNANT 1
P_00952_RIGHT_CC : BENIGN 2
P_00952_RIGHT_MLO : BENIGN 2
P_00958_LEFT_CC : BENIGN 2
P_00958_LEFT_MLO : BENIGN 2
P_00959_RIGHT_MLO : MALIGNANT 1
P_00961_LEFT_MLO : MALIGNANT 1
P_00963_LEFT_CC : BENIGN 2
P_00963_LEFT_MLO : BENIGN 2
P_00968_LEFT_CC : MALIGNANT 1
P_00968_LEFT_MLO : MALIGNANT 1
P_00970_LEFT_MLO : BENIGN 2
P_00972_LEFT_CC : MALIGNANT 1
P_00972_LEFT_MLO : MALIGNANT 1
P_00976_LEFT_CC : BENIGN 2
P_00976_LEFT_MLO : BENIGN 2
P_00978_RIGHT_CC : BENIGN 2
P_00978_RIGHT_MLO : BENIGN 2
P_00982_RIGHT_CC : BENIGN 2
P_00982_RIGHT_MLO : BENIGN 2
P_00984_RIGHT_CC : MALIGNANT 1
P_00984_RIGHT_MLO : MALIGNANT 1
P_00988_LEFT_CC : BENIGN 2
P_00988_LEFT_MLO : BENIGN 2
P_00990_RIGHT_CC : MALIGNANT 1
P_00990_RIGHT_MLO : MALIGNANT 1
P_00994_RIGHT_MLO : BENIGN 2
P_00995_LEFT_CC : MALIGNANT 1
P_00995_LEFT_MLO : MALIGNANT 1
P_00996_RIGHT_CC : BENIGN 2
P_00997_LEFT_CC : MALIGNANT 1
P_00997_LEFT_MLO : MALIGNANT 1
P_00999_LEFT_CC : MALIGNANT 1
P_00999_LEFT_MLO : MALIGNANT 1
P_01000_RIGHT_CC : MALIGNANT 1
P_01000_RIGHT_MLO : MALIGNANT 1
P_01008_RIGHT_CC : MALIGNANT 1
P_01008_RIGHT_MLO : MALIGNANT 1
P_01009_RIGHT_CC : MALIGNANT 1
P_01009_RIGHT_MLO : MALIGNANT 1
P_01018_RIGHT_MLO : BENIGN 2
P_01031_LEFT_CC : MALIGNANT 1
P_01032_RIGHT_CC : BENIGN 2
P_01034_RIGHT_CC : MALIGNANT 1
P_01034_RIGHT_MLO : MALIGNANT 1
P_01035_RIGHT_CC : MALIGNANT 1
P_01035_RIGHT_MLO : MALIGNANT 1
P_01036_RIGHT_MLO : BENIGN 2
###Markdown
Use this to extract CBIS-DDSM metadata from the provided description CSV file and save the annotations we need in a JSON file
###Code
import os
import csv
import pandas as pd
# import xlrd
import json
# Load lesion data
# Root directory of the project
ROOT_DIR = os.path.abspath("../")
print(ROOT_DIR)
# ROOT_DIR = os.getcwd() # May17
# print(ROOT_DIR)
# if ROOT_DIR.endswith("samples/nucleus"):
# if ROOT_DIR.endswith("mammography"): # May17
# # Go up two levels to the repo root
# ROOT_DIR = os.path.dirname(ROOT_DIR)
# print(ROOT_DIR)
DATASET_DIR = os.path.join(ROOT_DIR, "dataset/mammo_all")
# IMAGES_DIR = "/home/chevy/Mammo/" # May17
file_names = ["mass_case_description_train_set.csv", "mass_case_description_test_set.csv",
"calc_case_description_train_set.csv", "calc_case_description_test_set.csv"]
file_name = file_names[0]
sheet_name="LesionMarkAttributesWithImageDim"  # not used below
file_path = os.path.join(ROOT_DIR, "dataset/descriptions", file_name)
# file_path = os.path.join(DATASET_DIR, file_name)
print("Loading:", file_path)
annotations = pd.read_csv(file_path)
# Initialise
xml_annotations = {'type': 'instances',
'images': [],
'categories': [{'id': 1, 'name': 'MALIGNANT'}, {'id': 2, 'name': 'BENIGN'}]
}
names = []
for i in range(0, len(annotations)):
row = annotations.loc[i, :]
name = 'Mass-Training_' + row['patient_id'] + '_' + row['left or right breast'] + '_' + row['image view'] # May17
# name = row['patient_id'] + '_' + row['left or right breast'] + '_' + row['image view']
names.append(name)
unique_names = set(names)
unique_names_pathology = {k:[] for k in unique_names}
unique_names_catID = {k:[] for k in unique_names}
for i in range(0, len(annotations)):
row = annotations.loc[i, :]
name = 'Mass-Training_' + row['patient_id'] + '_' + row['left or right breast'] + '_' + row['image view'] # May17
# name = row['patient_id'] + '_' + row['left or right breast'] + '_' + row['image view']
assert row['pathology'] in ['MALIGNANT', 'BENIGN', 'BENIGN_WITHOUT_CALLBACK']
pathology = row['pathology']
if pathology in 'BENIGN_WITHOUT_CALLBACK':
pathology = 'BENIGN'
if pathology == 'BENIGN':
catID = 2
else:
catID = 1
print(name, ":\t", pathology, "\t", catID)
unique_names_catID[name].append(catID)
unique_names_pathology[name].append(pathology)
for name in unique_names:
xml_annotations['images'] += [
{
'id': name,
'pathology': unique_names_pathology[name],
'catID': unique_names_catID[name]
}
]
with open(DATASET_DIR + '_ddsm_mass_train.json', 'w') as fp:
json.dump(xml_annotations, fp, indent=4)
###Output
Mass-Training_P_00001_LEFT_CC : MALIGNANT 1
Mass-Training_P_00001_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00004_LEFT_CC : BENIGN 2
Mass-Training_P_00004_LEFT_MLO : BENIGN 2
Mass-Training_P_00004_RIGHT_MLO : BENIGN 2
Mass-Training_P_00009_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00009_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00015_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00018_RIGHT_CC : BENIGN 2
Mass-Training_P_00018_RIGHT_MLO : BENIGN 2
Mass-Training_P_00021_LEFT_CC : BENIGN 2
Mass-Training_P_00021_LEFT_MLO : BENIGN 2
Mass-Training_P_00021_RIGHT_CC : BENIGN 2
Mass-Training_P_00021_RIGHT_MLO : BENIGN 2
Mass-Training_P_00023_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00023_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00026_LEFT_CC : BENIGN 2
Mass-Training_P_00026_LEFT_MLO : BENIGN 2
Mass-Training_P_00027_RIGHT_CC : BENIGN 2
Mass-Training_P_00027_RIGHT_MLO : BENIGN 2
Mass-Training_P_00034_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00034_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00039_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00039_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00041_LEFT_CC : BENIGN 2
Mass-Training_P_00041_LEFT_MLO : BENIGN 2
Mass-Training_P_00044_RIGHT_CC : BENIGN 2
Mass-Training_P_00044_RIGHT_CC : BENIGN 2
Mass-Training_P_00044_RIGHT_CC : BENIGN 2
Mass-Training_P_00044_RIGHT_CC : BENIGN 2
Mass-Training_P_00044_RIGHT_MLO : BENIGN 2
Mass-Training_P_00044_RIGHT_MLO : BENIGN 2
Mass-Training_P_00045_LEFT_CC : MALIGNANT 1
Mass-Training_P_00045_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00046_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00051_LEFT_CC : MALIGNANT 1
Mass-Training_P_00051_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00054_RIGHT_MLO : BENIGN 2
Mass-Training_P_00055_LEFT_CC : BENIGN 2
Mass-Training_P_00057_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00057_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00058_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00059_LEFT_CC : MALIGNANT 1
Mass-Training_P_00059_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00061_RIGHT_CC : BENIGN 2
Mass-Training_P_00061_RIGHT_MLO : BENIGN 2
Mass-Training_P_00064_RIGHT_MLO : BENIGN 2
Mass-Training_P_00065_LEFT_CC : BENIGN 2
Mass-Training_P_00065_LEFT_MLO : BENIGN 2
Mass-Training_P_00068_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00068_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00074_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00074_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00074_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00076_LEFT_CC : BENIGN 2
Mass-Training_P_00076_LEFT_MLO : BENIGN 2
Mass-Training_P_00079_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00079_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00080_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00080_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00081_RIGHT_CC : BENIGN 2
Mass-Training_P_00081_RIGHT_MLO : BENIGN 2
Mass-Training_P_00086_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00086_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00090_LEFT_CC : BENIGN 2
Mass-Training_P_00090_LEFT_MLO : BENIGN 2
Mass-Training_P_00092_LEFT_CC : MALIGNANT 1
Mass-Training_P_00092_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00092_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00092_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00092_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00092_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00094_RIGHT_CC : BENIGN 2
Mass-Training_P_00094_RIGHT_MLO : BENIGN 2
Mass-Training_P_00095_LEFT_CC : MALIGNANT 1
Mass-Training_P_00095_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00096_RIGHT_CC : BENIGN 2
Mass-Training_P_00096_RIGHT_MLO : BENIGN 2
Mass-Training_P_00106_LEFT_CC : BENIGN 2
Mass-Training_P_00106_LEFT_CC : BENIGN 2
Mass-Training_P_00106_LEFT_CC : BENIGN 2
Mass-Training_P_00106_LEFT_MLO : BENIGN 2
Mass-Training_P_00106_LEFT_MLO : BENIGN 2
Mass-Training_P_00106_LEFT_MLO : BENIGN 2
Mass-Training_P_00106_LEFT_MLO : BENIGN 2
Mass-Training_P_00106_LEFT_MLO : BENIGN 2
Mass-Training_P_00106_RIGHT_CC : BENIGN 2
Mass-Training_P_00106_RIGHT_CC : BENIGN 2
Mass-Training_P_00106_RIGHT_CC : BENIGN 2
Mass-Training_P_00106_RIGHT_MLO : BENIGN 2
Mass-Training_P_00106_RIGHT_MLO : BENIGN 2
Mass-Training_P_00106_RIGHT_MLO : BENIGN 2
Mass-Training_P_00107_RIGHT_MLO : BENIGN 2
Mass-Training_P_00108_LEFT_CC : BENIGN 2
Mass-Training_P_00108_LEFT_MLO : BENIGN 2
Mass-Training_P_00109_LEFT_CC : BENIGN 2
Mass-Training_P_00110_LEFT_CC : MALIGNANT 1
Mass-Training_P_00110_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00110_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00117_LEFT_MLO : BENIGN 2
Mass-Training_P_00119_LEFT_CC : BENIGN 2
Mass-Training_P_00119_LEFT_MLO : BENIGN 2
Mass-Training_P_00120_LEFT_CC : BENIGN 2
Mass-Training_P_00120_LEFT_MLO : BENIGN 2
Mass-Training_P_00122_RIGHT_MLO : BENIGN 2
Mass-Training_P_00128_LEFT_CC : MALIGNANT 1
Mass-Training_P_00133_LEFT_CC : MALIGNANT 1
Mass-Training_P_00133_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00134_LEFT_CC : MALIGNANT 1
Mass-Training_P_00134_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00137_LEFT_CC : BENIGN 2
Mass-Training_P_00137_LEFT_MLO : BENIGN 2
Mass-Training_P_00146_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00146_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00148_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00148_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00149_LEFT_CC : MALIGNANT 1
Mass-Training_P_00149_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00160_LEFT_CC : MALIGNANT 1
Mass-Training_P_00160_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00160_RIGHT_CC : BENIGN 2
Mass-Training_P_00160_RIGHT_MLO : BENIGN 2
Mass-Training_P_00166_RIGHT_CC : BENIGN 2
Mass-Training_P_00166_RIGHT_MLO : BENIGN 2
Mass-Training_P_00169_RIGHT_MLO : BENIGN 2
Mass-Training_P_00172_LEFT_CC : MALIGNANT 1
Mass-Training_P_00172_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00174_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00174_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00175_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00175_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00187_LEFT_CC : BENIGN 2
Mass-Training_P_00187_LEFT_MLO : BENIGN 2
Mass-Training_P_00190_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00199_LEFT_CC : MALIGNANT 1
Mass-Training_P_00199_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00205_RIGHT_CC : BENIGN 2
Mass-Training_P_00205_RIGHT_MLO : BENIGN 2
Mass-Training_P_00206_RIGHT_MLO : BENIGN 2
Mass-Training_P_00207_LEFT_CC : MALIGNANT 1
Mass-Training_P_00207_LEFT_CC : MALIGNANT 1
Mass-Training_P_00207_LEFT_CC : MALIGNANT 1
Mass-Training_P_00207_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00207_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00207_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00208_RIGHT_MLO : BENIGN 2
Mass-Training_P_00215_RIGHT_CC : BENIGN 2
Mass-Training_P_00217_LEFT_CC : MALIGNANT 1
Mass-Training_P_00218_LEFT_CC : MALIGNANT 1
Mass-Training_P_00218_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00219_RIGHT_CC : BENIGN 2
Mass-Training_P_00221_RIGHT_MLO : BENIGN 2
Mass-Training_P_00224_LEFT_CC : BENIGN 2
Mass-Training_P_00224_LEFT_MLO : BENIGN 2
Mass-Training_P_00224_RIGHT_CC : BENIGN 2
Mass-Training_P_00224_RIGHT_MLO : BENIGN 2
Mass-Training_P_00225_RIGHT_CC : BENIGN 2
Mass-Training_P_00225_RIGHT_MLO : BENIGN 2
Mass-Training_P_00226_LEFT_CC : MALIGNANT 1
Mass-Training_P_00226_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00229_LEFT_CC : BENIGN 2
Mass-Training_P_00229_LEFT_MLO : BENIGN 2
Mass-Training_P_00231_LEFT_CC : BENIGN 2
Mass-Training_P_00231_LEFT_MLO : BENIGN 2
Mass-Training_P_00234_LEFT_CC : BENIGN 2
Mass-Training_P_00235_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00235_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00236_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00239_RIGHT_CC : BENIGN 2
Mass-Training_P_00239_RIGHT_MLO : BENIGN 2
Mass-Training_P_00240_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00240_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00241_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00241_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00242_RIGHT_CC : BENIGN 2
Mass-Training_P_00242_RIGHT_MLO : BENIGN 2
Mass-Training_P_00247_RIGHT_CC : BENIGN 2
Mass-Training_P_00247_RIGHT_MLO : BENIGN 2
Mass-Training_P_00248_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00254_LEFT_CC : MALIGNANT 1
Mass-Training_P_00254_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00259_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00259_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00264_LEFT_CC : MALIGNANT 1
Mass-Training_P_00264_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00265_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00265_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00273_LEFT_CC : BENIGN 2
Mass-Training_P_00279_LEFT_CC : BENIGN 2
Mass-Training_P_00281_LEFT_MLO : BENIGN 2
Mass-Training_P_00283_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00283_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00287_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00287_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00289_LEFT_CC : BENIGN 2
Mass-Training_P_00289_LEFT_MLO : BENIGN 2
Mass-Training_P_00294_LEFT_CC : BENIGN 2
Mass-Training_P_00294_LEFT_MLO : BENIGN 2
Mass-Training_P_00298_LEFT_CC : BENIGN 2
Mass-Training_P_00298_LEFT_MLO : BENIGN 2
Mass-Training_P_00303_LEFT_CC : BENIGN 2
Mass-Training_P_00303_LEFT_MLO : BENIGN 2
Mass-Training_P_00303_RIGHT_CC : BENIGN 2
Mass-Training_P_00303_RIGHT_MLO : BENIGN 2
Mass-Training_P_00304_LEFT_MLO : BENIGN 2
Mass-Training_P_00309_LEFT_CC : BENIGN 2
Mass-Training_P_00309_LEFT_CC : BENIGN 2
Mass-Training_P_00309_LEFT_MLO : BENIGN 2
Mass-Training_P_00309_LEFT_MLO : BENIGN 2
Mass-Training_P_00313_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00313_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00314_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00314_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00317_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00319_LEFT_CC : MALIGNANT 1
Mass-Training_P_00319_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00320_LEFT_CC : BENIGN 2
Mass-Training_P_00320_LEFT_MLO : BENIGN 2
Mass-Training_P_00328_LEFT_MLO : BENIGN 2
Mass-Training_P_00328_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00328_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00328_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00328_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00330_LEFT_CC : BENIGN 2
Mass-Training_P_00330_LEFT_MLO : BENIGN 2
Mass-Training_P_00332_LEFT_CC : BENIGN 2
Mass-Training_P_00332_LEFT_MLO : BENIGN 2
Mass-Training_P_00332_RIGHT_CC : BENIGN 2
Mass-Training_P_00332_RIGHT_MLO : BENIGN 2
Mass-Training_P_00333_RIGHT_CC : BENIGN 2
Mass-Training_P_00333_RIGHT_MLO : BENIGN 2
Mass-Training_P_00334_LEFT_MLO : BENIGN 2
Mass-Training_P_00335_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00342_RIGHT_CC : BENIGN 2
Mass-Training_P_00342_RIGHT_MLO : BENIGN 2
Mass-Training_P_00342_RIGHT_MLO : BENIGN 2
Mass-Training_P_00342_RIGHT_MLO : BENIGN 2
Mass-Training_P_00348_LEFT_CC : BENIGN 2
Mass-Training_P_00348_LEFT_MLO : BENIGN 2
Mass-Training_P_00351_LEFT_CC : BENIGN 2
Mass-Training_P_00351_LEFT_MLO : BENIGN 2
Mass-Training_P_00354_LEFT_CC : MALIGNANT 1
Mass-Training_P_00354_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00356_LEFT_CC : MALIGNANT 1
Mass-Training_P_00361_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00363_LEFT_CC : MALIGNANT 1
Mass-Training_P_00363_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00366_RIGHT_CC : BENIGN 2
Mass-Training_P_00366_RIGHT_MLO : BENIGN 2
Mass-Training_P_00370_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00370_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00376_RIGHT_CC : BENIGN 2
Mass-Training_P_00376_RIGHT_MLO : BENIGN 2
Mass-Training_P_00376_RIGHT_MLO : BENIGN 2
Mass-Training_P_00376_RIGHT_MLO : BENIGN 2
Mass-Training_P_00376_RIGHT_MLO : BENIGN 2
Mass-Training_P_00383_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00384_RIGHT_CC : BENIGN 2
Mass-Training_P_00384_RIGHT_MLO : BENIGN 2
Mass-Training_P_00385_RIGHT_CC : BENIGN 2
Mass-Training_P_00385_RIGHT_MLO : BENIGN 2
Mass-Training_P_00386_LEFT_CC : MALIGNANT 1
Mass-Training_P_00386_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00389_LEFT_MLO : BENIGN 2
Mass-Training_P_00396_LEFT_CC : MALIGNANT 1
Mass-Training_P_00396_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00399_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00399_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00401_LEFT_CC : BENIGN 2
Mass-Training_P_00401_LEFT_MLO : BENIGN 2
Mass-Training_P_00406_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00406_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00408_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00408_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00411_RIGHT_CC : BENIGN 2
Mass-Training_P_00411_RIGHT_MLO : BENIGN 2
Mass-Training_P_00412_RIGHT_CC : BENIGN 2
Mass-Training_P_00412_RIGHT_MLO : BENIGN 2
Mass-Training_P_00413_LEFT_CC : MALIGNANT 1
Mass-Training_P_00413_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00414_LEFT_CC : MALIGNANT 1
Mass-Training_P_00414_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00415_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00415_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00417_RIGHT_CC : BENIGN 2
Mass-Training_P_00417_RIGHT_MLO : BENIGN 2
Mass-Training_P_00419_LEFT_CC : BENIGN 2
Mass-Training_P_00419_LEFT_CC : MALIGNANT 1
Mass-Training_P_00419_LEFT_MLO : MALIGNANT 1
Mass-Training_P_00419_LEFT_MLO : BENIGN 2
Mass-Training_P_00419_RIGHT_CC : BENIGN 2
Mass-Training_P_00419_RIGHT_MLO : BENIGN 2
Mass-Training_P_00420_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00420_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00420_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00420_RIGHT_MLO : MALIGNANT 1
Mass-Training_P_00421_LEFT_CC : BENIGN 2
Mass-Training_P_00421_LEFT_MLO : BENIGN 2
Mass-Training_P_00423_RIGHT_CC : MALIGNANT 1
Mass-Training_P_00426_RIGHT_CC : BENIGN 2
Mass-Training_P_00426_RIGHT_MLO : BENIGN 2
Mass-Training_P_00427_RIGHT_MLO : BENIGN 2
Mass-Training_P_00428_LEFT_CC : BENIGN 2
Mass-Training_P_00430_LEFT_CC : BENIGN 2
Mass-Training_P_00430_LEFT_MLO : BENIGN 2
|
SIC_AI_Coding_Exercises/SIC_AI_Chapter_05_Coding_Exercises/ex_0408.ipynb
|
###Markdown
Coding Exercise 0408 1. Linear regression prediction and confidence interval:
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as st
from sklearn.linear_model import LinearRegression
from sklearn import metrics
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.1. Data: study = an array containing the hours of study (the explanatory variable); score = an array containing the test scores (the response variable).
###Code
study = np.array([ 3, 4.5, 6, 1.2, 2, 6.9, 6.7, 5.5])
score = np.array([ 88, 85, 90, 80, 81, 92, 95, 90])
n = study.size
###Output
_____no_output_____
###Markdown
1.2. Training:
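For reference, LinearRegression fits the ordinary least-squares line, so the intercept and slope read off below correspond to the closed-form estimates $\hat{\beta}_1 = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}$ and $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$.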
###Code
# Instantiate a linear regression object.
lm = LinearRegression()
# Train.
lm.fit(study.reshape(-1,1), score.reshape(-1,1))
# Get the parameters.
b0 = lm.intercept_[0]
b1 = lm.coef_[0][0]
print(b0)
print(b1)
# Calculate the in-sample RMSE.
predScore = lm.predict(study.reshape(-1,1))
mse = metrics.mean_squared_error(score, predScore)
rmse=np.sqrt(mse)
np.round(rmse,2)
###Output
_____no_output_____
###Markdown
1.3. Confidence interval and visualization:
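The StdError helper defined below implements the standard error of the estimated mean response at a new point $x_*$, $SE(\hat{y}_*) = \sqrt{MSE\left(\frac{1}{n} + \frac{(x_*-\bar{x})^2}{\sum_i (x_i-\bar{x})^2}\right)}$, and the 95% band plotted afterwards is $\hat{y}_* \pm t_{0.975,\,n-2}\,SE(\hat{y}_*)$.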
###Code
# We define here the function that calculates standard error.
# Refer to the formula given in the lecture note.
def StdError(x_star, x_vec, mse, n):
x_mean = np.mean(x_vec)
return (np.sqrt(mse*(1/n+(x_star-x_mean)**2/np.sum((x_vec-x_mean)**2))))
# y_hat : the predicted y.
# y_low : lower bound of the confidence interval (95%).
# y_up : upper bound of the confidence interval (95%).
x_star = np.linspace(1,9,10)
y_hat = b0 + b1*x_star
y_low = y_hat - st.t.ppf(0.975,n-2)*StdError(x_star,study,mse,n)
y_up = y_hat + st.t.ppf(0.975,n-2)*StdError(x_star,study,mse,n)
# Now, display.
plt.scatter(study, score, c='red')
plt.plot(x_star,y_low,c = 'blue',linestyle='--',linewidth=1)
plt.plot(x_star,y_hat,c = 'green',linewidth = 1.5, alpha=0.5)
plt.plot(x_star,y_up,c = 'blue',linestyle='--',linewidth=1)
plt.xlabel('Study')
plt.ylabel('Score')
plt.show()
###Output
_____no_output_____
|
SVM/SVM_wesad_hrv.ipynb
|
###Markdown
SVM
###Code
import pandas as pd
from sklearn import svm, metrics
from sklearn.model_selection import train_test_split
wesad_hrv = pd.read_csv(r'D:\data\wesad-chest-combined-classification-hrv.csv')  # adjust the dataset path to your local copy
wesad_hrv.columns
original_column_list = ['MEAN_RR', 'MEDIAN_RR', 'SDRR', 'RMSSD', 'SDSD', 'SDRR_RMSSD', 'HR',
'pNN25', 'pNN50', 'SD1', 'SD2', 'KURT', 'SKEW', 'MEAN_REL_RR',
'MEDIAN_REL_RR', 'SDRR_REL_RR', 'RMSSD_REL_RR', 'SDSD_REL_RR',
'SDRR_RMSSD_REL_RR', 'KURT_REL_RR', 'SKEW_REL_RR', 'VLF', 'VLF_PCT',
'LF', 'LF_PCT', 'LF_NU', 'HF', 'HF_PCT', 'HF_NU', 'TP', 'LF_HF',
'HF_LF', 'MEAN_RR_LOG', 'MEAN_RR_SQRT', 'TP_SQRT', 'MEDIAN_REL_RR_LOG',
'RMSSD_REL_RR_LOG', 'SDSD_REL_RR_LOG', 'VLF_LOG', 'LF_LOG', 'HF_LOG',
'TP_LOG', 'LF_HF_LOG', 'RMSSD_LOG', 'SDRR_RMSSD_LOG', 'pNN25_LOG',
'pNN50_LOG', 'SD1_LOG', 'KURT_YEO_JONSON', 'SKEW_YEO_JONSON',
'MEAN_REL_RR_YEO_JONSON', 'SKEW_REL_RR_YEO_JONSON', 'LF_BOXCOX',
'HF_BOXCOX', 'SD1_BOXCOX', 'KURT_SQUARE', 'HR_SQRT',
'MEAN_RR_MEAN_MEAN_REL_RR', 'SD2_LF', 'HR_LF', 'HR_HF', 'HF_VLF',
'subject id', 'condition', 'SSSQ class', 'SSSQ Label',
'condition label']
original_column_list_withoutString = ['MEAN_RR', 'MEDIAN_RR', 'SDRR', 'RMSSD', 'SDSD', 'SDRR_RMSSD', 'HR',
'pNN25', 'pNN50', 'SD1', 'SD2', 'KURT', 'SKEW', 'MEAN_REL_RR',
'MEDIAN_REL_RR', 'SDRR_REL_RR', 'RMSSD_REL_RR', 'SDSD_REL_RR',
'SDRR_RMSSD_REL_RR', 'KURT_REL_RR', 'SKEW_REL_RR', 'VLF', 'VLF_PCT',
'LF', 'LF_PCT', 'LF_NU', 'HF', 'HF_PCT', 'HF_NU', 'TP', 'LF_HF',
'HF_LF', 'MEAN_RR_LOG', 'MEAN_RR_SQRT', 'TP_SQRT', 'MEDIAN_REL_RR_LOG',
'RMSSD_REL_RR_LOG', 'SDSD_REL_RR_LOG', 'VLF_LOG', 'LF_LOG', 'HF_LOG',
'TP_LOG', 'LF_HF_LOG', 'RMSSD_LOG', 'SDRR_RMSSD_LOG', 'pNN25_LOG',
'pNN50_LOG', 'SD1_LOG', 'KURT_YEO_JONSON', 'SKEW_YEO_JONSON',
'MEAN_REL_RR_YEO_JONSON', 'SKEW_REL_RR_YEO_JONSON', 'LF_BOXCOX',
'HF_BOXCOX', 'SD1_BOXCOX', 'KURT_SQUARE', 'HR_SQRT',
'MEAN_RR_MEAN_MEAN_REL_RR', 'SD2_LF', 'HR_LF', 'HR_HF', 'HF_VLF']
selected_colum_list = ['MEAN_RR', 'MEDIAN_RR', 'SDRR', 'RMSSD', 'SDSD', 'SDRR_RMSSD', 'HR',
'pNN25', 'pNN50', 'SD1', 'SD2', 'KURT', 'SKEW', 'MEAN_REL_RR',
'MEDIAN_REL_RR', 'SDRR_REL_RR', 'RMSSD_REL_RR', 'SDSD_REL_RR',
'SDRR_RMSSD_REL_RR']
stress_data = wesad_hrv[original_column_list_withoutString]
stress_label = wesad_hrv['condition label']
stress_data
train_data, test_data, train_label, test_label = train_test_split(stress_data, stress_label)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(train_data)
X_t_train = pca.transform(train_data)
X_t_test = pca.transform(test_data)
model = svm.SVC()
model.fit(X_t_train, train_label)
predict = model.predict(X_t_test)
acc_score = metrics.accuracy_score(test_label, predict)
print(acc_score)
import pickle
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23+; use the standalone joblib package
saved_model = pickle.dumps(model)
joblib.dump(model, 'SVMmodel1.pkl')
model_from_pickle = joblib.load('SVMmodel1.pkl')
predict = model_from_pickle.predict(X_t_test)  # the saved model was trained on the PCA-transformed features
acc_score = metrics.accuracy_score(test_label, predict)
print(acc_score)
###Output
0.9998672250025533
|
Corretor.ipynb
|
###Markdown
Project: building a spell checker using Natural Language Processing Generating the tokens from the text corpus for processing
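In short, the approach built below is the classic dictionary-based corrector: generate candidate edits of a misspelled word (the gerador_palavras function) and keep the candidate with the highest corpus probability (the probabilidade function), i.e. $correction(w) = \arg\max_{c \in candidates(w)} P(c)$ with $P(c) = count(c)/N$, where $N$ is the total number of tokens in the corpus.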
###Code
with open('/content/drive/My Drive/Analise_de_dados/artigos.txt', 'r') as f:
artigos = f.read()
print(artigos[:500])
texto_exemplo = 'Olá, tudo bem?'
tokens = texto_exemplo.split()
print(tokens)
len(tokens)
import nltk
nltk.download('punkt')
palavras_separadas = nltk.tokenize.word_tokenize(texto_exemplo)
print(palavras_separadas)
def separa_palavras(lista_tokens):
lista_palavras = []
for token in lista_tokens:
if token.isalpha():
lista_palavras.append(token)
return lista_palavras
separa_palavras(palavras_separadas)
lista_tokens = nltk.tokenize.word_tokenize(artigos)
lista_palavras = separa_palavras(lista_tokens)
print(f'O número de palavras é {len(lista_palavras)}')
print(lista_palavras[:5])
def normalizacao(lista_palavras):
lista_normalizada = []
for palavra in lista_palavras:
lista_normalizada.append(palavra.lower())
return lista_normalizada
lista_normalizada = normalizacao(lista_palavras)
print(lista_normalizada[:5])
len(set(lista_normalizada))
lista = 'lgica'
(lista[:1], lista[1:])
###Output
_____no_output_____
###Markdown
Function to insert letters
###Code
palavra_exemplo = 'lgica'
def insere_letras(fatias):
novas_palavras = []
letras = 'abcdefghijklmnopqrstuvwxyzàáâãèéêìíîòóôõùúû'
for E, D in fatias:
for letra in letras:
novas_palavras.append(E + letra + D)
return novas_palavras
def gerador_palavras(palavra):
fatias = []
for i in range(len(palavra)+1):
fatias.append((palavra[:i], palavra[i:]))
palavras_geradas = insere_letras(fatias)
return palavras_geradas
palavras_geradas = gerador_palavras(palavra_exemplo)
print(palavras_geradas)
def corretor(palavra):
palavras_geradas = gerador_palavras(palavra)
palavra_correta = max(palavras_geradas, key=probabilidade)
return palavra_correta
frequencia = nltk.FreqDist(lista_normalizada)
total_palavras = len(lista_normalizada)
frequencia.most_common(10)
def probabilidade(palavra_gerada):
return frequencia[palavra_gerada]/total_palavras
probabilidade('logica')
probabilidade('lógica')
corretor(palavra_exemplo)
def cria_dados_teste(nome_arquivo):
lista_palavras_teste = []
f = open(nome_arquivo, 'r')
for linha in f:
correta, errada = linha.split()
lista_palavras_teste.append((correta, errada))
f.close()
return lista_palavras_teste
lista_teste = cria_dados_teste('/content/drive/My Drive/Analise_de_dados/palavras.txt')
lista_teste
def avaliador(testes):
numero_palavras = len(testes)
acertou = 0
for correta, errada in testes:
palavra_corrigida = corretor(errada)
if palavra_corrigida == correta:
acertou += 1
# print(f'{correta}, {errada}, {palavra_corrigida}')
taxa_acerto = acertou/numero_palavras
print(f'Taxa de acerto: {taxa_acerto*100:.2f}% de {numero_palavras} palavras')
avaliador(lista_teste)
###Output
Taxa de acerto: 1.08% de 186 palavras
###Markdown
Função para deletar letras
###Code
def deletando_caracteres(fatias):
novas_palavras = []
for E, D in fatias:
novas_palavras.append(E + D[1:])
return novas_palavras
def gerador_palavras(palavra):
fatias = []
for i in range(len(palavra)+1):
fatias.append((palavra[:i], palavra[i:]))
palavras_geradas = insere_letras(fatias)
palavras_geradas += deletando_caracteres(fatias)
return palavras_geradas
palavra_exemplo = 'lóigica'
palavras_geradas = gerador_palavras(palavra_exemplo)
print(palavras_geradas)
avaliador(lista_teste)
###Output
Taxa de acerto: 41.40% de 186 palavras
###Markdown
Function to replace letters
###Code
def insere_letras(fatias):
novas_palavras = []
letras = 'abcdefghijklmnopqrstuvwxyzàáâãèéêìíîòóôõùúû'
for E, D in fatias:
for letra in letras:
novas_palavras.append(E + letra + D)
return novas_palavras
def deletando_caracteres(fatias):
novas_palavras = []
for E, D in fatias:
novas_palavras.append(E + D[1:])
return novas_palavras
def troca_letra(fatias):
novas_palavras = []
letras = 'abcdefghijklmnopqrstuvwxyzàáâãèéêìíîòóôõùúû'
for E, D in fatias:
for letra in letras:
novas_palavras.append(E + letra + D[1:])
return novas_palavras
def gerador_palavras(palavra):
fatias = []
for i in range(len(palavra)+1):
fatias.append((palavra[:i], palavra[i:]))
palavras_geradas = insere_letras(fatias)
palavras_geradas += deletando_caracteres(fatias)
palavras_geradas += troca_letra(fatias)
return palavras_geradas
palavra_exemplo = 'lígica'
palavras_geradas = gerador_palavras(palavra_exemplo)
print(palavras_geradas)
avaliador(lista_teste)
###Output
Taxa de acerto: 76.34% de 186 palavras
###Markdown
Function to transpose (swap) adjacent letters
###Code
def insere_letras(fatias):
novas_palavras = []
letras = 'abcdefghijklmnopqrstuvwxyzàáâãèéêìíîòóôõùúû'
for E, D in fatias:
for letra in letras:
novas_palavras.append(E + letra + D)
return novas_palavras
def deletando_caracteres(fatias):
novas_palavras = []
for E, D in fatias:
novas_palavras.append(E + D[1:])
return novas_palavras
def troca_letra(fatias):
novas_palavras = []
letras = 'abcdefghijklmnopqrstuvwxyzàáâãèéêìíîòóôõùúû'
for E, D in fatias:
for letra in letras:
novas_palavras.append(E + letra + D[1:])
return novas_palavras
def inverte_letras(fatias):
novas_palavras = []
for E, D in fatias:
if len(D) > 1:
novas_palavras.append(E + D[1] + D[0] + D[2:])
return novas_palavras
def gerador_palavras(palavra):
fatias = []
for i in range(len(palavra)+1):
fatias.append((palavra[:i], palavra[i:]))
palavras_geradas = insere_letras(fatias)
palavras_geradas += deletando_caracteres(fatias)
palavras_geradas += troca_letra(fatias)
palavras_geradas += inverte_letras(fatias)
return palavras_geradas
palavra_exemplo = 'lgóica'
palavras_geradas = gerador_palavras(palavra_exemplo)
print(palavras_geradas)
avaliador(lista_teste)
def avaliador(testes, vocabulario):
numero_palavras = len(testes)
acertou = 0
desconhecida = 0
for correta, errada in testes:
palavra_corrigida = corretor(errada)
if palavra_corrigida == correta:
acertou += 1
else:
desconhecida += (correta not in vocabulario)
# print(f'{correta}, {errada}, {palavra_corrigida}')
taxa_acerto = acertou/numero_palavras*100
taxa_desconhecida = desconhecida/numero_palavras*100
print(f'Taxa de acerto: {taxa_acerto:.2f}% de {numero_palavras} palavras\nTaxa desconhecida: {taxa_desconhecida:.2f}% de {numero_palavras} palavras')
vocabulario = set(lista_normalizada)
avaliador(lista_teste, vocabulario)
###Output
Taxa de acerto: 76.34% de 186 palavras
Taxa desconhecida: 6.99% de 186 palavras
###Markdown
Testing the "turbinado" (turbo-charged) generator, which applies the single-edit generator twice so that candidates up to edit distance 2 are covered
###Code
def gerador_turbinado(palavras_geradas):
novas_palavras = []
for palavra in palavras_geradas:
novas_palavras += gerador_palavras(palavra)
return novas_palavras
palavra = "lóiigica"
palavras_g = gerador_turbinado(gerador_palavras(palavra))
"lógica" in palavras_g
len(palavras_g)
def novo_corretor(palavra):
palavras_geradas = gerador_palavras(palavra)
palavras_turbinado = gerador_turbinado(palavras_geradas)
todas_palavras = set(palavras_geradas + palavras_turbinado)
candidatos = [palavra]
for palavra in todas_palavras:
if palavra in vocabulario:
candidatos.append(palavra)
# print(len(candidatos))
palavra_correta = max(candidatos, key=probabilidade)
return palavra_correta
novo_corretor(palavra)
def avaliador(testes, vocabulario):
numero_palavras = len(testes)
acertou = 0
desconhecida = 0
for correta, errada in testes:
palavra_corrigida = novo_corretor(errada)
desconhecida += (correta not in vocabulario)
if palavra_corrigida == correta:
acertou += 1
# print(f'{correta}, {errada}, {palavra_corrigida}')
taxa_acerto = acertou/numero_palavras*100
taxa_desconhecida = desconhecida/numero_palavras*100
print(f'Taxa de acerto: {taxa_acerto:.2f}% de {numero_palavras} palavras\nTaxa desconhecida: {taxa_desconhecida:.2f}% de {numero_palavras} palavras')
vocabulario = set(lista_normalizada)
avaliador(lista_teste, vocabulario)
def avaliador(testes, vocabulario):
numero_palavras = len(testes)
acertou = 0
desconhecida = 0
for correta, errada in testes:
palavra_corrigida = novo_corretor(errada)
desconhecida += (correta not in vocabulario)
if palavra_corrigida == correta:
acertou += 1
else:
print(errada + '-' + corretor(errada) + '-' + palavra_corrigida)
# print(f'{correta}, {errada}, {palavra_corrigida}')
taxa_acerto = acertou/numero_palavras*100
taxa_desconhecida = desconhecida/numero_palavras*100
print(f'\nTaxa de acerto: {taxa_acerto:.2f}% de {numero_palavras} palavras\nTaxa desconhecida: {taxa_desconhecida:.2f}% de {numero_palavras} palavras')
vocabulario = set(lista_normalizada)
avaliador(lista_teste, vocabulario)
###Output
esje-esse-se
sãêo-são-não
dosa-dos-do
eme-em-de
eàssa-essa-esse
daõs-das-da
céda-cada-da
noâ-no-o
enêão-então-não
tĩem-tem-em
nossah-nossa-nosso
teb-tem-de
atĩ-até-a
âem-em-de
foo-foi-o
serr-ser-se
entke-entre-então
van-vai-a
çeus-seus-seu
eû-e-de
temeo-tempo-temos
semre-sempre-ser
elaá-ela-ele
síó-só-se
siàe-site-se
seém-sem-em
peln-pelo-ele
aléra-alura-agora
tdia-dia-da
jé-é-de
sãô-são-não
odos-dos-do
siua-sua-seu
elpe-ele-esse
teos-temos-os
eũsa-essa-esse
vjmos-vamos-temos
dms-dos-de
cava-java-para
ános-nos-no
èaso-caso-as
túem-tem-em
daáos-dados-dos
nossk-nosso-nosso
tãer-ter-ser
vté-até-é
búm-bem-um
sçerá-será-ser
entró-entre-então
uai-vai-a
sâus-seus-seu
ìeu-seu-de
fual-qual-sua
elal-ela-ele
skó-só-se
secm-sem-em
aluéa-alura-além
dil-dia-de
sód-só-se
eúaa-aeúaa-essa
ró-só-de
dĩaz-adĩaz-da
correptor-corretor-correto
trtica-tática-prática
ewpoderamento-aewpoderamento-ewpoderamento
îgato-gato-fato
cakvalo-acakvalo-carvalho
canelac-acanelac-janela
tênisy-atênisy-tênisy
anciosa-aanciosa-ansioso
ancciosa-aancciosa-ancciosa
ansioa-aansioa-ensina
asterístico-aasterístico-asterístico
entertido-aentertido-entendido
ritimo-ritmo-ótimo
indiota-aindiota-indica
tomare-tomar-tomar
seje-seja-se
provalecer-aprovalecer-prevalece
esteje-esteja-este
mindigo-amindigo-indico
pertubar-apertubar-derrubar
Taxa de acerto: 55.91% de 186 palavras
Taxa desconhecida: 6.99% de 186 palavras
|
research/data/Simple prototype.ipynb
|
###Markdown
Training a prototype neural network for scoring person and job.
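Each (person, profession) pair is scored from the two features assembled below: the Knowledge Graph resultScore returned for the person's entity, and the mean cosine similarity $sim(u,v) = \frac{u \cdot v}{\lVert u\rVert\,\lVert v\rVert}$ between the candidate profession's embedding and the embeddings of all professions listed for that person in profession.kb.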
###Code
import tensorflow as tf
import numpy as np
import random as r
import json
import pickle as p
from matplotlib import pyplot as plt
from urllib import parse as urlparse
from urllib import request as urlreq
from os import path
from collections import defaultdict as dd
%matplotlib inline
# Create person-ids dictionary pickle
names_ids_dict = {}
if not path.exists('people_ids.pickle'):
with open('persons','r') as names_ids_file:
for names_ids in names_ids_file.readlines():
name, ids = names_ids.strip().split('\t')
names_ids_dict[name] = '/' + ids.replace('.', '/')
with open('people_ids.pickle', 'wb') as pfile:
p.dump(names_ids_dict, pfile)
else:
with open('people_ids.pickle', 'rb') as names_ids_pfile:
names_ids_dict = p.load(names_ids_pfile)
api_key = open('../.knowledge_graph_api_key').read()
params = {
'indent': True,
'key': api_key,
}
service_url = 'https://kgsearch.googleapis.com/v1/entities:search?'
def get_score(ids):
params['ids'] = ids
url = service_url + urlparse.urlencode(params)
with urlreq.urlopen(url) as response:
data = json.loads(response.read().decode('utf8'))
info = data['itemListElement'][0]
return info['resultScore']
names_ids_dict['Alfred Einstein']
names_ids_dict['Albert Einstein']
names_scores = {}
with open('profession.train') as labeled_data:
for sample in labeled_data.readlines():
name, job, label = sample.strip().split('\t')
ids = names_ids_dict[name]
score = get_score(ids)
names_scores[(name, job)] = (score, label)
names_ids_dict['Barack Obama']
names_scores['Barack Obama']
names_scores['Albert Einstein']
with open('dict_name_cnt.pickle', 'rb') as pfile:
names_freq = p.load(pfile)
names_freq['Albert Einstein']
names_freq['Barack Obama']
max(names_freq.values())
with open('./profession_w2v.pickle', 'rb') as f:
profession_w2v = p.load(f)
names_joblist = dd(list)
with open('./profession.kb', 'r') as f:
for sample in f.readlines():
name, job = sample.strip().split('\t')
names_joblist[name].append(job)
len(names_joblist)
names_joblist['Albert Einstein']
len(names_scores)
profession_w2v['Professor']
profession_w2v['Theoretical Physicist']
def similarity(w1, w2):
v1 = w1 / np.sqrt(sum(i*i for i in w1))
v2 = w2 / np.sqrt(sum(i*i for i in w2))
return np.dot(v1, v2)
similarity(profession_w2v['Theoretical Physicist'], profession_w2v['Professor'])
for j in names_joblist['Albert Einstein']:
print(similarity(profession_w2v['Professor'], profession_w2v[j]))
0.45407 + 0.576939 + 0.485823 + 0.467878 + 0.59476
names_scores[('Albert Einstein', 'Theoretical Physicist')]
train_names = r.sample(list(names_scores.keys()), 412)  # random.sample needs a sequence, not a dict view
names_scores[('Mark Ciardi', 'Film Producer')]
test_names = list()
for i in names_scores.keys():
if i not in train_names:
test_names.append(i)
len(train_names)
train_names[1]
train_data = np.ndarray(shape=(412,2), dtype=np.float32)
train_label = np.ndarray(shape=(412,8), dtype=np.float32)
train_label = np.zeros_like(train_label)
for i, name_job in enumerate(train_names):
name, job = name_job
pr_score, score = names_scores[name_job]
job_sim = 0.0
for jobs in names_joblist[name]:
job_sim += similarity(profession_w2v[job], profession_w2v[jobs])
job_sim /= len(names_joblist[name])
train_data[i] = [pr_score, job_sim]
train_label[i][int(score)] = 1
train_names[1]
train_data[1]
names_joblist['Mark Ciardi']
train_label[1]
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
X, y = train_data, train_label
predictor = OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y)
predictor.predict(X[0:20])
x = tf.placeholder(tf.float32, shape=[None, 2])
y_ = tf.placeholder(tf.float32, shape=[None, 8])
W1 = tf.Variable(tf.zeros([2,8]))
b = tf.Variable(tf.zeros([8]))
y = tf.nn.relu(tf.matmul(x, W1) + b)
xentropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))  # keyword arguments are required in TF >= 1.5
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(xentropy)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
for i in range(10000):
sess.run([train_step], feed_dict={x: train_data, y_: train_label})
result = sess.run([y], feed_dict={x: train_data})[0]
prscores = [i for i,j in train_data]
mprs = max(prscores)
prscores = [i / mprs for i in prscores]
for i, _ in enumerate(train_data):
train_data[i][0] = prscores[i]
train_data[6]
train_label[6]
from sklearn.metrics import accuracy_score
result = result[0]
result[1]
result[0:20]
np.argmax(result[0])
[np.argmax(i) for i in result[0:10]]
[np.argmax(i) for i in result[0:200]]
train_data[0:10]
[np.argmax(i) for i in train_label[0:10]]
c = 0
for i in train_label:
if np.argmax(i) == 7:
c += 1
c
train_names[3]
names_joblist['Ewan McGregor']
###Output
_____no_output_____
|
demo/clustering.ipynb
|
###Markdown
Prepare data
###Code
from sklearn.datasets import make_blobs
from sklearn.preprocessing import MinMaxScaler

def prepare_data():
X, y = make_blobs(n_samples=300, centers=7, n_features=2, cluster_std=0.6, random_state=0)
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
return X, y
###Output
_____no_output_____
###Markdown
Training
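The cell below places a Normal-Inverse-Wishart base measure on each Gaussian cluster's parameters; in the usual parametrization, $\Sigma \sim \mathcal{W}^{-1}(\Lambda_0, \nu_0)$ and $\mu \mid \Sigma \sim \mathcal{N}(\mu_0, \Sigma/\kappa_0)$, while alpha is the Dirichlet-process concentration controlling how readily new clusters are created during sampling.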
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# NormInvWish and DPMM are assumed to be provided by this demo's own package.

# prepare data
X, y = prepare_data()
sns.scatterplot(x=X[:, 0], y=X[:, 1], hue=y, palette="Set2")
plt.show()
mu_0 = np.mean(X, axis=0)
kappa_0 = 1.0
Lam_0 = np.eye(2) * 10
nu_0 = 2
alpha = 3.0
prob = NormInvWish(mu_0, kappa_0, Lam_0, nu_0)
model = DPMM(prob, alpha, max_iter=300, random_state=0, verbose=True)
model.fit(X)
###Output
iter=init -- New label created: 0
iter=init -- New label created: 1
iter=init -- New label created: 2
iter=init -- New label created: 3
iter=1 -- New label created: 4
iter=1 -- New label created: 5
iter=1 -- New label created: 6
iter=1 -- New label created: 7
iter=2 -- New label created: 8
iter=2 -- New label created: 9
iter=2 -- New label created: 10
iter=2 -- Label deleted: 6
iter=2 -- Label deleted: 6
iter=3 -- Label deleted: 4
iter=3 -- Label deleted: 7
iter=4 -- New label created: 7
iter=4 -- New label created: 8
iter=4 -- Label deleted: 1
iter=4 -- Label deleted: 5
iter=5 -- New label created: 7
iter=5 -- New label created: 8
iter=6 -- New label created: 9
iter=6 -- Label deleted: 8
iter=7 -- Label deleted: 1
iter=12 -- Label deleted: 5
iter=108 -- New label created: 7
iter=109 -- Label deleted: 7
========== Finished! ==========
best_iter=202 -- n_labels: 7
###Markdown
Result
###Code
sns.scatterplot(x=X[:, 0], y=X[:, 1], hue=model.labels_, palette="Set2")
plt.show()
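# (Added sketch, not part of the original demo.) Since the ground-truth blob labels y
# are available here, the recovered partition can also be checked externally with the
# adjusted Rand index from scikit-learn:
from sklearn.metrics import adjusted_rand_score
print("ARI:", adjusted_rand_score(y, model.labels_))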
###Output
_____no_output_____
|
Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/Exercise4_Question.ipynb
|
###Markdown
###Code
import tensorflow as tf
import os
import zipfile
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab happy-or-sad.zip from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/happy-or-sad.zip"
zip_ref = zipfile.ZipFile(path, 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()
# GRADED FUNCTION: train_happy_sad_model
def train_happy_sad_model():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
DESIRED_ACCURACY = 0.999
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>DESIRED_ACCURACY):
print("Reached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
# This Code Block should Define and Compile the Model. Please assume the images are 150 X 150 in your implementation.
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
# This code block should create an instance of an ImageDataGenerator called train_datagen
# And a train_generator by calling train_datagen.flow_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1/255)
# Please use a target_size of 150 X 150.
train_generator = train_datagen.flow_from_directory(
"/tmp/h-or-s",
target_size=(150, 150),
batch_size=10,
class_mode='binary')
# Expected output: 'Found 80 images belonging to 2 classes'
# This code block should call model.fit_generator and train for
# a number of epochs.
# model fitting
history = model.fit_generator(
train_generator,
steps_per_epoch=2,
epochs=15,
verbose=1,
callbacks=[callbacks])
# model fitting
return history.history['acc'][-1]
# The Expected output: "Reached 99.9% accuracy so cancelling training!""
train_happy_sad_model()
###Output
_____no_output_____
|