markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Adding, Removing Columns, Combining `DataFrames`/`Series`

It is all well and good when you already have a `DataFrame` filled with data, but it is also important to be able to add to the data that you have.

We add a new column simply by assigning data to a column that does not already exist. Here we use the `.loc[:, 'COL_NAME']` notation and store the output of `get_prices()` (from which we extract a pandas `Series` for a single security) there. This is the method that we would use to add a `Series` to an existing `DataFrame`. | securities = get_securities(symbols="AAPL", vendors='usstock')
securities
AAPL = securities.index[0]
s_1 = get_prices("usstock-free-1min", data_frequency="daily", sids=AAPL, start_date=start, end_date=end, fields='Close').loc["Close"][AAPL]
prices.loc[:, AAPL] = s_1
prices.head(5) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
It is also just as easy to remove a column. | prices = prices.drop(AAPL, axis=1)
prices.head(5) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Time Series Analysis with pandas

Using the built-in statistics methods for `DataFrames`, we can perform calculations on multiple time series at once! The code to perform calculations on `DataFrames` here is almost exactly the same as the methods used for `Series` above, so don't worry about re-learning everything.

The `plot()` method makes another appearance here, this time with a built-in legend that corresponds to the names of the columns that you are plotting. | prices.plot()
plt.title("Collected Stock Prices")
plt.ylabel("Price")
plt.xlabel("Date"); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
The same statistical functions from our interactions with `Series` resurface here with the addition of the `axis` parameter. By specifying the `axis`, we tell pandas to calculate the desired function along either the rows (`axis=0`) or the columns (`axis=1`). We can easily calculate the mean of each column like so: | prices.mean(axis=0) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
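For completeness, here is a quick sketch of the `axis=1` case mentioned above (not part of the original lecture): averaging across the columns gives one value per row, i.e. a per-date mean across the securities in `prices`.

```python
# Mean across columns (securities) for each date in the index
prices.mean(axis=1).head(5)
```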
As well as the standard deviation: | prices.std(axis=0) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Again, the `describe()` function will provide us with summary statistics of our data if we would rather have all of our typical statistics in a convenient visual instead of calculating them individually. | prices.describe() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can scale and add scalars to our `DataFrame`, as you might suspect after dealing with `Series`. This again works element-wise. | (2 * prices - 50).head(5) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Here we use the `pct_change()` method to get a `DataFrame` of the multiplicative returns of the securities that we are looking at. | mult_returns = prices.pct_change()[1:]
mult_returns.head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
If we use our statistics methods to standardize the returns, a common procedure when examining data, then we can get a better idea of how they all move relative to each other on the same scale. | norm_returns = (mult_returns - mult_returns.mean(axis=0))/mult_returns.std(axis=0)
norm_returns.loc['2014-01-01':'2015-01-01'].plot(); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
This makes it easier to compare the motion of the different time series contained in our example. Rolling means and standard deviations also work with `DataFrames`. | rolling_mean = prices.rolling(30).mean()
rolling_mean.columns = prices.columns
rolling_mean.plot()
plt.title("Rolling Mean of Prices")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend(); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
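The cell above computes only the rolling mean; since the text also mentions rolling standard deviations, here is a minimal sketch assuming the same `prices` DataFrame:

```python
# 30-day rolling standard deviation of each price series
rolling_std = prices.rolling(30).std()
rolling_std.plot()
plt.title("Rolling Std. Dev. of Prices")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
```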
Metacal Log JSON data

| columns | Description |
|---------------------|------------------------------------------------------------|
| tract | |
| patch | |
| cputime | CPU time returned by SRS |
| cputimeseconds | CPU time returned by SRS in seconds |
| deblendedsources | Number of deblended sources |
| metacalmax_success | True if processDeblendedCoaddsMetacalMax ended successfully |
| metacalmax_time | Running time (minutes) of processDeblendedCoaddsMetacalMax |
| metacalmax_timeper | Running time (seconds) per source |
| ngmixmax_success | True if processDeblendedCoaddsNGMixMax ended successfully |
| ngmixmax_time | Running time (minutes) of processDeblendedCoaddsNGMixMax |
| ngmixmax_timeper | Running time (seconds) per source |
| maxfev | Set to True if a "calls to function has reached maxfev" message was logged |
| maxfevstr | If maxfev==True, stores the full log message |
| slots | Number of cores used for this job; expected to always be 1 |
| skiptract | Set to True if a "Skipping tract" message was logged |
| skiptracttstr | If skiptract==True, stores the full log message |

| # Read metacal log data
df = pd.DataFrame()
#df = pd.read_json('/global/cfs/cdirs/lsst/groups/CO/heatherk/Run2.2i/metacal/metacalEval/data/metacal_logs.json', convert_dates=False)
df = pd.read_json('../data/metacal_logs.json', convert_dates=False)
# Read coadd ?,?_nImage.fits data
df_coadds = pd.DataFrame()
# DataFrame.append returns a new frame, so assign the result back to keep the rows
df_coadds = df_coadds.append(pd.read_json('../data/g_band.json'))
df_coadds = df_coadds.append(pd.read_json('../data/i_band.json'))
df_coadds = df_coadds.append(pd.read_json('../data/r_band.json'))
df_coadds = df_coadds.append(pd.read_json('../data/i_band.json'))
#with open('/global/cfs/cdirs/lsst/groups/CO/heatherk/Run2.2i/metacal/metacalEval/data/metacal_logs.json') as f:
# data = json.load(f)
# print(data)
print(df)
df['cpuminutes'] = df.apply(lambda row: row.cpuseconds/60.0, axis = 1)
columns = sorted(list(df))
print(columns)
print(df.shape) | (3506, 32)
| BSD-3-Clause | notebooks/metacal_stats.ipynb | heather999/metacallEval |
CPU Time vs Number of Deblended Sources | df.loc[(df['metacalmax_success']==True)&(df['ngmixmax_success']==True),"deblendedsources"].max()
# Focus on jobs where both processDeblendedCoaddsMetacalMax and processDeblendedCoaddsNGMixMax ran to completion successfully
successful_jobs = df.loc[(df['metacalmax_success'] == True)&(df['ngmixmax_success']==True)]
successful_jobs.loc[(successful_jobs['metacalmax_success']==True)&(successful_jobs['ngmixmax_success']==True),"metacalmax_time"].max()
successful_jobs.loc[(successful_jobs['metacalmax_success']==True)&(successful_jobs['ngmixmax_success']==True),"metacalmax_time"].min()
successful_jobs.loc[(successful_jobs['metacalmax_success']==True)&(successful_jobs['ngmixmax_success']==True),"metacalmax_time"].median()
# shamelessly "borrowed"
def hist_hover(dataframe, column, colors=["SteelBlue", "Tan"], bins=30, log_scale=False, show_plot=True):
# build histogram data with Numpy
hist, edges = np.histogram(dataframe[column], bins = bins)
hist_df = pd.DataFrame({column: hist,
"left": edges[:-1],
"right": edges[1:]})
hist_df["interval"] = ["%d to %d" % (left, right) for left,
right in zip(hist_df["left"], hist_df["right"])]
# bokeh histogram with hover tool
if log_scale == True:
hist_df["log"] = np.log(hist_df[column])
        src = bkh.ColumnDataSource(hist_df)
        plot = bkh.figure(plot_height = 600, plot_width = 600,
title = "Histogram of {}".format(column.capitalize()),
x_axis_label = column.capitalize(),
y_axis_label = "Log Count")
plot.quad(bottom = 0, top = "log",left = "left",
right = "right", source = src, fill_color = colors[0],
line_color = "black", fill_alpha = 0.7,
hover_fill_alpha = 1.0, hover_fill_color = colors[1])
else:
src = bkh.ColumnDataSource(hist_df)
plot = bkh.figure(plot_height = 600, plot_width = 600,
title = "Histogram of {}".format(column.capitalize()),
x_axis_label = column.capitalize(),
y_axis_label = "Count")
plot.quad(bottom = 0, top = column,left = "left",
right = "right", source = src, fill_color = colors[0],
line_color = "black", fill_alpha = 0.7,
hover_fill_alpha = 1.0, hover_fill_color = colors[1])
# hover tool
hover = bkhmodels.HoverTool(tooltips = [('Interval', '@interval'),
('Count', str("@" + column))])
plot.add_tools(hover)
# output
if show_plot == True:
bkh.show(plot)
else:
return plot
# There were some jobs where number of deblended sources was NaN - need to look at that, but for now, just discarding
nonan_jobs = successful_jobs.loc[(successful_jobs['deblendedsources'].notna())]
hist_hover(nonan_jobs.fillna(value=-1,axis=1),"deblendedsources", bins=100)
hist_hover(nonan_jobs, "cpuseconds", bins=100)
hist_hover(nonan_jobs, "cpuminutes", bins=100)
hist_hover(nonan_jobs, "metacalmax_time", bins=100)
def hist2d_hover(dataframe, xcol, ycol, title, xaxis, yaxis, colors=["SteelBlue", "Tan"], bins=30, show_plot=True):
p = bkh.figure()
p.scatter(x=xcol, y=ycol,
source=dataframe,
size=10, color='green')
p.title.text = title
p.xaxis.axis_label = xaxis
p.yaxis.axis_label = yaxis
    hover = bkhmodels.HoverTool()
hover.tooltips=[
('CPUseconds', '@cpuseconds'),
('tract', '@tract'),
('patch', '@patch'),
('metacalMax Time (min)', '@metacalmax_time'),
('ngmixMax Time (min)', '@ngmixmax_time')
]
p.add_tools(hover)
if show_plot == True:
        bkh.show(p)
else:
return p
hist2d_hover(nonan_jobs,'metacalmax_time', 'deblendedsources', "metacalMax Time vs Deblended Sources", "metacalMax Time (min)", "Number of Deblended Sources" )
hist2d_hover(nonan_jobs,'ngmixmax_time', 'deblendedsources', "ngmixMax Time vs Deblended Sources", "ngmixMax Time (min)", "Number of Deblended Sources" )
hist2d_hover(nonan_jobs,'cpuminutes', 'deblendedsources', "Total CPU Time vs Deblended Sources", "CPU Time (min)", "Number of Deblended Sources" ) | _____no_output_____ | BSD-3-Clause | notebooks/metacal_stats.ipynb | heather999/metacallEval |
Extended data frame | df_new['duration'] = (df_new.deadline-df_new.launched_at)/(3600*24)
df_new['duration'] = df_new['duration'].round(2)
df_new['goal_usd'] = df_new['goal'] * df_new['static_usd_rate']
df_new['goal_usd'] = df_new['goal_usd'].round(2)
#df_new['launched_at_full'] = pd.to_datetime(df_new['launched_at'], unit='s')
df_new['launched_at_full'] = pd.to_datetime(df_new['launched_at'], unit='s')
df_new['launched_at_year'] = pd.DatetimeIndex(df_new['launched_at_full']).year
df_new['launched_at_month'] = pd.DatetimeIndex(df_new['launched_at_full']).month
df_new['created_at_full'] = pd.to_datetime(df_new['created_at'], unit='s')
df_new['created_at_year'] = pd.DatetimeIndex(df_new['created_at_full']).year
df_new['created_at_month'] = pd.DatetimeIndex(df_new['created_at_full']).month
df_new['deadline_full'] = pd.to_datetime(df_new['deadline'], unit='s')
df_new['deadline_year'] = pd.DatetimeIndex(df_new['deadline_full']).year
df_new['deadline_month'] = pd.DatetimeIndex(df_new['deadline_full']).month
from math import isnan
category_dict = pd.Series(df_new['category_name'].values,index=df_new['category_id']).to_dict()
def parent_cat_mapper(row):
if isnan(row['category_parent_id']):
return row['category_name']
else:
return category_dict[row['category_parent_id']]
category_parent_name = df_new.apply(parent_cat_mapper, axis=1)
df_new['category_parent_name'] = category_parent_name
df_new.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 209222 entries, 0 to 209221
Columns: 108 entries, backers_count to category_parent_name
dtypes: bool(5), datetime64[ns](3), float64(18), int64(16), object(66)
memory usage: 165.4+ MB
| MIT | kickstarter_02_preparation.ipynb | dominikmn/ds-kickstarter-project |
Save frame | save_dataframe(df_new, './data_frame_full_2021-03-12.pickle') | _____no_output_____ | MIT | kickstarter_02_preparation.ipynb | dominikmn/ds-kickstarter-project |
Reduced data frame | for i , val in df_new.iloc[60060,:].items():
print(i)
print(val)
print()
survival_lst = ['backers_count', 'blurb', 'country', 'created_at', 'currency', 'deadline','disable_communication', 'goal', 'launched_at','name', 'staff_pick','state',
'usd_pledged','usd_type','category_id','category_name','category_slug','category_parent_id', 'category_parent_name', 'location_id', 'location_name','location_type',
'photo_key', 'photo_full', 'duration', 'goal_usd',
'launched_at_full', 'launched_at_year', 'launched_at_month', 'created_at_full', 'created_at_year', 'created_at_month', 'deadline_full', 'deadline_year', 'deadline_month']
df_eda = df_new[survival_lst]
save_dataframe(df_eda, './data_frame_small_2021-03-12.pickle')
df_eda.head(2) | _____no_output_____ | MIT | kickstarter_02_preparation.ipynb | dominikmn/ds-kickstarter-project |
Tutorial 4: A two-asset HANK model

In this notebook we solve the two-asset HANK model from Auclert, Bardóczy, Rognlie, Straub (2021): "Using the Sequence-Space Jacobian to Solve and Estimate Heterogeneous-Agent Models" ([link to paper](https://www.bencebardoczy.com/publication/sequence-jacobian/sequence-jacobian.pdf)).

New concepts:

- **Solved block**: an extension of simple blocks that enables much more efficient DAG representations of large macro models.
- **Re-using saved Jacobians**: as the cost of these computations becomes non-trivial, avoiding redundancy becomes key.
- **Fine-tuning options**: how to access and modify various options for each (block, method) pair.

For more examples and information on the SSJ toolkit, please visit our [GitHub page](https://github.com/shade-econ/sequence-jacobian). | import numpy as np
import matplotlib.pyplot as plt
from sequence_jacobian import simple, solved, combine, create_model # functions
from sequence_jacobian import grids, hetblocks # modules | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
1 Model description

The household problem is characterized by the Bellman equation

$$\begin{align} \tag{1}V_t(e, b_{-}, a_{-}) = \max_{c, b, a} &\left\{\frac{c^{1-\sigma}}{1-\sigma} + \beta \mathbb{E}_t V_{t+1}(e', b, a) \right\}\\c + a + b &= z_t(e) + (1 + r_t^a)a_{-} + (1 + r_t^b)b_{-} - \Psi(a, a_{-}) \\a &\geq \underline{a}, \quad b \geq \underline{b},\end{align}$$

where $z_t(e)$ is labor income and the adjustment cost function is specified as

$$\Psi(a, a_{-}) = \frac{\chi_1}{\chi_2}\left|\frac{a - (1 + r_t^a) a_{-}}{(1 + r_{t}^a) a_{-} + \chi_0}\right|^{\chi_2} \left[(1 + r_t^a) a_{-} + \chi_0 \right],$$

with $\chi_0, \chi_1 > 0$ and $\chi_2 > 1.$ For the full description of the model, including the problems of the other agents, please see appendix B.3 of the paper.

We embed this household block in a New Keynesian model with sticky prices, sticky wages, and capital adjustment costs. Thanks to the **solved blocks** (in green), we can write a DAG for this model in just 3 unknowns $(r, w, Y)$ and 3 targets: asset market clearing, Fisher equation, wage Phillips curve.

2 Define solved blocks

Solved blocks are miniature models embedded as blocks inside of our larger model. Like simple blocks, solved blocks correspond to aggregate equilibrium conditions: they map sequences of aggregate inputs directly into sequences of aggregate outputs. The difference is that in the case of simple blocks, this mapping has to be analytical, while solved blocks are designed to accommodate implicit relationships that can only be evaluated numerically. Such implicit mappings between variables become more common as macro complexity increases. Solved blocks are a valuable tool to simplify the DAG of large macro models.

2.1 Price setting (NKPC-p)

The Phillips curve characterizes $(\pi)$ conditional on $(Y, mc, r):$

$$\log(1+\pi_t) = \kappa_p \left(mc_t - \frac{1}{\mu_p} \right) + \frac{1}{1+r_{t+1}} \frac{Y_{t+1}}{Y_t} \log(1+\pi_{t+1})$$

Inflation shows up with two different time displacements, which means that inflation in any given period depends on the entire sequence of $(Y, mc, r)$. Simple blocks are not meant to represent such relationships. Instead, we write a function that returns the residual of the equation, and use the decorator `@solved` to make it into a `SolvedBlock`. | @solved(unknowns={'pi': (-0.1, 0.1)}, targets=['nkpc'], solver="brentq")
def pricing_solved(pi, mc, r, Y, kappap, mup):
nkpc = kappap * (mc - 1/mup) + Y(+1) / Y * (1 + pi(+1)).apply(np.log) / \
(1 + r(+1)) - (1 + pi).apply(np.log)
return nkpc | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
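Looking back at the household problem in Section 1, here is a small illustrative sketch (not part of the original notebook) of the portfolio adjustment cost Ψ written directly from the formula above; `chi0`, `chi1`, `chi2` and the return `ra` correspond to the model parameters that appear later in the calibration.

```python
import numpy as np

def adjustment_cost(a, a_lag, ra, chi0, chi1, chi2):
    """Psi(a, a_) from the Bellman problem in equation (1), evaluated elementwise."""
    base = (1 + ra) * a_lag + chi0
    return chi1 / chi2 * np.abs((a - (1 + ra) * a_lag) / base) ** chi2 * base
```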
When our routines encounter a solved block in `blocks`, they compute its Jacobian via the implicit function theorem, as if it were a model on its own. Given the Jacobian, the rest of the code applies without modification.

2.2 Equity price (equity & dividend)

The no-arbitrage condition characterizes $(p)$ conditional on $(d, p, r).$

$$p_t = \frac{d_{t+1} + p_{t+1}}{1 + r_{t+1}}$$ | @solved(unknowns={'p': (5, 15)}, targets=['equity'], solver="brentq")
def arbitrage_solved(div, p, r):
equity = div(+1) + p(+1) - p * (1 + r(+1))
return equity | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
2.3 Investment with adjustment costs (prod)

Sometimes multiple equilibrium conditions can be combined in a self-contained solved block. Investment subject to capital adjustment costs is such a case. In particular, we can use the following four equations to solve for $(K, Q)$ conditional on $(Y, w, r)$.

- Production: $$ Y_t = Z_t K_{t-1}^\alpha N_t^{1-\alpha} $$
- Labor demand: $$ w_t = (1-\alpha)\frac{Y_t}{N_t} mc_t $$
- Investment equation: $$Q_t = 1 + \frac{1}{\delta \epsilon_I}\left(\frac{K_t-K_{t-1}}{K_{t-1}}\right)$$
- Valuation equation: $$(1+r_{t})Q_{t} = \alpha Z_{t+1} \left(\frac{N_{t+1}}{K_t}\right)^{1-\alpha} mc_{t+1} - \left[\frac{K_{t+1}}{K_t} - (1-\delta) + \frac{1}{2\delta \epsilon_I}\left(\frac{K_{t+1} - K_t}{K_t}\right)^2\right] + \frac{K_{t+1}}{K_t}Q_{t+1}$$

Solved blocks that contain multiple simple blocks have to be initialized with the `CombinedBlock.solved` method instead of the decorator `@solved`. | @simple
def labor(Y, w, K, Z, alpha):
N = (Y / Z / K(-1) ** alpha) ** (1 / (1 - alpha))
mc = w * N / (1 - alpha) / Y
return N, mc
@simple
def investment(Q, K, r, N, mc, Z, delta, epsI, alpha):
inv = (K / K(-1) - 1) / (delta * epsI) + 1 - Q
val = alpha * Z(+1) * (N(+1) / K) ** (1 - alpha) * mc(+1) -\
(K(+1) / K - (1 - delta) + (K(+1) / K - 1) ** 2 / (2 * delta * epsI)) +\
K(+1) / K * Q(+1) - (1 + r(+1)) * Q
return inv, val
production = combine([labor, investment]) # create combined block
production_solved = production.solved(unknowns={'Q': 1., 'K': 10.}, # turn it into solved block
targets=['inv', 'val'],
solver='broyden_custom') | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
3 Build DAGs

One for transition dynamics (pictured above) and one for calibrating the steady state.

Step 1: Adapt HA block

We developed an efficient backward iteration function to solve the Bellman equation in (1). Although we view this as a contribution on its own, discussing the algorithm goes beyond the scope of this notebook. If you are interested in how we solve a two-asset model with convex portfolio-adjustment costs in discrete time, please see appendix E of the paper for a detailed description and `sequence_jacobian/hetblocks/hh_twoasset.py` for the implementation.

Here, we take this generic two-asset model off the shelf and embed it in our New Keynesian model with the help of two hetinputs. | def make_grids(bmax, amax, kmax, nB, nA, nK, nZ, rho_z, sigma_z):
b_grid = grids.agrid(amax=bmax, n=nB)
a_grid = grids.agrid(amax=amax, n=nA)
k_grid = grids.agrid(amax=kmax, n=nK)[::-1].copy()
e_grid, _, Pi = grids.markov_rouwenhorst(rho=rho_z, sigma=sigma_z, N=nZ)
return b_grid, a_grid, k_grid, e_grid, Pi
def income(e_grid, tax, w, N):
z_grid = (1 - tax) * w * N * e_grid
return z_grid
hh = hetblocks.hh_twoasset.hh
hh_ext = hh.add_hetinputs([income, make_grids]) | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
Step 2: Complete dynamic DAG with simple blocks

We have set up all the blocks in the `sequence_jacobian/examples/two_asset.py` module. We omit the step-by-step discussion of these blocks since they should be familiar from the other model notebooks. | import sequence_jacobian.examples.two_asset as m
blocks = [hh_ext, production_solved, pricing_solved, arbitrage_solved,
m.dividend, m.taylor, m.fiscal, m.share_value,
m.finance, m.wage, m.union, m.mkt_clearing]
hank = create_model(blocks, name='Two-Asset HANK')
print(*hank.blocks, sep='\n') | <SolvedBlock 'labor_to_investment_combined_solved'>
<SolvedBlock 'pricing_solved'>
<SimpleBlock 'wage'>
<SimpleBlock 'taylor'>
<SimpleBlock 'dividend'>
<SolvedBlock 'arbitrage_solved'>
<SimpleBlock 'share_value'>
<SimpleBlock 'finance'>
<SimpleBlock 'fiscal'>
<HetBlock 'hh' with hetinput 'make_grids_marginal_cost_grid' and with hetoutput `adjustment_costs'>
<SimpleBlock 'union'>
<SimpleBlock 'mkt_clearing'>
| MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
Step 3: Complete calibration DAG

Analytical:
- find TFP `Z` to hit target for output `Y`
- find markup `mup` to hit target for total wealth `p + Bg`
- find capital share `alpha` to hit target for capital `K`
- find wage `w` to hit Phillips curve given zero inflation
- find disutility of labor `vphi` to hit wage Phillips curve given a target for employment

Numerical:
- find discount factor `beta` to satisfy asset market clearing given an interest rate `r`
- find adjustment cost scale `chi1` to hit target for average liquid wealth `Bh` | blocks_ss = [hh_ext, m.partial_ss, m.union_ss,
m.dividend, m.taylor, m.fiscal, m.share_value, m.finance, m.mkt_clearing]
hank_ss = create_model(blocks_ss, name='Two-Asset HANK SS')
print(hank_ss)
print(f"Inputs: {hank_ss.inputs}")
| <Model 'Two-Asset HANK SS'>
Inputs: ['beta', 'eis', 'chi0', 'chi1', 'chi2', 'N', 'bmax', 'amax', 'kmax', 'nB', 'nA', 'nK', 'nZ', 'rho_z', 'sigma_z', 'Y', 'K', 'r', 'tot_wealth', 'Bg', 'delta', 'muw', 'frisch', 'pi', 'kappap', 'epsI', 'rstar', 'phi', 'G', 'Bh', 'omega']
| MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
4 Results

We cover how to pass precomputed Jacobians to the main methods. This is useful when methods that need Jacobians are used repeatedly. These are

- Solve methods: `solve_impulse_linear`, `solve_impulse_nonlinear`
- Jacobian methods: `jacobian`, `solve_jacobian`

4.1 Calibrate steady state

Use the calibration DAG to internally calibrate the seven parameters (analytical + numerical). Evaluate the dynamic DAG at the resulting steady state `cali`. | calibration = {'Y': 1., 'N': 1.0, 'K': 10., 'r': 0.0125, 'rstar': 0.0125, 'tot_wealth': 14,
'delta': 0.02, 'pi': 0., 'kappap': 0.1, 'muw': 1.1, 'Bh': 1.04, 'Bg': 2.8,
'G': 0.2, 'eis': 0.5, 'frisch': 1., 'chi0': 0.25, 'chi2': 2, 'epsI': 4,
'omega': 0.005, 'kappaw': 0.1, 'phi': 1.5, 'nZ': 3, 'nB': 50, 'nA': 70, 'nK': 50,
'bmax': 50, 'amax': 4000, 'kmax': 1, 'rho_z': 0.966, 'sigma_z': 0.92}
unknowns_ss = {'beta': 0.976, 'chi1': 6.5}
targets_ss = {'asset_mkt': 0., 'B': 'Bh'}
cali = hank_ss.solve_steady_state(calibration, unknowns_ss, targets_ss, solver='broyden_custom') | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
Verify solution, generate `ss` from dynamic DAG. | ss = hank.steady_state(cali)
print(f"Liquid assets: {ss['B']: 0.2f}")
print(f"Asset market clearing: {ss['asset_mkt']: 0.2e}")
print(f"Goods market clearing (untargeted): {ss['goods_mkt']: 0.2e}") | Liquid assets: 1.04
Asset market clearing: 8.22e-13
Goods market clearing (untargeted): 3.29e-08
| MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
4.2 Linearized impulse responses

As before, we can compute the general equilibrium Jacobian $G$, which is sufficient to map any shock into impulse responses. When the cost of computing a block Jacobian is non-trivial, it's a good idea to precompute it. We can supply block Jacobians for specific blocks via the `Js=` keyword argument. The precomputed Jacobians will only be used if they are **complete** (have all required inputs and outputs) and have the right **size** (truncation horizon). | exogenous = ['rstar', 'Z', 'G']
unknowns = ['r', 'w', 'Y']
targets = ['asset_mkt', 'fisher', 'wnkpc']
T = 300
J_ha = hh_ext.jacobian(ss, inputs=['N', 'r', 'ra', 'rb', 'tax', 'w'], T=T)
G = hank.solve_jacobian(ss, unknowns, targets, exogenous, T=T, Js={'hh': J_ha}) | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
The time saving from re-using the Jacobian of the household block is considerable. | %time G = hank.solve_jacobian(ss, unknowns, targets, exogenous, T=T, Js={'hh': J_ha})
%time G = hank.solve_jacobian(ss, unknowns, targets, exogenous, T=T) | Wall time: 4.94 s
| MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
Note that some block Jacobians may be precomputed even if others are changing. For example, we can re-use `J_ha` while evaluating the model likelihood for 100,000 draws of price and wage adjustment costs.

When we're not planning to change any part of the model, it's even better to store the `H_U` directly. (To be precise, we store the LU-factorized version of the matrix, which facilitates operations with its inverse.) That way, we save time on computing and packing all the block Jacobians. | from sequence_jacobian.classes import FactoredJacobianDict
H_U = hank.jacobian(ss, unknowns, targets, T=T, Js={'hh': J_ha})
H_U_factored = FactoredJacobianDict(H_U, T)
%time G = hank.solve_jacobian(ss, unknowns, targets, exogenous, T=T, Js={'hh': J_ha}, H_U_factored=H_U_factored) | Wall time: 343 ms
| MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
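As a rough sketch of the re-use idea described above (hypothetical code, not from the original notebook): since the adjustment costs `kappap` and `kappaw` do not enter the household block, `J_ha` stays valid across draws of those parameters. This assumes the steady-state object `ss` can be copied and updated like a dictionary.

```python
# Hypothetical estimation-style loop re-using the household Jacobian J_ha
for kappap_draw, kappaw_draw in [(0.05, 0.1), (0.1, 0.1), (0.2, 0.05)]:
    ss_draw = ss.copy()               # assumption: the steady-state object supports copy()
    ss_draw['kappap'] = kappap_draw   # assumption: item assignment works like a dict
    ss_draw['kappaw'] = kappaw_draw
    G_draw = hank.solve_jacobian(ss_draw, unknowns, targets, exogenous,
                                 T=T, Js={'hh': J_ha})
    # ...evaluate the likelihood of the data given G_draw here...
```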
Let's plot some impulse responses: | rhos = np.array([0.2, 0.4, 0.6, 0.8])
drstar = -0.0025 * rhos ** (np.arange(T)[:, np.newaxis])
dY = 100 * G['Y']['rstar'] @ drstar
plt.plot(dY[:21])
plt.title(r'Output response to 25 bp monetary policy shocks with $\rho=(0.2 ... 0.8)$')
plt.xlabel('quarters')
plt.ylabel('% deviation from ss')
plt.show() | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
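The same `G` matrix can be reused for the other shocks in `exogenous`; for example, here is a sketch (added for illustration, not in the original notebook) of the output response to a persistent government-spending shock.

```python
# 1% of steady-state G, AR(1) persistence 0.8
dG_shock = 0.01 * ss['G'] * 0.8 ** np.arange(T)
dY_G = 100 * G['Y']['G'] @ dG_shock
plt.plot(dY_G[:21])
plt.title("Output response to a 1% government spending shock")
plt.xlabel("quarters")
plt.ylabel("% deviation from ss")
plt.show()
```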
4.3 Nonlinear impulse responses

Let's compute the nonlinear impulse response for the $\rho=0.6$ shock above.

- Don't forget to use the saved Jacobian.
- Note how to look up and change options specific to (block type, method) pairs. | hank['pricing_solved'].solve_impulse_nonlinear_options
By default, `SolvedBlock.solve_impulse_nonlinear` prints the error in each iteration (`verbose=True`). Let's turn this off for the internal solved blocks. | td_nonlin = hank.solve_impulse_nonlinear(ss, unknowns, targets, {"rstar": drstar[:, 2]},
Js={'hh': J_ha}, H_U_factored=H_U_factored,
options={'pricing_solved': {'verbose': False},
'arbitrage_solved': {'verbose': False},
'labor_to_investment_combined_solved': {'verbose': False}}) | Solving Two-Asset HANK for ['r', 'w', 'Y'] to hit ['asset_mkt', 'fisher', 'wnkpc']
On iteration 0
max error for asset_mkt is 3.92E-06
max error for fisher is 2.50E-03
max error for wnkpc is 4.72E-08
On iteration 1
max error for asset_mkt is 2.66E-04
max error for fisher is 1.55E-06
max error for wnkpc is 2.15E-05
On iteration 2
max error for asset_mkt is 7.56E-06
max error for fisher is 9.69E-08
max error for wnkpc is 6.57E-07
On iteration 3
max error for asset_mkt is 4.01E-07
max error for fisher is 2.24E-09
max error for wnkpc is 1.64E-08
On iteration 4
max error for asset_mkt is 2.20E-08
max error for fisher is 1.06E-10
max error for wnkpc is 7.46E-10
On iteration 5
max error for asset_mkt is 1.23E-09
max error for fisher is 5.44E-12
max error for wnkpc is 3.72E-11
| MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
We see rapid convergence and mild nonlinearities in the solution. | dY_nonlin = 100 * td_nonlin['Y']
plt.plot(dY[:21, 2], label='linear', linestyle='-', linewidth=2.5)
plt.plot(dY_nonlin[:21], label='nonlinear', linestyle='--', linewidth=2.5)
plt.title(r'Output response to 25 bp monetary policy shock')
plt.xlabel('quarters')
plt.ylabel('% deviation from ss')
plt.legend()
plt.show() | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
Alternatively, we can compute the impulse response to a version of the shock scaled down to 10% of its original size. | td_nonlin = hank.solve_impulse_nonlinear(ss, unknowns, targets, {"rstar": 0.1 * drstar[:, 2]},
Js={'hh': J_ha},
options={'pricing_solved': {'verbose': False},
'arbitrage_solved': {'verbose': False},
'labor_to_investment_combined_solved': {'verbose': False}})
dY_nonlin = 100 * td_nonlin['Y']
plt.plot(0.1*dY[:21, 2], label='linear', linestyle='-', linewidth=2.5)
plt.plot(dY_nonlin[:21], label='nonlinear', linestyle='--', linewidth=2.5)
plt.title(r'Output response to 2.5 bp monetary policy shock')
plt.xlabel('quarters')
plt.ylabel('% deviation from ss')
plt.legend()
plt.show() | _____no_output_____ | MIT | notebooks/two_asset.ipynb | gboehl/sequence-jacobian |
Loss functions and training/test errors | par = np.linspace(-3,3,50) # parameter range
te_err = (1+par**2)/2 # test error
# plot training and test errors
for i in range(10):
    z = np.random.normal(size=20) # generate data
    trerr = np.mean(np.subtract.outer(z,par)**2/2, axis=0) # training error
    plt.plot(par,trerr,'b--',linewidth=2) # plot the training error
plt.xlabel("par")
plt.ylabel("training/test errors")
plt.plot(par, te_err,'r-',linewidth=4)
plt.show() | _____no_output_____ | MIT | ch04eval.ipynb | kanamori-takafumi/book_StatMachineLearn_with_Python |
Estimating the test error: cross-validation | from sklearn.tree import DecisionTreeRegressor
n, K = 100, 10 # settings: 100 data points, 10-fold CV
# generate data
x = np.random.uniform(-2,2,n) # uniform distribution on the interval [-2,2]
y = np.sin(2*np.pi*x)/x + np.random.normal(scale=0.5,size=n)
# split the data into CV folds
cv_idx = np.tile(np.arange(K), int(np.ceil(n/K)))[:n]
maxdepths = np.arange(2,10) # candidate depths for the decision tree
cverr = np.array([])
for mp in maxdepths:
cverr_lambda = np.array([])
for k in range(K):
tr_idx = (cv_idx!=k)
te_idx = (cv_idx==k)
        cvx = x[tr_idx]; cvy = y[tr_idx] # split the data for CV
        dtreg = DecisionTreeRegressor(max_depth=mp)
        dtreg.fit(np.array([cvx]).T, cvy) # fit the decision tree
        ypred = dtreg.predict(np.array([x[te_idx]]).T) # predict
        # compute the CV error
cl = np.append(cverr_lambda, np.mean((y[te_idx]-ypred)**2/2))
cverr = np.append(cverr, np.mean(cl))
plt.scatter(maxdepths, cverr,c='k') # plot the CV error
plt.xlabel("max depth"); plt.ylabel('cv error')
plt.show() | _____no_output_____ | MIT | ch04eval.ipynb | kanamori-takafumi/book_StatMachineLearn_with_Python |
ROC curves and AUC | n = 100 # number of data points: 100
xp = np.random.normal(loc=1,size=n*2).reshape(n,2) # with signal
xn = np.random.normal(size=n*2).reshape(n,2) # without signal
# AUC of F1
np.mean(np.subtract.outer(xp[:,0],xn[:,0]) >= 0)
# AUC of F2
np.mean(np.subtract.outer(np.sum(xp,1),np.sum(xn,1)) >= 0)
n = 10000 # number of data points: 10000
xp = np.random.normal(loc=1,size=n*2).reshape(n,2) # with signal
xn = np.random.normal(size=n*2).reshape(n,2) # without signal
# AUC of F1
np.mean(np.subtract.outer(xp[:,0],xn[:,0]) >= 0)
# AUC of F2
np.mean(np.subtract.outer(np.sum(xp,1),np.sum(xn,1)) >= 0)
import scipy as sp
import scipy.stats # make sure sp.stats is available for the normal CDF below
from scipy import integrate # use scipy.integrate
# AUC of F1
def fpr(c):
return(1-sp.stats.norm.cdf(c))
def tpr(c):
return(1-sp.stats.norm.cdf(c,loc=1))
c = np.arange(-10, 10, 0.01)
sp.integrate.cumtrapz(tpr(c)[::-1],fpr(c)[::-1])[-1] # compute the AUC of F1
# AUC of F2
def fpr(c):
return(1-sp.stats.norm.cdf(c,scale=np.sqrt(2)))
def tpr(c):
return(1-sp.stats.norm.cdf(c,loc=2,scale=np.sqrt(2)))
sp.integrate.cumtrapz(tpr(c)[::-1],fpr(c)[::-1])[-1] # compute the AUC of F2 | _____no_output_____ | MIT | ch04eval.ipynb | kanamori-takafumi/book_StatMachineLearn_with_Python |
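The cells above compute the AUC values but never draw the ROC curves themselves; here is a short sketch of the curves for F1 and F2 under the same Gaussian model (added for illustration).

```python
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt

c = np.arange(-10, 10, 0.01)
# F1: signal ~ N(1, 1), noise ~ N(0, 1)
plt.plot(1 - scipy.stats.norm.cdf(c), 1 - scipy.stats.norm.cdf(c, loc=1), label="F1")
# F2: signal ~ N(2, 2), noise ~ N(0, 2); variance 2 means scale sqrt(2)
plt.plot(1 - scipy.stats.norm.cdf(c, scale=np.sqrt(2)),
         1 - scipy.stats.norm.cdf(c, loc=2, scale=np.sqrt(2)), label="F2")
plt.plot([0, 1], [0, 1], "k--", linewidth=0.5)  # chance level
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend()
plt.show()
```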
Automatic correspondences matching

Goal

In this chapter, we will mix up the feature matching and findHomography from the calib3d module to find known objects in a complex image.

Basics

So what did we do in the last session? We used a queryImage, found some feature points in it, then we took another trainImage, found the features in that image too, and we found the best matches among them. In short, we found the locations of some parts of an object in another cluttered image. This information is sufficient to find the object exactly in the trainImage.

For that, we can use a function from the calib3d module, i.e. cv.findHomography(). If we pass the set of points from both the images, it will find the perspective transformation of that object. Then we can use cv.perspectiveTransform() to find the object. It needs at least four correct points to find the transformation.

We have seen that there can be some possible errors while matching which may affect the result. To solve this problem, the algorithm uses RANSAC or LEAST_MEDIAN (which can be decided by the flags). Good matches which provide a correct estimation are called inliers and the remaining ones are called outliers. cv.findHomography() returns a mask which specifies the inlier and outlier points.

So let's do it!

Code

First, as usual, let's find SIFT features in the images and apply the ratio test to find the best matches. | import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img1 = cv.imread('hg_2_2.jpg',0) # queryImage
img2 = cv.imread('hg_2_8.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1,des2,k=2)
# store all the good matches as per Lowe's ratio test.
good = []
for m,n in matches:
if m.distance < 0.7*n.distance:
good.append(m)
print(len(good)) | 3283
| MIT | CW1/OpenCV_Implementation/T2.hg.ipynb | lampard2a4/ICL-CVPR-Workspace |
Now we set a condition that at least 10 matches (defined by MIN_MATCH_COUNT) have to be there to find the object. Otherwise, we simply show a message saying that not enough matches are present.

If enough matches are found, we extract the locations of the matched keypoints in both images. They are passed to find the perspective transformation. Once we get this 3x3 transformation matrix, we use it to transform the corners of the queryImage to the corresponding points in the trainImage. Then we draw it. | src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
print(len(src_pts))
M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC,5.0)
matchesMask = mask.ravel().tolist()
print(M)
h,w = img1.shape
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv.perspectiveTransform(pts,M)
img2 = cv.polylines(img2,[np.int32(dst)],True,255,3, cv.LINE_AA) | 3283
[[ 6.53453067e-01 2.06501669e-01 -5.51251650e+00]
[-2.02967609e-01 6.57965961e-01 9.85007522e+02]
[ 1.24191783e-06 -2.13203451e-07 1.00000000e+00]]
| MIT | CW1/OpenCV_Implementation/T2.hg.ipynb | lampard2a4/ICL-CVPR-Workspace |
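Beyond drawing the projected corners, the estimated homography `M` can also be used to warp the query image into the train image frame; a small optional sketch (not part of the original tutorial):

```python
# Warp img1 into the coordinate frame of img2 using the homography M
h2, w2 = img2.shape[:2]
warped = cv.warpPerspective(img1, M, (w2, h2))
plt.imshow(warped, 'gray')
plt.show()
```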
Finally we draw our inliers (if successfully found the object) or matching keypoints (if failed). | draw_params = dict(matchColor = (0,255,0), # draw matches in green color
singlePointColor = None,
matchesMask = matchesMask, # draw only inliers
flags = 2)
img3 = cv.drawMatches(img1,kp1,img2,kp2,good,None,**draw_params)  # outImg=None; the match color comes from draw_params
plt.imshow(img3, 'gray'),plt.show()
# Radius of circle
radius = 2
# Green color in BGR
color = (0, 255, 0)
# Line thickness of 1 px
thickness = 1
# Using cv2.circle() method
# Draw a small circle at each detected keypoint
#image = cv2.circle(image, center_coordinates, radius, color, thickness)
for p1 in kp1:
    # cv.circle expects an (x, y) tuple of ints, so convert the KeyPoint coordinates
    img4 = cv.circle(img1, (int(p1.pt[0]), int(p1.pt[1])), radius, color, thickness)
plt.imshow(img4, 'gray'),plt.show() | _____no_output_____ | MIT | CW1/OpenCV_Implementation/T2.hg.ipynb | lampard2a4/ICL-CVPR-Workspace |
ColorAI

By: Mark John A. Velmonte

ColorAi is a simple supervised classification machine learning AI. It can classify what shade of color a given RGB value is and can also learn new colors based on what the teacher teaches it. The performance of this AI will depend on what you teach it. It uses KNN (K-nearest neighbor) and Random Forest algorithms to classify the inputs.

dependencies

1. python3
1. pandas
1. numpy
1. sklearn
1. matplotlib

Class ColorAI(n_neighbors):

parameters:
> n_neighbors : default value 15
> set the number of neighbors for the KNN algorithm

Methods

showMethods()

parameter:
> Accepts no parameter
> Prints all available methods

showDataMemory()

parameter:
> Accepts no parameter
> Prints out all the data in the datasets

accuracyTest()

parameter:
> Accepts no parameter
> Prints out the accuracy of the data set being used

getColor(color_inp, data_ref, ret_val, show_predicted)

parameter:
> color_inp = list of 3 integers.
> data_ref = a read CSV file
> ret_val = bool, if True it will return the value of a color name. default value False
> show_predicted = bool, if True will print the value of a color name. default value True

teach(save_count)

This method will ask for an RGB value of a color and will try to predict that color.

parameter
> save_count = int, number of times the new data will be added to the data sets, default value 1

getColorFromImage(show_plot, show_info, read_img):

parameter
> show_plot = bool, if True will plot the image, default value False
> show_info = bool, if True will print all the information about the image, default value False
> read_img = string, either ("strips", "full"); the "strips" value will process the image by slicing it at the top, bottom and middle to find a prominent color in the image, the "full" value will process the whole image (this method takes longer to process) | import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import re
from datetime import datetime
from sklearn import metrics
from sklearn.model_selection import train_test_split
import matplotlib.image as mpimg
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from PIL import Image
class ColorAI():
def __init__(self, n_neighbors = 15):
self.trained_data = pd.read_csv("learned_color_data.csv")
self.number_of_neighbors = n_neighbors
def showMethods(self):
print("showDataMemory, accuracyTest, getColor, showDataFrame, teach, getColorFromImage")
def showDataMemory(self):
color_name_guide = self.trained_data["Color name"]
result_color_name = color_name_guide.drop_duplicates()
color_id = self.trained_data["Id"]
result_color_id = color_id.drop_duplicates()
user_guide = pd.DataFrame({"Color family" : result_color_name, "ID" : result_color_id})
print(user_guide)
def accuracyTest(self):
test_data = self.trained_data
X = test_data.iloc[:, :-2].values
y = test_data["Id"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
knn = KNeighborsClassifier(n_neighbors = self.number_of_neighbors)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(y_pred)
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
    def getColor(self, color_inp, data_ref, ret_val = False, show_predicted = True):
self.data = data_ref
R = self.data["R"]
G = self.data["G"]
B = self.data["B"]
X = self.data.iloc[:, :-2].values
y = self.data["Id"]
model = KNeighborsClassifier(n_neighbors = self.number_of_neighbors)
model.fit(X, y)
u_input = color_inp
prediction = model.predict([u_input])
self.prediction_index = np.where(self.data == prediction[0])[0][0]
if show_predicted == True:
print("prediction:", self.data["Color name"][self.prediction_index])
elif show_predicted == False:
pass
else:
print("no such parameter")
if ret_val == True:
return prediction
def showDataFrame(self):
pd.set_option("display.max_rows", 10000)
print(self.trained_data)
def teach(self, save_count = 1):
teach_status = "T"
n_test = 0
n_correct = 0
n_wrong = 0
while teach_status == "T":
print("-" * 10 + str(n_test) +"-" * 10)
n_test += 1
uinp = input("color:")
uinp_enc = re.split(",", uinp)
print("inp", uinp_enc)
RGB = []
for num in uinp_enc:
print(num)
RGB.append(int(num))
print("rgb", type(RGB[1]))
self.getColor(RGB, self.trained_data)
answer_status = input("answer status C/W:")
if answer_status == "C":
n_correct += 1
R = RGB[0]
G = RGB[1]
B = RGB[2]
print(R, G, B)
save_count = save_count
while save_count > 0:
shade_fam = self.data["Color name"][self.prediction_index]
data_id = self.data["Id"][self.prediction_index]
new_data = pd.DataFrame({"R":R, "G":G, "B":B, "Color name":shade_fam, "Id":data_id}, index = [0])
self.trained_data = pd.concat([new_data, self.trained_data]).reset_index(drop = True)
self.trained_data.to_csv("learned_color_data.csv", index=False)
save_count -= 1
if save_count == 0: break
elif answer_status == "W":
R = RGB[0]
G = RGB[1]
B = RGB[2]
n_wrong += 1
add_learnings = input("Add New Lesson? Y/N :")
if add_learnings == "Y":
self.showDataMemory()
                    shade_fam = input("shade family:")
data_id = int(input("new data id:"))
save_count = save_count
while save_count > 0:
new_data = pd.DataFrame({"R":R, "G":G, "B":B, "Color name":shade_fam, "Id":data_id}, index = [0])
self.trained_data = pd.concat([new_data, self.trained_data]).reset_index(drop = True)
self.trained_data.to_csv("learned_color_data.csv", index=False)
save_count -= 1
if save_count == 0: break
else:
print("input error")
break
            # read the next teaching status first, then print the summary and stop if the teacher is done
            teach_status = input("teaching status T/F:")
            if teach_status == "F":
                print("-" * 10 + "teaching ended" + "-" * 10)
                print("number of tests : ", n_test)
                print("correct answer : ", n_correct)
                print("wrong answer : ", n_wrong)
                break
def getColorFromImage(self, show_plot = False, show_info = False, read_img = "strips"):
uinp = input("image:")
if read_img == "strips":
print("analizyng image")
try:
image_inp = mpimg.imread(uinp)
except Exception as error:
print(error)
image_size = np.array(image_inp)
image_total_pixel = int((image_inp.shape[2] * image_inp.shape[1] * image_inp.shape[0]))
dim1 = int(image_total_pixel / 3)
image_data = image_size.reshape(dim1, 3)
seq_shape = int((image_inp[0:50].shape[2] * image_inp[0:50].shape[1] * image_inp[0:50].shape[0]) / 3)
sequence_1 = np.array(image_inp[0:50]).reshape(seq_shape, 3)
sequence_2 = np.array(image_inp[ int(image_inp.shape[0] / 2): int((image_inp.shape[0] / 2) + 50)]).reshape(seq_shape, 3)
sequence_3 = np.array(image_inp[ int(image_inp.shape[0] - 50 ): int(image_inp.shape[0])]).reshape(seq_shape, 3)
print("---" * 15 + "---" * 15 )
if show_plot == True:
fig, axs = plt.subplots(3)
axs[0].imshow(image_inp[0:50])
axs[1].imshow(image_inp[ int(image_inp.shape[0] / 2): int((image_inp.shape[0] / 2) + 50)])
axs[2].imshow(image_inp[ int(image_inp.shape[0] - 50 ): int(image_inp.shape[0])])
readings = np.array([sequence_1, sequence_2, sequence_3])
tota_pixels = readings.shape[2]* readings.shape[1] * readings.shape[0]
enc_reading = readings.reshape(int(tota_pixels / 3), 3)
data = pd.read_csv("learned_color_data.csv")
Red_pixel = data["R"]
Green_pixel = data["G"]
Blue_pixel = data["B"]
feat = np.array([Red_pixel, Green_pixel, Blue_pixel])
X = data.iloc[:, :-2].values
y = data["Id"]
model = RandomForestClassifier(max_depth=100, random_state=0)
model.fit(X, y)
prediction = model.predict(enc_reading)
result_color_name = pd.DataFrame({"answers" : prediction}).drop_duplicates()
answers = np.array(result_color_name["answers"])
if show_info == True:
print("INFORMATION:" + "\n")
print("colors found", answers)
self.showDataMemory()
turn = 0
n_total = 0
ans_arr = []
for index in answers:
for pixel in prediction:
if pixel == answers[turn]:
n_total += 1
turn += 1
ans_arr.append(n_total)
n_total -= n_total
if turn >= answers.shape[0]:
break
superior = np.max(ans_arr)
answer_index = ans_arr.index(superior)
final_answer_index = answers[answer_index]
final_answer = np.where(data["Id"] == final_answer_index)[0][0]
print("\n" + "Prominent Color:", data["Color name"].iloc[final_answer])
if read_img == "full":
print("analyzing image. It will take time depending on the size of the image and tour proccesssing power")
res_img = Image.open(uinp)
img_height = res_img.size[1]
img_width = res_img.size[0]
res_img = res_img.resize((int(img_width / 2), int(img_height / 2)),Image.ANTIALIAS)
res_img.save("images/res_image.jpg",optimize=True,quality=100)
res_img = mpimg.imread("images/res_image.jpg")
img_array = np.array(res_img)
print(img_array.shape)
dimension = img_array.shape[0] * img_array.shape[1]
img_array = img_array.reshape(dimension, 3)
print(img_array.shape)
color_found = []
data = pd.read_csv("learned_color_data.csv")
count = 0
for color in img_array:
color_found.append(self.getColor(color, data, ret_val = True, show_predicted = False)[0])
print(color_found)
count += 1
if count >= 20:
break
result_color_name = pd.DataFrame({"answers" : color_found}).drop_duplicates()
answers = np.array(result_color_name["answers"])
self.showDataMemory()
print("found colors:", answers) | _____no_output_____ | MIT | Color_Ai/Learner_Color_Ai/Color AI.ipynb | xxmeowxx/AI-s |
A.2.5 The LBM Code (D2Q9) | # LBM advection-diffusion D2Q9
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
n = 100
m = 100
f = np.zeros((9,n+1,m+1), dtype=float)
feq = np.zeros(9,dtype=float)
rho = np.zeros((n+1,m+1), dtype=float)
x = np.zeros(n+1, dtype=float)
y = np.zeros(m+1,dtype=float)
w = np.zeros(9,dtype=float)
u = 1.0
v = 0.4
dt = 1.0
dx = 1.0
dy = 1.0
for i in range(1, n+1):
x[i] = x[i-1] + dx
for j in range(1, m+1):
    y[j] = y[j-1] + dy
tw = 1.0
alpha = 1.0
ck = dx/dt
csq = ck*ck
omega = 1.0/(3.*alpha/(csq*dt) + 0.5)
mstep = 400
w = [4/9,1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36]
density = 0.
for j in range(0,m+1):
for i in range(0,n+1):
for k in range(0,9):
f[k,i,j] = w[k] * density
if(i == 0) :
f[k,i,j] = w[k] * tw
for kk in range(1,mstep+1):
for j in range(0,m+1):
for i in range(0,n+1):
sum = 0.0
for k in range(0,9):
sum += f[k,i,j]
rho[i,j] = sum
for j in range(0,m+1):
for i in range(0,n+1):
feq[0] = w[0]*rho[i,j]
feq[1] = w[1]*rho[i,j]*(1. + 3.*u/ck)
feq[2] = w[2]*rho[i,j]*(1. + 3.*v/ck)
feq[3] = w[3]*rho[i,j]*(1. - 3.*u/ck)
feq[4] = w[4]*rho[i,j]*(1. - 3.*v/ck)
feq[5] = w[5]*rho[i,j]*(1. + 3.*(u+v)/ck)
feq[6] = w[6]*rho[i,j]*(1. + 3.*(-u+v)/ck)
feq[7] = w[7]*rho[i,j]*(1. + 3.*(-u-v)/ck)
feq[8] = w[8]*rho[i,j]*(1. + 3.*(u-v)/ck)
for k in range(0,9):
f[k,i,j] = omega*feq[k] + (1.-omega)*f[k,i,j]
# streaming
for j in range(m,-1,-1):
for i in range(0,n):
f[2,i,j] = f[2,i,j-1]
f[6,i,j] = f[6,i+1,j-1]
for j in range(m,-1,-1):
for i in range(n,0,-1):
f[1,i,j] = f[1,i-1,j]
f[5,i,j] = f[5,i-1,j-1]
for j in range(0,m):
for i in range(n,0,-1):
f[4,i,j] = f[4,i,j+1]
f[8,i,j] = f[8,i-1,j+1]
for j in range(0,m):
for i in range(0,n):
f[3,i,j] = f[3,i+1,j]
f[7,i,j] = f[7,i+1,j+1]
# boundary condition
# left boundary condition ,the temperature is given,tw
for j in range(0,m+1):
f[1,0,j] = w[1]*tw + w[3]*tw - f[3,0,j]
f[5,0,j] = w[5]*tw + w[7]*tw - f[7,0,j]
f[8,0,j] = w[8]*tw + w[6]*tw - f[6,0,j]
# right boundary condition, T = 0
for j in range(0,m+1):
f[6,n,j] = -f[8,n,j]
f[3,n,j] = -f[1,n,j]
f[7,n,j] = -f[5,n,j]
f[2,n,j] = -f[4,n,j]
f[0,n,j] = 0.0
# top boundary condition, T = 0.0
for i in range(0,n+1):
f[8,i,m] = -f[6,i,m]
f[7,i,m] = -f[5,i,m]
f[4,i,m] = -f[2,i,m]
f[1,i,m] = -f[3,i,m]
f[0,i,m] = 0.0
# bottom boundary condition, T = 0.0
for i in range(0,n+1):
f[2,i,0] = -f[4,i,0]
f[6,i,0] = -f[8,i,0]
f[5,i,0] = -f[7,i,0]
f[1,i,0] = -f[3,i,0]
f[0,i,0] = 0.0
for j in range(0,m+1):
for i in range(0,n+1):
sum = 0.0
for k in range(0,9):
sum += f[k,i,j]
rho[i,j] = sum
temp = rho[:,50]
fig = plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 3)
plt.plot(temp)
plt.subplot(1, 3, 2)
plt.contour(rho,16,linewidths=0.5)
plt.colorbar()
plt.subplot(1, 3, 1)
plt.imshow(rho, interpolation='nearest', origin='lower')
plt.colorbar()
plt.show() | _____no_output_____ | MIT | Chapter4-3.ipynb | huiselilun/LBM_Applications |
Music Recommendation using AutoML Tables

Overview

In this notebook we will see how [AutoML Tables](https://cloud.google.com/automl-tables/) can be used to make music recommendations to users. AutoML Tables is a supervised learning service for structured data that can vastly simplify the model building process.

Dataset

AutoML Tables allows data to be imported from either GCS or BigQuery. This tutorial uses the [ListenBrainz](https://console.cloud.google.com/marketplace/details/metabrainz/listenbrainz) dataset from [Cloud Marketplace](https://console.cloud.google.com/marketplace), hosted in BigQuery.

The ListenBrainz dataset is a log of songs played by users; some notable pieces of the schema include:

- **user_name:** a user id.
- **track_name:** a song id.
- **artist_name:** the artist of the song.
- **release_name:** the album of the song.
- **tags:** the genres of the song.

Objective

The goal of this notebook is to demonstrate how to create a lookup table in BigQuery of songs to recommend to users, using a log of user-song listens and AutoML Tables. This will be done by training a binary classification model to predict whether or not a `user` will like a given `song`. In the training data, liking a song was defined as having listened to the song more than twice. **The predictions for every `(user, song)` pair are then used to generate a ranking of the most similar songs for each user.**

As the number of `(user, song)` pairs grows with the product of the number of unique users and songs, this approach may not be optimal for extremely large datasets. One workaround would be to train a model that learns to embed users and songs in the same embedding space, and use a nearest-neighbors algorithm to get recommendations for users. Unfortunately, AutoML Tables does not expose any feature for training and using embeddings, so a [custom ML model](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/cloudml-collaborative-filtering) would need to be used instead.

Another recommendation approach that is worth mentioning is [using extreme multiclass classification](https://ai.google/research/pubs/pub45530), as that also circumvents storing every possible pair of users and songs. Unfortunately, AutoML Tables does not support multiclass classification with more than [100 classes](https://cloud.google.com/automl-tables/docs/preparetarget-requirements).

Costs

This tutorial uses billable components of Google Cloud Platform (GCP):

- Cloud AutoML Tables

Learn about [AutoML Tables pricing](https://cloud.google.com/automl-tables/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

1. Setup

Follow the [AutoML Tables documentation](https://cloud.google.com/automl-tables/docs/) to

* [Enable billing](https://cloud.google.com/billing/docs/how-to/modify-project).
* [Enable AutoML API](https://console.cloud.google.com/apis/library/automl.googleapis.com?q=automl)

1.1 PIP Install Packages and dependencies

Install additional dependencies not installed in the notebook environment. | ! pip install --upgrade --quiet google-cloud-automl google-cloud-bigquery | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
Restart the kernel to allow `automl_v1beta1` to be imported. The following cell should succeed after a kernel restart: | from google.cloud import automl_v1beta1 | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
1.2 Import libraries and define constants

Populate the following cell with the necessary constants and run it to initialize constants and create clients for BigQuery and AutoML Tables. | # The GCP project id.
PROJECT_ID = ""
# The region to use for compute resources (AutoML isn't supported in some regions).
LOCATION = "us-central1"
# A name for the AutoML tables Dataset to create.
DATASET_DISPLAY_NAME = ""
# The BigQuery dataset to import data from (doesn't need to exist).
INPUT_BQ_DATASET = ""
# The BigQuery table to import data from (doesn't need to exist).
INPUT_BQ_TABLE = ""
# A name for the AutoML tables model to create.
MODEL_DISPLAY_NAME = ""
# The number of hours to train the model.
MODEL_TRAIN_HOURS = 0
assert all([
PROJECT_ID,
LOCATION,
DATASET_DISPLAY_NAME,
INPUT_BQ_DATASET,
INPUT_BQ_TABLE,
MODEL_DISPLAY_NAME,
MODEL_TRAIN_HOURS,
]) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
Import relevant packages and initialize clients for BigQuery and AutoML Tables. | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from google.cloud import automl_v1beta1
from google.cloud import bigquery
from google.cloud import exceptions
import seaborn as sns
%matplotlib inline
tables_client = automl_v1beta1.TablesClient(project=PROJECT_ID, region=LOCATION)
bq_client = bigquery.Client() | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
2. Create a Dataset

In order to train a model, a structured dataset must be ingested into AutoML Tables from either BigQuery or Google Cloud Storage. Once ingested, the user will be able to cherry-pick columns to use as features, labels, or weights and configure the loss function.

2.1 Create BigQuery table

First, do some feature engineering on the original ListenBrainz dataset to turn it into a dataset for training and export it into a separate BigQuery table:

1. Make each sample a unique `(user, song)` pair.
2. For features, use the song's artist, genres (tags), and number of albums, together with the user's share of listens in each of the top genres.
3. For a label, use a binary indicator of whether the user has listened to the song more than twice.
4. Add a weight based on the user's listen count for the song, normalized by dividing by the maximum number of times that user has listened to any song. Normalizing the listen counts ensures active users don't have a disproportionate effect on the model error, and giving songs more popular with the user higher weights helps account for the skew in the label distribution. | query = """
WITH
songs AS (
SELECT CONCAT(track_name, " by ", artist_name) AS song,
MAX(tags) as tags
FROM `listenbrainz.listenbrainz.listen`
GROUP BY song
HAVING tags != ""
ORDER BY COUNT(*) DESC
LIMIT 10000
),
user_songs AS (
SELECT user_name AS user, ANY_VALUE(artist_name) AS artist,
CONCAT(track_name, " by ", artist_name) AS song,
SPLIT(ANY_VALUE(songs.tags), ",") AS tags,
COUNT(*) AS user_song_listens
FROM `listenbrainz.listenbrainz.listen`
JOIN songs ON songs.song = CONCAT(track_name, " by ", artist_name)
GROUP BY user_name, song
),
user_tags AS (
SELECT user, tag, COUNT(*) AS COUNT
FROM user_songs,
UNNEST(tags) tag
WHERE tag != ""
GROUP BY user, tag
),
top_tags AS (
SELECT tag
FROM user_tags
GROUP BY tag
ORDER BY SUM(count) DESC
LIMIT 20
),
tag_table AS (
SELECT user, b.tag
FROM user_tags a, top_tags b
GROUP BY user, b.tag
),
user_tag_features AS (
SELECT user,
ARRAY_AGG(IFNULL(count, 0) ORDER BY tag) as user_tags,
SUM(count) as tag_count
FROM tag_table
LEFT JOIN user_tags USING (user, tag)
GROUP BY user
), user_features AS (
SELECT user, MAX(user_song_listens) AS user_max_listen,
ANY_VALUE(user_tags)[OFFSET(0)]/ANY_VALUE(tag_count) as user_tags0,
ANY_VALUE(user_tags)[OFFSET(1)]/ANY_VALUE(tag_count) as user_tags1,
ANY_VALUE(user_tags)[OFFSET(2)]/ANY_VALUE(tag_count) as user_tags2,
ANY_VALUE(user_tags)[OFFSET(3)]/ANY_VALUE(tag_count) as user_tags3,
ANY_VALUE(user_tags)[OFFSET(4)]/ANY_VALUE(tag_count) as user_tags4,
ANY_VALUE(user_tags)[OFFSET(5)]/ANY_VALUE(tag_count) as user_tags5,
ANY_VALUE(user_tags)[OFFSET(6)]/ANY_VALUE(tag_count) as user_tags6,
ANY_VALUE(user_tags)[OFFSET(7)]/ANY_VALUE(tag_count) as user_tags7,
ANY_VALUE(user_tags)[OFFSET(8)]/ANY_VALUE(tag_count) as user_tags8,
ANY_VALUE(user_tags)[OFFSET(9)]/ANY_VALUE(tag_count) as user_tags9,
ANY_VALUE(user_tags)[OFFSET(10)]/ANY_VALUE(tag_count) as user_tags10,
ANY_VALUE(user_tags)[OFFSET(11)]/ANY_VALUE(tag_count) as user_tags11,
ANY_VALUE(user_tags)[OFFSET(12)]/ANY_VALUE(tag_count) as user_tags12,
ANY_VALUE(user_tags)[OFFSET(13)]/ANY_VALUE(tag_count) as user_tags13,
ANY_VALUE(user_tags)[OFFSET(14)]/ANY_VALUE(tag_count) as user_tags14,
ANY_VALUE(user_tags)[OFFSET(15)]/ANY_VALUE(tag_count) as user_tags15,
ANY_VALUE(user_tags)[OFFSET(16)]/ANY_VALUE(tag_count) as user_tags16,
ANY_VALUE(user_tags)[OFFSET(17)]/ANY_VALUE(tag_count) as user_tags17,
ANY_VALUE(user_tags)[OFFSET(18)]/ANY_VALUE(tag_count) as user_tags18,
ANY_VALUE(user_tags)[OFFSET(19)]/ANY_VALUE(tag_count) as user_tags19
FROM user_songs
LEFT JOIN user_tag_features USING (user)
GROUP BY user
HAVING COUNT(*) < 5000 AND user_max_listen > 2
),
item_features AS (
SELECT CONCAT(track_name, " by ", artist_name) AS song,
COUNT(DISTINCT(release_name)) AS albums
FROM `listenbrainz.listenbrainz.listen`
WHERE track_name != ""
GROUP BY song
)
SELECT user, song, artist, tags, albums,
user_tags0,
user_tags1,
user_tags2,
user_tags3,
user_tags4,
user_tags5,
user_tags6,
user_tags7,
user_tags8,
user_tags9,
user_tags10,
user_tags11,
user_tags12,
user_tags13,
user_tags14,
user_tags15,
user_tags16,
user_tags17,
user_tags18,
user_tags19,
IF(user_song_listens > 2,
SQRT(user_song_listens/user_max_listen),
.5/user_song_listens) AS weight,
IF(user_song_listens > 2, 1, 0) as label
FROM user_songs
JOIN user_features USING(user)
JOIN item_features USING(song)
"""
def create_table_from_query(query, table):
"""Creates a new table using the results from the given query.
Args:
query: a query string.
table: a name to give the new table.
"""
job_config = bigquery.QueryJobConfig()
bq_dataset = bigquery.Dataset("{0}.{1}".format(PROJECT_ID, INPUT_BQ_DATASET))
bq_dataset.location = "US"
try:
bq_dataset = bq_client.create_dataset(bq_dataset)
except exceptions.Conflict:
pass
table_ref = bq_client.dataset(INPUT_BQ_DATASET).table(table)
job_config.destination = table_ref
query_job = bq_client.query(query,
location=bq_dataset.location,
job_config=job_config)
query_job.result()
print('Query results loaded to table {}'.format(table_ref.path))
create_table_from_query(query, INPUT_BQ_TABLE) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
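To make the label and weight logic at the end of the query more concrete, here is a rough pandas equivalent (an illustrative sketch only — the `listens`/`max_listen` frame below is hypothetical and is not part of the pipeline): | import numpy as np
import pandas as pd

# Hypothetical (user, song) pairs: listen counts and each user's max listen count.
pairs = pd.DataFrame({"listens": [1, 2, 5, 40], "max_listen": [40, 40, 40, 40]})

# Mirrors IF(user_song_listens > 2, ...): songs played more than twice are positives,
# weighted by the square root of the normalized listen count; rare plays get small weights.
pairs["label"] = (pairs["listens"] > 2).astype(int)
pairs["weight"] = np.where(pairs["listens"] > 2,
                           np.sqrt(pairs["listens"] / pairs["max_listen"]),
                           0.5 / pairs["listens"])
pairs | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |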
2.2 Create AutoML Dataset Create a Dataset by importing the BigQuery table that was just created. Importing data may take a few minutes or hours depending on the size of your data. | dataset = tables_client.create_dataset(
dataset_display_name=DATASET_DISPLAY_NAME)
dataset_bq_input_uri = 'bq://{0}.{1}.{2}'.format(
PROJECT_ID, INPUT_BQ_DATASET, INPUT_BQ_TABLE)
import_data_response = tables_client.import_data(
dataset=dataset, bigquery_input_uri=dataset_bq_input_uri)
import_data_result = import_data_response.result()
import_data_result | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
Inspect the datatypes assigned to each column. In this case, the `song` and `artist` should be categorical, not textual. | list_column_specs_response = tables_client.list_column_specs(
dataset_display_name=DATASET_DISPLAY_NAME)
column_specs = {s.display_name: s for s in list_column_specs_response}
def print_column_specs(column_specs):
"""Parses the given specs and prints each column and column type."""
data_types = automl_v1beta1.proto.data_types_pb2
return [(x, data_types.TypeCode.Name(
column_specs[x].data_type.type_code)) for x in column_specs.keys()]
print_column_specs(column_specs) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
2.3 Update Dataset params Sometimes the types AutoML Tables automatically assigns to each column will differ from what they were intended to be. When that happens, we need to update Tables with different types for certain columns. In this case, set the `song` and `artist` column types to `CATEGORY`. | for col in ["song", "artist"]:
tables_client.update_column_spec(dataset_display_name=DATASET_DISPLAY_NAME,
column_spec_display_name=col,
type_code="CATEGORY")
list_column_specs_response = tables_client.list_column_specs(
dataset_display_name=DATASET_DISPLAY_NAME)
column_specs = {s.display_name: s for s in list_column_specs_response}
print_column_specs(column_specs) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
Not all columns are feature columns. In order to train a model, we need to tell Tables which column should be used as the target variable and, optionally, which column should be used for sample weights. | tables_client.set_target_column(dataset_display_name=DATASET_DISPLAY_NAME,
column_spec_display_name="label")
tables_client.set_weight_column(dataset_display_name=DATASET_DISPLAY_NAME,
column_spec_display_name="weight") | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
3. Create a Model Once the Dataset has been configured correctly, we can tell AutoML Tables to train a new model. The amount of resources spent to train this model can be adjusted using a parameter called `train_budget_milli_node_hours`. As the name implies, this puts a maximum budget on how many resources a training job can use up before exporting a servable model. Even with a budget of 1 node hour (the minimum possible budget), training a model can take several hours. | tables_client.create_model(
model_display_name=MODEL_DISPLAY_NAME,
dataset_display_name=DATASET_DISPLAY_NAME,
train_budget_milli_node_hours= MODEL_TRAIN_HOURS * 1000).result() | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
4. Model Evaluation Because we are optimizing a surrogate problem (predicting the similarity between `(user, song)` pairs) in order to achieve our final objective of producing a list of recommended songs for a user, it's difficult to tell how well the model performs by looking only at the final loss function. Instead, an evaluation metric we can use for our model is `recall@n` for the top `m` most listened to songs for each user. This metric will give us the probability that one of a user's top `m` most listened to songs will appear in the top `n` recommendations we make. In order to get the top recommendations for each user, we need to create a batch job to predict similarity scores between each user and item pair. These similarity scores would then be sorted per user to produce an ordered list of recommended songs. 4.1 Create an evaluation table Instead of creating a lookup table for all users, let's just focus on the performance for a few users for this demo. We will focus especially on recommendations for the user `rob`, and demonstrate how the others can be included in an overall evaluation metric for the model. We start by creating a dataset for prediction to feed into the trained model; this is a table of every possible `(user, song)` pair containing the users and corresponding features. | users = ["rob", "fiveofoh", "Aerion"]
training_table = "{}.{}.{}".format(PROJECT_ID, INPUT_BQ_DATASET, INPUT_BQ_TABLE)
query = """
WITH user as (
SELECT user,
user_tags0, user_tags1, user_tags2, user_tags3, user_tags4,
user_tags5, user_tags6, user_tags7, user_tags8, user_tags9,
user_tags10,user_tags11, user_tags12, user_tags13, user_tags14,
user_tags15, user_tags16, user_tags17, user_tags18, user_tags19, label
FROM `{0}`
WHERE user in ({1})
)
SELECT ANY_VALUE(a).*, song, ANY_VALUE(artist) as artist,
ANY_VALUE(tags) as tags, ANY_VALUE(albums) as albums
FROM `{0}`, user a
GROUP BY song
""".format(training_table, ",".join(["\"{}\"".format(x) for x in users]))
eval_table = "{}_example".format(INPUT_BQ_TABLE)
create_table_from_query(query, eval_table) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
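Before running batch predictions, here is a minimal, self-contained sketch of the `recall@n` idea described above — the fraction of a user's top `m` most listened to songs that appear in the top `n` recommendations (the song lists below are made up for illustration; the real computation uses the prediction results that follow): | # Hypothetical ranked recommendations and a user's true top-m songs.
recommended = ["song A", "song B", "song C", "song D", "song E"]  # ordered by predicted score
top_m_listened = {"song C", "song F", "song A"}                   # the user's m most listened songs

n = 3
top_n = set(recommended[:n])
# 2 of the user's 3 favorite songs appear in the top-3 recommendations, so recall@3 = 2/3.
recall_at_n = len(top_n & top_m_listened) / len(top_m_listened)
recall_at_n | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |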
4.2 Make predictions Once the prediction table is created, start a batch prediction job. This may take a few minutes. | preds_bq_input_uri = "bq://{}.{}.{}".format(PROJECT_ID, INPUT_BQ_DATASET, eval_table)
preds_bq_output_uri = "bq://{}".format(PROJECT_ID)
response = tables_client.batch_predict(model_display_name=MODEL_DISPLAY_NAME,
bigquery_input_uri=preds_bq_input_uri,
bigquery_output_uri=preds_bq_output_uri)
response.result()
output_uri = response.metadata.batch_predict_details.output_info.bigquery_output_dataset | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
With the similarity predictions for `rob`, we can order by the predictions to get a ranked list of songs to recommend to `rob`. | user = "rob"  # the user whose recommendations we rank below
n = 10
query = """
SELECT user, song, tables.score as score, a.label as pred_label,
b.label as true_label
FROM `{}.predictions` a, UNNEST(predicted_label)
LEFT JOIN `{}` b USING(user, song)
WHERE user = "{}" AND CAST(tables.value AS INT64) = 1
ORDER BY score DESC
LIMIT {}
""".format(output_uri[5:].replace(":", "."), training_table, user, n)
query_job = bq_client.query(query)
print("Top {} song recommended for {}:".format(n, user))
for idx, row in enumerate(query_job):
print("{}.".format(idx + 1), row["song"]) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
4.3 Evaluate predictions Precision@k and Recall@k To evaluate the recommendations, we can look at the precision@k and recall@k of our predictions for `rob`. Run the cells below to load the recommendations into a pandas dataframe and plot the precisions and recalls at various top-k recommendations. | query = """
WITH
top_k AS (
SELECT user, song, label,
ROW_NUMBER() OVER (PARTITION BY user ORDER BY label + weight DESC) as user_rank
FROM `{0}`
)
SELECT user, song, tables.score as score, b.label,
ROW_NUMBER() OVER (ORDER BY tables.score DESC) as rank, user_rank
FROM `{1}.predictions` a, UNNEST(predicted_label)
LEFT JOIN top_k b USING(user, song)
WHERE CAST(tables.value AS INT64) = 1
ORDER BY score DESC
""".format(training_table, output_uri[5:].replace(":", "."))
df = bq_client.query(query).result().to_dataframe()
df.head()
precision_at_k = {}
recall_at_k = {}
for user in users:
    # Evaluate each user on their own recommendations only.
    user_df = df[df["user"] == user].reset_index(drop=True)
    precision_at_k[user] = []
    recall_at_k[user] = []
    for k in range(1, 1000):
        precision = user_df["label"][:k].sum() / k
        recall = user_df["label"][:k].sum() / user_df["label"].sum()
        precision_at_k[user].append(precision)
        recall_at_k[user].append(recall)
# plot the precision-recall curve
ax = sns.lineplot(recall_at_k[users[0]], precision_at_k[users[0]])
ax.set_title("precision-recall curve for varying k")
ax.set_xlabel("recall@k")
ax.set_ylabel("precision@k") | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
Achieving a high precision@k means a large proportion of the top-k recommended items are relevant to the user. Recall@k shows what proportion of all relevant items appeared in the top-k recommendations. Mean Average Precision (MAP) Precision@k is a good metric for understanding how many relevant recommendations we might make at each top-k. However, we would prefer relevant items to be recommended first when possible and should encode that into our evaluation metric. __Average Precision (AP)__ is a running average of precision@k, rewarding recommendations where the relevant items are seen earlier rather than later. When averaged across all users for some k, the AP metric is called MAP. | def calculate_ap(precision):
ap = [precision[0]]
for p in precision[1:]:
ap.append(ap[-1] + p)
ap = [x / (n + 1) for x, n in zip(ap, range(len(ap)))]
return ap
ap_at_k = {user: calculate_ap(pk)
for user, pk in precision_at_k.items()}
num_k = 500
map_at_k = [sum([ap_at_k[user][k] for user in users]) / len(users)
for k in range(num_k)]
print("MAP@50: {}".format(map_at_k[49]))
# plot average precision
ax = sns.lineplot(range(num_k), map_at_k)
ax.set_title("MAP@k for varying k")
ax.set_xlabel("k")
ax.set_ylabel("MAP") | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
5. Cleanup The following cells clean up the BigQuery tables and AutoML Table Datasets that were created with this notebook to avoid additional charges for storage. 5.1 Delete the Model and Dataset | tables_client.delete_model(model_display_name=MODEL_DISPLAY_NAME)
tables_client.delete_dataset(dataset_display_name=DATASET_DISPLAY_NAME) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
5.2 Delete BigQuery datasets In order to delete BigQuery tables, make sure the service account linked to this notebook has a role with the `bigquery.tables.delete` permission, such as `BigQuery Data Owner`. The following command displays the current service account. IAM permissions can be adjusted [here](https://console.cloud.google.com/iam-admin/iam). | !gcloud config list account --format "value(core.account)" | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
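If the account is missing this permission, a project owner can grant the role; a minimal sketch (the service account address below is a placeholder you would replace with the output of the previous command): | !gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:YOUR_SERVICE_ACCOUNT_EMAIL" --role="roles/bigquery.dataOwner" | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |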
Clean up the BigQuery tables created by this notebook. | # Delete the prediction dataset.
dataset_id = str(output_uri[5:].replace(":", "."))
bq_client.delete_dataset(dataset_id, delete_contents=True, not_found_ok=True)
# Delete the training dataset.
dataset_id = "{0}.{1}".format(PROJECT_ID, INPUT_BQ_DATASET)
bq_client.delete_dataset(dataset_id, delete_contents=True, not_found_ok=True) | _____no_output_____ | Apache-2.0 | tables/automl/notebooks/music_recommendation/music_recommendation.ipynb | CodingFanSteve/python-docs-samples |
Data School's top 25 pandas tricks ([video](https://www.youtube.com/watch?v=RlIiVeig3hc))
- Watch the [complete pandas video series](https://www.dataschool.io/easier-data-analysis-with-pandas/)
- Connect on [Twitter](https://twitter.com/justmarkham), [Facebook](https://www.facebook.com/DataScienceSchool/), and [LinkedIn](https://www.linkedin.com/in/justmarkham/)
- Subscribe on [YouTube](https://www.youtube.com/dataschool?sub_confirmation=1)
- Join the [email newsletter](https://www.dataschool.io/subscribe/)

Table of contents
1. Show installed versions
2. Create an example DataFrame
3. Rename columns
4. Reverse row order
5. Reverse column order
6. Select columns by data type
7. Convert strings to numbers
8. Reduce DataFrame size
9. Build a DataFrame from multiple files (row-wise)
10. Build a DataFrame from multiple files (column-wise)
11. Create a DataFrame from the clipboard
12. Split a DataFrame into two random subsets
13. Filter a DataFrame by multiple categories
14. Filter a DataFrame by largest categories
15. Handle missing values
16. Split a string into multiple columns
17. Expand a Series of lists into a DataFrame
18. Aggregate by multiple functions
19. Combine the output of an aggregation with a DataFrame
20. Select a slice of rows and columns
21. Reshape a MultiIndexed Series
22. Create a pivot table
23. Convert continuous data into categorical data
24. Change display options
25. Style a DataFrame
26. Bonus trick: Profile a DataFrame

Load example datasets | import pandas as pd
import numpy as np
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
movies = pd.read_csv('http://bit.ly/imdbratings')
orders = pd.read_csv('http://bit.ly/chiporders', sep='\t')
orders['item_price'] = orders.item_price.str.replace('$', '').astype('float')
stocks = pd.read_csv('http://bit.ly/smallstocks', parse_dates=['Date'])
titanic = pd.read_csv('http://bit.ly/kaggletrain')
ufo = pd.read_csv('http://bit.ly/uforeports', parse_dates=['Time']) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
1. Show installed versions Sometimes you need to know the pandas version you're using, especially when reading the pandas documentation. You can show the pandas version by typing: | pd.__version__ | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
But if you also need to know the versions of pandas' dependencies, you can use the `show_versions()` function: | pd.show_versions() |
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.1.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.23.4
pytest: 4.0.2
pip: 18.1
setuptools: 40.6.3
Cython: 0.29.2
numpy: 1.15.4
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: 1.8.2
patsy: 0.5.1
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.8
feather: None
matplotlib: 3.0.2
openpyxl: 2.5.12
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.2
lxml: 4.2.5
bs4: 4.6.3
html5lib: 1.0.1
sqlalchemy: 1.2.15
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
| MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
You can see the versions of Python, pandas, NumPy, matplotlib, and more. 2. Create an example DataFrame Let's say that you want to demonstrate some pandas code. You need an example DataFrame to work with. There are many ways to do this, but my favorite way is to pass a dictionary to the DataFrame constructor, in which the dictionary keys are the column names and the dictionary values are lists of column values: | df = pd.DataFrame({'col one':[100, 200], 'col two':[300, 400]})
df | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Now if you need a much larger DataFrame, the above method will require way too much typing. In that case, you can use NumPy's `random.rand()` function, tell it the number of rows and columns, and pass that to the DataFrame constructor: | pd.DataFrame(np.random.rand(4, 8)) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
That's pretty good, but if you also want non-numeric column names, you can coerce a string of letters to a list and then pass that list to the columns parameter: | pd.DataFrame(np.random.rand(4, 8), columns=list('abcdefgh')) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
As you might guess, your string will need to have the same number of characters as there are columns. 3. Rename columns Let's take a look at the example DataFrame we created in the last trick: | df | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
I prefer to use dot notation to select pandas columns, but that won't work since the column names have spaces. Let's fix this. The most flexible method for renaming columns is the `rename()` method. You pass it a dictionary in which the keys are the old names and the values are the new names, and you also specify the axis: | df = df.rename({'col one':'col_one', 'col two':'col_two'}, axis='columns') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
The best thing about this method is that you can use it to rename any number of columns, whether it be just one column or all columns. Now if you're going to rename all of the columns at once, a simpler method is just to overwrite the columns attribute of the DataFrame: | df.columns = ['col_one', 'col_two'] | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Now if the only thing you're doing is replacing spaces with underscores, an even better method is to use the `str.replace()` method, since you don't have to type out all of the column names: | df.columns = df.columns.str.replace(' ', '_') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
All three of these methods have the same result, which is to rename the columns so that they don't have any spaces: | df | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Finally, if you just need to add a prefix or suffix to all of your column names, you can use the `add_prefix()` method... | df.add_prefix('X_') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
...or the `add_suffix()` method: | df.add_suffix('_Y') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
4. Reverse row order Let's take a look at the drinks DataFrame: | drinks.head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
This is a dataset of average alcohol consumption by country. What if you wanted to reverse the order of the rows? The most straightforward method is to use the `loc` accessor and pass it `::-1`, which is the same slicing notation used to reverse a Python list: | drinks.loc[::-1].head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
What if you also wanted to reset the index so that it starts at zero? You would use the `reset_index()` method and tell it to drop the old index entirely: | drinks.loc[::-1].reset_index(drop=True).head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
As you can see, the rows are in reverse order but the index has been reset to the default integer index. 5. Reverse column order Similar to the previous trick, you can also use `loc` to reverse the left-to-right order of your columns: | drinks.loc[:, ::-1].head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
The colon before the comma means "select all rows", and the `::-1` after the comma means "reverse the columns", which is why "country" is now on the right side. 6. Select columns by data type Here are the data types of the drinks DataFrame: | drinks.dtypes | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Let's say you need to select only the numeric columns. You can use the `select_dtypes()` method: | drinks.select_dtypes(include='number').head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
This includes both int and float columns. You could also use this method to select just the object columns: | drinks.select_dtypes(include='object').head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
You can tell it to include multiple data types by passing a list: | drinks.select_dtypes(include=['number', 'object', 'category', 'datetime']).head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
You can also tell it to exclude certain data types: | drinks.select_dtypes(exclude='number').head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
7. Convert strings to numbers Let's create another example DataFrame: | df = pd.DataFrame({'col_one':['1.1', '2.2', '3.3'],
'col_two':['4.4', '5.5', '6.6'],
'col_three':['7.7', '8.8', '-']})
df | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
These numbers are actually stored as strings, which results in object columns: | df.dtypes | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
In order to do mathematical operations on these columns, we need to convert the data types to numeric. You can use the `astype()` method on the first two columns: | df.astype({'col_one':'float', 'col_two':'float'}).dtypes | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
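To see why the third column needs different handling, you can try the same conversion on it — a minimal sketch, wrapped in try/except so the cell keeps running (the exact error message may vary by pandas version): | # The dash in col_three can't be parsed as a float, so astype() raises a ValueError.
try:
    df.astype({'col_three':'float'})
except ValueError as e:
    print('Conversion failed:', e) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |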
However, this would have resulted in an error if you tried to use it on the third column, because that column contains a dash to represent zero and pandas doesn't understand how to handle it. Instead, you can use the `to_numeric()` function on the third column and tell it to convert any invalid input into `NaN` values: | pd.to_numeric(df.col_three, errors='coerce') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
If you know that the `NaN` values actually represent zeros, you can fill them with zeros using the `fillna()` method: | pd.to_numeric(df.col_three, errors='coerce').fillna(0) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Finally, you can apply this function to the entire DataFrame all at once by using the `apply()` method: | df = df.apply(pd.to_numeric, errors='coerce').fillna(0)
df | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
This one line of code accomplishes our goal, because all of the data types have now been converted to float: | df.dtypes | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
8. Reduce DataFrame size pandas DataFrames are designed to fit into memory, and so sometimes you need to reduce the DataFrame size in order to work with it on your system. Here's the size of the drinks DataFrame: | drinks.info(memory_usage='deep') | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 6 columns):
country 193 non-null object
beer_servings 193 non-null int64
spirit_servings 193 non-null int64
wine_servings 193 non-null int64
total_litres_of_pure_alcohol 193 non-null float64
continent 193 non-null object
dtypes: float64(1), int64(3), object(2)
memory usage: 30.4 KB
| MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
You can see that it currently uses 30.4 KB. If you're having performance problems with your DataFrame, or you can't even read it into memory, there are two easy steps you can take during the file reading process to reduce the DataFrame size. The first step is to only read in the columns that you actually need, which we specify with the "usecols" parameter: | cols = ['beer_servings', 'continent']
small_drinks = pd.read_csv('http://bit.ly/drinksbycountry', usecols=cols)
small_drinks.info(memory_usage='deep') | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 2 columns):
beer_servings 193 non-null int64
continent 193 non-null object
dtypes: int64(1), object(1)
memory usage: 13.6 KB
| MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
By only reading in these two columns, we've reduced the DataFrame size to 13.6 KB. The second step is to convert any object columns containing categorical data to the category data type, which we specify with the "dtype" parameter: | dtypes = {'continent':'category'}
smaller_drinks = pd.read_csv('http://bit.ly/drinksbycountry', usecols=cols, dtype=dtypes)
smaller_drinks.info(memory_usage='deep') | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 2 columns):
beer_servings 193 non-null int64
continent 193 non-null category
dtypes: category(1), int64(1)
memory usage: 2.3 KB
| MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
By reading in the continent column as the category data type, we've further reduced the DataFrame size to 2.3 KB. Keep in mind that the category data type will only reduce memory usage if you have a small number of categories relative to the number of rows. 9. Build a DataFrame from multiple files (row-wise) Let's say that your dataset is spread across multiple files, but you want to read the dataset into a single DataFrame. For example, I have a small dataset of stock data in which each CSV file only includes a single day. Here's the first day: | pd.read_csv('data/stocks1.csv') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Here's the second day: | pd.read_csv('data/stocks2.csv') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
And here's the third day: | pd.read_csv('data/stocks3.csv') | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
You could read each CSV file into its own DataFrame, combine them together, and then delete the original DataFrames, but that would be memory inefficient and require a lot of code. A better solution is to use the built-in glob module: | from glob import glob | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
You can pass a pattern to `glob()`, including wildcard characters, and it will return a list of all files that match that pattern. In this case, glob is looking in the "data" subdirectory for all CSV files that start with the word "stocks": | stock_files = sorted(glob('data/stocks*.csv'))
stock_files | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
glob returns filenames in an arbitrary order, which is why we sorted the list using Python's built-in `sorted()` function. We can then use a generator expression to read each of the files using `read_csv()` and pass the results to the `concat()` function, which will concatenate the rows into a single DataFrame: | pd.concat((pd.read_csv(file) for file in stock_files)) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Unfortunately, there are now duplicate values in the index. To avoid that, we can tell the `concat()` function to ignore the index and instead use the default integer index: | pd.concat((pd.read_csv(file) for file in stock_files), ignore_index=True) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
10. Build a DataFrame from multiple files (column-wise) The previous trick is useful when each file contains rows from your dataset. But what if each file instead contains columns from your dataset? Here's an example in which the drinks dataset has been split into two CSV files, and each file contains three columns: | pd.read_csv('data/drinks1.csv').head()
pd.read_csv('data/drinks2.csv').head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
Similar to the previous trick, we'll start by using `glob()`: | drink_files = sorted(glob('data/drinks*.csv')) | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |
And this time, we'll tell the `concat()` function to concatenate along the columns axis: | pd.concat((pd.read_csv(file) for file in drink_files), axis='columns').head() | _____no_output_____ | MIT | Pandas/.ipynb_checkpoints/top_25_pandas_tricks-checkpoint.ipynb | piszewc/python-deep-learning-data-science |