markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
As before, a list of tuples with custom names can be passed: | ftuples = [('Durchschnitt', 'mean'), ('Abweichung', np.var)] #@P: mean and var are given the custom names Durchschnitt and Abweichung
grouped[['tip_pct', 'total_bill']].agg(ftuples) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
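A quick way to see what those custom names do is to inspect the columns of the result. This is only a sketch (not part of the original notebook) and assumes the `grouped` object and `ftuples` list from the cell above; the first element of each tuple becomes the inner column label of the aggregated output.

```python
# Sketch: the aggregated frame gets hierarchical columns whose inner level
# carries the custom names 'Durchschnitt' and 'Abweichung' instead of 'mean' and 'var'.
result = grouped[['tip_pct', 'total_bill']].agg(ftuples)
result.columns
```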
Now, suppose you wanted to *apply potentially different functions* to one or more of the columns. To do this, `pass a dict to agg that contains a mapping of column names` to any of the function specifications listed so far: | grouped.agg({'tip' : np.max, 'size' : 'sum'})
grouped.agg({'tip_pct' : ['min', 'max', 'mean', 'std'],
'size' : 'sum'}) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.2.2. Returning Aggregated Data Without Row Indexes In all of the examples up until now, `the aggregated data comes back with an index, potentially hierarchical, composed from the unique group key combinations`. Since this isn’t always desirable, *you can disable this behavior in most cases by passing* `as_index=False` to groupby: | tips.groupby(['day', 'smoker'], as_index=False).mean() | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
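An equivalent way to get the group keys back as ordinary columns, shown here as a small sketch that assumes the `tips` DataFrame from the earlier cells, is to aggregate with the default hierarchical index and then call `reset_index()`:

```python
# Sketch: same layout as passing as_index=False, just with an extra step.
tips.groupby(['day', 'smoker']).mean().reset_index()
```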
10.3. Apply: General split-apply-combine 10.3.0 Returning to the tipping dataset from before, *suppose you wanted to select the top five tip_pct values by group*.
* First, write a function that selects the rows with the largest values in a particular column: | def top(df, n=5, column='tip_pct'):
return df.sort_values(by=column)[-n:]
top(tips, n=6) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Now, if we group by smoker, say, and call apply with this function, we get the following: | tips.groupby('smoker').apply(top) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
What has happened here?
* The top function is called on each row group from the DataFrame, and then the results are glued together using `pandas.concat`, labeling the pieces with the group names.
The result therefore has *a hierarchical index* whose inner level contains index values from the original DataFrame. If you pass a function to `apply` that takes other arguments or keywords, you can pass these after the function: | tips.groupby(['smoker', 'day']).apply(top, n=1, column='total_bill')
#@P: extra arguments passed through apply to the function: n=1 and column='total_bill' instead of the defaults n=5 and column='tip_pct' | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
**NOTE**
Beyond these basic usage mechanics, getting the most out of apply may require some creativity. What occurs inside the function passed is up to you; `it only needs to return a pandas object or a scalar value`. The rest of this chapter will mainly consist of examples showing you how to solve various problems using groupby. You may recall that I earlier called `describe` on a GroupBy object: | result = tips.groupby('smoker')['tip_pct'].describe()
result
result.unstack('smoker') | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Inside GroupBy, when you invoke a method like `describe`, *it is actually just a shortcut for*: | f = lambda x: x.describe()
grouped.apply(f) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.3.1. Suppressing the Group Keys In the preceding examples, you see that the *resulting object has a hierarchical index* formed from the group keys along with the indexes of each piece of the original object. You can *disable this by passing* `group_keys=False` to *groupby*: | tips.groupby('smoker', group_keys=False).apply(top) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.3.2. Quantile and Bucket Analysis As you may `recall from Chapter 8`, pandas has some tools, in particular `cut and qcut`, *for slicing data up into buckets with bins of your choosing or by sample quantiles*. `Combining these functions with groupby makes it convenient to perform bucket or quantile analysis on a dataset`.
Consider a simple random dataset and an equal-length bucket categorization using cut: | #@P checking syntax docstring
np.random.randn??
#@P checking syntax docstring
pd.cut?
frame = pd.DataFrame({'data1': np.random.randn(1000),
'data2': np.random.randn(1000)})
quartiles = pd.cut(frame.data1, 4) #@P: cut data1 into 4 equal-width bins (the number of points per bin is not equal)
quartiles[:10] | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
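To see why these equal-length buckets do not contain equal numbers of observations (as the comment above notes), we can count the members of each bin. A minimal sketch assuming the `quartiles` Categorical from the cell above:

```python
# Sketch: equal-width bins over normally distributed data put most
# observations in the two middle buckets.
quartiles.value_counts()
```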
The `Categorical` object returned by `cut` can be passed directly to groupby. So we could compute a set of statistics for the data2 column like so: | #@P: get_stats returns the min, max, count, and mean of each group
def get_stats(group):
return {'min': group.min(), 'max': group.max(),
'count': group.count(), 'mean': group.mean()}
grouped = frame.data2.groupby(quartiles)
grouped.apply(get_stats).unstack() | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
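The same summary could also be produced without a custom function by passing a list of aggregation names to `agg`; this is just a sketch, assuming the `grouped` object from the cell above:

```python
# Sketch: agg with built-in aggregation names gives one column per statistic,
# so no unstack step is needed.
grouped.agg(['min', 'max', 'count', 'mean'])
```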
These were equal-length buckets; to `compute equal-size buckets based on sample quantiles`, use `qcut`. I’ll pass labels=False to just get quantile numbers: | pd.qcut??
# Return quantile numbers
grouping = pd.qcut(frame.data1, 10, labels=False) # cut data1 into 10 equal-size (quantile) groups
grouped = frame.data2.groupby(grouping)
grouped.apply(get_stats).unstack() | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.3.3. Example: Filling Missing Values with Group-Specific Values When cleaning up missing data, in some cases you will remove data observations using dropna, but in others you may want to impute (fill in) the null (NA) values using a fixed value or some value derived from the data. `fillna is the right tool to use`;
for example, here I fill in NA values with the mean: | s = pd.Series(np.random.randn(6))
s[::2] = np.nan
s
s.fillna(s.mean()) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
`Suppose you need the fill value to vary by group`. One way to do this is to group the data and use apply with a function that calls fillna on each data chunk. Here is some sample data on US states divided into eastern and western regions: | states = ['Ohio', 'New York', 'Vermont', 'Florida',
'Oregon', 'Nevada', 'California', 'Idaho']
group_key = ['East'] * 4 + ['West'] * 4
data = pd.Series(np.random.randn(8), index=states)
data | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Note that the syntax ['East'] * 4 produces a list containing four copies of the elements in ['East']. Adding lists together concatenates them.
Let’s set some values in the data to be missing: | data[['Vermont', 'Nevada', 'Idaho']] = np.nan
data
data.groupby(group_key).mean() | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
We can fill the NA values using the group means like so: | fill_mean = lambda g: g.fillna(g.mean())
data.groupby(group_key).apply(fill_mean) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
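The same group-wise fill can also be written with `transform`, which returns a Series aligned to the original index; a sketch assuming `data` and `group_key` from the cells above:

```python
# Sketch: groupby(...).transform('mean') broadcasts each group's mean back to
# the original index, so fillna can use it directly via index alignment.
data.fillna(data.groupby(group_key).transform('mean'))
```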
In another case, you might have predefined fill values in your code that vary by group. Since the groups have a `name` attribute set internally, we can use that: | fill_values = {'East': 0.5, 'West': -1}
fill_func = lambda g: g.fillna(fill_values[g.name])
data.groupby(group_key).apply(fill_func) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.3.4. Example: Random Sampling and Permutation `Suppose you wanted to draw a random sample (with or without replacement) from a large dataset` for Monte Carlo simulation purposes or some other application. *There are a number of ways to perform the “draws”*; *here we use the sample method for Series*. To demonstrate, here’s a way to construct a deck of English-style playing cards: | # Hearts, Spades, Clubs, Diamonds
suits = ['H', 'S', 'C', 'D']
card_val = (list(range(1, 11)) + [10] * 3) * 4 #@P: the [10] * 3 part assigns a value of 10 to J, K, Q, as shown in the output below
base_names = ['A'] + list(range(2, 11)) + ['J', 'K', 'Q']
cards = []
for suit in ['H', 'S', 'C', 'D']:
cards.extend(str(num) + suit for num in base_names)
deck = pd.Series(card_val, index=cards)
deck | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
So now we have a Series of length 52 whose index contains card names and values are the ones used in Blackjack and other games (to keep things simple, I just let the ace 'A' be 1): | deck[:20] | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Now, based on what I said before, drawing a hand of five cards from the deck could be written as: | def draw(deck, n=5):
return deck.sample(n)
draw(deck) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Suppose you wanted two random cards from each suit (@P: suits = ['H', 'S', 'C', 'D'] -- Hearts, Spades, Clubs, Diamonds). Because the suit is the last character of each card name, we can group based on this and use apply: | get_suit = lambda card: card[-1] # last letter is suit
deck.groupby(get_suit).apply(draw, n=2) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Alternatively, we could write: | deck.groupby(get_suit, group_keys=False).apply(draw, n=2) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.3.5. Example: Group Weighted Average and Correlation Under the split-apply-combine paradigm of `groupby`, operations between columns in a DataFrame or two Series, such as a group weighted average, are possible. As an example, take this dataset containing group keys, values, and some weights: | df = pd.DataFrame({'category': ['a', 'a', 'a', 'a',
'b', 'b', 'b', 'b'],
'data': np.random.randn(8),
'weights': np.random.rand(8)})
df | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
The group weighted average by `category` would then be: | grouped = df.groupby('category')
get_wavg = lambda g: np.average(g['data'], weights=g['weights'])
grouped.apply(get_wavg) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
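As a quick sanity check on the weighted average, we can compute it by hand for a single group; this sketch assumes the `df` defined above:

```python
# Sketch: the weighted average for category 'a' computed directly should match
# the corresponding entry of grouped.apply(get_wavg).
g = df[df['category'] == 'a']
(g['data'] * g['weights']).sum() / g['weights'].sum()
```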
As another example, consider a financial dataset originally obtained from Yahoo! Finance containing end-of-day prices for a few stocks and the S&P 500 index (the SPX symbol): | close_px = pd.read_csv('examples/stock_px_2.csv', parse_dates=True,
index_col=0)
close_px.info()
close_px[-4:] | <class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 2214 entries, 2003-01-02 to 2011-10-14
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 AAPL 2214 non-null float64
1 MSFT 2214 non-null float64
2 XOM 2214 non-null float64
3 SPX 2214 non-null float64
dtypes: float64(4)
memory usage: 86.5 KB
| MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
One task of interest might be to compute a DataFrame consisting of the yearly correlations of daily returns (computed from percent changes) with SPX. As one way to do this, we first create a function that computes the pairwise correlation of each column with the 'SPX' column: | spx_corr = lambda x: x.corrwith(x['SPX']) #@P 20210903: SPX: S&P500 Index | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Next, we compute percent change on `close_px` using `pct_change`: | rets = close_px.pct_change().dropna() | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Lastly, we group these percent changes by year, which can be extracted from each row label with a one-line function that returns the `year` attribute of each `datetime` label: | get_year = lambda x: x.year
by_year = rets.groupby(get_year)
by_year.apply(spx_corr) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
You could also compute inter-column correlations. Here we compute the annual correlation between Apple and Microsoft: | by_year.apply(lambda g: g['AAPL'].corr(g['MSFT'])) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.3.6. Example: Group-Wise Linear Regression In the same theme as the previous example, you can use `groupby` to perform more complex group-wise statistical analysis, as long as the function returns a pandas object or scalar value.
For example, I can define the following `regress` function (using the `statsmodels` econometrics library), which executes an `ordinary least squares (OLS) regression` on each chunk of data: | import statsmodels.api as sm
def regress(data, yvar, xvars):
Y = data[yvar]
X = data[xvars]
X['intercept'] = 1.
result = sm.OLS(Y, X).fit()
return result.params | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
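Note that `X['intercept'] = 1.` writes into the sliced DataFrame; statsmodels also offers `sm.add_constant` for adding the intercept column. The version below is only a sketch of an alternative (the name `regress_alt` is ours, not the book's), assuming the same `statsmodels` import:

```python
import statsmodels.api as sm

def regress_alt(data, yvar, xvars):
    # Sketch: add_constant returns a new DataFrame with a 'const' column,
    # so nothing is written into a slice of the original data.
    Y = data[yvar]
    X = sm.add_constant(data[xvars])
    return sm.OLS(Y, X).fit().params
```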
Now, to run a yearly linear regression of *AAPL* on *SPX* returns, execute: | by_year.apply(regress, 'AAPL', ['SPX']) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.4. Pivot Tables and Cross-Tabulation A pivot table is a data summarization tool frequently found in spreadsheet programs and other data analysis software. It aggregates a table of data by one or more keys, arranging the data in a rectangle with some of the group keys along the rows and some along the columns. `Pivot tables in Python with pandas are made possible through the groupby facility described in this chapter combined with reshape operations utilizing hierarchical indexing`. DataFrame has a pivot_table method, and there is also a top-level pandas.pivot_table function. In addition to providing a convenience interface to groupby, pivot_table can add partial totals, also known as margins. Returning to the *tipping dataset*, suppose you wanted to *compute a table of group means* (the default pivot_table aggregation type) arranged *by day and smoker* on the rows: | tips.pivot_table(index=['day', 'smoker']) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
`This could have been produced with groupby directly`. Now, suppose we want to aggregate only *tip_pct* and *size*, and additionally group by time. I’ll put *smoker* in the table columns and *day* in the rows: | tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'],
columns='smoker') | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
We could augment this table to include partial totals by passing *margins=True*. This has the effect of *adding All row and column labels*, with corresponding values being the group statistics for all the data within a single tier:
@P 20210903: it is like a subtotal for each group in an Excel pivot table (P guesses the term margin indicates that the subtotal is placed at the margin of the calculated table) | tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'],
columns='smoker', margins=True) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Here, the All values are means without taking into account smoker versus non-smoker (the All columns) or any of the two levels of grouping on the rows (the All row). *To use a different aggregation function*, pass it to `aggfunc`. For example, *'count'* or *len* will give you a cross-tabulation (count or frequency) of group sizes: | tips.pivot_table('tip_pct', index=['time', 'smoker'], columns='day',
aggfunc=len, margins=True) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
If some combinations are empty (or otherwise NA), you may wish to pass a `fill_value`: | tips.pivot_table('tip_pct', index=['time', 'size', 'smoker'],
columns='day', aggfunc='mean', fill_value=0) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
10.4.1 Cross-Tabulations: Crosstab A `cross-tabulation` (or `crosstab` for short) is *a special case* of a pivot table that computes group frequencies. Here is an example: | from io import StringIO
data = """\
Sample Nationality Handedness
1 USA Right-handed
2 Japan Left-handed
3 USA Right-handed
4 Japan Right-handed
5 Japan Left-handed
6 Japan Right-handed
7 USA Right-handed
8 USA Left-handed
9 Japan Right-handed
10 USA Right-handed"""
data = pd.read_table(StringIO(data), sep='\s+')
data | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
As part of some survey analysis, we might want to summarize this data by nationality and handedness. *You could use pivot_table to do this*, but the `pandas.crosstab function` can be more convenient: | pd.crosstab(data.Nationality, data.Handedness, margins=True) | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
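For comparison, here is a sketch of the same frequency table built with `pivot_table`, assuming the `data` DataFrame just created; `crosstab` is simply more convenient because no `values`/`aggfunc` bookkeeping is needed (and it fills empty combinations with 0 automatically):

```python
# Sketch: counting the 'Sample' column reproduces the crosstab frequencies.
data.pivot_table('Sample', index='Nationality', columns='Handedness',
                 aggfunc='count', margins=True)
```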
*The first two arguments to crosstab* can each be an array, a Series, or a list of arrays. As in the tips data: | pd.crosstab([tips.time, tips.day], tips.smoker, margins=True)
pd.options.display.max_rows = PREVIOUS_MAX_ROWS | _____no_output_____ | MIT | 10_Data Aggregation and Group Operations.ipynb | quangphu1912/Py-data-analysis-McKinney |
Region-based CNNs (R-CNN) series :label:`sec_rcnn` Besides the single shot multibox detection described in :numref:`sec_ssd`, region-based CNNs (regions with CNN features, R-CNN) :cite:`Girshick.Donahue.Darrell.ea.2014` are also among the pioneering works that applied deep models to object detection. In this section we introduce R-CNN and its series of improvements: Fast R-CNN :cite:`Girshick.2015`, Faster R-CNN :cite:`Ren.He.Girshick.ea.2015`, and Mask R-CNN :cite:`He.Gkioxari.Dollar.ea.2017`. Due to limited space, we focus only on the design ideas of these models. R-CNN *R-CNN* first selects a number of *region proposals* (e.g., 2000) from the input image (anchor boxes are one possible selection method) and labels their classes and bounding boxes (e.g., offsets). :cite:`Girshick.Donahue.Darrell.ea.2014` Then a convolutional neural network runs a forward pass on each region proposal to extract its features. Next, the features of each proposal are used to predict its class and bounding box. :label:`fig_r-cnn` :numref:`fig_r-cnn` shows the R-CNN model. Concretely, R-CNN consists of the following four steps: 1. Run *selective search* on the input image to select multiple high-quality region proposals :cite:`Uijlings.Van-De-Sande.Gevers.ea.2013`. These proposals are usually selected at multiple scales and have different shapes and sizes. Each proposal is labeled with a class and a ground-truth bounding box. 1. Choose a pretrained convolutional neural network and truncate it before the output layer. Warp each region proposal into the input size required by the network, and obtain the features extracted for the proposal via a forward pass. 1. Take the features of each proposal together with its labeled class as one example. Train multiple support vector machines for object classification, where each SVM determines whether an example belongs to a particular class. 1. Take the features of each proposal together with its labeled bounding box as one example, and train a linear regression model to predict the ground-truth bounding box. Although the R-CNN model extracts image features effectively with a pretrained convolutional neural network, it is slow. Imagine selecting thousands of region proposals from a single image: this requires thousands of CNN forward passes to perform object detection. Such a massive amount of computation makes it hard to use R-CNN widely in real-world applications. Fast R-CNN The main performance bottleneck of R-CNN is that the CNN forward pass is run independently for each region proposal, without sharing computation. Since these regions usually overlap, independent feature extraction leads to repeated computation. One of the main improvements of *Fast R-CNN* :cite:`Girshick.2015` over R-CNN is that the CNN forward pass is performed only on the entire image. :label:`fig_fast_r-cnn` :numref:`fig_fast_r-cnn` describes the Fast R-CNN model. Its main computation is as follows: 1. Compared with R-CNN, in Fast R-CNN the input to the CNN used for feature extraction is the entire image rather than individual proposals; moreover, this network usually participates in training. Given an input image, denote the shape of the CNN output as $1 \times c \times h_1 \times w_1$. 1. Suppose selective search generates $n$ region proposals. These proposals of various shapes mark regions of interest of various shapes on the CNN output. These regions of interest then need to have features of identical shape extracted (say with height $h_2$ and width $w_2$) so that they can be concatenated. To achieve this, Fast R-CNN introduces the *region of interest (RoI) pooling* layer: it takes the CNN output and the region proposals as input and outputs the concatenated features extracted from each proposal, with shape $n \times c \times h_2 \times w_2$. 1. A fully connected layer transforms the output into shape $n \times d$, where the hyperparameter $d$ depends on the model design. 1. Predict the class and bounding box for each of the $n$ proposals. More concretely, when predicting classes and bounding boxes, the fully connected layer output is transformed into an output of shape $n \times q$ ($q$ is the number of classes) and an output of shape $n \times 4$, respectively, with softmax regression used for class prediction. The region of interest pooling layer proposed in Fast R-CNN differs from the pooling layer introduced in :numref:`sec_pooling`. In a pooling layer, we indirectly control the output shape by setting the pooling window, padding, and stride. The RoI pooling layer, in contrast, lets us specify the output shape of each region directly. For example, suppose the output height and width of each region are specified as $h_2$ and $w_2$. For any RoI window of shape $h \times w$, the window is divided into an $h_2 \times w_2$ grid of subwindows, where each subwindow has size approximately $(h/h_2) \times (w/w_2)$. In practice, the height and width of any subwindow are rounded up, and the largest element in the subwindow is used as its output. Therefore, the RoI pooling layer can extract features of the same shape from regions of interest of different shapes. As an illustrative example, :numref:`fig_roi` shows that in a $4 \times 4$ input we select the upper-left $3\times 3$ region of interest. For this region, a $2\times 2$ RoI pooling layer yields a $2\times 2$ output. Note that the four subwindows contain the elements 0, 1, 4, 5 (5 is the largest); 2, 6 (6 is the largest); 8, 9 (9 is the largest); and 10, respectively. :label:`fig_roi` Below, we demonstrate how the RoI pooling layer computes its output. Suppose the feature map `X` extracted by the CNN has height and width 4 and only a single channel. | import torch
import torchvision
X = torch.arange(16.).reshape(1, 1, 4, 4)
X | _____no_output_____ | MIT | d2l/chapter_computer-vision/rcnn.ipynb | atlasbioinfo/myDLNotes_Pytorch |
Let us further suppose that the height and width of the input image are both 40 pixels, and that selective search generates two region proposals on this image. Each region is represented by 5 elements: the region's object class and the $(x, y)$ coordinates of its upper-left and lower-right corners. | rois = torch.Tensor([[0, 0, 0, 20, 20], [0, 0, 10, 30, 30]]) | _____no_output_____ | MIT | d2l/chapter_computer-vision/rcnn.ipynb | atlasbioinfo/myDLNotes_Pytorch |
Because the height and width of `X` are $1/10$ of the height and width of the input image, the coordinates of the two region proposals are first multiplied by 0.1 via `spatial_scale`. The two regions of interest are then marked on `X` as `X[:, :, 0:3, 0:3]` and `X[:, :, 1:4, 0:4]`. Finally, in the $2\times 2$ RoI pooling layer, each region of interest is divided into a grid of subwindows, from which features of the same $2\times 2$ shape are extracted. | torchvision.ops.roi_pool(X, rois, output_size=(2, 2), spatial_scale=0.1) | _____no_output_____ | MIT | d2l/chapter_computer-vision/rcnn.ipynb | atlasbioinfo/myDLNotes_Pytorch |
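As a sanity check, we can reproduce the pooled values for the first proposal by hand. This is only a sketch added here: `adaptive_max_pool2d` is not the same operator as `roi_pool` in general and may partition the window differently, but on this particular 3×3 window the resulting maxima should coincide, giving the same `[[5., 6.], [9., 10.]]` block as the first entry of the `roi_pool` output.

```python
# Sketch: the first proposal [0, 0, 0, 20, 20], scaled by 0.1, covers X[:, :, 0:3, 0:3];
# max-pooling it down to 2x2 reproduces the pooled values for that RoI.
roi = X[:, :, 0:3, 0:3]
torch.nn.functional.adaptive_max_pool2d(roi, (2, 2))
```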
Clean ZIMAS / zoning file* Dissolve zoning file so they are multipolygons* Use parser in `laplan.zoning` to parse ZONE_CMPLT* Manually list the failed to parse observations and fix* Use this to build crosswalk of height, density, etc restrictions | import boto3
import geopandas as gpd
import intake
import numpy as np
import os
import pandas as pd
import laplan
import utils
catalog = intake.open_catalog("../catalogs/*.yml")
s3 = boto3.client('s3')
bucket_name = 'city-planning-entitlements'
# Default value of display.max_rows is 10 i.e. at max 10 rows will be printed.
# Set it None to display all rows in the dataframe
pd.set_option('display.max_rows', 25)
# Dissolve zoning to get multipolygons
# File is large, but we only care about unique ZONE_CMPLT, which need to be parsed
zones = catalog.zoning.read()
zones = zones[['ZONE_CMPLT', 'ZONE_SMRY', 'geometry']].assign(
zone2 = zones.ZONE_CMPLT
)
df = zones.dissolve(by='zone2').reset_index(drop=True)
df.head()
print(f'# obs in zoning: {len(zones)}')
print(f'# unique types of zoning: {len(df)}') | # obs in zoning: 60588
# unique types of zoning: 1934
| Apache-2.0 | notebooks/A3-parse-zoning.ipynb | CityOfLosAngeles/planning-entitlements |
Parse zoning string | parsed_col_names = ['Q', 'T', 'zone_class', 'specific_plan', 'height_district', 'D', 'overlay']
def parse_zoning(row):
try:
z = laplan.zoning.ZoningInfo(row.ZONE_CMPLT)
return pd.Series([z.Q, z.T, z.zone_class, z.specific_plan, z.height_district, z.D, z.overlay],
index = parsed_col_names)
except ValueError:
return pd.Series(['failed', 'failed', 'failed', 'failed', 'failed', 'failed', ''],
index = parsed_col_names)
parsed = df.apply(parse_zoning, axis = 1)
df = pd.concat([df, parsed], axis = 1)
df.head() | _____no_output_____ | Apache-2.0 | notebooks/A3-parse-zoning.ipynb | CityOfLosAngeles/planning-entitlements |
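Before moving on, it can help to count how many zoning strings the parser could not handle; this small sketch relies on the 'failed' sentinel values written by `parse_zoning` above:

```python
# Sketch: rows where the parser raised ValueError carry 'failed' in zone_class.
n_failed = (df['zone_class'] == 'failed').sum()
print(f'# zoning strings that failed to parse: {n_failed}')
```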
Fix parse fails | fails_crosswalk = pd.read_parquet(f's3://{bucket_name}/data/crosswalk_zone_parse_fails.parquet')
print(f'# obs in fails_crosswalk: {len(fails_crosswalk)}')
# Grab all obs in our df that shows up in the fails_crosswalk, even if it was parsed correctly
# There were some other ones that were added because they weren't valid zone classes
fails = df[df.ZONE_CMPLT.isin(fails_crosswalk.ZONE_CMPLT)]
print(f'# obs in fails: {len(fails)}')
# Convert the overlay column from string to list
fails_crosswalk.overlay = fails_crosswalk.overlay.str[1:-1].str.split(',').tolist()
# Fill in Nones with empty list
fails_crosswalk['overlay'] = fails_crosswalk['overlay'].apply(lambda row: row if isinstance(row, list) else [])
df1 = df[~ df.ZONE_CMPLT.isin(fails_crosswalk.ZONE_CMPLT)]
# Append the successfully parsed obs with the failed ones
df2 = df1.append(fails_crosswalk)
# Make sure cols are the same type again
for col in ['zone_class', 'specific_plan', 'height_district']:
df2[col] = df2[col].astype(str)
for col in ['Q', 'T', 'D']:
df2[col] = df2[col].astype(int)
print(f'# obs in df: {len(df)}')
print(f'# obs in df2: {len(df2)}') | # obs in df: 1934
# obs in df2: 1934
| Apache-2.0 | notebooks/A3-parse-zoning.ipynb | CityOfLosAngeles/planning-entitlements |
Need to do something about overlays and specific plans...* leave as list? -> then split (ZONE_CMPLT, geometry) from the rest, so we can save geojson and tabular separately* GeoJSON can't take lists. Convert to strings...later make it a list again? | # Fill in Nones, otherwise cannot do the apply to make the list a string
df2.overlay = df2.overlay.fillna('')
just_overlay = df2[df2.overlay != ''][['ZONE_CMPLT', 'overlay']]
just_overlay['no_brackets'] = just_overlay['overlay'].apply(', '.join)
split = just_overlay.no_brackets.str.split(',', expand = True).fillna('')
split.rename(columns = {0: 'o1', 1: 'o2', 2: 'o3'}, inplace = True)
just_overlay = pd.concat([just_overlay, split], axis = 1)
supplemental_use = pd.read_parquet(f's3://{bucket_name}/data/crosswalk_supplemental_use_overlay.parquet')
specific_plan = pd.read_parquet(f's3://{bucket_name}/data/crosswalk_specific_plan.parquet')
supplemental_use_dict = supplemental_use.set_index('supplemental_use').to_dict()['supplemental_use_description']
specific_plan_dict = specific_plan.set_index('specific_plan').to_dict()['specific_plan_description']
# Trouble mapping it across all columns
for col in ['o1', 'o2', 'o3']:
just_overlay[col] = just_overlay[col].str.strip()
new_col = f'{col}_descrip'
just_overlay[new_col] = just_overlay[col].map(supplemental_use_dict)
just_overlay[new_col] = just_overlay[new_col].fillna('')
# Put df back together
df3 = pd.merge(df2, just_overlay, on = 'ZONE_CMPLT', how = 'left', validate = '1:1')
df3.head()
# Invalid overlays
# What is SP? Specific Plan?
# Also, can't find H | _____no_output_____ | Apache-2.0 | notebooks/A3-parse-zoning.ipynb | CityOfLosAngeles/planning-entitlements |
Merge and export | col_order = ['ZONE_CMPLT', 'ZONE_SMRY',
'Q', 'T', 'zone_class', 'height_district', 'D',
'specific_plan', 'no_brackets', 'geometry']
# Geometry is messed up, so let's get it back from original dissolve
final = (pd.merge(df[['ZONE_CMPLT', 'geometry']], df3.drop(columns = "geometry"),
on = "ZONE_CMPLT", how = "left", validate = "1:1")
[col_order]
.rename(columns = {'no_brackets': 'overlay'})
.sort_values(['ZONE_CMPLT', 'ZONE_SMRY'])
.reset_index(drop=True)
)
final.head()
# Fix CRS. It's EPSG:2229, not EPSG:4326
final.crs = "EPSG:2229"
file_name = 'gis/raw/parsed_zoning'
utils.make_zipped_shapefile(final, f'../{file_name}')
s3.upload_file(f'../{file_name}.zip', bucket_name, f'{file_name}.zip') | Path name: ../gis/raw/parsed_zoning
Dirname (1st element of path): ../gis/raw/parsed_zoning
Shapefile name: parsed_zoning.shp
Shapefile component parts folder: ../gis/raw/parsed_zoning/parsed_zoning.shp
| Apache-2.0 | notebooks/A3-parse-zoning.ipynb | CityOfLosAngeles/planning-entitlements |
Flax BasicsThis notebook will walk you through the following workflow:* Instantiating a model from Flax built-in layers or third-party models.* Initializing parameters of the model and manually written training.* Using optimizers provided by Flax to ease training.* Serialization of parameters and other objects.* Creating your own models and managing state. Setting up our environmentHere we provide the code needed to set up the environment for our notebook. | # Install the latest JAXlib version.
!pip install --upgrade -q pip jax jaxlib
# Install Flax at head:
!pip install --upgrade -q git+https://github.com/google/flax.git
import jax
from typing import Any, Callable, Sequence, Optional
from jax import lax, random, numpy as jnp
import flax
from flax.core import freeze, unfreeze
from flax import linen as nn | _____no_output_____ | Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
Linear regression with FlaxIn the previous *JAX for the impatient* notebook, we finished up with a linear regression example. As we know, linear regression can also be written as a single dense neural network layer, which we will show in the following so that we can compare how it's done.A dense layer is a layer that has a kernel parameter $W\in\mathcal{M}_{m,n}(\mathbb{R})$ where $m$ is the number of features as an output of the model, and $n$ the dimensionality of the input, and a bias parameter $b\in\mathbb{R}^m$. The dense layers returns $Wx+b$ from an input $x\in\mathbb{R}^n$.This dense layer is already provided by Flax in the `flax.linen` module (here imported as `nn`). | # We create one dense layer instance (taking 'features' parameter as input)
model = nn.Dense(features=5) | _____no_output_____ | Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
Layers (and models in general, we'll use that word from now on) are subclasses of the `linen.Module` class. Model parameters & initializationParameters are not stored with the models themselves. You need to initialize parameters by calling the `init` function, using a PRNGKey and a dummy input parameter. | key1, key2 = random.split(random.PRNGKey(0))
x = random.normal(key1, (10,)) # Dummy input
params = model.init(key2, x) # Initialization call
jax.tree_map(lambda x: x.shape, params) # Checking output shapes | WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
*Note: JAX and Flax, like NumPy, are row-based systems, meaning that vectors are represented as row vectors and not column vectors. This can be seen in the shape of the kernel here.*The result is what we expect: bias and kernel parameters of the correct size. Under the hood:* The dummy input variable `x` is used to trigger shape inference: we only declared the number of features we wanted in the output of the model, not the size of the input. Flax finds out by itself the correct size of the kernel.* The random PRNG key is used to trigger the initialization functions (those have default values provided by the module here).* Initialization functions are called to generate the initial set of parameters that the model will use. Those are functions that take as arguments `(PRNG Key, shape, dtype)` and return an Array of shape `shape`.* The init function returns the initialized set of parameters (you can also get the output of the evaluation on the dummy input with the same syntax but using the `init_with_output` method instead of `init`. We see in the output that parameters are stored in a `FrozenDict` instance which helps deal with the functional nature of JAX by preventing any mutation of the underlying dict and making the user aware of it. Read more about it in the Flax docs. As a consequence, the following doesn't work: | try:
params['new_key'] = jnp.ones((2,2))
except ValueError as e:
print("Error: ", e) | Error: FrozenDict is immutable.
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
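If you do need to modify parameters, the usual pattern is to unfreeze into a plain nested dict, edit it, and freeze again. A minimal sketch (not part of the original notebook) using the `freeze`/`unfreeze` helpers imported above:

```python
# Sketch: round-trip through a mutable dict instead of mutating the FrozenDict.
mutable = unfreeze(params)
mutable['params']['bias'] = jnp.zeros_like(mutable['params']['bias'])
params_edited = freeze(mutable)
```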
To evaluate the model with a given set of parameters (never stored with the model), we just use the `apply` method by providing it the parameters to use as well as the input: | model.apply(params, x) | _____no_output_____ | Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
Gradient descentIf you jumped here directly without going through the JAX part, here is the linear regression formulation we're going to use: from a set of data points $\{(x_i,y_i), i\in \{1,\ldots, k\}, x_i\in\mathbb{R}^n,y_i\in\mathbb{R}^m\}$, we try to find a set of parameters $W\in \mathcal{M}_{m,n}(\mathbb{R}), b\in\mathbb{R}^m$ such that the function $f_{W,b}(x)=Wx+b$ minimizes the mean squared error:$$\mathcal{L}(W,b)\rightarrow\frac{1}{k}\sum_{i=1}^{k} \frac{1}{2}\|y_i-f_{W,b}(x_i)\|^2_2$$Here, we see that the tuple $(W,b)$ matches the parameters of the Dense layer. We'll perform gradient descent using those. Let's first generate the fake data we'll use. The data is exactly the same as in the JAX part's linear regression pytree example. | # Set problem dimensions.
n_samples = 20
x_dim = 10
y_dim = 5
# Generate random ground truth W and b.
key = random.PRNGKey(0)
k1, k2 = random.split(key)
W = random.normal(k1, (x_dim, y_dim))
b = random.normal(k2, (y_dim,))
# Store the parameters in a pytree.
true_params = freeze({'params': {'bias': b, 'kernel': W}})
# Generate samples with additional noise.
key_sample, key_noise = random.split(k1)
x_samples = random.normal(key_sample, (n_samples, x_dim))
y_samples = jnp.dot(x_samples, W) + b + 0.1 * random.normal(key_noise,(n_samples, y_dim))
print('x shape:', x_samples.shape, '; y shape:', y_samples.shape) | x shape: (20, 10) ; y shape: (20, 5)
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
We copy the same training loop that we used in the JAX pytree linear regression example with `jax.value_and_grad()`, but here we can use `model.apply()` instead of having to define our own feed-forward function (`predict_pytree()` in the JAX example). | # Same as JAX version but using model.apply().
def mse(params, x_batched, y_batched):
# Define the squared loss for a single pair (x,y)
def squared_error(x, y):
pred = model.apply(params, x)
return jnp.inner(y-pred, y-pred) / 2.0
# Vectorize the previous to compute the average of the loss on all samples.
return jnp.mean(jax.vmap(squared_error)(x_batched,y_batched), axis=0) | _____no_output_____ | Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
And finally perform the gradient descent. | learning_rate = 0.3 # Gradient step size.
print('Loss for "true" W,b: ', mse(true_params, x_samples, y_samples))
loss_grad_fn = jax.value_and_grad(mse)
@jax.jit
def update_params(params, learning_rate, grads):
params = jax.tree_map(
lambda p, g: p - learning_rate * g, params, grads)
return params
for i in range(101):
# Perform one gradient update.
loss_val, grads = loss_grad_fn(params, x_samples, y_samples)
params = update_params(params, learning_rate, grads)
if i % 10 == 0:
print(f'Loss step {i}: ', loss_val) | Loss for "true" W,b: 0.023639778
Loss step 0: 38.094772
Loss step 10: 0.44692168
Loss step 20: 0.10053458
Loss step 30: 0.035822745
Loss step 40: 0.018846875
Loss step 50: 0.013864839
Loss step 60: 0.012312559
Loss step 70: 0.011812928
Loss step 80: 0.011649306
Loss step 90: 0.011595251
Loss step 100: 0.0115773035
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
Optimizing with OptaxFlax used to use its own `flax.optim` package for optimization, but with[FLIP 1009](https://github.com/google/flax/blob/main/docs/flip/1009-optimizer-api.md)this was deprecated in favor of[Optax](https://github.com/deepmind/optax).Basic usage of Optax is straightforward:1. Choose an optimization method (e.g. `optax.sgd`).2. Create optimizer state from parameters.3. Compute the gradients of your loss with `jax.value_and_grad()`.4. At every iteration, call the Optax `update` function to update the internal optimizer state and create an update to the parameters. Then add the update to the parameters with Optax's `apply_updates` method.Note that Optax can do a lot more: it's designed for composing simple gradienttransformations into more complex transformations that allows to implement awide range of optimizers. There is also support for changing optimizerhyperparameters over time ("schedules"), applying different updates to differentparts of the parameter tree ("masking") and much more. For details please referto the[official documentation](https://optax.readthedocs.io/en/latest/). | import optax
tx = optax.sgd(learning_rate=learning_rate)
opt_state = tx.init(params)
loss_grad_fn = jax.value_and_grad(mse)
for i in range(101):
loss_val, grads = loss_grad_fn(params, x_samples, y_samples)
updates, opt_state = tx.update(grads, opt_state)
params = optax.apply_updates(params, updates)
if i % 10 == 0:
print('Loss step {}: '.format(i), loss_val) | Loss step 0: 0.011576377
Loss step 10: 0.0115710115
Loss step 20: 0.011569244
Loss step 30: 0.011568661
Loss step 40: 0.011568454
Loss step 50: 0.011568379
Loss step 60: 0.011568358
Loss step 70: 0.01156836
Loss step 80: 0.01156835
Loss step 90: 0.011568353
Loss step 100: 0.011568348
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
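As an aside on the schedules mentioned above, a time-varying learning rate can be passed to the optimizer in place of a constant. This is only a sketch assuming Optax's `exponential_decay` schedule and the `params` from the training loop:

```python
# Sketch: the schedule maps the step count to a learning rate; optax.sgd
# accepts it directly in place of a float.
schedule = optax.exponential_decay(init_value=learning_rate,
                                   transition_steps=50, decay_rate=0.5)
tx = optax.sgd(learning_rate=schedule)
opt_state = tx.init(params)
```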
Serializing the resultNow that we're happy with the result of our training, we might want to save the model parameters to load them back later. Flax provides a serialization package to enable you to do that. | from flax import serialization
bytes_output = serialization.to_bytes(params)
dict_output = serialization.to_state_dict(params)
print('Dict output')
print(dict_output)
print('Bytes output')
print(bytes_output) | Dict output
{'params': {'bias': DeviceArray([-1.4540135, -2.0262308, 2.0806582, 1.2201802, -0.9964547], dtype=float32), 'kernel': DeviceArray([[ 1.0106664 , 0.19014716, 0.04533899, -0.92722285,
0.34720102],
[ 1.7320251 , 0.9901233 , 1.1662225 , 1.1027892 ,
-0.10574618],
[-1.2009128 , 0.28837162, 1.4176372 , 0.12073109,
-1.3132601 ],
[-1.1944956 , -0.18993308, 0.03379077, 1.3165942 ,
0.07996067],
[ 0.14103189, 1.3737966 , -1.3162128 , 0.53401774,
-2.239638 ],
[ 0.5643044 , 0.813604 , 0.31888172, 0.5359193 ,
0.90352124],
[-0.37948322, 1.7408353 , 1.0788013 , -0.5041964 ,
0.9286919 ],
[ 0.9701384 , -1.3158673 , 0.33630812, 0.80941117,
-1.202457 ],
[ 1.0198247 , -0.6198277 , 1.0822718 , -1.8385581 ,
-0.45790705],
[-0.64384323, 0.4564892 , -1.1331053 , -0.68556863,
0.17010891]], dtype=float32)}}
Bytes output
b'\x81\xa6params\x82\xa4bias\xc7!\x01\x93\x91\x05\xa7float32\xc4\x14\x1d\x1d\xba\xbf\xc4\xad\x01\xc0\x81)\x05@\xdd.\x9c?\xa8\x17\x7f\xbf\xa6kernel\xc7\xd6\x01\x93\x92\n\x05\xa7float32\xc4\xc8\x84]\x81?\xf0\xb5B>`\xb59=z^m\xbfU\xc4\xb1>\x00\xb3\xdd?\xb8x}?\xc7F\x95?2(\x8d?t\x91\xd8\xbd\x83\xb7\x99\xbfr\xa5\x93>#u\xb5?\xdcA\xf7=\xe8\x18\xa8\xbf;\xe5\x98\xbf\xd1}B\xbe0h\n=)\x86\xa8?k\xc2\xa3=\xaaj\x10>\x91\xd8\xaf?\xa9y\xa8\xbfc\xb5\x08?;V\x0f\xc0Av\x10?ZHP?wD\xa3>\x022\t?+Mg?\xa0K\xc2\xbe\xb1\xd3\xde?)\x16\x8a?\x04\x13\x01\xbf\xc1\xbem?\xfdZx?Wn\xa8\xbf\x940\xac>\x925O?\x1c\xea\x99\xbf\x9e\x89\x82?\x07\xad\x1e\xbf\xe2\x87\x8a?\xdfU\xeb\xbf\xcbr\xea\xbe\xe9\xd2$\xbf\xf4\xb8\xe9>\x98\t\x91\xbfm\x81/\xbf\x081.>'
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
To load the model back, you'll need to use as a template the model parameter structure, like the one you would get from the model initialization. Here, we use the previously generated `params` as a template. Note that this will produce a new variable structure, and not mutate in-place.*The point of enforcing structure through template is to avoid users issues downstream, so you need to first have the right model that generates the parameters structure.* | serialization.from_bytes(params, bytes_output) | _____no_output_____ | Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
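The dict form round-trips the same way; a sketch using `from_state_dict`, again with `params` as the template:

```python
# Sketch: restore from the nested-dict representation instead of bytes.
restored = serialization.from_state_dict(params, dict_output)
```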
Defining your own modelsFlax allows you to define your own models, which should be a bit more complicated than a linear regression. In this section, we'll show you how to build simple models. To do so, you'll need to create subclasses of the base `nn.Module` class.*Keep in mind that we imported* `linen as nn` *and this only works with the new linen API* Module basicsThe base abstraction for models is the `nn.Module` class, and every type of predefined layers in Flax (like the previous `Dense`) is a subclass of `nn.Module`. Let's take a look and start by defining a simple but custom multi-layer perceptron i.e. a sequence of Dense layers interleaved with calls to a non-linear activation function. | class ExplicitMLP(nn.Module):
features: Sequence[int]
def setup(self):
# we automatically know what to do with lists, dicts of submodules
self.layers = [nn.Dense(feat) for feat in self.features]
# for single submodules, we would just write:
# self.layer1 = nn.Dense(feat1)
def __call__(self, inputs):
x = inputs
for i, lyr in enumerate(self.layers):
x = lyr(x)
if i != len(self.layers) - 1:
x = nn.relu(x)
return x
key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4,4))
model = ExplicitMLP(features=[3,4,5])
params = model.init(key2, x)
y = model.apply(params, x)
print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(params)))
print('output:\n', y) | initialized parameter shapes:
{'params': {'layers_0': {'bias': (3,), 'kernel': (4, 3)}, 'layers_1': {'bias': (4,), 'kernel': (3, 4)}, 'layers_2': {'bias': (5,), 'kernel': (4, 5)}}}
output:
[[ 4.2292815e-02 -4.3807115e-02 2.9323792e-02 6.5492536e-03
-1.7147182e-02]
[ 1.2967806e-01 -1.4551792e-01 9.4432183e-02 1.2521387e-02
-4.5417298e-02]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00]
[ 9.3024032e-04 2.7864395e-05 2.4478821e-04 8.1344310e-04
-1.0110770e-03]]
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
As we can see, a `nn.Module` subclass is made of:* A collection of data fields (`nn.Module` are Python dataclasses) - here we only have the `features` field of type `Sequence[int]`.* A `setup()` method that is being called at the end of the `__postinit__` where you can register submodules, variables, parameters you will need in your model.* A `__call__` function that returns the output of the model from a given input.* The model structure defines a pytree of parameters following the same tree structure as the model: the params tree contains one `layers_n` sub dict per layer, and each of those contain the parameters of the associated Dense layer. The layout is very explicit.*Note: lists are mostly managed as you would expect (WIP), there are corner cases you should be aware of as pointed out* [here](https://github.com/google/flax/issues/524)Since the module structure and its parameters are not tied to each other, you can't directly call `model(x)` on a given input as it will return an error. The `__call__` function is being wrapped up in the `apply` one, which is the one to call on an input: | try:
y = model(x) # Returns an error
except AttributeError as e:
print(e) | "ExplicitMLP" object has no attribute "layers"
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
Since here we have a very simple model, we could have used an alternative (but equivalent) way of declaring the submodules inline in the `__call__` using the `@nn.compact` annotation like so: | class SimpleMLP(nn.Module):
features: Sequence[int]
@nn.compact
def __call__(self, inputs):
x = inputs
for i, feat in enumerate(self.features):
x = nn.Dense(feat, name=f'layers_{i}')(x)
if i != len(self.features) - 1:
x = nn.relu(x)
# providing a name is optional though!
# the default autonames would be "Dense_0", "Dense_1", ...
return x
key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4,4))
model = SimpleMLP(features=[3,4,5])
params = model.init(key2, x)
y = model.apply(params, x)
print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(params)))
print('output:\n', y) | initialized parameter shapes:
{'params': {'layers_0': {'bias': (3,), 'kernel': (4, 3)}, 'layers_1': {'bias': (4,), 'kernel': (3, 4)}, 'layers_2': {'bias': (5,), 'kernel': (4, 5)}}}
output:
[[ 4.2292815e-02 -4.3807115e-02 2.9323792e-02 6.5492536e-03
-1.7147182e-02]
[ 1.2967806e-01 -1.4551792e-01 9.4432183e-02 1.2521387e-02
-4.5417298e-02]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00]
[ 9.3024032e-04 2.7864395e-05 2.4478821e-04 8.1344310e-04
-1.0110770e-03]]
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
There are, however, a few differences you should be aware of between the two declaration modes:* In `setup`, you are able to name some sublayers and keep them around for further use (e.g. encoder/decoder methods in autoencoders).* If you want to have multiple methods, then you **need** to declare the module using `setup`, as the `@nn.compact` annotation only allows one method to be annotated.* The last initialization will be handled differently. See these notes for more details (TODO: add notes link). Module parametersIn the previous MLP example, we relied only on predefined layers and operators (`Dense`, `relu`). Let's imagine that you didn't have a Dense layer provided by Flax and you wanted to write it on your own. Here is what it would look like using the `@nn.compact` way to declare a new modules: | class SimpleDense(nn.Module):
features: int
kernel_init: Callable = nn.initializers.lecun_normal()
bias_init: Callable = nn.initializers.zeros
@nn.compact
def __call__(self, inputs):
kernel = self.param('kernel',
self.kernel_init, # Initialization function
(inputs.shape[-1], self.features)) # shape info.
y = lax.dot_general(inputs, kernel,
(((inputs.ndim - 1,), (0,)), ((), ())),) # TODO Why not jnp.dot?
bias = self.param('bias', self.bias_init, (self.features,))
y = y + bias
return y
key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4,4))
model = SimpleDense(features=3)
params = model.init(key2, x)
y = model.apply(params, x)
print('initialized parameters:\n', params)
print('output:\n', y) | initialized parameters:
FrozenDict({
params: {
kernel: DeviceArray([[ 0.6503669 , 0.86789787, 0.4604268 ],
[ 0.05673932, 0.9909285 , -0.63536596],
[ 0.76134115, -0.3250529 , -0.65221626],
[-0.82430327, 0.4150194 , 0.19405058]], dtype=float32),
bias: DeviceArray([0., 0., 0.], dtype=float32),
},
})
output:
[[ 0.5035518 1.8548558 -0.4270195 ]
[ 0.0279097 0.5589246 -0.43061772]
[ 0.3547128 1.5740999 -0.32865518]
[ 0.5264864 1.2928858 0.10089308]]
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
Here, we see how to both declare and assign a parameter to the model using the `self.param` method. It takes as input `(name, init_fn, *init_args)` : * `name` is simply the name of the parameter that will end up in the parameter structure.* `init_fn` is a function with input `(PRNGKey, *init_args)` returning an Array, with `init_args` being the arguments needed to call the initialisation function.* `init_args` are the arguments to provide to the initialization function.Such params can also be declared in the `setup` method; it won't be able to use shape inference because Flax is using lazy initialization at the first call site. Variables and collections of variablesAs we've seen so far, working with models means working with:* A subclass of `nn.Module`;* A pytree of parameters for the model (typically from `model.init()`);However this is not enough to cover everything that we would need for machine learning, especially neural networks. In some cases, you might want your neural network to keep track of some internal state while it runs (e.g. batch normalization layers). There is a way to declare variables beyond the parameters of the model with the `variable` method.For demonstration purposes, we'll implement a simplified but similar mechanism to batch normalization: we'll store running averages and subtract those to the input at training time. For proper batchnorm, you should use (and look at) the implementation [here](https://github.com/google/flax/blob/main/flax/linen/normalization.py). | class BiasAdderWithRunningMean(nn.Module):
decay: float = 0.99
@nn.compact
def __call__(self, x):
# easy pattern to detect if we're initializing via empty variable tree
is_initialized = self.has_variable('batch_stats', 'mean')
ra_mean = self.variable('batch_stats', 'mean',
lambda s: jnp.zeros(s),
x.shape[1:])
mean = ra_mean.value # This will either get the value or trigger init
bias = self.param('bias', lambda rng, shape: jnp.zeros(shape), x.shape[1:])
if is_initialized:
ra_mean.value = self.decay * ra_mean.value + (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)
return x - ra_mean.value + bias
key1, key2 = random.split(random.PRNGKey(0), 2)
x = jnp.ones((10,5))
model = BiasAdderWithRunningMean()
variables = model.init(key1, x)
print('initialized variables:\n', variables)
y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
print('updated state:\n', updated_state) | initialized variables:
FrozenDict({
batch_stats: {
mean: DeviceArray([0., 0., 0., 0., 0.], dtype=float32),
},
params: {
bias: DeviceArray([0., 0., 0., 0., 0.], dtype=float32),
},
})
updated state:
FrozenDict({
batch_stats: {
mean: DeviceArray([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32),
},
})
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
Here, `updated_state` returns only the state variables that are being mutated by the model while applying it on data. To update the variables and get the new parameters of the model, we can use the following pattern: | for val in [1.0, 2.0, 3.0]:
x = val * jnp.ones((10,5))
y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
old_state, params = variables.pop('params')
variables = freeze({'params': params, **updated_state})
print('updated state:\n', updated_state) # Shows only the mutable part | updated state:
FrozenDict({
batch_stats: {
mean: DeviceArray([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32),
},
})
updated state:
FrozenDict({
batch_stats: {
mean: DeviceArray([[0.0299, 0.0299, 0.0299, 0.0299, 0.0299]], dtype=float32),
},
})
updated state:
FrozenDict({
batch_stats: {
mean: DeviceArray([[0.059601, 0.059601, 0.059601, 0.059601, 0.059601]], dtype=float32),
},
})
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
From this simplified example, you should be able to derive a full BatchNorm implementation, or any layer involving a state. To finish, let's add an optimizer to see how to play with both parameters updated by an optimizer and state variables.*This example isn't doing anything and is only for demonstration purposes.* | def update_step(tx, apply_fn, x, opt_state, params, state):
def loss(params):
y, updated_state = apply_fn({'params': params, **state},
x, mutable=list(state.keys()))
l = ((x - y) ** 2).sum()
return l, updated_state
(l, state), grads = jax.value_and_grad(loss, has_aux=True)(params)
updates, opt_state = tx.update(grads, opt_state)
params = optax.apply_updates(params, updates)
return opt_state, params, state
x = jnp.ones((10,5))
variables = model.init(random.PRNGKey(0), x)
state, params = variables.pop('params')
del variables
tx = optax.sgd(learning_rate=0.02)
opt_state = tx.init(params)
for _ in range(3):
opt_state, params, state = update_step(tx, model.apply, x, opt_state, params, state)
print('Updated state: ', state) | Updated state: FrozenDict({
batch_stats: {
mean: DeviceArray([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32),
},
})
Updated state: FrozenDict({
batch_stats: {
mean: DeviceArray([[0.0199, 0.0199, 0.0199, 0.0199, 0.0199]], dtype=float32),
},
})
Updated state: FrozenDict({
batch_stats: {
mean: DeviceArray([[0.029701, 0.029701, 0.029701, 0.029701, 0.029701]], dtype=float32),
},
})
| Apache-2.0 | docs/notebooks/flax_basics.ipynb | harry-stark/flax |
SIRAH Candidate Pipeline Demo Notebook This notebook demonstrates how to set up and run the SIRAH candidate pipeline and describes its basic features by Mi Dai, April 13, 2020 **0. prerequisites** **0.0 Before you start, follow the instructions under README [here](https://github.com/mi-dai/sirahtargetspipeline-to-select-sirah-target) to set up the conda environment and install other required packages** **0.1 Credentials** The following credentials need to be obtained and set up in order to use all functions of the pipeline: - **[alerce]** - **[lasair]** - **[TNS api key]** - **[SDSS CasJobs]** In the directory of this notebook, run `cp -r credentials_template/ credentials/` and then fill in the credential info for each file. **0.2 local GLADE database** I have converted the downloaded GLADE catalogue into an sqlite3 database for easy querying. For now you can grab the compiled database [here](https://www.dropbox.com/s/aib3ze9vaxmknp7/glade_v23.db?dl=0) and put it into a directory named `db/` in the directory of this notebook (later I will make sure the code that generates the db works so that you can make your own). **0.3 The following cells set up autoreload and import the necessary packages for this notebook** | %load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os | _____no_output_____ | MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
**0.4 Define the following environment variables if you haven't done so** | os.environ['SFD_DIR'] = '/home/mi/sfddata-master' # modify this to point to the dust map downloaded from https://github.com/kbarbary/sfddata
os.environ['SIRAHPIPE_DIR'] = os.getcwd() #Or your sirah_target_pipe dir if you are running in other directories | _____no_output_____ | MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
**1. Using the pipeline** **1.1 import the SIRAHPipe module first** | from pipeline import SIRAHPipe | _____no_output_____ | MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
**1.2 initialize the pipeline** This cell defines the pipeline and chooses the brokers, crossmatch catalogues, and the selection cuts **Currently implemented brokers:** [alerce](http://alerce.science/), [lasair](https://lasair.roe.ac.uk/streams/), [tns](https://wis-tns.weizmann.ac.il/) **Crossmatch catalogues:** [sdss](https://skyserver.sdss.org/CasJobs/SubmitJob.aspx), [ned](https://ned.ipac.caltech.edu/forms/nnd.html), [glade](http://glade.elte.hu/) **Selection cuts:** zcut, ztflimcut, hostdistcut, olddetectioncut, magzcut, rbcut (see 1.4 running the pipeline for descriptions of the cuts). For this demo we select all of the above options: | pipe = SIRAHPipe(brokers=['alerce','lasair','tns'],xmatch_catalogues=['sdss','ned','glade'],
selection_cuts=['zcut','ztflimcut','hostdistcut','olddetectioncut','magzcut','rbcut']) | Setting up SkyServer...
Setting up local db [db/glade_v23.db]
Brokers to query: ['alerce', 'lasair', 'tns']
Crossmatch catalogues: ['sdss', 'ned', 'glade']
Cuts to apply: ['zcut', 'ztflimcut', 'hostdistcut', 'olddetectioncut', 'magzcut', 'rbcut']
| MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
**1.3 using the realtime mode** Let's record the current date and time. The pipeline can run in realtime or non-realtime mode. If `realtime==True`, the sql query includes conditions on the latest magnitude provided by the brokers; if `realtime==False`, the latest magnitudes are calculated offline by querying all the detections before the specified date. This mimics the realtime query but can be used for an earlier date for testing and comparison. Note that this may not produce exactly the same results as the realtime mode, since the broker databases may change. For this demo we set `realtime = True` | from astropy.time import Time
from datetime import datetime
print(Time(datetime.today()),'mjd=',Time(datetime.today()).mjd)
realtime = True | 2020-04-22 18:13:58.102328 mjd= 58961.75970026116
| MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
**1.4 running the pipeline** This is the main part of the pipeline, where many options can be specified. Here I list some useful ones (and their default values): - **[query options]** - **mjdstart, mjdend:** mjd range to query - **gmax(=20.), rmax(=20.), gmin(=16.), rmin(=16.):** max/min magnitude ranges to query (ZTF specifically; for objects on TNS that are not ZTF the bands are hard-coded to be g at the moment) - **qlim:** number of objects to query for brokers (this is a universal number set for all brokers but currently not applied to tns) - **skip_ztf(=True):** set to True to skip importing TNS objects that have internal ZTF names (since they're already in the Alerce/Lasair queries) - **use_sherlock(=True):** include Lasair's sherlock classification and crossmatch results in the query, default is True. This can be set to False in case sherlock is not available or for testing purposes - **[selection cut options]** - **[zcut] zlow(=0.016), zhigh(=0.08), zerrlim(=0.02):** limit on redshift and redshift error; for specz the zerr is currently set to 0.001. Increase the number to include photoz results - **[magzcut] dmag_min(=1.), dmag_max(=4.5), magabs=(-19.1):** cut on the mag difference from a given absolute magnitude (peak mag for example) - this can be translated into a phase cut if the sedmodel is known (see mag/z plot below). - **[rbcut] rb(=0.5):** real/bogus score for ZTF objects (a higher rb value is more likely to be real) - **[hostdistcut] dist_min(=2.), dist_max(=None):** distance to host (in arcsec) - **[ztflimcut] maglim(=19.8), nobs(=1):** set mag lim for objects that have <= nobs detections. This is used to cut on single detections closer to the (ZTF) limiting mag - **[olddetectioncut] dayslim(=5), fromday(=None):** Remove objects whose latest detection is > *dayslim* days from *fromday* (these are potentially too old) - **[other options]** - **[magz plot options]** - **sedmodel=('salt2'):** sedmodel to generate the mag vs z lines. This can be any sncosmo model listed [here](https://sncosmo.readthedocs.io/en/v2.1.x/source-list.html) - **sedmodel_pardict=(None):** dictionary of parameters to set for the sedmodel `pipe.run()` **runs the pipeline, applies the selection cuts defined in 1.2 initialize the pipeline, and makes a mag/z plot for the objects that passed all cuts. This may take a while (a few minutes to less than an hour) depending on how large the query is and how many objects need to be crossmatched.** | import time
today = Time(datetime.today()).mjd
start = time.time()
pipe.run(mjdstart=today-10,mjdend=today,qlim=100,zerrlim=0.01,dmag_max=4.5,gmax=22,rmax=22,realtime=realtime,skip_ztf=True,use_sherlock=True)
# pipe.run(mjdstart=today-10,mjdend=today,qlim=100,zerrlim=0.01,dmag_max=4.5,gmax=22,rmax=22,realtime=realtime,
# skip_ztf=True,use_sherlock=True,sedmodel='s11-2005hl',magabs=-18)
end = time.time()
print("Time used: {:.2f} mins".format((end - start)/60.)) | queryresult size: 100
queryresult size: 100
queryresult size: 166
Table [sncoor] uploaded successfully.
Query Job is submitted. JobID=47354091
Job 47354091 is finished
Cross-matching GLADE catalog using astropy module search_around_sky
Done. Time=0.021483612060546876 minutes
Cross match NED for 0.01 < sdss_photoz < 0.08, num = 2
Table [sncoor] uploaded successfully.
Query Job is submitted. JobID=47354099
Job 47354099 is finished
Cross-matching GLADE catalog using astropy module search_around_sky
Done. Time=0.023793903986612956 minutes
Selecting 0.016 < z < 0.080
and zerr < 0.010
Number fails cut [flag_zcut]: 779/912
Cut on magnitude lim for nobs <= 1: maglim = 19.8
Number fails cut [flag_ztflimcut]: 14/912
Cut on distance to host: 2 < dist < None (Arcsec)
Number fails cut [flag_hostdistcut]: 113/912
Cut on days since last detection from 2020-04-22 18:16:02.215: dt < 5
Number fails cut [flag_olddetectioncut]: 459/912
| MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
**1.5 Miscellaneous features** Here I describe some miscellaneous features that may be useful in analyzing the query results. 1.5.1 *SIRAHPipe.results* After the pipeline runs, all the query results, including the selection cut flags, are saved in `SIRAHPipe.results` | pipe.results.head() | _____no_output_____ | MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
1.5.2 the *SIRAHPipe.MakeCuts* module - **(re)applying cuts using** ***SIRAHPipe.MakeCuts.[Cutname]*** The `SIRAHPipe.MakeCuts` module contains all the selection cut functions, which can be reapplied after the pipeline runs to change the selection criteria. E.g., we can reapply the redshift cut with a narrower `zerrlim=0.005`, or apply new cuts that were not selected in the initialization phase - **make mag/z plot using** ***SIRAHPipe.MakeCuts.plot()*** - **return a pandas DataFrame for objects that passed all cuts using** ***SIRAHPipe.MakeCuts.aftercuts()*** Note that `SIRAHPipe.MakeCuts.aftercuts()` returns all entries passing the cuts, so there can be multiple entries for the same object. You may use `pd.DataFrame.sort_values()` and `pd.DataFrame.drop_duplicates()` to select unique results, or define your own ranking. | ## apply additional cut here
pipe.MakeCuts.zcut(zerrlim=0.005)
# pipe.MakeCuts.olddetectioncut()
pipe.MakeCuts.magzcut(dmag_max=3.,dmag_min=0.,magabs=-18)
# pipe.MakeCuts.ztflimcut()
pipe.MakeCuts.plot(magabs=-18,magz_plot=True,sedmodel='snana-2004fe',phase=[-10,-5,0])
# pipe.MakeCuts.plot(magabs=-19,magz_plot=True)
plt.show()
df = pipe.MakeCuts.aftercuts()
cols = ['xmatch_rank','xmatch_db','distance']
sort_cols = [x for x in cols if x in df.columns]
df = df.sort_values(sort_cols).drop_duplicates('oid')
cols = ['oid','nobs','firstmjd','lastmjd','gmaglatest','rmaglatest','classearly','classification','distance','separationArcsec','dmag_g','dmag_r',
'z','zerr','xmatch_rank','objID','xmatch_objid','xmatch_table_name']
cols_exist = [x for x in cols if x in df.columns]
df[cols_exist].head() | Selecting 0.016 < z < 0.080
and zerr < 0.005
Selecting candidates based on mag vs z: 0.00 < dmag (from max) < 3.00
Plotting MakeCuts...
| MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
**2. Making plots for selected candidates** The main function for making light curve and phase estimate plots is *utils.gen_plots*. `gen_plots()` requires a pandas DataFrame as input, which comes from running the pipeline or is a self-defined `pd.DataFrame` with the same required columns as the pipeline results - set **interactive = True** to make interactive plots in jupyter notebook (this doesn't seem to work with jupyter lab); if interactive=True, the image will be an interactive Aladin widget, otherwise the image will be retrieved from the PS1 image server - set **savepdf = True** to output pdfs; currently the pdfs are plotted in order of decreasing phase estimate **Here are some useful *gen_plots* options:** - **magabs(=-19.1):** absolute magnitude, for Type Ia - **extra_lc(=False):** set to *True* if extra photometry is available. The photometry needs to be placed in `data/extra_photometry` as `[objectname].txt` - **update_lc_prediction(=False):** replot the lc prediction - **last_detection_max(=5):** don't include objects whose last detection is >5 days old - **source(='ztf'):** provide the correct photometry format ('ztf' or 'tns') - **broker(='alerce'):** for ztf objects only, broker to query for light curve points ('alerce' or 'lasair') - **plot_ylim(=(21,15)):** ylims for the light curve and mag/z plots - **ps1_image_size(=320):** image size for PS1 images (in pixels); the pixel scale is 0.25 arcsec/pixel | from utils import *
import matplotlib.pyplot as plt
import os
from PyPDF2 import PdfFileMerger
# %matplotlib inline
interactive = False
savepdf = True
querydate = date.today()
if not os.path.isdir('demo'):
os.mkdir('demo')
if savepdf:
pdf_file = 'demo/Candidates_{}.pdf'.format(querydate.strftime("%m-%d-%Y"))
folder = 'demo/{}'.format(querydate.strftime("%m-%d-%Y"))
if not os.path.isdir(folder):
os.mkdir(folder)
pdflist = []
orderlist = []
# display(target[target.oid=='ZTF20aamfpft'])
for i,row in df[0:2].iterrows():
if savepdf:
f = '{}/{}.pdf'.format(folder,row.oid)
else:
f = None
source = 'tns' if row['Broker'] == 'tns' else 'ztf'
res = gen_plots(row,interactive=interactive,pdf_file=f,source=source,plot_ylim=(22,15),broker='lasair',
sedmodel='salt2',magabs=-19.1)
if savepdf and not res['too_old']:
pdflist.append(f)
orderlist.append(res['phase_tuesday'])
if savepdf:
idx_ordered = np.argsort(orderlist)
merger = PdfFileMerger()
for pdf in np.array(pdflist)[idx_ordered]:
merger.append(pdf)
merger.write(pdf_file)
merger.close() | ZTF20aaurfhs: z=0.0352 +/- 0.0010
ra = 17:01:22.507 dec = +20:18:58.35 mwebv=0.058
| MIT | demo_notebook.ipynb | mi-dai/sirahtargets-public |
SageMaker Serverless Inference XGBoost Regression Example. Amazon SageMaker Serverless Inference is a purpose-built inference option that makes it easy for customers to deploy and scale ML models. Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. Serverless endpoints also automatically launch compute resources and scale them in and out depending on traffic, eliminating the need to choose instance types or manage scaling policies. For this notebook we'll be working with the SageMaker XGBoost Algorithm to train a model and then deploy a serverless endpoint. We will be using the public S3 Abalone regression dataset for this example. Notebook Setting: SageMaker Classic Notebook Instance - ml.m5.xlarge Notebook Instance & conda_python3 Kernel; SageMaker Studio - Python 3 (Data Science). Regions Available: SageMaker Serverless Inference is currently available in the following regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo) and Asia Pacific (Sydney). Table of Contents: Setup - Model Training - Deployment (Model Creation, Endpoint Configuration (Adjust for Serverless), Serverless Endpoint Creation, Endpoint Invocation) - Cleanup. Setup: For testing you need to properly configure your Notebook Role to have SageMaker Full Access. Let's start by installing preview wheels of the Python SDK, boto, and the AWS CLI. | # Fallback in case wheels are unavailable
! pip install sagemaker botocore boto3 awscli --upgrade
import subprocess
def execute_cmd(cmd):
print(cmd)
output = subprocess.getstatusoutput(cmd)
return output
def _download_from_s3(_file_path):
_path = f"s3://reinvent21-sm-rc-wheels/{_file_path}"
print(f"Path is {_path}")
ls_cmd = f"aws s3 ls {_path}"
print(execute_cmd(ls_cmd))
cmd = f"aws s3 cp {_path} /tmp/"
print("Downloading: ", cmd)
return execute_cmd(cmd)
def _install_wheel(wheel_name):
cmd = f"pip install --no-deps --log /tmp/output3.log /tmp/{wheel_name} --force-reinstall"
ret = execute_cmd(cmd)
_name = wheel_name.split(".")[0]
_, _version = execute_cmd(f"python -c 'import {_name}; print({_name}.__version__)'")
for package in ["botocore", "sagemaker", "boto3", "awscli"]:
print(execute_cmd(f"python -c 'import {package}; print({package}.__version__)'"))
print(f"Installed {_name}:{_version}")
return ret
def install_sm_py_sdk():
pySDK_name = "sagemaker.tar.gz"
exit_code, _ = _download_from_s3("dist/sagemaker.tar.gz")
if not exit_code:
_install_wheel(pySDK_name)
else:
print(f"'{pySDK_name}' is not present in S3 Bucket. Installing from public PyPi...")
execute_cmd("pip install sagemaker")
def install_boto_wheels():
WHEELS = ["botocore.tar.gz", "boto3.tar.gz", "awscli.tar.gz"]
for wheel_name in WHEELS:
_path = f"boto3/{wheel_name}"
exit_code, _ = _download_from_s3(_path)
if not exit_code:
_install_wheel(wheel_name)
else:
print(f"'{wheel_name}' is not present in S3 Bucket. Ignoring...")
install_boto_wheels()
install_sm_py_sdk()
# Setup clients
import boto3
client = boto3.client(service_name="sagemaker")
runtime = boto3.client(service_name="sagemaker-runtime") | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
SageMaker Setup. To begin, we import the AWS SDK for Python (Boto3) and set up our environment, including an IAM role and an S3 bucket to store our data. | import boto3
import sagemaker
from sagemaker.estimator import Estimator
boto_session = boto3.session.Session()
region = boto_session.region_name
print(region)
sagemaker_session = sagemaker.Session()
base_job_prefix = "xgboost-example"
role = sagemaker.get_execution_role()
print(role)
default_bucket = sagemaker_session.default_bucket()
s3_prefix = base_job_prefix
training_instance_type = "ml.m5.xlarge" | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Retrieve the Abalone dataset from a publicly hosted S3 bucket. | # retrieve data
! curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/tabular/uci_abalone/train_csv/abalone_dataset1_train.csv > abalone_dataset1_train.csv | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Upload the Abalone dataset to the default S3 bucket. | # upload data to S3
!aws s3 cp abalone_dataset1_train.csv s3://{default_bucket}/xgboost-regression/train.csv | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Model Training. Now, we train an ML model using the XGBoost Algorithm. In this example, we use a SageMaker-provided [XGBoost](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html) container image and configure an estimator to train our model. | from sagemaker.inputs import TrainingInput
training_path = f"s3://{default_bucket}/xgboost-regression/train.csv"
train_input = TrainingInput(training_path, content_type="text/csv")
model_path = f"s3://{default_bucket}/{s3_prefix}/xgb_model"
# retrieve xgboost image
image_uri = sagemaker.image_uris.retrieve(
framework="xgboost",
region=region,
version="1.0-1",
py_version="py3",
instance_type=training_instance_type,
)
# Configure Training Estimator
xgb_train = Estimator(
image_uri=image_uri,
instance_type=training_instance_type,
instance_count=1,
output_path=model_path,
sagemaker_session=sagemaker_session,
role=role,
)
# Set Hyperparameters
xgb_train.set_hyperparameters(
objective="reg:linear",
num_round=50,
max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.7,
silent=0,
) | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Train the model on the Abalone dataset. | # Fit model
xgb_train.fit({"train": train_input}) | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Deployment. After training the model, retrieve the model artifacts so that we can deploy the model to an endpoint. | # Retrieve model data from training job
model_artifacts = xgb_train.model_data
model_artifacts | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Model Creation. Create a model by providing your model artifacts, the container image URI, environment variables for the container (if applicable), a model name, and the SageMaker IAM role. | from time import gmtime, strftime
model_name = "xgboost-serverless" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Model name: " + model_name)
# dummy environment variables
byo_container_env_vars = {"SAGEMAKER_CONTAINER_LOG_LEVEL": "20", "SOME_ENV_VAR": "myEnvVar"}
create_model_response = client.create_model(
ModelName=model_name,
Containers=[
{
"Image": image_uri,
"Mode": "SingleModel",
"ModelDataUrl": model_artifacts,
"Environment": byo_container_env_vars,
}
],
ExecutionRoleArn=role,
)
print("Model Arn: " + create_model_response["ModelArn"]) | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Endpoint Configuration Creation. This is where you can adjust the Serverless Configuration for your endpoint. The current max concurrent invocations for a single endpoint, known as MaxConcurrency, can be any value from 1 to 50, and MemorySize can be any of the following: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB. | xgboost_epc_name = "xgboost-serverless-epc" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=xgboost_epc_name,
ProductionVariants=[
{
"VariantName": "byoVariant",
"ModelName": model_name,
"ServerlessConfig": {
"MemorySizeInMB": 4096,
"MaxConcurrency": 1,
},
},
],
)
print("Endpoint Configuration Arn: " + endpoint_config_response["EndpointConfigArn"]) | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Serverless Endpoint Creation. Now that we have an endpoint configuration, we can create a serverless endpoint and deploy our model to it. When creating the endpoint, provide the name of your endpoint configuration and a name for the new endpoint. | endpoint_name = "xgboost-serverless-ep" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=xgboost_epc_name,
)
print("Endpoint Arn: " + create_endpoint_response["EndpointArn"]) | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
Wait until the endpoint status is InService before invoking the endpoint. | # wait for endpoint to reach a terminal state (InService) using describe endpoint
import time
describe_endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
while describe_endpoint_response["EndpointStatus"] == "Creating":
describe_endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
print(describe_endpoint_response["EndpointStatus"])
time.sleep(15)
describe_endpoint_response | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
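As an alternative to the manual polling loop above, boto3 also ships a waiter for this state; a sketch, assuming the standard SageMaker `endpoint_in_service` waiter is available in your boto3 version:

```python
# Alternative: let boto3 poll until the endpoint is InService
waiter = client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)
print("Endpoint is InService")
```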
Endpoint Invocation. Invoke the endpoint by sending a request to it. The following is a sample data point grabbed from the CSV file downloaded from the public Abalone dataset. | response = runtime.invoke_endpoint(
EndpointName=endpoint_name,
Body=b".345,0.224414,.131102,0.042329,.279923,-0.110329,-0.099358,0.0",
ContentType="text/csv",
)
print(response["Body"].read()) | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
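The returned `Body` is a streaming bytes payload; for a single-row request the XGBoost container responds with a single CSV value, which can be parsed roughly like this (a sketch that re-invokes the endpoint, since the stream above has already been read):

```python
# Sketch: parse the prediction for a single input row into a float
resp = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=b".345,0.224414,.131102,0.042329,.279923,-0.110329,-0.099358,0.0",
    ContentType="text/csv",
)
prediction = float(resp["Body"].read().decode("utf-8"))
print(prediction)
```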
Clean Up. Delete any resources you created in this notebook that you no longer wish to use. | client.delete_model(ModelName=model_name)
client.delete_endpoint_config(EndpointConfigName=xgboost_epc_name)
client.delete_endpoint(EndpointName=endpoint_name) | _____no_output_____ | Apache-2.0 | serverless-inference/Serverless-Inference-Walkthrough.ipynb | jkroll-aws/amazon-sagemaker-examples |
import numpy as np
import pandas as pd
from numpy.linalg import eig
from scipy.linalg import hilbert
from copy import copy | _____no_output_____ | Apache-2.0 | method5.ipynb | ancka019/ComputationsMethods6sem |
|
Power method | def pow_method(a,eps,x0=None):
if x0 is None:
x0 = np.random.uniform(-1,1,size=a.shape[1])
x1 = a@x0
num_of_iters = 1
lambda0 = x1[0]/x0[0]
while True:
x0,x1 = x1, a@x1
lambda1 = x1[0]/x0[0]
if abs(lambda1-lambda0)<eps or num_of_iters > 5000:
break
lambda0 = lambda1
num_of_iters += 1
return abs(lambda1),num_of_iters | _____no_output_____ | Apache-2.0 | method5.ipynb | ancka019/ComputationsMethods6sem |
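For context (this explanation is not in the original notebook): `pow_method` is the classic power iteration. The dominant-eigenvalue estimate at step $k$ is the ratio of the first components of consecutive iterates,

$$x^{(k+1)} = A x^{(k)}, \qquad \lambda_1^{(k)} \approx \frac{x^{(k+1)}_1}{x^{(k)}_1},$$

and the iteration stops once successive estimates differ by less than `eps` (or after 5000 iterations).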
Scalar product method | def scal_method(a,eps,x0=None):
if x0 is None:
x0 = np.random.uniform(-1,1,size=a.shape[1])
num_of_iters = 1
x1 = a@x0
y0 = copy(x0)
a_T = np.transpose(a)
    y1 = a_T@y0  # adjoint iteration y_{k+1} = A^T y_k (y0 is a copy of x0)
    lambda0 = np.dot(x1,y1)/np.dot(x0,y1)  # same scalar-product estimate as in the loop below
while True:
x0,x1 = x1, a@x1
y0,y1 = y1, a_T@y1
lambda1 = np.dot(x1,y1)/np.dot(x0,y1)
if abs(lambda1-lambda0)<eps or num_of_iters > 5000:
break
lambda0 = lambda1
num_of_iters += 1
return abs(lambda1),num_of_iters | _____no_output_____ | Apache-2.0 | method5.ipynb | ancka019/ComputationsMethods6sem |
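Similarly, `scal_method` is the scalar-product variant, which iterates $y^{(k+1)} = A^{T} y^{(k)}$ alongside $x^{(k+1)} = A x^{(k)}$ and uses the estimate

$$\lambda_1^{(k)} \approx \frac{\big(x^{(k+1)},\, y^{(k+1)}\big)}{\big(x^{(k)},\, y^{(k+1)}\big)},$$

which for symmetric matrices (such as the Hilbert matrices used below) typically converges about twice as fast as the plain power method.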
Solution | result = []
for size in [3,5,11]:
A = hilbert(size)
data = []
lambda_acc = max(abs(np.linalg.eig(A)[0]))
for eps in range(-6, -1):
eps = 10**eps
data.append({
'eps': eps,
            'Power method (iterations)':
                pow_method(A, eps)[1],
            'Power method |lambda_acc - lambda|':
                abs(lambda_acc - abs(pow_method(A, eps)[0])),
            'Scalar product method (iterations)':
                scal_method(A, eps)[1],
            'Scalar product method |lambda_acc - lambda|':
abs(lambda_acc - abs(scal_method(A, eps)[0]))
})
result.append(pd.DataFrame(data))
result[0]
result[1]
result[2] | _____no_output_____ | Apache-2.0 | method5.ipynb | ancka019/ComputationsMethods6sem |
Data Summary | elnino.head(20)
# Print statistical summary
print("Statistical summary for the 'Elnino' file: \n")
print(elnino.describe(), "\n \n")
# display types for variables
print("The variable types are as follow: \n")
print(elnino.dtypes, "\n")
## get name of columns in dataset
print("The name of the columns in dataset are: \n")
print(elnino.columns, "\n")
## get number of rows and column in dataset
print("The dataset file is composed of the following number of rows and columns (rows, columns): ")
print(elnino.shape, "\n \n")
# looking for null values
print("The total number of null/missing values before converting the variables is: ")
print(elnino.isnull().sum(), "\n")
print("So, there is :", elnino.isnull().values.sum(), " value missing")
##
# convert categorical variables to numeric
elnino['Zonal Winds'] = pd.to_numeric(elnino['Zonal Winds'], errors='coerce')
elnino['Meridional Winds'] = pd.to_numeric(elnino['Meridional Winds'], errors='coerce')
elnino['Humidity'] = pd.to_numeric(elnino['Humidity'], errors='coerce')
elnino['Air Temp'] = pd.to_numeric(elnino['Air Temp'], errors='coerce')
elnino['Sea Surface Temp'] = pd.to_numeric(elnino['Sea Surface Temp'], errors='coerce')
# display data types
print("After converting the variables, the new Data types are: \n", elnino.dtypes, "\n")
elnino.describe().round(2)
# replace empty spaces with NAN values
elnino = elnino.replace(r'^\s*$', np.nan, regex=True)  # assign the result back; replace() is not in-place
print("The count of filled cells for each variable in the data set is: \n")
print(elnino.apply(lambda x: x.count(), axis=0), "\n \n")
print("The sum of filled cells in the data set is: ")
print(elnino.apply(lambda x: x.count(), axis=0).sum(), "\n \n")
elnino.apply(lambda x: x.count(), axis=0)
# print missing values per year
elnino_Latitude = elnino['Latitude'].isnull().groupby(elnino['Year']).sum().astype(int).reset_index(name='Latitude')
elnino_Longitude = elnino['Longitude'].isnull().groupby(elnino['Year']).sum().astype(int).reset_index(name='Longitude')
elnino_Zonal_Winds = elnino['Zonal Winds'].isnull().groupby(elnino['Year']).sum().astype(int).reset_index(
name='Zonal Winds')
elnino_Meridional_Winds = elnino['Meridional Winds'].isnull().groupby(elnino['Year']).sum().astype(int).reset_index(
name='Meridional Winds')
elnino_Humidity = elnino['Humidity'].isnull().groupby(elnino['Year']).sum().astype(int).reset_index(name='Humidity')
elnino_Air_Temp = elnino['Air Temp'].isnull().groupby(elnino['Year']).sum().astype(int).reset_index(name='Air Temp')
elnino_Sea_Surface_Temp = elnino['Sea Surface Temp'].isnull().groupby(elnino['Year']).sum().astype(int).reset_index(
name='Sea Surface Temp')
elnino_1 = elnino_Latitude.join(elnino_Longitude.set_index('Year'), on='Year')
elnino_2 = elnino_1.join(elnino_Zonal_Winds.set_index('Year'), on='Year')
elnino_3 = elnino_2.join(elnino_Meridional_Winds.set_index('Year'), on='Year')
elnino_4 = elnino_3.join(elnino_Humidity.set_index('Year'), on='Year')
elnino_5 = elnino_4.join(elnino_Air_Temp.set_index('Year'), on='Year')
elnino_6 = elnino_5.join(elnino_Sea_Surface_Temp.set_index('Year'), on='Year')
print("The total number of missing values per variable grouped by Year is: ", "\n")
elnino_6
# looking for null values
print("The total number of null/missing values for each variable is: ", "\n")
print(elnino.isnull().sum(), "\n")
print("So, there is :", elnino.isnull().values.sum(), " value missing", "\n \n \n \n \n")
print(elnino.head(10))
elnino_data_sum = elnino.apply(lambda x: x.count(), axis=0).sum()
elnino_missing_sum = elnino.isnull().values.sum()
# display percentage of missing data
print("The percentage of missing data is: ", "\n")
print(str(round((elnino_missing_sum / elnino_data_sum) * 100, 2)), "%" "\n \n \n \n")
# print correlation matrix
# plt.figure(figsize=(12,12))
plt.matshow(elnino.corr(), fignum=2)
plt.title('Correlation Matrix El Nino')
plt.colorbar()
plt.gca().xaxis.tick_bottom()
plt.xticks(range(12), list(elnino.columns), rotation='vertical')
plt.yticks(range(12), list(elnino.columns))
elnino.drop(columns=['Observation']).corr().round(2)
elnino.hist(
column=['Latitude', 'Longitude', 'Zonal Winds', 'Meridional Winds', 'Humidity', 'Air Temp', 'Sea Surface Temp'],
figsize=(14, 14)) | _____no_output_____ | Apache-2.0 | el_nino.ipynb | nohitme/psu-daan888-project |
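The per-year missing-value table built above (`elnino_6`) is only printed; a quick bar chart can make the pattern easier to scan. A minimal sketch using the objects already defined (the column subset is just illustrative, and `plt` is assumed to be `matplotlib.pyplot` as in the cells above):

```python
# Sketch: visualize missing values per year for a few variables
elnino_6.set_index('Year')[['Zonal Winds', 'Humidity', 'Air Temp', 'Sea Surface Temp']].plot(
    kind='bar', figsize=(14, 6), title='Missing values per year')
plt.ylabel('Number of missing values')
plt.show()
```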
Naive Bayes and Support vector machines using TF-IDF vectorizer. In this model, both algorithms, Naive Bayes and Support Vector Machines, are tested using the TF-IDF vectorizer implemented in the scikit-learn library. This vectorizer transforms a count matrix into a normalized tf-idf representation. Tf means term-frequency, while tf-idf means term-frequency times inverse document-frequency. IDF is used to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus (a toy tf-idf sketch is included after this example). | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.preprocessing import LabelEncoder
from collections import defaultdict
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import model_selection, naive_bayes, svm
from sklearn.metrics import accuracy_score
from collections import Counter
#[1] Importing dataset
dataset = pd.read_json(r"C:\Users\Panos\Desktop\Dissert\Code\Video_Games_5.json", lines=True, encoding='latin-1')
dataset = dataset[['reviewText','overall']]
#[2] Reduce number of classes
ratings = []
for index,entry in enumerate(dataset['overall']):
if entry == 1.0 or entry == 2.0:
ratings.append(-1)
elif entry == 3.0:
ratings.append(0)
elif entry == 4.0 or entry == 5.0:
ratings.append(1)
#[3] Cleaning the text
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
corpus = []
for i in range(0, len(dataset)):
review = re.sub('[^a-zA-Z]', ' ', dataset['reviewText'][i])
review = review.lower()
review = review.split()
review = [word for word in review if not word in set(stopwords.words('english'))]
review = ' '.join(review)
corpus.append(review)
#[4] Prepare Train and Test Data sets
Train_X, Test_X, Train_Y, Test_Y = model_selection.train_test_split(corpus,ratings,test_size=0.3)
print(Counter(Train_Y).values()) # counts the elements' frequency
#[5] Encoding
Encoder = LabelEncoder()
Train_Y = Encoder.fit_transform(Train_Y)
Test_Y = Encoder.fit_transform(Test_Y)
#[6] Word Vectorization
Tfidf_vect = TfidfVectorizer(max_features=10000)
Tfidf_vect.fit(corpus)
Train_X_Tfidf = Tfidf_vect.transform(Train_X)
Test_X_Tfidf = Tfidf_vect.transform(Test_X)
# the vocabulary that it has learned from the corpus
#print(Tfidf_vect.vocabulary_)
# the vectorized data
#print(Train_X_Tfidf)
#[7] Use the Naive Bayes Algorithms to Predict the outcome
# fit the training dataset on the NB classifier
Naive = naive_bayes.MultinomialNB()
Naive.fit(Train_X_Tfidf,Train_Y)
# predict the labels on validation dataset
predictions_NB = Naive.predict(Test_X_Tfidf)
# Use accuracy_score function to get the accuracy
print("-----------------------Naive Bayes------------------------\n")
print("Naive Bayes Accuracy Score -> ",accuracy_score(predictions_NB, Test_Y)*100)
# Making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Test_Y, predictions_NB)
print("\n",cm,"\n")
# Printing a classification report of different metrics
from sklearn.metrics import classification_report
my_tags = ['Negative','Neutral','Positive']  # LabelEncoder sorts the classes (-1, 0, 1) to (0, 1, 2), so index 0 is Negative
print(classification_report(Test_Y, predictions_NB,target_names=my_tags))
# Export reports to files for later visualizations
report_NB = classification_report(Test_Y, predictions_NB,target_names=my_tags, output_dict=True)
report_NB_df = pd.DataFrame(report_NB).transpose()
report_NB_df.to_csv(r'NB_report_TFIDFVect.csv', index = True, float_format="%.3f")
#[8] Use the Support Vector Machine Algorithms to Predict the outcome
# Classifier - Algorithm - SVM
# fit the training dataset on the classifier
SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto')
SVM.fit(Train_X_Tfidf,Train_Y)
# predict the labels on validation dataset
predictions_SVM = SVM.predict(Test_X_Tfidf)
# Use accuracy_score function to get the accuracy
print("-----------------Support Vector Machine CM------------------\n")
print("Accuracy Score -> ",accuracy_score(predictions_SVM, Test_Y)*100)
cm = confusion_matrix(Test_Y, predictions_SVM)
# Making the confusion matrix
print("\n",cm,"\n")
# Printing a classification report of different metrics
print(classification_report(Test_Y, predictions_SVM,target_names=my_tags))
# Export reports to files for later visualizations
report_SVM = classification_report(Test_Y, predictions_SVM,target_names=my_tags, output_dict=True)
report_SVM_df = pd.DataFrame(report_SVM).transpose()
report_SVM_df.to_csv(r'SVM_report_TFIDFVect.csv', index = True, float_format="%.3f") | -----------------Support Vector Machine CM------------------
Accuracy Score -> 82.27485834268128
[[ 4993 761 2883]
[ 1365 1399 5762]
[ 880 674 50817]]
precision recall f1-score support
Positive 0.69 0.58 0.63 8637
Neutral 0.49 0.16 0.25 8526
Negative 0.85 0.97 0.91 52371
accuracy 0.82 69534
macro avg 0.68 0.57 0.59 69534
weighted avg 0.79 0.82 0.79 69534
| MIT | Naive_Bayes_&_SVM_tfidfVectorizer.ipynb | panayiotiska/Sentiment-Analysis-Reviews |
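To make the TF-IDF weighting described at the top of this example concrete, here is a toy sketch on a made-up three-document corpus; scikit-learn's defaults apply smoothed IDF and L2 row normalization (recent scikit-learn assumed; older versions use `get_feature_names()` instead of `get_feature_names_out()`):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

toy_corpus = ["great game great graphics", "boring game", "great story"]
vect = TfidfVectorizer()
X = vect.fit_transform(toy_corpus)
# Terms that appear in many documents ("game", "great") receive lower IDF weight
# than rarer terms ("boring", "graphics", "story").
print(pd.DataFrame(X.toarray(), columns=vect.get_feature_names_out()).round(2))
```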
CARTO frames workshop. Full details at the [documentation](https://cartodb.github.io/cartoframes/) page. I'm using the `master` branch, installed with `pip install cartoframes jupyter seaborn`. | import cartoframes
from cartoframes import Credentials, CartoContext
import pandas as pd
import os | _____no_output_____ | CC-BY-4.0 | 06-sdks/exercises/python_SDK/CARTO_Frames.ipynb | oss-spanish-geoserver/carto-workshop |
Load the credentials | try:
cc = cartoframes.CartoContext()
print('Getting the credentials from a previous session')
except Exception as e:
print('Getting the credentials from your environment or here')
BASEURL = os.environ.get('CARTO_API_URL','https://jsanz.carto.com') # <-- replace with your username or set up the envvar
APIKEY = os.environ.get('CARTO_API_KEY',False) # <-- replace False with your CARTO API key or set up the envvar
if BASEURL and APIKEY:
creds = Credentials(base_url=BASEURL,key=APIKEY)
creds.save()
cc = cartoframes.CartoContext()
else:
print('Set up your environment!')
| Getting the credentials from a previous session
| CC-BY-4.0 | 06-sdks/exercises/python_SDK/CARTO_Frames.ipynb | oss-spanish-geoserver/carto-workshop |
Load the typical `Populated Places` dataset from CARTO. You can import this dataset from the Data Library. | df = cc.read('populated_places')
df.head() | _____no_output_____ | CC-BY-4.0 | 06-sdks/exercises/python_SDK/CARTO_Frames.ipynb | oss-spanish-geoserver/carto-workshop |
It's a Pandas data frame. You can get the `featurecla` field counts | df.groupby('featurecla').featurecla.count() | _____no_output_____ | CC-BY-4.0 | 06-sdks/exercises/python_SDK/CARTO_Frames.ipynb | oss-spanish-geoserver/carto-workshop |
Run SQL queries | cc.query('''
SELECT featurecla,count(*) as counts
FROM populated_places
GROUP BY featurecla
''') | _____no_output_____ | CC-BY-4.0 | 06-sdks/exercises/python_SDK/CARTO_Frames.ipynb | oss-spanish-geoserver/carto-workshop |
Draw graphics using Seaborn. More about seaborn [here](https://seaborn.pydata.org/) | import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
f, ax = plt.subplots(figsize=(16, 6))
sns.boxplot(x="featurecla", y="pop_max", data=df); | _____no_output_____ | CC-BY-4.0 | 06-sdks/exercises/python_SDK/CARTO_Frames.ipynb | oss-spanish-geoserver/carto-workshop |
Render your tables from CARTO | from cartoframes import Layer, styling
l = Layer(
'populated_places',
color={'column': 'featurecla','scheme': styling.prism(9)},
size ={'column': 'pop_max','bin_method':'quantiles','bins' : 4, 'min': 3, 'max':10}
)
cc.map(layers=l, interactive=True) | _____no_output_____ | CC-BY-4.0 | 06-sdks/exercises/python_SDK/CARTO_Frames.ipynb | oss-spanish-geoserver/carto-workshop |