We now use the plotting library <tt>matplotlib</tt> available in Python to visualize the measurements.
import matplotlib.pyplot as plt

plt.ion()

fig1 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
fig1.set_tight_layout(False)
plt.plot(r, avg_rdf, '-', color="#A60628", linewidth=2, alpha=1)
plt.xlabel('r $[\sigma]$', fontsize=20)
plt.ylabel('$g(r)$', fontsize=20)
plt.show()

fig2 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
fig2.set_tight_layout(False)
plt.plot(time, instantaneous_temperature, '-', color="red", linewidth=2, alpha=0.5,
         label='Instantaneous Temperature')
plt.plot([min(time), max(time)], [TEMPERATURE] * 2, '-', color="#348ABD",
         linewidth=2, alpha=1, label='Set Temperature')
plt.xlabel(r'Time [$\delta t$]', fontsize=20)
plt.ylabel(r'$k_B$ Temperature [$k_B T$]', fontsize=20)
plt.legend(fontsize=16, loc=0)
plt.show()
doc/tutorials/01-lennard_jones/01-lennard_jones.ipynb
psci2195/espresso-ffans
gpl-3.0
The first column of this array contains the lag time in units of the time step. The second column contains the number of values used to perform the averaging of the correlation. The next three columns contain the x, y and z components of the mean squared displacement of the first particle. The following three columns contain the x, y and z mean squared displacement of the next particle, and so on. (A sketch that averages these per-particle columns over all particles follows the plotting cell below.)
fig3 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
fig3.set_tight_layout(False)
lag_time = msd[:, 0]
for i in range(0, N_PART, 30):
    msd_particle_i = msd[:, 2 + i * 3] + msd[:, 3 + i * 3] + msd[:, 4 + i * 3]
    plt.plot(lag_time, msd_particle_i, 'o-', linewidth=2,
             label="particle id =" + str(i))
plt.xlabel(r'Lag time $\tau$ [$\delta t$]', fontsize=20)
plt.ylabel(r'Mean squared displacement [$\sigma^2$]', fontsize=20)
plt.xscale('log')
plt.yscale('log')
plt.legend()
plt.show()
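The same column layout described above can be used to average the MSD over all particles. This is only a minimal sketch, assuming msd and N_PART are defined as in the cells above and numpy is imported as np:

import numpy as np

# Columns 2 + 3*i, 3 + 3*i, 4 + 3*i hold the x/y/z MSD of particle i,
# as described in the text above. Sum them per particle, then average.
total_msd = np.zeros(msd.shape[0])
for i in range(N_PART):
    total_msd += msd[:, 2 + 3 * i] + msd[:, 3 + 3 * i] + msd[:, 4 + 3 * i]
avg_msd = total_msd / N_PART

plt.plot(msd[:, 0], avg_msd, '-', linewidth=2)
plt.xlabel(r'Lag time $\tau$ [$\delta t$]', fontsize=20)
plt.ylabel(r'Average MSD [$\sigma^2$]', fontsize=20)
plt.xscale('log')
plt.yscale('log')
plt.show()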
doc/tutorials/01-lennard_jones/01-lennard_jones.ipynb
psci2195/espresso-ffans
gpl-3.0
The figures below show the overall structure of the training data "train.csv" and of "store.csv", displaying the first five rows of each. From this we get a rough idea of which data features are available.
for name in names[:2]:
    display(Image('C:/Users/Administrator/Desktop/report/' + name, width=800))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
First we explore whether sales depend on the day of the week. Missing values are filled in, assuming that stores are "open" on every day except Sunday. The figure below shows the result, from which we can see some relationship between sales and the day of the week.
for name in names[2:3]:
    display(Image('C:/Users/Administrator/Desktop/report/' + name, width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
After cleaning the date data, we explore the relationship between sales and time (by month). The figure below shows average sales by month together with the percentage change.
display(Image('C:/Users/Administrator/Desktop/report/' + "4.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
This figure shows fairly clearly that sales are closely related to the month; the month has a considerable influence on sales, so this feature deserves attention. Next we compare yearly sales and the number of customers per year, as shown below.
display(Image('C:/Users/Administrator/Desktop/report/' + "5.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
The figure shows that the year and the number of customers are somewhat related, but the relationship is not strong. Next we use box plots and line charts to analyse the relationship between the month and the customer count.
display(Image('C:/Users/Administrator/Desktop/report/' + "6.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
This shows that customer count and month are closely related, and the trend looks very similar to the month-versus-sales curve, which suggests that the number of customers largely determines sales. To verify this idea, we check it at a finer time scale; below I use the day of the week to look at the relationship between customers and sales.
display(Image('C:/Users/Administrator/Desktop/report/' + "7.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
The figure above clearly confirms our reasoning: changes in sales essentially track changes in customer count, and the percentage changes are almost identical. Now let us see whether promotions have a noticeable effect on sales and customer numbers.
display(Image('C:/Users/Administrator/Desktop/report/' + "8.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
As the figure shows, promotions have a significant effect on both customer count and sales; the effect on sales is larger than the effect on customer count, so we can infer that promotions raise spending per customer to some extent. Next we look at the share of the three state-holiday types a, b and c in the total number of days, and at how the presence or absence of a state holiday affects sales and customers.
display(Image('C:/Users/Administrator/Desktop/report/' + "9.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Next we look at the effect of school holidays on sales and customer count.
display(Image('C:/Users/Administrator/Desktop/report/' + "11.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
We can then analyse the relationship between customer count and sales; the figure below roughly illustrates the link between customer count and spending per customer.
display(Image('C:/Users/Administrator/Desktop/report/' + "12.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
From this we can infer that excessive promotion may drive spending per customer down, which is somewhat counterproductive, so a balance should be kept. Next we look at the data after merging the various "store" features into "train", for example the share of each store type and the effect of store type on sales and customer count.
display(Image('C:/Users/Administrator/Desktop/report/' + "13.png", width=1000))
display(Image('C:/Users/Administrator/Desktop/report/' + "14.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
The effect of long-running promotions on sales and customer numbers is shown in the figure below.
display(Image('C:/Users/Administrator/Desktop/report/' + "15.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Next we look at a fairly important feature: the relationship between the distance to competitors and sales. The shape resembles a normal distribution.
display(Image('C:/Users/Administrator/Desktop/report/' + "16.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
What happens to a store's average sales over time once a competitor opens? I demonstrate this with a single store: the average sales of store_id = 6 dropped sharply after the competition started.
display(Image('C:/Users/Administrator/Desktop/report/' + "17.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
2.2 Algorithms and methods

Let's start with the features. As the sections above show, sales cannot be predicted from the customer-count data alone or from promotions alone; sales are affected by every attribute. To predict sales we first remove the outliers from the "Customers" and "Sales" features, because unusually high or low values are likely anomalies, and such data hurt the model's accuracy. After handling the outliers we can start preprocessing the data. This includes getting rid of null values and encoding features such as StoreType and StateHoliday. Missing data are filled in or dropped according to the real-world meaning of each feature; for example, I assume the store is "open" on every day except Sunday, which matches common sense. As mentioned earlier, transforming the dates is very important for prediction, which the visualizations confirmed, since sales vary strongly with day, month and year. We also process the competition details, because competition certainly affects the customer numbers that really drive sales. Promotions may help bring customers back, which is why all the Promo columns need to be encoded. Once the data have been preprocessed, they are split with the cross_validation.train_test_split method, which shuffles the data randomly and returns a training set and a test set (the test size can be specified). The training set is then used to train several models.

Methods used for the models: I experimented with several approaches, tested separately: DecisionTree regression, GradientBoost regression and KNeighbors. I evaluated each on its own and then also built an ensemble, but in the end the ready-made xgboost library worked better.

1. DecisionTree regression: the goal is to create a model that predicts the value of the target variable by learning simple decision rules inferred from the data features. A decision tree takes the features and applies if-then-else decision rules to reach the target variable.
2. KNeighbors regression: the nearest-neighbour principle is to find a predefined number of training samples closest to the new point and to predict the label from them. K-neighbors regression learns from the k nearest neighbours of each query point, where k is a user-specified integer.
3. GradientBoost regression: gradient boosting is a generalization of boosting to arbitrary differentiable loss functions. It produces a prediction model in the form of an ensemble of weak prediction models, usually decision trees, built in a stage-wise fashion like other boosting methods.

These models were chosen for this problem because of the following properties:
- Given the features that affect sales, the data can easily be broken down into if-then-else decision rules over the input features. The data preparation this requires is easy, so we do not need to worry about the method's main drawback here. Although decision trees can be unstable, they handle both categorical and numerical data very well, so the mixture of feature types in our dataset will not be a problem. We can check the model's test score to see whether instability is an issue for our data and then decide whether it can be used.
- Our dataset has a large number of data points. KNeighbors performs fast brute-force computation, which helps keep the cost of the model down. On the other hand, it is only effective when some of the input features are not continuous values; we can use the test score to find out how much this affects the model's accuracy.
- GradientBoost is a slow model, so its cost will be high. On the other hand, it takes a different approach by optimizing arbitrary differentiable loss functions. If we cannot get a good score with any other model, we may get one with this method.

2.3 Benchmark

Considering that we are going to try three or more models, I expect one of them to achieve a good RMSE score; note that with RMSE lower is better and 0 is a perfect score. For the predictions to be useful, store managers need some confidence in the model's accuracy. A certain error rate is acceptable, but if the error is almost one in five the model is only barely adequate: an experienced store manager could forecast with that much error. If the model does not predict at least four out of five data points correctly, the data need more processing and the chosen model needs to be tuned or replaced. The RMSE benchmark is therefore 0.20.

3. Methods

First we handle the outliers. Detecting outliers in the data is a very important part of any preprocessing step, because their presence often distorts the results. There are many "rules" for what constitutes an outlier in a dataset; here Tukey's method is used: an outlier is a point that lies more than 1.5 times the interquartile range (IQR) beyond the quartiles, and a data point with such a value in a feature is considered anomalous for that feature. I therefore wrote code, aided by visualization, to find the outliers in Customers and Sales and to see which outliers are common to both features; the common outliers are then dropped.

As mentioned before, the data need a fair amount of processing. Some features with numeric values can be used directly: "Store", "Competition", "Promo", "Promo2", "School". The initial step is to fill all NaN values with zero; here we assume a column was left empty because the feature was absent. To speed up processing I then dropped the rows where the store was closed (where Open is set to zero), because we only want to train the model on days when the store was open and there were sales. The features with categorical values, "StoreType", "Assortment" and "StateHoliday", have all their values replaced with substitutes that can be used in the model. After that we move to the dates. The given date format is arbitrary and needs processing, so every date is split into "DayOfWeek", "Month", "Day", "Year" and "WeekOfYear" features. The competition dates, given in years or months, are then processed: all values are converted into months so there is a single unit to compare with. The same step is applied to "PromoOpen", which is given in years and weeks. Finally, "IsPromoMonth" is mapped from the month value and assigned 0 or 1. To make model creation a bit faster, all rows with sales equal to 0 are dropped, since these are probably unfilled values that would only affect the model negatively.

3.2 Implementation

The implementation first splits the steps into separate functions so that the time each one takes can be measured as a proxy for cost. The first function takes the classifier it uses as a parameter, fits it, and reports the time. The second function runs predictions on the training set itself and returns the root-mean-squared error, together with the time needed to predict on the training set. The third function predicts on the test set and reports the time and score. First the sales data are converted to log values so they are easier to predict. Then each model mentioned in the previous section is called and passed as a parameter to these functions, and the time and score are reported for each. Once the scores are reported, the most effective model is selected and the feature importances are computed; feature importance tells us which features are most relevant when making predictions, and it can be compared with the analysis we did while exploring the data. The feature importances are then shown in a bar chart. The whole dataset is then trained on the selected model, and the final predictions are made and saved to the test file.

DecisionTree regression was chosen for refinement. Initially the error rate was high, so the sales data used for training were converted to logs to lower it. The mean squared error on the test set was 0.1819, already better than the expected value. Then, by applying GridSearchCV, the error rate was reduced to 0.164 (a minimal sketch of this tuning step appears at the end of this report section). GridSearchCV exhaustively considers all combinations of the parameters passed in the parameter grid; in this case the leaf-sample and sample-split values of the decision tree algorithm were optimized to obtain the best score.

4. Results

4.1 Model evaluation and validation

A DecisionTreeRegressor was trained on a training set of size 392,592. Training took 6.3615 seconds. Predictions on the training set were made in 0.6946 seconds, with a mean_squared_error of 0.0000. Predictions on the test set were made in 0.1140 seconds, with a mean_squared_error of 0.1819.

A KNeighborsRegressor was trained on a training set of size 392,592. Training took 3.6165 seconds. Predictions on the training set were made in 23.1225 seconds, with a mean_squared_error of 0.1927. Predictions on the test set were made in 5.8234 seconds, with a mean_squared_error of 0.2470.

A GradientBoostingRegressor was trained on a training set of size 392,592. Training took 71.3005 seconds. Predictions on the training set were made in 1.1283 seconds, with a mean_squared_error of 0.3151. Predictions on the test set were made in 0.2588 seconds, with a mean_squared_error of 0.3181.

4.2 Justification

Analysis: in the case of KNeighbors, the model's cost was lower than expected and the error rate was below the chosen benchmark. It is a good model, but its error rate is higher than the DecisionTree model's, so it is not used for the final prediction. The GradientBoost regressor has an extremely high training cost and also gives the highest error rate of the models considered, so it is not classed as the optimal model.
DecisionTree regression is the clear winner: it has the lowest error rate and its training time is not too long. Even though its training time is higher than KNeighbors', its prediction time is much lower. Against an RMSE benchmark value of 0.33, the model's value of 0.18 is better, which makes DecisionTree the optimal model. After optimizing the DecisionTree with grid search, the model's error rate dropped to 0.16.

Specific justification: in the previous section the benchmark error rate was set at almost 0.20. With DecisionTree regression the error rate is almost two thirds of that expected value. If sales can be predicted with an error rate of only 0.16, managers can easily make the necessary changes and see what increases or decreases sales. DecisionTree regression creates a model that predicts the sales value by learning simple decision rules inferred from the data features. The error is so low because every feature is used in if-else decision rules to predict sales, and the model is a good fit for both the numerical and the categorical features. The whole dataset was therefore given a final training run, and the sales of the test set were predicted. It is safe to say that this model will predict the required values accurately, so the task of this project is complete!

5. Conclusion

Free-form visualization: finally, now that the optimal model has been trained, we can see which features carry the most importance and evaluate the predictions we made earlier in this project. Let us look at the importances.

5.1 Feature importance visualization and analysis
display(Image('C:/Users/Administrator/Desktop/report/' + "19.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
As the figure shows, Store, DayOfWeek and Date are the most important features and appear to make the largest difference in sales, which confirms our earlier prediction. Holidays and promotions also seem to make a large difference, but those features have been down-weighted. Below is the feature analysis from my later xgboost run, which achieved an error below 0.1; it differs noticeably from the result above, as the next figure shows. In particular, the features ranked 3-10 clearly receive larger weights after the xgboost run. In other words, xgboost is better at discovering how features contribute to the prediction, unlike a single model that is overly simple; common sense alone tells us that relying too heavily on one feature carries considerable risk. So the final results naturally differ as well.
display(Image('C:/Users/Administrator/Desktop/report/' + "20.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
5.2 Reflection

The analysis at the start was a meaningful part of the project, because it tells us which features affect sales, much like the feature_importance attribute of the DecisionTree regressor does. Visualizing the data was difficult because it had not been preprocessed: the many NaN values in the data degraded the quality of the output at almost every stage. I expected the final model to take less time, but as long as training time is not measured in hours it hardly matters. Optimizing the model was a challenge because of the indexing errors that had to be handled (as noted in the code). The model can now be used to predict sales; even if more stores or a different business come along, as long as they have similar features the model can be used for reasonable predictions.

How to apply the model in a business setting (we would need to check with the business whether any of the following constraints apply):
- Check with the business how often the model needs to be refreshed in production.
- Develop an end-to-end pipeline that takes the combined sales data from all 1,115 stores, does the data preprocessing and feature engineering, trains the model (with cross-validation), and outputs predictions according to the refresh frequency.
- The pipeline should be able to continuously integrate new data (daily/weekly) and keep the predictions as accurate as possible when the model is retrained on data that includes the new data.
- A report should be sent to each store owner with the specific forecast for their store for the next six weeks.

5.3 Improvement

See the results I obtained with xgboost. Since that work was mainly parameter tuning plus the feature engineering experiments already described, I will not analyse it in detail here; the specifics are in the code. I will paste a few of the final screenshots. On the first attempt I did not really know how to tune the parameters; after reading a few blog posts the second attempt improved significantly. The final results are below.
display(Image('C:/Users/Administrator/Desktop/report/' + "21.png", width=1000))
display(Image('C:/Users/Administrator/Desktop/report/' + "22.png", width=1000))
CapstonePoject/capstone_report.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
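The report above describes log-transforming sales and tuning a DecisionTreeRegressor with GridSearchCV over the sample-split and leaf-sample parameters. The block below is only a minimal sketch of that tuning step under assumed names (X_train as the engineered features, y_train_log as the log-transformed sales); the parameter grid is illustrative, not the one actually used.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical stand-ins: X_train holds the engineered features,
# y_train_log = np.log1p(sales) as described in the implementation section.
param_grid = {
    "min_samples_split": [2, 5, 10, 15],
    "min_samples_leaf": [1, 5, 10, 15],
}
grid = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid,
    scoring="neg_mean_squared_error",  # lower MSE on log-sales tracks lower RMSE
    cv=5,
)
grid.fit(X_train, y_train_log)
print(grid.best_params_, np.sqrt(-grid.best_score_))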
Preprocess
To do anything useful with it, we'll need to turn each string into a list of characters:
<img src="images/source_and_target_arrays.png"/>
Then convert the characters to their int values as declared in our vocabulary:
def extract_character_vocab(data):
    special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']

    set_words = set([character for line in data.split('\n') for character in line])
    int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
    vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}

    return int_to_vocab, vocab_to_int

# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)

# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line]
                     for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line]
                     + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\n')]

print("Example source sequence")
print(source_sentences[:30])
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_sentences[:30])
print(target_letter_ids[:3])
seq2seq/sequence_to_sequence_implementation.ipynb
abhi1509/deep-learning
mit
Set up the decoder components

- Embedding
- Decoder cell
- Dense output layer
- Training decoder
- Inference decoder

1- Embedding

Now that we have prepared the inputs to the training decoder, we need to embed them so they are ready to be passed to the decoder. We'll create an embedding matrix like the following, then have tf.nn.embedding_lookup convert our input to its embedded equivalent:
<img src="images/embeddings.png" />

2- Decoder Cell

Then we declare our decoder cell. Just like the encoder, we'll use a tf.contrib.rnn.LSTMCell here as well. We need to declare a decoder for the training process and a decoder for the inference/prediction process. These two decoders will share their parameters (so that all the weights and biases set during the training phase can be used when we deploy the model). First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.

3- Dense output layer

Before we move on to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits telling us which element of the decoder vocabulary the decoder chooses to output at each time step.

4- Training decoder

Essentially, we'll be creating two decoders which share their parameters: one for training and one for inference. The two are similar in that both are created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the target sequences as inputs to the training decoder at each time step to make it more robust.

We can think of the training decoder as looking like this (except that it works with sequences in batches):
<img src="images/sequence-to-sequence-training-decoder.png"/>

The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).

5- Inference decoder

The inference decoder is the one we'll use when we deploy our model to the wild.
<img src="images/sequence-to-sequence-inference-decoder.png"/>

We'll hand our encoder hidden state to both the training and inference decoders and have them process their outputs. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.
def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size, target_sequence_length, max_target_sequence_length, enc_state, dec_input): target_vocab_size = len(target_letter_to_int) # 1. Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) #dec_embed_input = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size) # 2. Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary # https://www.tensorflow.org/api_docs/python/tf/layers/dense # dense( # inputs, # units, # activation=None, # use_bias=True, # kernel_initializer=None, # bias_initializer=tf.zeros_initializer(), # kernel_regularizer=None, # bias_regularizer=None, # activity_regularizer=None, # trainable=True, # name=None, # reuse=None) output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): # Helper for the training process. Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, enc_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] # 5. Inference Decoder # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, target_letter_to_int['<EOS>']) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, enc_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return training_decoder_output, inference_decoder_output
seq2seq/sequence_to_sequence_implementation.ipynb
abhi1509/deep-learning
mit
Model outputs
training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this:
<img src="images/logits.png"/>
We'll pass the logits we get from the training tensor to tf.contrib.seq2seq.sequence_loss() to calculate the loss and, ultimately, the gradients.
# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():
    # Load the model inputs
    input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()

    # Create the training and inference logits
    training_decoder_output, inference_decoder_output = seq2seq_model(input_data,
                                                                      targets,
                                                                      lr,
                                                                      target_sequence_length,
                                                                      max_target_sequence_length,
                                                                      source_sequence_length,
                                                                      len(source_letter_to_int),
                                                                      len(target_letter_to_int),
                                                                      encoding_embedding_size,
                                                                      decoding_embedding_size,
                                                                      rnn_size,
                                                                      num_layers)

    # Create tensors for the training logits and inference logits
    # https://discussions.udacity.com/t/need-some-help-understanding-the-decoder-part-of-the-implementation-tf1-1/276860
    training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
    inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')

    # Create the weights for sequence_loss
    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)
seq2seq/sequence_to_sequence_implementation.ipynb
abhi1509/deep-learning
mit
Preprocessing tweets
The next step is to look at the tweets and see if we can preprocess them so they work better for our model. We want to convert each tweet to a list of words. As input we have tweets which are unicode strings; I simply convert them to lists of words by splitting on whitespace. Then, I do these conversions (a minimal sketch of these steps follows the next cell):
- convert all text to lowercase (I assume case does not affect meaning, also SoMe PeOPle WrItE LiKe tHiS)
- remove hyperlinks
- remove @UserNames (marks an answer to another user's tweet)
- remove stopwords (words like 'a', 'the' etc.) - I use the NLTK stopwords corpus
- convert numbers from 0-20 to their textual representation - I use the 'inflect' library
- remove words that contain non-alphanumeric characters

We will also ignore words which don't exist in our Word Embedding. This filtering will be done later, because I wanted the preprocessed dataset to be a single file, fit for all embeddings and models.
from src.data.dataset import run_interactive_processed_data_generation

# When prompted for a folder name, insert 'gathered_dataset'
run_interactive_processed_data_generation()
# data will be saved in data/gathered_dataset/processed/training_set.txt (script shows a relative path)
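The steps listed above are implemented inside the repository's dataset module; the block below is only a minimal sketch of that preprocessing, assuming the NLTK stopwords corpus and the inflect library are installed. The function and variable names here are illustrative, not the project's actual ones.

import re
import inflect
from nltk.corpus import stopwords

STOP_WORDS = set(stopwords.words('english'))
NUMBER_ENGINE = inflect.engine()

def preprocess_tweet(text):
    """Roughly follows the conversion steps described above (illustrative names)."""
    words = text.lower().split()                                            # lowercase, split on whitespace
    words = [w for w in words if not w.startswith(('http', 'www.', '@'))]   # hyperlinks and @UserNames
    words = [w for w in words if w not in STOP_WORDS]                       # stopwords
    words = [NUMBER_ENGINE.number_to_words(int(w)) if w.isdigit() and int(w) <= 20 else w
             for w in words]                                                # numbers 0-20 -> text
    words = [w for w in words if re.fullmatch(r'[a-z]+', w)]                # keep purely alphabetic tokens
    return words

print(preprocess_tweet("I saw 3 dogs at http://example.com @someone today"))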
Notebook.ipynb
mikolajsacha/tweetsclassification
mit
Embedding tweets
We have acquired a Word Embedding which gives us a vector of numbers for each word. However, our training set consists of lists of words, so we need to specify how to represent a sentence (tweet) based on the representations of its words. I test two different approaches (a minimal sketch of both follows the next cell).

The first is to simply concatenate word vectors, so for each sentence we get a vector of size (number of words in the sentence) * (size of a single word vector). To use machine learning algorithms each tweet vector must have the same size, so we need to specify a constant sentence length. For instance, we can decide that sentences longer than 30 words will be cut at the end and sentences shorter than 30 words will be padded with zeros.

The second approach is not to concatenate, but to sum the word vectors (element-wise). This means the size of the sentence vector will be the same as the size of a word vector, so we don't have to artificially cut or lengthen tweets.

Concatenation yields longer vectors, so models will obviously need more time to compute. On the other hand, we don't lose information about the order of words. However, if we look at this example we can see that for finding categories the order of words may not be so important:

Tweet 1: "President Obama was great!"
Tweet 2: "What a great president Obama was!"

Both are obviously about politics and we know this whatever the order of words is. Furthermore, if we have a small training set, we may encounter only one of these tweets. With the sum embedding this is not a problem, because both will have very similar representations. With the concatenation embedding we may end up with a model which labels only one of them correctly. So it is not obvious which embedding is going to work better.
# 3D visualization of sentence embeddings, similar to visualization of word embeddings.
# It transforms sentence vectors to 3 dimensions using PCA trained on them.
# Colors of sentences align with their category
from src.visualization.visualize_sentence_embeddings import visualize_sentence_embeddings
from src.features.sentence_embeddings.sentence_embeddings import ConcatenationEmbedding, SumEmbedding
from src.features.word_embeddings.word2vec_embedding import Word2VecEmbedding

# You can also use GloVe embedding here
# To use GloVe embedding: from src.features.word_embeddings.glove_embedding import GloveEmbedding
word_emb = Word2VecEmbedding('google/GoogleNews-vectors-negative300.bin', 300)
word_emb.build()

sentence_embeddings = [
    ConcatenationEmbedding,
    SumEmbedding
]

visualize_sentence_embeddings(word_emb, sentence_embeddings)
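To make the two approaches concrete, here is a minimal numpy sketch of both sentence embeddings, assuming a dict word_vectors that maps words to equal-length vectors; the fixed sentence length and the names are illustrative, not the repository's classes.

import numpy as np

def sum_embedding(words, word_vectors, dim):
    """Element-wise sum of word vectors; output size equals the word-vector size."""
    vec = np.zeros(dim)
    for w in words:
        if w in word_vectors:          # words missing from the embedding are ignored
            vec += word_vectors[w]
    return vec

def concatenation_embedding(words, word_vectors, dim, max_words=30):
    """Concatenate word vectors, cutting or zero-padding to a fixed sentence length."""
    vec = np.zeros(max_words * dim)
    for i, w in enumerate(words[:max_words]):
        if w in word_vectors:
            vec[i * dim:(i + 1) * dim] = word_vectors[w]
    return vec

# toy example with 3-dimensional word vectors
word_vectors = {"president": np.array([1.0, 0.0, 0.0]), "great": np.array([0.0, 1.0, 0.5])}
print(sum_embedding(["president", "great"], word_vectors, 3))            # length 3
print(concatenation_embedding(["president", "great"], word_vectors, 3))  # length 90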
Notebook.ipynb
mikolajsacha/tweetsclassification
mit
Finding the best model hyperparameters
I chose four standard classifiers from the sklearn package to test: SVC (SVM classifier), RandomForestClassifier, MLPClassifier (multi-layer perceptron classifier - neural network) and KNeighborsClassifier (classification by k-nearest neighbours). For all of these methods I defined a set of hyperparameters to test:

    CLASSIFIERS_PARAMS = [
      (SVC,
        {"C": list(log_range(-1, 8)),
         "gamma": list(log_range(-7, 1))}),

      (RandomForestClassifier,
        {"criterion": ["gini", "entropy"],
         "min_samples_split": [2, 5, 10, 15],
         "min_samples_leaf": [1, 5, 10, 15],
         "max_features": [None, "sqrt"]}),

      (MLPClassifier,
        {"alpha": list(log_range(-5, -2)),
         "learning_rate": ["constant", "adaptive"],
         "activation": ["identity", "logistic", "tanh", "relu"],
         "hidden_layer_sizes": [(100,), (100, 50)]}),

      (KNeighborsClassifier,
        {"n_neighbors": [1, 2, 3, 4, 7, 10, 12, 15, 30, 50, 75, 100, 150],
         "weights": ['uniform', 'distance']})
    ]

I test all possible combinations of these parameters, for all combinations of word embeddings and sentence embeddings. This yields a lot of test cases. Fortunately, the classifiers from the sklearn package have similar interfaces, so I wrote a generic method for grid search. It looks more or less like this:

    for word_embedding in word_embeddings:
        build word embedding
        for sentence_embedding in sentence_embeddings:
            build features using sentence embedding and word embedding
            for classifier_class, tested_parameters in CLASSIFIERS_PARAMS:
                run multithreaded GridSearchCV for classifier_class and tested_parameters
                    on training-set labels and built features
                save GridSearchCV results in a summary file in the /summaries folder

With models from sklearn we can also use the GridSearchCV method from this package, which performs the search for the best parameters in parallel. For each combination of parameters it saves the average score across folds; I use a fold count of 5.

Testing all models can take up to a couple of hours. Because of this, I save the grid-search results in the summaries folder, which is included in the repository. We can use these results to get the best possible model or to interpret the results.

By viewing the grid-search result files we can see that the best scores are somewhere around 85%.
# run grid search on chosen models # consider doing a backup of text summary files from /summary folder before running this script from src.common import DATA_FOLDER, FOLDS_COUNT, CLASSIFIERS_PARAMS, \ SENTENCE_EMBEDDINGS, CLASSIFIERS_WRAPPERS, WORD_EMBEDDINGS from src.models.model_testing.grid_search import grid_search classifiers_to_check = [] for classifier_class, params in CLASSIFIERS_PARAMS: to_run = raw_input("Do you wish to test {0} with parameters {1} ? [y/n] " .format(classifier_class.__name__, str(params))) if to_run.lower() == 'y' or to_run.lower() == 'yes': classifiers_to_check.append((classifier_class, params)) print("*" * 20 + "\n") grid_search(DATA_FOLDER, FOLDS_COUNT, word_embeddings=WORD_EMBEDDINGS, sentence_embeddings=SENTENCE_EMBEDDINGS, classifiers=classifiers_to_check, n_jobs=-1) # uses as many threads as CPU cores # now that we have performed grid search, we can train model using the best parameters and test it interactively from src.common import choose_classifier, LABELS, SENTENCES from src.models.model_testing.grid_search import get_best_from_grid_search_results from src.features.build_features import FeatureBuilder from src.visualization.interactive import interactive_test classifier = choose_classifier() # choose classifier model best_parameters = get_best_from_grid_search_results(classifier) # get best parameters if best_parameters is None: exit(-1) # no grid search summary file for this model word_emb_class, word_emb_params, sen_emb_class, hyperparams = best_parameters print ("\nEvaluating model for word embedding: {:s}({:s}), sentence embedding: {:s} \nHyperparameters {:s}\n" .format(word_emb_class.__name__, ', '.join(map(str, word_emb_params)), sen_emb_class.__name__, str(hyperparams))) hyperparams["n_jobs"] = -1 # uses as many threads as CPU cores print ("Building word embedding...") word_emb = word_emb_class(*word_emb_params) word_emb.build() print ("Building sentence embedding...") sen_emb = sen_emb_class() sen_emb.build(word_emb) print ("Building features...") # this helper class builds a matrix of features with provided sentence embedding fb = FeatureBuilder() fb.build(sen_emb, LABELS, SENTENCES) print ("Building model...") clf = classifier(sen_emb, probability=True, **hyperparams) clf.fit(fb.features, fb.labels) print ("Model evaluated!...\n") interactive_test(clf) # This scripts visualizes how models work in 2D # It scatters training samples on a 2D space and colors backgrounds according to model prediction # It uses best possible parameters from grid search result file # I use PCA to reduce number of dimension to 2D # Models perform poorly in 2D, but we can see how boundaries between categories may look for different methods from src.visualization.visualize_2d import visualize_2d from src.features.word_embeddings.word2vec_embedding import Word2VecEmbedding from src.common import choose_multiple_classifiers classifier_classes = choose_multiple_classifiers() # use same word embedding for all classifiers - we don't want to load more than one embedding to RAM word_emb = Word2VecEmbedding('google/GoogleNews-vectors-negative300.bin', 300) word_emb.build() visualize_2d(word_emb, classifier_classes) # Attention: There is probably a bug in matplotlib, which causes the color of the fifth category # not to be drawn in the background (cyan, 'Movies' category)
Notebook.ipynb
mikolajsacha/tweetsclassification
mit
Applying PCA
Until now, I applied PCA several times to make plotting 2D or 3D figures possible. There is a chance that applying a well-fitted Principal Component Analysis to our data set will increase model accuracy. One option is to fit PCA to our sentence vectors and then train the model on the sentence vectors transformed with PCA. With a reduced number of dimensions the model might also run faster (a small sklearn sketch of the idea follows the next cell).

The script below lets the user see how applying PCA to a chosen model works. It cross-validates the model built on PCA with the number of dimensions reduced to values distributed linearly between 1 and the initial number of dimensions. There is, however, a limit: we can't fit PCA to a number of dimensions higher than the number of training samples. So if we have a dataset of 1500 tweets and use 5 folds for cross-validation, the maximum number of dimensions after PCA is 1200 (4/5 * 1500). This can be an issue when we use the concatenation embedding.

The script measures cross-validation scores, model training time (including fitting PCA), and the average per-sentence prediction time for a trained model.

My conclusion after seeing the results is that applying PCA probably wouldn't be advantageous for our problem, as there is a noticeable decrease in training speed and no apparent improvement in accuracy compared to the model without PCA.
# training for PCA visualization may take a while, so I also use a summary file for storing the results
from src.visualization.visualize_pca import visualize_pca
from src.common import choose_classifier

classifier_class = choose_classifier()
visualize_pca(classifier_class, n_jobs=-1)
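For reference, this is a minimal sklearn sketch of the comparison being measured: cross-validating a classifier with and without a PCA step in front of it. The feature matrix X and labels y are assumed to exist, and the component count is illustrative.

from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical X, y: sentence-embedding features and category labels.
plain_scores = cross_val_score(SVC(), X, y, cv=5)

pca_pipeline = Pipeline([
    ("pca", PCA(n_components=50)),   # illustrative number of components
    ("svc", SVC()),
])
pca_scores = cross_val_score(pca_pipeline, X, y, cv=5)

print(plain_scores.mean(), pca_scores.mean())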
Notebook.ipynb
mikolajsacha/tweetsclassification
mit
Expanding the training set
After having a way to choose the best parameters for a model, I modified the script for mining tweets. In the beginning I used keywords to search for tweets possibly fit for the training set. This gave me fairly clean tweets, but it may make my models less general, because they fit mostly to those keywords.

The approach I use to expand the current training set is to use a trained classification model to test tweets coming in from Twitter. I declare a threshold, for instance 0.7, and take into consideration only those tweets that are assigned to one category with a probability above the threshold (the filtering criterion itself is sketched after the next cell).
from src.data.data_gathering.tweet_miner import mine_tweets

# At first, we train the best possible model we have so far (searching in grid search results files)
# Then, we run as many threads as CPU cores, or two if there is only one core.
# First thread reads stream from Twitter API and puts all tweets in a synchronous queue.
# Other threads read from the queue, test tweets on model
# and if threshold requirement is met, put them in mined_tweets.txt file

# by setting this flag to False we ignore tweets that are classified as "Other"
include_unclassified = False
threshold = 0.7

mine_tweets(threshold, include_unclassified)  # this method can be stopped only by killing the process

# when we have some tweets in mined_tweets.txt, we must label them manually.
from src.data.data_gathering.tweet_selector import select_tweets

select_tweets()
# Results will be stored in file selected_tweets.txt.
# We can manually merge this file with already existing training set
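The threshold check itself reduces to a test on the classifier's predicted probabilities. A minimal sketch, assuming a fitted sklearn-style classifier clf with predict_proba and an already-embedded tweet vector x (names are illustrative):

import numpy as np

THRESHOLD = 0.7

def accept_tweet(clf, x):
    """Keep a tweet only if one category gets probability above the threshold."""
    probabilities = clf.predict_proba(x.reshape(1, -1))[0]
    best = int(np.argmax(probabilities))
    if probabilities[best] >= THRESHOLD:
        return best   # index of the predicted category
    return None       # discard the tweet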
Notebook.ipynb
mikolajsacha/tweetsclassification
mit
Some analysis
Having performed grid search on several models, we can analyse how the various hyperparameters perform. Use the script below to see some plots showing cross-validation performance for different parameters.
# compare how different sentence embedding perform for all tested models from src.common import SENTENCE_EMBEDDINGS from src.visualization.compare_sentence_embeddings import get_grid_search_results_by_sentence_embeddings from src.visualization.compare_sentence_embeddings import compare_sentence_embeddings_bar_chart sen_embeddings = [sen_emb.__name__ for sen_emb in SENTENCE_EMBEDDINGS] grid_search_results = get_grid_search_results_by_sentence_embeddings(sen_embeddings) compare_sentence_embeddings_bar_chart(grid_search_results) # compare overall performance of all tested models from src.visualization.compare_models import get_available_grid_search_results from src.visualization.compare_models import compare_models_bar_chart best_results_for_models = get_available_grid_search_results() for classifier_class, parameters in best_results_for_models: word_emb_class, word_emb_params, sen_emb_class, params, best_result, avg_result = parameters print ("\n{0}: Best result: {1}%, Average result: {2}%". format(classifier_class.__name__, best_result, avg_result)) print ("For embeddings: {0}({1}), {2}".format(word_emb_class.__name__, ', '.join(map(str, word_emb_params)), sen_emb_class.__name__)) print ("And for parameters: {0}".format(str(params))) compare_models_bar_chart(best_results_for_models) # For a given model, visualize how it performs for a chosen parameter or pair of parameters from src.common import choose_classifier from src.visualization.visualize_parameters import get_all_grid_searched_parameters from src.visualization.visualize_parameters import choose_parameters_to_analyze from src.visualization.visualize_parameters import analyze_single_parameter from src.visualization.visualize_parameters import analyze_two_parameters classifier_class = choose_classifier() parameters_list = get_all_grid_searched_parameters(classifier_class) if not parameters_list: # grid search results not found exit(-1) tested_parameters = list(parameters_list[0][0].iterkeys()) parameters_to_analyze = choose_parameters_to_analyze(tested_parameters) # if we choose a single parameter, draw 1D plot if len(parameters_to_analyze) == 1: analyze_single_parameter(parameters_to_analyze[0], classifier_class, parameters_list) # if we choose two parameters, draw 2D plot elif len(parameters_to_analyze) == 2: analyze_two_parameters(parameters_to_analyze[0], parameters_to_analyze[1], classifier_class, parameters_list)
Notebook.ipynb
mikolajsacha/tweetsclassification
mit
After importing the libraries that will be used, the user-defined function main_eos() specifies the pure substance together with the equation-of-state model and the parameters required by the function "pt.function_elv(components, Vc, Tc, Pc, omega, k, d1)", which performs the calculations of the algorithm described previously.
def main_eos():
    print("-" * 79)
    components = ["METHANE"]
    MODEL = "PR"
    specification = "constants"
    component_eos = pt.parameters_eos_constans(components, MODEL, specification)
    #print(component_eos)
    #print('-' * 79)
    methane = component_eos[component_eos.index == components]
    #print(methane)
    methane_elv = methane[["Tc", "Pc", "k", "d1"]]
    #print(methane_elv)

    Tc = np.array(methane["Tc"])
    Pc = np.array(methane["Pc"])
    Vc = np.array(methane["Vc"])
    omega = np.array(methane["Omega"])
    k = np.array(methane["k"])
    d1 = np.array(methane["d1"])

    punto_critico = np.array([Pc, Vc])

    print("Tc main = ", Tc)
    print("Pc main = ", Pc)
    print("punto critico = ", punto_critico)

    data_elv = pt.function_elv(components, Vc, Tc, Pc, omega, k, d1)
    #print(data_elv)

    return data_elv, Vc, Pc
Envolvente.ipynb
pysg/pyther
mit
9.4 Results
The liquid-vapour phase diagram of a pure substance is obtained using the method function_elv(components, Vc, Tc, Pc, omega, k, d1) from the pyther library. Note that the main_eos() function above could be replaced by a block of widgets that simplify the graphical interface for users.
volumen = envolvente[0][0]
presion = envolvente[0][1]
Vc, Pc = envolvente[1], envolvente[2]

plt.plot(volumen, presion)
plt.scatter(Vc, Pc)
plt.xlabel('Volumen [=] $mol/cm^3$')
plt.ylabel('Presión [=] bar')
plt.grid(True)
plt.text(Vc * 1.4, Pc * 1.01, "Punto critico")
Envolvente.ipynb
pysg/pyther
mit
If we specify the carbon source, we can also get the carbon and mass yield. For example, temporarily setting the objective to produce acetate instead, we can compute the production envelope as follows and use pandas to quickly plot the results.
prod_env = production_envelope(
    model, ["EX_o2_e"], objective="EX_ac_e", c_source="EX_glc__D_e")
prod_env.head()

%matplotlib inline
prod_env[prod_env.direction == 'maximum'].plot(
    kind='line', x='EX_o2_e', y='carbon_yield')
documentation_builder/phenotype_phase_plane.ipynb
zakandrewking/cobrapy
lgpl-2.1
That layer, when called with a list of sentences, will create a sentence vector for each sentence by averaging the word vectors of the sentence.
outputs = med_embed(tf.constant(["ilium", "I have a fracture", "aneurism"]))
outputs
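As a plain illustration of that averaging (not the hub module's internal implementation), here is a minimal TensorFlow sketch that averages per-word vectors into one sentence vector; the toy embedding table and vocabulary are made up.

import tensorflow as tf

# Toy lookup table: 3 words, 4-dimensional vectors (made-up values).
vocab = {"i": 0, "have": 1, "fracture": 2}
embeddings = tf.constant([[0.1, 0.2, 0.3, 0.4],
                          [0.5, 0.5, 0.5, 0.5],
                          [0.9, 0.1, 0.0, 0.2]])

def sentence_vector(sentence):
    """Average the word vectors of the words found in the toy vocabulary."""
    ids = [vocab[w] for w in sentence.lower().split() if w in vocab]
    word_vectors = tf.gather(embeddings, ids)
    return tf.reduce_mean(word_vectors, axis=0)

print(sentence_vector("I have a fracture"))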
notebooks/text_models/solutions/custom_tf_hub_word_embedding.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Loading files and dealing with local I/O
import os

# find out "where you are" and "where Data folder is" with these commands
print os.getcwd()
print os.path.abspath("./")
LogReg-sklearn.ipynb
ernestyalumni/MLgrabbag
mit
Let's load the data for Exercise 2 of Machine Learning, taught by Andrew Ng, of Coursera.
# you, the user, may have to change this, if the directory that you're running this from is somewhere else
ex2data1 = np.loadtxt("./Data/ex2data1.txt", delimiter=',')
ex2data2 = np.loadtxt("./Data/ex2data2.txt", delimiter=',')

X_ex2data1 = ex2data1[:, 0:2]
Y_ex2data1 = ex2data1[:, 2]
X_ex2data2 = ex2data2[:, :2]
Y_ex2data2 = ex2data2[:, 2]

logreg.fit(X_ex2data1, Y_ex2data1)

def trainingdat2mesh(X, marginsize=.5, h=0.2):
    rows, features = X.shape
    ranges = []
    for feature in range(features):
        minrange = X[:, feature].min() - marginsize
        maxrange = X[:, feature].max() + marginsize
        ranges.append((minrange, maxrange))
    if len(ranges) == 2:
        xx, yy = np.meshgrid(np.arange(ranges[0][0], ranges[0][1], h),
                             np.arange(ranges[1][0], ranges[1][1], h))
        return xx, yy
    else:
        return ranges

xx_ex2data1, yy_ex2data1 = trainingdat2mesh(X_ex2data1, h=0.2)

Z_ex2data1 = logreg.predict(np.c_[xx_ex2data1.ravel(), yy_ex2data1.ravel()])
Z_ex2data1 = Z_ex2data1.reshape(xx_ex2data1.shape)

plt.figure(2)
plt.pcolormesh(xx_ex2data1, yy_ex2data1, Z_ex2data1)
plt.scatter(X_ex2data1[:, 0], X_ex2data1[:, 1], c=Y_ex2data1, edgecolors='k')
plt.show()
LogReg-sklearn.ipynb
ernestyalumni/MLgrabbag
mit
Get the probability estimates; say a student has an Exam 1 score of 45 and an Exam 2 score of 85.
logreg.predict_proba(np.array([[45, 85]])).flatten()

print "The student has a probability of no admission of %s and probability of admission of %s" % tuple(
    logreg.predict_proba(np.array([[45, 85]])).flatten())
LogReg-sklearn.ipynb
ernestyalumni/MLgrabbag
mit
Let's change the "regularization" with the C parameter/option for LogisticRegression. Call this logreg2
logreg2 = linear_model.LogisticRegression()  # pass C=<value> here to change the regularization strength (default C=1.0)
logreg2.fit(X_ex2data2, Y_ex2data2)

xx_ex2data2, yy_ex2data2 = trainingdat2mesh(X_ex2data2, h=0.02)

Z_ex2data2 = logreg2.predict(np.c_[xx_ex2data2.ravel(), yy_ex2data2.ravel()])
Z_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)

plt.figure(3)
plt.pcolormesh(xx_ex2data2, yy_ex2data2, Z_ex2data2)
plt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')
plt.show()
LogReg-sklearn.ipynb
ernestyalumni/MLgrabbag
mit
As one can see, the "dataset cannot be separated into positive and negative examples by a straight-line through the plot" (cf. ex2.pdf). We're going to need polynomial terms to map onto. Use this code, cf. Underfitting vs. Overfitting:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

polynomial_features = PolynomialFeatures(degree=6, include_bias=False)
pipeline = Pipeline([("polynomial_features", polynomial_features),
                     ("logistic_regression", logreg2)])
pipeline.fit(X_ex2data2, Y_ex2data2)

Z_ex2data2 = pipeline.predict(np.c_[xx_ex2data2.ravel(), yy_ex2data2.ravel()])
Z_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)

plt.figure(3)
plt.pcolormesh(xx_ex2data2, yy_ex2data2, Z_ex2data2)
plt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')
plt.show()
LogReg-sklearn.ipynb
ernestyalumni/MLgrabbag
mit
The principal component score Vector of Hydrophobic, Steric, and Electronic properties (VHSE) is a set of amino acid descriptors introduced in "A new set of amino acid descriptors and its application in peptide QSARs". VHSE1 and VHSE2 are related to hydrophobic (H) properties, VHSE3 and VHSE4 to steric (S) properties, and VHSE5 to VHSE8 to electronic (E) properties.
# (3-letter, VHSE1, VHSE2, VHSE3, VHSE4, VHSE5, VHSE6, VHSE7, VHSE8)
vhse = {
    "A": ("Ala", 0.15, -1.11, -1.35, -0.92, 0.02, -0.91, 0.36, -0.48),
    "R": ("Arg", -1.47, 1.45, 1.24, 1.27, 1.55, 1.47, 1.30, 0.83),
    "N": ("Asn", -0.99, 0.00, -0.37, 0.69, -0.55, 0.85, 0.73, -0.80),
    "D": ("Asp", -1.15, 0.67, -0.41, -0.01, -2.68, 1.31, 0.03, 0.56),
    "C": ("Cys", 0.18, -1.67, -0.46, -0.21, 0.00, 1.20, -1.61, -0.19),
    "Q": ("Gln", -0.96, 0.12, 0.18, 0.16, 0.09, 0.42, -0.20, -0.41),
    "E": ("Glu", -1.18, 0.40, 0.10, 0.36, -2.16, -0.17, 0.91, 0.02),
    "G": ("Gly", -0.20, -1.53, -2.63, 2.28, -0.53, -1.18, 2.01, -1.34),
    "H": ("His", -0.43, -0.25, 0.37, 0.19, 0.51, 1.28, 0.93, 0.65),
    "I": ("Ile", 1.27, -0.14, 0.30, -1.80, 0.30, -1.61, -0.16, -0.13),
    "L": ("Leu", 1.36, 0.07, 0.26, -0.80, 0.22, -1.37, 0.08, -0.62),
    "K": ("Lys", -1.17, 0.70, 0.70, 0.80, 1.64, 0.67, 1.63, 0.13),
    "M": ("Met", 1.01, -0.53, 0.43, 0.00, 0.23, 0.10, -0.86, -0.68),
    "F": ("Phe", 1.52, 0.61, 0.96, -0.16, 0.25, 0.28, -1.33, -0.20),
    "P": ("Pro", 0.22, -0.17, -0.50, 0.05, -0.01, -1.34, -0.19, 3.56),
    "S": ("Ser", -0.67, -0.86, -1.07, -0.41, -0.32, 0.27, -0.64, 0.11),
    "T": ("Thr", -0.34, -0.51, -0.55, -1.06, 0.01, -0.01, -0.79, 0.39),
    "W": ("Trp", 1.50, 2.06, 1.79, 0.75, 0.75, -0.13, -1.06, -0.85),
    "Y": ("Tyr", 0.61, 1.60, 1.17, 0.73, 0.53, 0.25, -0.96, -0.52),
    "V": ("Val", 0.76, -0.92, 0.17, -1.91, 0.22, -1.40, -0.24, -0.03)}
VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb
massie/notebooks
apache-2.0
There were eight datasets used in this study. The reference datasets (s1, s3, s5, s7) were converted into the actual datasets used in the analysis (s2, s4, s6, s8) using the vhse vector. The s2 and s4 datasets were used for training the SVM model, and s6 and s8 were used for testing.
%ls data/proteasomal_cleavage from aa_props import seq_to_aa_props # Converts the raw input into our X matrix and y vector. The 'peptide_key' # and 'activity_key' parameters are the names of the column in the dataframe # for the peptide amino acid string and activity (not cleaved/cleaved) # respectively. The 'sequence_len' allows for varying the number of flanking # amino acids to cleavage site (which is at position 14 of 28 in each cleaved # sample. def dataset_to_X_y(dataframe, peptide_key, activity_key, sequence_len = 28, use_vhse = True): raw_peptide_len = 28 if (sequence_len % 2 or sequence_len > raw_peptide_len or sequence_len <= 0): raise ValueError("sequence_len needs to an even value (0,%d]" % (raw_peptide_len)) X = [] y = [] for (peptide, activity) in zip(dataframe[peptide_key], dataframe[activity_key]): if (len(peptide) != raw_peptide_len): # print "Skipping peptide! len(%s)=%d. Should be len=%d" \ # % (peptide, len(peptide), raw_peptide_len) continue y.append(activity) num_amino_acids_to_clip = (raw_peptide_len - sequence_len) / 2 clipped_peptide = peptide if num_amino_acids_to_clip == 0 else \ peptide[num_amino_acids_to_clip:-num_amino_acids_to_clip] # There is a single peptide in dataset s6 with an "'" in the sequence. # The VHSE values used for it in the study match Proline (P). clipped_peptide = clipped_peptide.replace('\'', 'P') row = [] if use_vhse: for amino_acid in clipped_peptide: row.append(vhse[amino_acid][1]) # hydrophobic row.append(vhse[amino_acid][3]) # steric row.append(vhse[amino_acid][5]) # electric else: row = seq_to_aa_props(clipped_peptide) X.append(row) return (X, y)
VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb
massie/notebooks
apache-2.0
Creating the In Vivo Data
To create the in vivo training set, the authors:
1. Queried the AntiJen database (7,324 MHC-I ligands)
2. Removed ligands with an unknown source protein in ExPASy/SWISS-PROT (6,036 MHC-I ligands)
3. Removed duplicate ligands (3,148 ligands)
4. Removed the 231 ligands used as test samples by Saxova et al. (2,917 ligands)
5. Removed sequences shorter than 28 residues (2,607 ligands) to create the cleavage sample set
6. Assigned non-cleavage sites and removed sequences shorter than 28 residues (2,480 ligands) to create the non-cleavage sample set

This process created 5,087 training samples: 2,607 cleavage and 2,480 non-cleavage samples.

Creating Samples from Ligands and Proteins
The C-terminus of the ligand is assumed to be a cleavage site and the midpoint between the N-terminus and C-terminus is assumed not to be a cleavage site. Both the cleavage and non-cleavage sites are at the center position of each sample (a rough sketch of this windowing follows the next cell).
<img src="images/creating_samples_from_ligands.png"/>

Format of Training Data
Each Sequence is 28 residues long; however, the authors found the best performance using 20 residues. The Activity is 1 for cleavage and -1 for no cleavage. There are 28 * 8 = 224 features in the raw training set.
training_set = pd.DataFrame.from_csv("data/proteasomal_cleavage/s2_in_vivo_mhc_1_antijen_swiss_prot_dataset.csv")
print training_set.head(3)
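The sample construction described above (cleavage site at the ligand's C-terminus, non-cleavage site at the ligand midpoint, each sample 28 residues with the site at the center) can be sketched as follows. This is only an illustration of the windowing logic with made-up helper names, not the authors' code; protein is assumed to be the ligand's source-protein sequence and start/end the ligand's coordinates within it.

def window(protein, center, length=28):
    """Return a window of `length` residues centered on position `center`."""
    half = length // 2
    lo, hi = center - half, center + half
    if lo < 0 or hi > len(protein):
        return None  # skip sites too close to the protein ends
    return protein[lo:hi]

def samples_from_ligand(protein, start, end):
    """Cleavage sample at the ligand C-terminus, non-cleavage sample at its midpoint."""
    cleavage = window(protein, end)                      # C-terminus -> cleavage site
    non_cleavage = window(protein, (start + end) // 2)   # midpoint -> non-cleavage site
    return cleavage, non_cleavage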
VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb
massie/notebooks
apache-2.0
Creating the Linear SVM Model
The authors measured linear, polynomial, radial basis, and sigmoid kernels and found no significant difference in performance. The linear kernel was chosen for its simplicity and interpretability. The authors did not provide the C value used in their linear model, so I used GridSearchCV to find the best value.
from sklearn.model_selection import GridSearchCV from sklearn.feature_selection import RFECV def create_linear_svc_model(parameters, sequence_len = 28, use_vhse = True): scaler = MinMaxScaler() (X_train_unscaled, y_train) = dataset_to_X_y(training_set, \ "Sequence", "Activity", \ sequence_len = sequence_len, \ use_vhse = use_vhse) X_train = pd.DataFrame(scaler.fit_transform(X_train_unscaled)) parameters={'estimator__C': [pow(2, i) for i in xrange(-25, 4, 1)]} svc = svm.LinearSVC() rfe = RFECV(estimator=svc, step=.1, cv=2, scoring='accuracy', n_jobs=8) clf = GridSearchCV(rfe, parameters, scoring='accuracy', n_jobs=8, cv=2, verbose=1) clf.fit(X_train, y_train) # summarize results print("Best: %f using %s" % (clf.best_score_, clf.best_params_)) means = clf.cv_results_['mean_test_score'] stds = clf.cv_results_['std_test_score'] params = clf.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) # #svr = svm.LinearSVC() #clf = GridSearchCV(svr, parameters, cv=10, scoring='accuracy', n_jobs=1) #clf.fit(X_train, y_train) #print("The best parameters are %s with a score of %0.2f" \ # % (clf.best_params_, clf.best_score_)) return (scaler, clf) (vhse_scaler, vhse_model) = create_linear_svc_model( parameters = {'estimator__C': [pow(2, i) for i in xrange(-25, 4, 1)]}, use_vhse = False)
VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb
massie/notebooks
apache-2.0
Testing In Vivo SVM Model
def test_linear_svc_model(scaler, model, sequence_len = 28, use_vhse = True): testing_set = pd.DataFrame.from_csv("data/proteasomal_cleavage/s6_in_vivo_mhc_1_ligands_dataset.csv") (X_test_prescaled, y_test) = dataset_to_X_y(testing_set, \ "Sequences", "Activity", \ sequence_len = sequence_len,\ use_vhse = use_vhse) X_test = pd.DataFrame(scaler.transform(X_test_prescaled)) y_predicted = model.predict(X_test) accuracy = 100.0 * metrics.accuracy_score(y_test, y_predicted) ((tn, fp), (fn, tp)) = metrics.confusion_matrix(y_test, y_predicted, labels=[-1, 1]) sensitivity = 100.0 * tp/(tp + fn) specificity = 100.0 * tn/(tn + fp) mcc = metrics.matthews_corrcoef(y_test, y_predicted) print "Authors reported performance" print "Acc: 73.5, Sen: 82.3, Spe: 64.8, MCC: 0.48" print "Notebook performance (sequence_len=%d, use_vhse=%s)" % (sequence_len, use_vhse) print "Acc: %.1f, Sen: %.1f, Spe: %.1f, MCC: %.2f" \ %(accuracy, sensitivity, specificity, mcc) test_linear_svc_model(vhse_scaler, vhse_model, use_vhse = False) testing_set = pd.DataFrame.from_csv("data/proteasomal_cleavage/s6_in_vivo_mhc_1_ligands_dataset.csv") (X_test_prescaled, y_test) = dataset_to_X_y(testing_set, \ "Sequences", "Activity", \ sequence_len = 28,\ use_vhse = False) X_test = pd.DataFrame(vhse_scaler.transform(X_test_prescaled)) poslabels = ["-%02d" % (i) for i in range(14, 0, -1)] + ["+%02d" % (i) for i in range(1,15)] # 18 H 17 S 15 E proplables = ["H%02d" % (i) for i in range(18)] + ["S%02d" % (i) for i in range(17)] + ["E%02d" % (i) for i in range(15)] cols = [] for poslabel in poslabels: for proplable in proplables: cols.append("%s%s" % (poslabel, proplable)) X_test.columns = cols for col in X_test.columns[vhse_model.best_estimator_.get_support()]: print col
VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb
massie/notebooks
apache-2.0
Comparing Linear SVM to PAProC, FragPredict, and NetChop

Interpreting Model Weights
<img src="images/journal.pone.0074506.g002.png" align="left" border="0"/>
The VHSE1 variable at the P1 position has the largest positive weight coefficient (10.49), in line with research showing that C-terminal residues are usually hydrophobic to aid in ER transfer and binding to the MHC molecule. The coefficients are mostly positive upstream and mostly negative downstream of the cleavage site; this potential difference appears to be conducive to cleavage.
#h = svr.coef_[:, 0::3] #s = svr.coef_[:, 1::3] #e = svr.coef_[:, 2::3] #%matplotlib notebook #n_groups = h.shape[1] #fig, ax = plt.subplots(figsize=(12,9)) #index = np.arange(n_groups) #bar_width = 0.25 #ax1 = ax.bar(index + bar_width, h.T, bar_width, label="Hydrophobic", color='b') #ax2 = ax.bar(index, s.T, bar_width, label="Steric", color='r') #ax3 = ax.bar(index - bar_width, e.T, bar_width, label="Electronic", color='g') #ax.set_xlim(-bar_width,len(index)+bar_width) #plt.xlabel('Amino Acid Position') #plt.ylabel('SVM Coefficient Value') #plt.title('Hydrophobic, Steric, and Electronic Effect by Amino Acid Position') #plt.xticks(index, range (n_groups/2, 0, -1) + [str(i)+"'" for i in range (1, n_groups/2+1)]) #plt.legend() #plt.tight_layout() #plt.show()
VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb
massie/notebooks
apache-2.0
PCA vs. full matrices
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components (Source). Mei et al., creators of the VHSE, applied PCA to 18 hydrophobic, 17 steric, and 15 electronic properties. The first 2, 2, and 4 principal components account for 74.33%, 78.68%, and 77.9% of the variability in the original matrices. The authors of this paper only used the first principal component from the hydrophobic, steric, and electronic matrices.

What performance would the authors have found if they had used the full matrices instead of the PCA features?

| Matrix | Features | Sensitivity | Specificity | MCC |
|--------|----------|-------------|-------------|------|
| VHSE | 3x20=60 | 82.2 | 63.2 | 0.46 |
| Full | 50x20=1000 | 81.2 | 64.1 | 0.46 |
# Performance with no VHSE
(no_vhse_scaler, no_vhse_model) = create_linear_svc_model(
    parameters = {'C': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1]},
    use_vhse = False)
test_linear_svc_model(no_vhse_scaler, no_vhse_model, use_vhse = False)

# Performance with more flanking residues and no VHSE
(full_flank_scaler, full_flank_model) = create_linear_svc_model(
    parameters = {'C': [0.0001, 0.003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1]},
    use_vhse = False, sequence_len = 28)
test_linear_svc_model(full_flank_scaler, full_flank_model, use_vhse = False, sequence_len=28)
VHSE-Based Prediction of Proteasomal Cleavage Sites.ipynb
massie/notebooks
apache-2.0
Importing Libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.util import ngrams
from sklearn.feature_extraction.text import CountVectorizer
from collections import defaultdict
from collections import Counter
plt.style.use('ggplot')
stop = set(stopwords.words('english'))
import re
from nltk.tokenize import word_tokenize
import gensim
import string
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from tqdm import tqdm
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, SpatialDropout1D
from keras.initializers import Constant
from sklearn.model_selection import train_test_split
from tensorflow.keras.optimizers import Adam
import os
#os.listdir('../input/glove-global-vectors-for-word-representation/glove.6B.100d.txt')
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Load data
tweet = pd.read_csv('./data/train.csv')
test = pd.read_csv('./data/test.csv')

tweet.head(3)

print('There are {} rows and {} columns in train'.format(tweet.shape[0], tweet.shape[1]))
print('There are {} rows and {} columns in test'.format(test.shape[0], test.shape[1]))
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Removing urls
def remove_URL(text):
    url = re.compile(r'https?://\S+|www\.\S+')
    return url.sub(r'', text)

df['text'] = df['text'].apply(lambda x: remove_URL(x))
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Removing HTML tags
def remove_html(text):
    html = re.compile(r'<.*?>')
    return html.sub(r'', text)

df['text'] = df['text'].apply(lambda x: remove_html(x))
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Removing Emojis
# Reference : https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b
def remove_emoji(text):
    emoji_pattern = re.compile("["
                               u"\U0001F600-\U0001F64F"  # emoticons
                               u"\U0001F300-\U0001F5FF"  # symbols & pictographs
                               u"\U0001F680-\U0001F6FF"  # transport & map symbols
                               u"\U0001F1E0-\U0001F1FF"  # flags (iOS)
                               u"\U00002702-\U000027B0"
                               u"\U000024C2-\U0001F251"
                               "]+", flags=re.UNICODE)
    return emoji_pattern.sub(r'', text)

df['text'] = df['text'].apply(lambda x: remove_emoji(x))
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Removing punctuations
def remove_punct(text):
    table = str.maketrans('', '', string.punctuation)
    return text.translate(table)

df['text'] = df['text'].apply(lambda x: remove_punct(x))
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Spelling Correction
Even if I'm not good at spelling, I can correct it with Python :) I will use pyspellchecker to do that (see the sketch after the next cell).
Corpus Creation
def create_corpus(df):
    corpus = []
    for tweet in tqdm(df['text']):
        words = [word.lower() for word in word_tokenize(tweet)
                 if ((word.isalpha() == 1) & (word not in stop))]
        corpus.append(words)
    return corpus

corpus = create_corpus(df)
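The notebook mentions pyspellchecker but the cell above only builds the corpus; the block below is a minimal sketch of how that correction step could look. It assumes the pyspellchecker package is installed, and applying it to every token of every tweet can be slow.

from spellchecker import SpellChecker

spell = SpellChecker()

def correct_spelling(words):
    """Replace misspelled tokens with pyspellchecker's best candidate."""
    misspelled = spell.unknown(words)
    return [spell.correction(w) if w in misspelled else w for w in words]

print(correct_spelling(["forrest", "fire", "near", "la", "ronge"]))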
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Download GloVe
# download files import wget import zipfile wget.download("http://nlp.stanford.edu/data/glove.6B.zip", './glove.6B.zip') with zipfile.ZipFile("glove.6B.zip", 'r') as zip_ref: zip_ref.extractall("./")
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Embedding Step
embedding_dict={} with open("./glove.6B.100d.txt",'r') as f: for line in f: values=line.split() word=values[0] vectors=np.asarray(values[1:],'float32') embedding_dict[word]=vectors f.close() MAX_LEN=50 tokenizer_obj=Tokenizer() tokenizer_obj.fit_on_texts(corpus) sequences=tokenizer_obj.texts_to_sequences(corpus) tweet_pad=pad_sequences(sequences,maxlen=MAX_LEN,truncating='post',padding='post') word_index=tokenizer_obj.word_index print('Number of unique words:',len(word_index)) num_words=len(word_index)+1 embedding_matrix=np.zeros((num_words,100)) for word,i in tqdm(word_index.items()): if i > num_words: continue emb_vec=embedding_dict.get(word) if emb_vec is not None: embedding_matrix[i]=emb_vec
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Baseline Model
model=Sequential() embedding=Embedding(num_words,100,embeddings_initializer=Constant(embedding_matrix), input_length=MAX_LEN,trainable=False) model.add(embedding) model.add(SpatialDropout1D(0.2)) model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(1, activation='sigmoid')) optimzer=Adam(learning_rate=1e-5) model.compile(loss='binary_crossentropy',optimizer=optimzer,metrics=['accuracy']) model.summary() train=tweet_pad[:tweet.shape[0]] final_test=tweet_pad[tweet.shape[0]:] X_train,X_test,y_train,y_test=train_test_split(train,tweet['target'].values,test_size=0.15) print('Shape of train',X_train.shape) print("Shape of Validation ",X_test.shape)
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Training Model
history=model.fit(X_train,y_train,batch_size=4,epochs=5,validation_data=(X_test,y_test),verbose=2)
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
Making our submission
sample_sub=pd.read_csv('./data/sample_submission.csv') y_pre=model.predict(final_test) y_pre=np.round(y_pre).astype(int).reshape(3263) sub=pd.DataFrame({'id':sample_sub['id'].values.tolist(),'target':y_pre}) sub.to_csv('submission.csv',index=False) sub.head()
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
kubeflow/examples
apache-2.0
So that is the benchmark to beat. Next we will export our expressions as Cython code; we then subclass ODEsys to have it render, compile and import the code:
# %load ../scipy2017codegen/odesys_cython.py import uuid import numpy as np import sympy as sym import setuptools import pyximport from scipy2017codegen import templates from scipy2017codegen.odesys import ODEsys pyximport.install() cython_template = """ cimport numpy as cnp import numpy as np def f(cnp.ndarray[cnp.float64_t, ndim=1] y, double t, %(args)s): cdef cnp.ndarray[cnp.float64_t, ndim=1] out = np.empty(y.size) %(f_exprs)s return out def j(cnp.ndarray[cnp.float64_t, ndim=1] y, double t, %(args)s): cdef cnp.ndarray[cnp.float64_t, ndim=2] out = np.empty((y.size, y.size)) %(j_exprs)s return out """ class CythonODEsys(ODEsys): def setup(self): self.mod_name = 'ode_cython_%s' % uuid.uuid4().hex[:10] idxs = list(range(len(self.f))) subs = {s: sym.Symbol('y[%d]' % i) for i, s in enumerate(self.y)} f_exprs = ['out[%d] = %s' % (i, str(self.f[i].xreplace(subs))) for i in idxs] j_exprs = ['out[%d, %d] = %s' % (ri, ci, self.j[ri, ci].xreplace(subs)) for ri in idxs for ci in idxs] ctx = dict( args=', '.join(map(str, self.p)), f_exprs = '\n '.join(f_exprs), j_exprs = '\n '.join(j_exprs), ) open('%s.pyx' % self.mod_name, 'wt').write(cython_template % ctx) open('%s.pyxbld' % self.mod_name, 'wt').write(templates.pyxbld % dict( sources=[], include_dirs=[np.get_include()], library_dirs=[], libraries=[], extra_compile_args=[], extra_link_args=[] )) mod = __import__(self.mod_name) self.f_eval = mod.f self.j_eval = mod.j cython_sys = mk_rsys(CythonODEsys, **watrad_data) %timeit cython_sys.integrate(tout, y0)
notebooks/40-chemical-kinetics-cython.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
That is a considerable speed-up from before. But the solver still has to allocate memory for new arrays at each call, and each evaluation has to pass through the Python layer, which is now the bottleneck for the integration. In order to speed up the integration further we need to make sure the solver can evaluate the function and Jacobian without calling into Python.
import matplotlib.pyplot as plt %matplotlib inline
notebooks/40-chemical-kinetics-cython.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Just to see that everything looks alright:
fig, ax = plt.subplots(1, 1, figsize=(14, 6)) cython_sys.plot_result(tout, *cython_sys.integrate_odeint(tout, y0), ax=ax) ax.set_xscale('log') ax.set_yscale('log')
notebooks/40-chemical-kinetics-cython.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Kets are column vectors, i.e. with shape (d, 1):
qu(data, qtype='ket')
docs/basics.ipynb
jcmgray/quijy
mit
The normalized=True option can be used to ensure a normalized output. Bras are row vectors, i.e. with shape (1, d):
qu(data, qtype='bra') # also conjugates the data
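To illustrate the normalized=True option mentioned above (a small addition, not part of the original cells), assuming the same qu function:
qu([1, 1], qtype='ket', normalized=True)  # column vector with entries 2**-0.5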
docs/basics.ipynb
jcmgray/quijy
mit
And operators are square matrices, i.e. have shape (d, d):
qu(data, qtype='dop')
docs/basics.ipynb
jcmgray/quijy
mit
Which can also be sparse:
qu(data, qtype='dop', sparse=True) psi = 1.0j * bell_state('psi-') psi psi.H psi = up() psi psi.H @ psi # inner product X = pauli('X') X @ psi # act as gate psi.H @ X @ psi # operator expectation expec(psi, psi) expec(psi, X)
docs/basics.ipynb
jcmgray/quijy
mit
Here's an example of a much larger (20 qubit), sparse operator expectation, which will be automatically parallelized:
psi = rand_ket(2**20) A = rand_herm(2**20, sparse=True) + speye(2**20) A expec(A, psi) # should be ~ 1 %%timeit expec(A, psi) dims = [2] * 10 # overall space of 10 qubits X = pauli('X') IIIXXIIIII = ikron(X, dims, inds=[3, 4]) # act on 4th and 5th spin only IIIXXIIIII.shape dims = [2] * 3 XZ = pauli('X') & pauli('Z') ZIX = pkron(XZ, dims, inds=[2, 0]) ZIX.real.astype(int) dims = [2] * 10 D = prod(dims) psi = rand_ket(D) rho_ab = ptr(psi, dims, [0, 9]) rho_ab.round(3) # probably pretty close to identity
docs/basics.ipynb
jcmgray/quijy
mit
Column expression A Spark column instance is NOT a column of values from the DataFrame: when you create a column instance, it does not give you the actual values of that column in the DataFrame. It is easier to think of a column instance as a column of expressions. These expressions are only evaluated by other methods (e.g., the select(), groupBy(), and orderBy() methods of pyspark.sql.DataFrame). Example data
mtcars = spark.read.csv('../../data/mtcars.csv', inferSchema=True, header=True) mtcars = mtcars.withColumnRenamed('_c0', 'model') mtcars.show(5)
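To make the idea concrete, here is a small sketch (added here, assuming the mtcars DataFrame loaded above): a column instance such as mtcars['mpg'] + 1 carries no data by itself; it is only evaluated when passed to a DataFrame method such as select() or orderBy():
col_expr = mtcars['mpg'] + 1       # a Column: an unevaluated expression, not values
print(col_expr)                    # prints something like Column<(mpg + 1)>
mtcars.select(mtcars['model'], col_expr.alias('mpg_plus_one')).show(5)
mtcars.orderBy(mtcars['mpg'].desc()).select('model', 'mpg').show(5)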
notebooks/02-data-manipulation/2.7.1-column-expression.ipynb
MingChen0919/learning-apache-spark
mit
Part 1: Featurize categorical data using one-hot-encoding (1a) One-hot-encoding We would like to develop code to convert categorical features to numerical ones, and to build intuition, we will work with a sample unlabeled dataset with three data points, with each data point representing an animal. The first feature indicates the type of animal (bear, cat, mouse); the second feature describes the animal's color (black, tabby); and the third (optional) feature describes what the animal eats (mouse, salmon). In a one-hot-encoding (OHE) scheme, we want to represent each tuple of (featureID, category) via its own binary feature. We can do this in Python by creating a dictionary that maps each tuple to a distinct integer, where the integer corresponds to a binary feature. To start, manually enter the entries in the OHE dictionary associated with the sample dataset by mapping the tuples to consecutive integers starting from zero, ordering the tuples first by featureID and next by category. Later in this lab, we'll use OHE dictionaries to transform data points into compact lists of features that can be used in machine learning algorithms.
# Data for manual OHE # Note: the first data point does not include any value for the optional third feature sampleOne = [(0, 'mouse'), (1, 'black')] sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')] sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')] sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree]) print sampleDataRDD.count() # print sampleDataRDD.take(5) # TODO: Replace <FILL IN> with appropriate code sampleOHEDictManual = {} sampleOHEDictManual[(0,'bear')] = 0 sampleOHEDictManual[(0,'cat')] = 1 sampleOHEDictManual[(0,'mouse')] = 2 sampleOHEDictManual[(1, 'black')] = 3 sampleOHEDictManual[(1, 'tabby')] = 4 sampleOHEDictManual[(2, 'mouse')] = 5 sampleOHEDictManual[(2, 'salmon')] = 6 print len(sampleOHEDictManual)
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
WARNING: If test_helper, required in the cell below, is not installed, follow the instructions here.
# TEST One-hot-encoding (1a) from test_helper import Test Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')], 'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c', "incorrect value for sampleOHEDictManual[(0,'bear')]") Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')], '356a192b7913b04c54574d18c28d46e6395428ab', "incorrect value for sampleOHEDictManual[(0,'cat')]") Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')], 'da4b9237bacccdf19c0760cab7aec4a8359010b0', "incorrect value for sampleOHEDictManual[(0,'mouse')]") Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')], '77de68daecd823babbb58edb1c8e14d7106e83bb', "incorrect value for sampleOHEDictManual[(1,'black')]") Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')], '1b6453892473a467d07372d45eb05abc2031647a', "incorrect value for sampleOHEDictManual[(1,'tabby')]") Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')], 'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4', "incorrect value for sampleOHEDictManual[(2,'mouse')]") Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')], 'c1dfd96eea8cc2b62785275bca38ac261256e278', "incorrect value for sampleOHEDictManual[(2,'salmon')]") Test.assertEquals(len(sampleOHEDictManual.keys()), 7, 'incorrect number of keys in sampleOHEDictManual')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(1b) Sparse vectors Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors). Use SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). You'll need to create a sparse vector representation of each dense vector aDense and bDense.
import numpy as np from pyspark.mllib.linalg import SparseVector # TODO: Replace <FILL IN> with appropriate code aDense = np.array([0., 3., 0., 4.]) aSparse = SparseVector(len(aDense), range(0,len(aDense)), aDense) bDense = np.array([0., 0., 0., 1.]) bSparse = SparseVector(len(bDense), range(0,len(bDense)), bDense) w = np.array([0.4, 3.1, -1.4, -.5]) print aDense.dot(w) print aSparse.dot(w) print bDense.dot(w) print bSparse.dot(w) print aDense print bDense print aSparse print bSparse # TEST Sparse Vectors (1b) Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector') Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector') Test.assertTrue(aDense.dot(w) == aSparse.dot(w), 'dot product of aDense and w should equal dot product of aSparse and w') Test.assertTrue(bDense.dot(w) == bSparse.dot(w), 'dot product of bDense and w should equal dot product of bSparse and w')
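As noted above, SparseVector also accepts a dictionary or a list of (index, value) pairs. For example, the two constructions below (a small illustration added here, not part of the original lab cells) both build a length-4 vector with 3.0 at index 1 and 4.0 at index 3:
print SparseVector(4, {1: 3.0, 3: 4.0})
print SparseVector(4, [(1, 3.0), (3, 4.0)])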
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(1c) OHE features as sparse vectors Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
# Reminder of the sample features # sampleOne = [(0, 'mouse'), (1, 'black')] # sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')] # sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')] # TODO: Replace <FILL IN> with appropriate code sampleOneOHEFeatManual = SparseVector(7, [2,3], np.array([1.0,1.0])) sampleTwoOHEFeatManual = SparseVector(7, [1,4,5], np.array([1.0,1.0,1.0])) sampleThreeOHEFeatManual = SparseVector(7, [0,3,6], np.array([1.0,1.0,1.0])) print sampleOneOHEFeatManual print sampleTwoOHEFeatManual print sampleThreeOHEFeatManual # TEST OHE Features as sparse vectors (1c) Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector), 'sampleOneOHEFeatManual needs to be a SparseVector') Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector), 'sampleTwoOHEFeatManual needs to be a SparseVector') Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector), 'sampleThreeOHEFeatManual needs to be a SparseVector') Test.assertEqualsHashed(sampleOneOHEFeatManual, 'ecc00223d141b7bd0913d52377cee2cf5783abd6', 'incorrect value for sampleOneOHEFeatManual') Test.assertEqualsHashed(sampleTwoOHEFeatManual, '26b023f4109e3b8ab32241938e2e9b9e9d62720a', 'incorrect value for sampleTwoOHEFeatManual') Test.assertEqualsHashed(sampleThreeOHEFeatManual, 'c04134fd603ae115395b29dcabe9d0c66fbdc8a7', 'incorrect value for sampleThreeOHEFeatManual')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(1d) Define a OHE function Next we will use the OHE dictionary from Part (1a) to programatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).
# TODO: Replace <FILL IN> with appropriate code def oneHotEncoding_old(rawFeats, OHEDict, numOHEFeats): """Produce a one-hot-encoding from a list of features and an OHE dictionary. Note: You should ensure that the indices used to create a SparseVector are sorted. Args: rawFeats (list of (int, str)): The features corresponding to a single observation. Each feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne) OHEDict (dict): A mapping of (featureID, value) to unique integer. numOHEFeats (int): The total number of unique OHE features (combinations of featureID and value). Returns: SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique identifiers for the (featureID, value) combinations that occur in the observation and with values equal to 1.0. """ newFeats = [] idx = [] for k,i in sorted(OHEDict.items(), key=lambda x: x[1]): if k in rawFeats: newFeats += [1.0] idx += [i] return SparseVector(numOHEFeats, idx, np.array(newFeats)) # TODO: Replace <FILL IN> with appropriate code def oneHotEncoding(rawFeats, OHEDict, numOHEFeats): """Produce a one-hot-encoding from a list of features and an OHE dictionary. Note: You should ensure that the indices used to create a SparseVector are sorted. Args: rawFeats (list of (int, str)): The features corresponding to a single observation. Each feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne) OHEDict (dict): A mapping of (featureID, value) to unique integer. numOHEFeats (int): The total number of unique OHE features (combinations of featureID and value). Returns: SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique identifiers for the (featureID, value) combinations that occur in the observation and with values equal to 1.0. """ newFeats = [] idx = [] for f in rawFeats: if f in OHEDict: newFeats += [1.0] idx += [OHEDict[f]] return SparseVector(numOHEFeats, sorted(idx), np.array(newFeats)) # Calculate the number of features in sampleOHEDictManual numSampleOHEFeats = len(sampleOHEDictManual) # Run oneHotEnoding on sampleOne sampleOneOHEFeat = oneHotEncoding(sampleOne,sampleOHEDictManual,numSampleOHEFeats) print sampleOneOHEFeat # TEST Define an OHE Function (1d) Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual, 'sampleOneOHEFeat should equal sampleOneOHEFeatManual') Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]), 'incorrect value for sampleOneOHEFeat') Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual, numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]), 'incorrect definition for oneHotEncoding')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(1e) Apply OHE to a dataset Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.
# TODO: Replace <FILL IN> with appropriate code def toOHE(row): return oneHotEncoding(row,sampleOHEDictManual,numSampleOHEFeats) sampleOHEData = sampleDataRDD.map(toOHE) print sampleOHEData.collect() # TEST Apply OHE to a dataset (1e) sampleOHEDataValues = sampleOHEData.collect() Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements') Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}), 'incorrect OHE for first sample') Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}), 'incorrect OHE for second sample') Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}), 'incorrect OHE for third sample')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
Part 2: Construct an OHE dictionary (2a) Pair RDD of (featureID, category) To start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appears twice in the dataset but only contributes one item to the RDD: (1, 'black'), while 'mouse' also appears twice and contributes two items: (0, 'mouse') and (2, 'mouse'). Use flatMap and distinct.
flat = sampleDataRDD.flatMap(lambda r: r).distinct() print flat.count() for i in flat.take(8): print i # TODO: Replace <FILL IN> with appropriate code sampleDistinctFeats = (sampleDataRDD.flatMap(lambda r: r).distinct()) # TEST Pair RDD of (featureID, category) (2a) Test.assertEquals(sorted(sampleDistinctFeats.collect()), [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'incorrect value for sampleDistinctFeats')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(2b) OHE Dictionary from distinct features Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap. In our sample dataset, one valid list of key-value tuples is: [((0, 'bear'), 0), ((2, 'salmon'), 1), ((1, 'tabby'), 2), ((2, 'mouse'), 3), ((0, 'mouse'), 4), ((0, 'cat'), 5), ((1, 'black'), 6)]. The dictionary defined in Part (1a) illustrates another valid mapping between keys and integers.
# TODO: Replace <FILL IN> with appropriate code sampleOHEDict = sampleDistinctFeats.zipWithIndex().collectAsMap() print sampleOHEDict # TEST OHE Dictionary from distinct features (2b) Test.assertEquals(sorted(sampleOHEDict.keys()), [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'sampleOHEDict has unexpected keys') Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(2c) Automated creation of an OHE dictionary Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).
# TODO: Replace <FILL IN> with appropriate code def createOneHotDict(inputData): """Creates a one-hot-encoder dictionary based on the input data. Args: inputData (RDD of lists of (int, str)): An RDD of observations where each observation is made up of a list of (featureID, value) tuples. Returns: dict: A dictionary where the keys are (featureID, value) tuples and map to values that are unique integers. """ flat = inputData.flatMap(lambda r: r).distinct() return flat.zipWithIndex().collectAsMap() sampleOHEDictAuto = createOneHotDict(sampleDataRDD) print sampleOHEDictAuto # TEST Automated creation of an OHE dictionary (2c) Test.assertEquals(sorted(sampleOHEDictAuto.keys()), [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'sampleOHEDictAuto has unexpected keys') Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7), 'sampleOHEDictAuto has unexpected values')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
Part 3: Parse CTR data and generate OHE features Before we can proceed, you'll first need to obtain the data from Criteo. Here is the link to Criteo's data sharing agreement:http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below. The script below will download the file and make the sample dataset's contents available in the rawData variable. Note that the download should complete within 30 seconds.
import os.path baseDir = os.path.join('/Users/bill.walrond/Documents/dsprj/data') inputPath = os.path.join('CS190_Mod4', 'dac_sample.txt') fileName = os.path.join(baseDir, inputPath) if os.path.isfile(fileName): rawData = (sc .textFile(fileName, 2) .map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data print rawData.take(1) print rawData.count() else: print 'Couldn\'t find filename: %s' % fileName # TODO: Replace <FILL IN> with appropriate code import glob from io import BytesIO import os.path import tarfile import urllib import urlparse # Paste in url, url should end with: dac_sample.tar.gz url = '<FILL IN>' url = url.strip() if 'rawData' in locals(): print 'rawData already loaded. Nothing to do.' elif not url.endswith('dac_sample.tar.gz'): print 'Check your download url. Are you downloading the Sample dataset?' else: try: tmp = BytesIO() urlHandle = urllib.urlopen(url) tmp.write(urlHandle.read()) tmp.seek(0) tarFile = tarfile.open(fileobj=tmp) dacSample = tarFile.extractfile('dac_sample.txt') dacSample = [unicode(x.replace('\n', '').replace('\t', ',')) for x in dacSample] rawData = (sc .parallelize(dacSample, 1) # Create an RDD .zipWithIndex() # Enumerate lines .map(lambda (v, i): (i, v)) # Use line index as key .partitionBy(2, lambda i: not (i < 50026)) # Match sc.textFile partitioning .map(lambda (i, v): v)) # Remove index print 'rawData loaded from url' print rawData.take(1) except IOError: print 'Unable to unpack: {0}'.format(url)
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(3a) Loading and splitting the data We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset.
# TODO: Replace <FILL IN> with appropriate code weights = [.8, .1, .1] seed = 42 # Use randomSplit with weights and seed rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed) # Cache the data rawTrainData.cache() rawValidationData.cache() rawTestData.cache() nTrain = rawTrainData.count() nVal = rawValidationData.count() nTest = rawTestData.count() print nTrain, nVal, nTest, nTrain + nVal + nTest print rawTrainData.take(1) # TEST Loading and splitting the data (3a) Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]), 'you must cache the split data') Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain') Test.assertEquals(nVal, 10075, 'incorrect value for nVal') Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(3b) Extract features We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implemention of the parsePoint function.
# TODO: Replace <FILL IN> with appropriate code def parsePoint(point): """Converts a comma separated string into a list of (featureID, value) tuples. Note: featureIDs should start at 0 and increase to the number of features - 1. Args: point (str): A comma separated string where the first value is the label and the rest are features. Returns: list: A list of (featureID, value) tuples. """ # make a list of (featureID, value) tuples, skipping the first element (the label) return [(k,v) for k,v in enumerate(point[2:].split(','))] parsedTrainFeat = rawTrainData.map(parsePoint) print parsedTrainFeat.count() numCategories = (parsedTrainFeat .flatMap(lambda x: x) .distinct() .map(lambda x: (x[0], 1)) .reduceByKey(lambda x, y: x + y) .sortByKey() .collect()) print numCategories[2][1] # TEST Extract features (3b) Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint') Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(3c) Create an OHE dictionary from the dataset Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical.
# TODO: Replace <FILL IN> with appropriate code ctrOHEDict = createOneHotDict(parsedTrainFeat) print 'Len of ctrOHEDict: {0}'.format(len(ctrOHEDict)) numCtrOHEFeats = len(ctrOHEDict.keys()) print numCtrOHEFeats print ctrOHEDict.has_key((0, '')) theItems = ctrOHEDict.items() for i in range(0,9): print theItems[i] # TEST Create an OHE dictionary from the dataset (3c) Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict') Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(3d) Apply OHE to the dataset Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) and it uses the oneHotEncoding function from Part (1d).
from pyspark.mllib.regression import LabeledPoint print rawTrainData.count() r = rawTrainData.first() l = parsePoint(r) print 'Length of parsed list: %d' % len(l) print 'Here\'s the list ...' print l sv = oneHotEncoding(l, ctrOHEDict, numCtrOHEFeats) print 'Here\'s the sparsevector ...' print sv lp = LabeledPoint(float(r[:1]), sv) print 'Here\'s the labeledpoint ...' print lp # TODO: Replace <FILL IN> with appropriate code def parseOHEPoint(point, OHEDict, numOHEFeats): """Obtain the label and feature vector for this raw observation. Note: You must use the function `oneHotEncoding` in this implementation or later portions of this lab may not function as expected. Args: point (str): A comma separated string where the first value is the label and the rest are features. OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer. numOHEFeats (int): The number of unique features in the training dataset. Returns: LabeledPoint: Contains the label for the observation and the one-hot-encoding of the raw features based on the provided OHE dictionary. """ # first, get the label label = float(point[:1]) parsed = parsePoint(point) features = oneHotEncoding(parsed, OHEDict, numOHEFeats) # return parsed return LabeledPoint(label,features) def toOHEPoint(point): return parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats) sc.setLogLevel("INFO") rawTrainData = rawTrainData.repartition(8) rawTrainData.cache() OHETrainData = rawTrainData.map(toOHEPoint) OHETrainData.cache() print OHETrainData.take(1) # Check that oneHotEncoding function was used in parseOHEPoint backupOneHot = oneHotEncoding oneHotEncoding = None withOneHot = False try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats) except TypeError: withOneHot = True oneHotEncoding = backupOneHot # TEST Apply OHE to the dataset (3d) numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5)) numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5)) Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint') Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
Visualization 1: Feature frequency We will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to features that appear exactly once ( \( \scriptsize 2^0 \) ), the second to features that appear twice ( \( \scriptsize 2^1 \) ), the third to features that occur between three and four ( \( \scriptsize 2^2 \) ) times, the fourth bucket is five to eight ( \( \scriptsize 2^3 \) ) times and so on. The scatter plot below shows the logarithm of the bucket thresholds versus the logarithm of the number of features that have counts that fall in the buckets.
x = sc.parallelize([("a", 1), ("b", 1), ("a", 1), ("a", 1),("b", 1), ("b", 1), ("b", 1), ("b", 1)], 3) y = x.reduceByKey(lambda accum, n: accum + n) y.collect() def bucketFeatByCount(featCount): """Bucket the counts by powers of two.""" for i in range(11): size = 2 ** i if featCount <= size: return size return -1 featCounts = (OHETrainData .flatMap(lambda lp: lp.features.indices) .map(lambda x: (x, 1)) .reduceByKey(lambda x, y: x + y)) featCountsBuckets = (featCounts .map(lambda x: (bucketFeatByCount(x[1]), 1)) .filter(lambda (k, v): k != -1) .reduceByKey(lambda x, y: x + y) .collect()) print featCountsBuckets import matplotlib.pyplot as plt %matplotlib inline x, y = zip(*featCountsBuckets) x, y = np.log(x), np.log(y) def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999', gridWidth=1.0): """Template for generating the plot layout.""" plt.close() fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white') ax.axes.tick_params(labelcolor='#999999', labelsize='10') for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]: axis.set_ticks_position('none') axis.set_ticks(ticks) axis.label.set_color('#999999') if hideLabels: axis.set_ticklabels([]) plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-') map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right']) return fig, ax # generate layout and plot data fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2)) ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$') plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75) # display(fig) plt.show() pass
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(3e) Handling unseen features We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.
# TODO: Replace <FILL IN> with appropriate code def oneHotEncoding(rawFeats, OHEDict, numOHEFeats): """Produce a one-hot-encoding from a list of features and an OHE dictionary. Note: If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be ignored. Args: rawFeats (list of (int, str)): The features corresponding to a single observation. Each feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne) OHEDict (dict): A mapping of (featureID, value) to unique integer. numOHEFeats (int): The total number of unique OHE features (combinations of featureID and value). Returns: SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique identifiers for the (featureID, value) combinations that occur in the observation and with values equal to 1.0. """ newFeats = [] idx = [] for f in rawFeats: if f in OHEDict: newFeats += [1.0] idx += [OHEDict[f]] return SparseVector(numOHEFeats, sorted(idx), np.array(newFeats)) OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats)) OHEValidationData.cache() print OHEValidationData.take(1) # TEST Handling unseen features (3e) numNZVal = (OHEValidationData .map(lambda lp: len(lp.features.indices)) .sum()) Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(4b) Log loss Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: \[ \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\ -\log(1-p) & \text{if } y = 0 \end{cases} \] where \( \scriptsize p\) is a probability between 0 and 1 and \( \scriptsize y\) is a label of either 0 or 1. Log loss is a standard evaluation criterion when predicting rare-events such as click-through rate prediction (it is also the criterion used in the Criteo Kaggle competition). Write a function to compute log loss, and evaluate it on some sample inputs.
# TODO: Replace <FILL IN> with appropriate code from math import log def computeLogLoss(p, y): """Calculates the value of log loss for a given probabilty and label. Note: log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it and when p is 1 we need to subtract a small value (epsilon) from it. Args: p (float): A probabilty between 0 and 1. y (int): A label. Takes on the values 0 and 1. Returns: float: The log loss value. """ epsilon = 10e-12 if p not in [0.0,1.0]: logeval = p elif p == 0: logeval = p+epsilon else: logeval = p-epsilon if y == 1: return (-log(logeval)) elif y == 0: return (-log(1-logeval)) print computeLogLoss(.5, 1) print computeLogLoss(.5, 0) print computeLogLoss(.99, 1) print computeLogLoss(.99, 0) print computeLogLoss(.01, 1) print computeLogLoss(.01, 0) print computeLogLoss(0, 1) print computeLogLoss(1, 1) print computeLogLoss(1, 0) # TEST Log loss (4b) Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)], [0.69314718056, 0.0100503358535, 4.60517018599]), 'computeLogLoss is not correct') Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)], [25.3284360229, 1.00000008275e-11, 25.3284360229]), 'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(4c) Baseline log loss Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.
# TODO: Replace <FILL IN> with appropriate code # Note that our dataset has a very high click-through rate by design # In practice click-through rate can be one to two orders of magnitude lower classOneFracTrain = OHETrainData.map(lambda p: p.label).mean() print classOneFracTrain logLossTrBase = OHETrainData.map(lambda p: computeLogLoss(classOneFracTrain,p.label) ).mean() print 'Baseline Train Logloss = {0:.6f}\n'.format(logLossTrBase) # TEST Baseline log loss (4c) Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain') Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(4d) Predicted probability In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function \( \scriptsize \sigma(t) = (1+ e^{-t})^{-1} \) to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data. Note that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.
# TODO: Replace <FILL IN> with appropriate code from math import exp # exp(-t) = e^-t def getP(x, w, intercept): """Calculate the probability for an observation given a set of weights and intercept. Note: We'll bound our raw prediction between 20 and -20 for numerical purposes. Args: x (SparseVector): A vector with values of 1.0 for features that exist in this observation and 0.0 otherwise. w (DenseVector): A vector of weights (betas) for the model. intercept (float): The model's intercept. Returns: float: A probability between 0 and 1. """ rawPrediction = w.dot(x) + intercept # Bound the raw prediction value rawPrediction = min(rawPrediction, 20) rawPrediction = max(rawPrediction, -20) return ( 1 / (1 + exp(-1*rawPrediction)) ) trainingPredictions = OHETrainData.map(lambda p: getP(p.features,model0.weights, model0.intercept)) print trainingPredictions.take(5) # TEST Predicted probability (4d) Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348), 'incorrect value for trainingPredictions')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(4e) Evaluate the model We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.
a = OHETrainData.map(lambda p: (getP(p.features, model0.weights, model0.intercept), p.label)) print a.count() print a.take(5) b = a.map(lambda lp: computeLogLoss(lp[0],lp[1])) print b.count() print b.take(5) # TODO: Replace <FILL IN> with appropriate code def evaluateResults(model, data): """Calculates the log loss for the data given the model. Args: model (LogisticRegressionModel): A trained logistic regression model. data (RDD of LabeledPoint): Labels and features for each observation. Returns: float: Log loss for the data. """ # Run a map to create an RDD of (prediction, label) tuples preds_labels = data.map(lambda p: (getP(p.features, model.weights, model.intercept), p.label)) return preds_labels.map(lambda lp: computeLogLoss(lp[0], lp[1])).mean() logLossTrLR0 = evaluateResults(model0, OHETrainData) print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.6f}' .format(logLossTrBase, logLossTrLR0)) # TEST Evaluate the model (4e) Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(4f) Validation log loss Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.
# TODO: Replace <FILL IN> with appropriate code logLossValBase = OHEValidationData.map(lambda p: computeLogLoss(classOneFracTrain, p.label)).mean() logLossValLR0 = evaluateResults(model0, OHEValidationData) print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.6f}' .format(logLossValBase, logLossValLR0)) # TEST Validation log loss (4f) Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase') Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
Visualization 2: ROC curve We will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model is represented by the dashed line.
labelsAndScores = OHEValidationData.map(lambda lp: (lp.label, getP(lp.features, model0.weights, model0.intercept))) labelsAndWeights = labelsAndScores.collect() labelsAndWeights.sort(key=lambda (k, v): v, reverse=True) labelsByWeight = np.array([k for (k, v) in labelsAndWeights]) length = labelsByWeight.size truePositives = labelsByWeight.cumsum() numPositive = truePositives[-1] falsePositives = np.arange(1.0, length + 1, 1.) - truePositives truePositiveRate = truePositives / numPositive falsePositiveRate = falsePositives / (length - numPositive) # Generate layout and plot data fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1)) ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05) ax.set_ylabel('True Positive Rate (Sensitivity)') ax.set_xlabel('False Positive Rate (1 - Specificity)') plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.) plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model # display(fig) plt.show() pass
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(5b) Creating hashed features Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = \( \scriptsize 2^{15} \approx 33K \) to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint: parseHashPoint is similar to parseOHEPoint from Part (3d) and it uses the hashFunction from Part (5a).
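The hashFunction used below was defined in Part (5a), which is not included in this excerpt. A minimal stand-in with the same interface — mapping the raw (featureID, value) pairs of one observation to a dictionary of {bucket: count} — might look like the following; the exact hash used in the original lab may differ:
from collections import defaultdict
import hashlib

def hashFunction(numBuckets, rawFeats, printMapping=False):
    """Hash each (featureID, value) pair into one of numBuckets buckets and
    return a dict of {bucketIndex: number of pairs landing in that bucket}."""
    mapping = {}
    for featureID, category in rawFeats:
        featureString = str(category) + str(featureID)
        mapping[featureString] = int(int(hashlib.md5(featureString.encode('utf-8')).hexdigest(), 16) % numBuckets)
    if printMapping:
        print mapping
    sparseFeatures = defaultdict(float)
    for bucket in mapping.values():
        sparseFeatures[bucket] += 1.0
    return dict(sparseFeatures)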
feats = [(k, v) for k, v in enumerate(rawTrainData.take(1)[0][2:].split(','))]
print feats
hashDict = hashFunction(2 ** 15, feats)
print hashDict
print len(hashDict)
print 2 ** 15

# TODO: Replace <FILL IN> with appropriate code
def parseHashPoint(point, numBuckets):
    """Create a LabeledPoint for this observation using hashing.

    Args:
        point (str): A comma separated string where the first value is the label and the rest
            are features.
        numBuckets: The number of buckets to hash to.

    Returns:
        LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
            features.
    """
    label = float(point[:1])
    rawFeats = [(k, v) for k, v in enumerate(point[2:].split(','))]
    hashDict = hashFunction(numBuckets, rawFeats)
    # The sparse vector must have length numBuckets (the hashed feature space), and passing
    # the dict directly keeps each bucket index paired with its own count.
    return LabeledPoint(label, SparseVector(numBuckets, hashDict))

numBucketsCTR = 2 ** 15
hashTrainData = rawTrainData.map(lambda r: parseHashPoint(r, numBucketsCTR))
hashTrainData.cache()
hashValidationData = rawValidationData.map(lambda r: parseHashPoint(r, numBucketsCTR))
hashValidationData.cache()
hashTestData = rawTestData.map(lambda r: parseHashPoint(r, numBucketsCTR))
hashTestData.cache()

a = hashTrainData.take(1)
print a

# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
                              .map(lambda lp: len(lp.features.indices))
                              .take(20))
print hashTrainDataFeatureSum
hashTrainDataLabelSum = sum(hashTrainData
                            .map(lambda lp: lp.label)
                            .take(100))
print hashTrainDataLabelSum
hashValidationDataFeatureSum = sum(hashValidationData
                                   .map(lambda lp: len(lp.features.indices))
                                   .take(20))
hashValidationDataLabelSum = sum(hashValidationData
                                 .map(lambda lp: lp.label)
                                 .take(100))
hashTestDataFeatureSum = sum(hashTestData
                             .map(lambda lp: len(lp.features.indices))
                             .take(20))
hashTestDataLabelSum = sum(hashTestData
                           .map(lambda lp: lp.label)
                           .take(100))

Test.assertEquals(hashTrainDataFeatureSum, 772,
                  'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0,
                  'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
                  'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0,
                  'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774,
                  'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0,
                  'incorrect labels in hashTestData')
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
(5c) Sparsity Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets. Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number of features with nonzero entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.
s = sum(hashTrainData.map(lambda lp: len(lp.features.indices) / float(numBucketsCTR)).collect()) / nTrain  # ratios.count()
s

# TODO: Replace <FILL IN> with appropriate code
def computeSparsity(data, d, n):
    """Calculates the average sparsity for the features in an RDD of LabeledPoints.

    Args:
        data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
        d (int): The total number of features.
        n (int): The number of observations in the RDD.

    Returns:
        float: The average of the ratio of features in a point to total features.
    """
    # Use the `data` argument here (not hashTrainData) so the function also works
    # for the OHE training set below.
    return sum(data.map(lambda lp: len(lp.features.indices) / float(d)).collect()) / n

averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)

print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)

# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
                'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
                'incorrect value for averageSparsityHash')

sc.stop()
jupyter_notebooks/CS190-1x Module 4- Feature Hashing Lab.ipynb
bwalrond/explore-notebooks
mit
Parameters are given as follows. D, radius, N_A, U, and ka_factor denote a diffusion constant, a radius of molecules, an initial number of molecules of A and B, a ratio of the dissociated form of A at the steady state, and a ratio between the intrinsic association rate and the collision rate (defined as ka and kD below), respectively. Dimensions of length and time are assumed to be micrometers and seconds.
D = 1 radius = 0.005 N_A = 60 U = 0.5 ka_factor = 10 # 10 is for diffusion-limited N = 20 # a number of samples
en/tests/Reversible_Diffusion_limited.ipynb
ecell/ecell4-notebooks
gpl-2.0
Start with no C molecules, and simulate for 0.35 seconds.
y0 = {'A': N_A, 'B': N_A} duration = 0.35 opt_kwargs = {'legend': True}
en/tests/Reversible_Diffusion_limited.ipynb
ecell/ecell4-notebooks
gpl-2.0
Simulating with spatiocyte. voxel_radius is given as radius. Use an alpha value well below 1.0 for a diffusion-limited case (bars represent the standard error of the mean):
# alpha = 0.03 ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('spatiocyte', radius), repeat=N) ret2.plot('o', ret1, '-', **opt_kwargs)
en/tests/Reversible_Diffusion_limited.ipynb
ecell/ecell4-notebooks
gpl-2.0
Load the lending club dataset We will be using the same LendingClub dataset as in the previous assignment.
loans = graphlab.SFrame('lending-club-data.gl/') loans.head()
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
Decision tree implementation In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections. Function to count number of mistakes while predicting majority class Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node. Now, we will write a function that calculates the number of misclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree. Note: Keep in mind that in order to compute the number of mistakes for a majority classifier, we only need the label (y values) of the data points in the node. Steps to follow: * Step 1: Calculate the number of safe loans and risky loans. * Step 2: Since we are assuming majority class prediction, all the data points that are not in the majority class are considered mistakes. * Step 3: Return the number of mistakes. Now, let us write the function intermediate_node_num_mistakes which computes the number of misclassified examples of an intermediate node given the set of labels (y values) of the data points contained in the node. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
def intermediate_node_num_mistakes(labels_in_node): # Corner case: If labels_in_node is empty, return 0 if len(labels_in_node) == 0: return 0 # Count the number of 1's (safe loans) safe_loans = (labels_in_node == 1).sum() # Count the number of -1's (risky loans) risky_loans = (labels_in_node == -1).sum() # Return the number of mistakes that the majority classifier makes. return risky_loans if safe_loans >= risky_loans else safe_loans
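A quick sanity check on this function (a small addition; the original assignment's own test cells are not shown here, and graphlab is assumed to be imported as in the data-loading cell above):
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
print intermediate_node_num_mistakes(example_labels)  # majority class is +1, so 2 mistakes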
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
Function to pick best feature to split on The function best_splitting_feature takes 3 arguments: 1. The data (SFrame of data which includes all of the feature columns and label column) 2. The features to consider for splits (a list of strings of column names to consider for splits) 3. The name of the target/label column (string) The function will loop through the list of possible features, and consider splitting on each of them. It will calculate the classification error of each split and return the feature that had the smallest classification error when split on. Recall that the classification error is defined as follows: $$ \mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}} $$ Follow these steps: * Step 1: Loop over each feature in the feature list * Step 2: Within the loop, split the data into two groups: one group where all of the data has feature value 0 or False (we will call this the left split), and one group where all of the data has feature value 1 or True (we will call this the right split). Make sure the left split corresponds with 0 and the right split corresponds with 1 to ensure your implementation fits with our implementation of the tree building process. * Step 3: Calculate the number of misclassified examples in both groups of data and use the above formula to compute the classification error. * Step 4: If the computed error is smaller than the best error found so far, store this feature and its error. This may seem like a lot, but we have provided pseudocode in the comments in order to help you implement the function correctly. Note: Remember that since we are only dealing with binary features, we do not have to consider thresholds for real-valued features. This makes the implementation of this function much easier. Fill in the places where you find ## YOUR CODE HERE. There are five places in this function for you to fill in.
def best_splitting_feature(data, features, target): best_feature = None # Keep track of the best feature best_error = 10 # Keep track of the best error so far # Note: Since error is always <= 1, we should intialize it with something larger than 1. # Convert to float to make sure error gets computed correctly. num_data_points = float(len(data)) # Loop through each feature to consider splitting on that feature for feature in features: # The left split will have all data points where the feature value is 0 left_split = data[data[feature] == 0] # The right split will have all data points where the feature value is 1 right_split = data[data[feature] == 1] # Calculate the number of misclassified examples in the left split. # Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes) left_mistakes = intermediate_node_num_mistakes(left_split[target]) # Calculate the number of misclassified examples in the right split. right_mistakes = intermediate_node_num_mistakes(right_split[target]) # Compute the classification error of this split. # Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points) error = float(left_mistakes + right_mistakes) / num_data_points # If this is the best error we have found so far, store the feature as best_feature and the error as best_error if error < best_error: best_error = error best_feature = feature return best_feature # Return the best feature we found
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
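To see the split selection in action, here is a small illustrative check. It is not from the original assignment: the toy SFrame and its column names are invented, and GraphLab Create is assumed to be available as in the rest of the notebook. Since feature_a separates the two classes perfectly, it should be selected with zero classification error.

toy_data = graphlab.SFrame({'feature_a':  [0, 0, 1, 1],
                            'feature_b':  [0, 1, 0, 1],
                            'safe_loans': [-1, -1, 1, 1]})
# Splitting on feature_a gives error 0/4; splitting on feature_b gives error 2/4.
best_splitting_feature(toy_data, ['feature_a', 'feature_b'], 'safe_loans')  # expected: 'feature_a'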
Building the tree

With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values:

    {
       'is_leaf'            : True/False.
       'prediction'         : Prediction at the leaf node.
       'left'               : (dictionary corresponding to the left tree).
       'right'              : (dictionary corresponding to the right tree).
       'splitting_feature'  : The feature that this node splits on.
    }

First, we will write a function that creates a leaf node given a set of target values.

Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
def create_leaf(target_values):

    # Create a leaf node
    leaf = {'splitting_feature' : None,
            'left'              : None,
            'right'             : None,
            'is_leaf'           : True}

    # Count the number of data points that are +1 and -1 in this node.
    num_ones = len(target_values[target_values == +1])
    num_minus_ones = len(target_values[target_values == -1])

    # For the leaf node, set the prediction to be the majority class.
    # Store the predicted class (1 or -1) in leaf['prediction']
    if num_ones > num_minus_ones:
        leaf['prediction'] = +1
    else:
        leaf['prediction'] = -1

    # Return the leaf node
    return leaf
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
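For reference, a leaf built from a small toy set of labels (again an invented example, assuming graphlab is available as above) looks like this:

toy_leaf = create_leaf(graphlab.SArray([1, 1, -1]))
# Two of the three labels are +1, so the leaf predicts the majority class +1.
toy_leaf['is_leaf'], toy_leaf['prediction']  # expected: (True, 1)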
We have provided a function that learns the decision tree recursively and implements 3 stopping conditions:
1. Stopping condition 1: All data points in a node are from the same class.
2. Stopping condition 2: No more features to split on.
3. Additional stopping condition: In addition to the above two stopping conditions covered in lecture, in this assignment we will also consider a stopping condition based on the max_depth of the tree. By not letting the tree grow too deep, we will save computational effort in the learning process.

Now, we will write down the skeleton of the learning algorithm.

Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
    remaining_features = features[:] # Make a copy of the features.

    target_values = data[target]
    print "--------------------------------------------------------------------"
    print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))

    # Stopping condition 1
    # (Check if there are mistakes at current node.
    #  Recall you wrote a function intermediate_node_num_mistakes to compute this.)
    if intermediate_node_num_mistakes(target_values) == 0:  ## YOUR CODE HERE
        print "Stopping condition 1 reached."
        # If no mistakes at current node, make current node a leaf node
        return create_leaf(target_values)

    # Stopping condition 2 (check if there are remaining features to consider splitting on)
    if len(remaining_features) == 0:  ## YOUR CODE HERE
        print "Stopping condition 2 reached."
        # If there are no remaining features to consider, make current node a leaf node
        return create_leaf(target_values)

    # Additional stopping condition (limit tree depth)
    if current_depth >= max_depth:  ## YOUR CODE HERE
        print "Reached maximum depth. Stopping for now."
        # If the max tree depth has been reached, make current node a leaf node
        return create_leaf(target_values)

    # Find the best splitting feature (recall the function best_splitting_feature implemented above)
    splitting_feature = best_splitting_feature(data, remaining_features, target)

    # Split on the best feature that we found.
    left_split = data[data[splitting_feature] == 0]
    right_split = data[data[splitting_feature] == 1]
    remaining_features.remove(splitting_feature)
    print "Split on feature %s. (%s, %s)" % (\
                      splitting_feature, len(left_split), len(right_split))

    # Create a leaf node if the split is "perfect"
    if len(left_split) == len(data):
        print "Creating leaf node."
        return create_leaf(left_split[target])
    if len(right_split) == len(data):
        print "Creating leaf node."
        return create_leaf(right_split[target])

    # Repeat (recurse) on left and right subtrees
    left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)  ## YOUR CODE HERE
    right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)

    return {'is_leaf'          : False,
            'prediction'       : None,
            'splitting_feature': splitting_feature,
            'left'             : left_tree,
            'right'            : right_tree}
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
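The evaluation code below relies on a classify(tree, x) helper and on my_decision_tree, both of which are defined earlier in the full notebook and are not reproduced in this excerpt (my_decision_tree is assumed to have been built with decision_tree_create on the training data). For readability, here is a minimal sketch of what such a classify helper looks like: starting from the root, it follows the left child when the row's value for the splitting feature is 0 and the right child when it is 1, until it reaches a leaf. This is a reconstruction and may differ slightly from the notebook's own version.

def classify(tree, x, annotate=False):
    # If we are at a leaf node, return its stored prediction.
    if tree['is_leaf']:
        if annotate:
            print "At leaf, predicting %s" % tree['prediction']
        return tree['prediction']
    else:
        # Otherwise, look up the value of the splitting feature in this row
        # and descend into the corresponding subtree.
        split_feature_value = x[tree['splitting_feature']]
        if annotate:
            print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
        if split_feature_value == 0:
            return classify(tree['left'], x, annotate)
        else:
            return classify(tree['right'], x, annotate)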
Quiz question: What was the feature that my_decision_tree first split on while making the prediction for test_data[0]?

Quiz question: What was the first feature that led to a right split of test_data[0]?

Quiz question: What was the last feature split on before reaching a leaf node for test_data[0]?

Evaluating your decision tree

Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset. Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$

Now, write a function called evaluate_classification_error that takes in as input:
1. tree (as described above)
2. data (an SFrame)
3. target (a string - the name of the target/label column)

This function should calculate a prediction (class label) for each row in data using the decision tree and return the classification error computed using the above formula.

Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
def evaluate_classification_error(tree, data, target):
    # Apply the classify(tree, x) to each row in your data
    prediction = data.apply(lambda x: classify(tree, x))

    # Once you've made the predictions, calculate the classification error and return it
    accuracy = (prediction == data[target]).sum()
    error = 1 - float(accuracy) / len(data[target])

    return error
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
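A tiny self-contained check (made-up data, not from the assignment) can confirm the formula: a single-leaf "tree" that always predicts +1, evaluated on four toy rows of which two are +1, should give a classification error of 0.5. This assumes the classify helper sketched above (or the notebook's own version) is defined.

toy_stump = create_leaf(graphlab.SArray([1, 1, 1, -1]))   # leaf that always predicts +1
toy_test = graphlab.SFrame({'safe_loans': [1, -1, 1, -1]})
evaluate_classification_error(toy_stump, toy_test, 'safe_loans')  # expected: 0.5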
Now, let's use this function to evaluate the classification error on the test set.
round(evaluate_classification_error(my_decision_tree, test_data, target), 2)
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
Quiz question: What is the path of the first 3 feature splits considered along the left-most branch of my_decision_tree?

Quiz question: What is the path of the first 3 feature splits considered along the right-most branch of my_decision_tree?
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature']) print_stump(my_decision_tree['right']['right'], my_decision_tree['right']['splitting_feature'])
ml-classification/week-3/module-5-decision-tree-assignment-2-blank.ipynb
zomansud/coursera
mit
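print_stump is a visualization helper defined earlier in the full notebook and not shown in this excerpt; it draws a one-level stump with its splitting feature and children, with the second argument labeling the parent node (the calls above therefore pass the right subtree's own splitting feature when descending the right-most branch). As an alternative way to answer the branch-path quiz questions, a small hypothetical helper like the following (a sketch, not part of the original assignment) can walk the left-most or right-most branch and list the splitting features in order:

def print_branch(tree, direction):
    # Follow only the `direction` child ('left' or 'right') from the root,
    # printing the splitting feature at every internal node along the way.
    depth = 0
    while not tree['is_leaf']:
        print "depth %s: split on %s" % (depth, tree['splitting_feature'])
        tree = tree[direction]
        depth += 1
    print "depth %s: leaf, prediction = %s" % (depth, tree['prediction'])

print_branch(my_decision_tree, 'left')   # left-most path
print_branch(my_decision_tree, 'right')  # right-most path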