**Demo BOT**
```python
# Launch website
cf.window_show_desktop()
cf.browser_activate('https://en.wikipedia.org/wiki/Robotic_process_automation')

# # Method 1 - Use screen scraping functions to copy all available data to notepad
# cf.message_counter_down_timer(strMsg="Calling ClointFusion Screen Scraping function in: ")
# cf.scrape_save_contents_to_notepad(folderPathToSaveTheNotepad="E:/ClointFusion Demo/Bot Demo")

# Method 2 - Use keyboard shortcuts
cf.message_counter_down_timer(strMsg="Using keyboard shortcuts for selecting and copying content...")

# Select all text and copy to clipboard
cf.key_press(write_to_window="Chrome", key_1="ctrl", key_2="a")
cf.key_press(write_to_window="Chrome", key_1="ctrl", key_2="c")

# Open a new Word document and paste copied data
cf.launch_any_exe_bat_application(pathOfExeFile="Winword")
cf.key_hit_enter(write_to_window="Word")
cf.message_counter_down_timer(strMsg="Pasting copied data to Word...")
cf.key_press(write_to_window="Word", key_1="ctrl", key_2="v")

# Open Save-As dialog box and enter filename
cf.key_press(key_1="F12")
cf.time.sleep(5)
cf.key_write_enter(text_to_write="RPA - info")
cf.time.sleep(3)
```
*Source: JAY-007-TRIVEDI/ClointFusion, ClointFusion_Labs.ipynb (BSD-4-Clause)*
**Browser Automation using Helium**
```python
# Use this function to launch a browser and navigate to a given url
'''
function : cf.browser_activate(url="", files_download_path='', dummy_browser=True,
                               open_in_background=False, incognito=False,
                               clear_previous_instances=False, profile="Default")
param desc:
    url (str, optional): Website you want to visit.
    files_download_path (str, optional): Path to which the files need to be downloaded.
    dummy_browser (bool, optional): If it is false the Default profile is opened.
    incognito (bool, optional): Opens the browser in incognito mode.
    clear_previous_instances (bool, optional): If true all the opened chrome instances are closed.
    profile (str, optional): By default it opens the 'Default' profile. Eg : Profile 1, Profile 2
'''
cf.browser_activate(url="https://github.com/")

# Use this function to write a string on the user visible element on the page
'''
function : browser_write_h(Value='', User_Visible_Text_Element='')
param desc:
    Value: The value to enter into the visible element area.
    User_Visible_Text_Element: Label of the element where the stored Value needs to be entered.
'''
cf.browser_write_h("ClointFusion", User_Visible_Text_Element="Search GitHub")

# Use this function to enter special KEYS using the Browser Selenium API
'''
function: cf.browser_key_press_h(key_1="", key_2="")
param desc:
    key_1 (str): Key you want to simulate or string you want to press. Eg: "tab" or "ClointFusion".
    key_2 (str, optional): Key you want to simulate in combination with key_1. Eg: "shift" or "escape".
'''
cf.browser_key_press_h('ENTER')

# Use this function to perform a mouse left click on the user visible element on the page
'''
function: browser_mouse_click_h(User_Visible_Text_Element='', element='',
                                double_click=False, right_click=False)
'''
cf.browser_mouse_click_h("ClointFusion/ClointFusion", element='d')

# Use this function to navigate to a specified URL
'''
function: browser_navigate_h(url='')
param desc:
    url: Url of the website
'''
# GUI Mode
# cf.browser_navigate_h()
# Non GUI Mode
cf.browser_navigate_h(url='https://github.com/clointfusion/clointfusion')

# Use this function to refresh the page
'''
function: browser_refresh_page_h()
'''
cf.browser_refresh_page_h()

# Use this function to perform a left double-click on the user visible element on the page
cf.browser_mouse_click_h(User_Visible_Text_Element='What is ClointFusion ?', element='d', double_click=True)

# Use this function to locate an element by xpath
'''
function: browser_locate_element_h(selector="", get_text=False, multiple_elements=False)
param desc:
    selector (str, optional): Give an Xpath or CSS selector. Defaults to "".
    get_text (bool, optional): Give the text of the element. Defaults to False.
    multiple_elements (bool, optional): True if you want to get all the similar elements
        with a matching selector as a list. Defaults to False.
'''
txt = cf.browser_locate_element_h("//h2[text()='About']/following-sibling::p", get_text=True)
print(txt)

# Use this function to wait until a specific text or element loads on the screen
'''
function: browser_wait_until_h(text="", element="t")
'''
cf.browser_mouse_click_h("Insights", element='d')
cf.browser_wait_until_h("Contributors")
cf.browser_mouse_click_h('Contributors', element='d')

# Use this function to close the browser
cf.browser_quit_h()
```
**Miscellaneous Functions**
```python
# Use this function to launch an Excel file. Just click browse and point to the Excel file.
# Note that, once the selected application is opened, it maximises automatically.
cf.launch_any_exe_bat_application()

cf.schedule_create_task_windows()
cf.schedule_delete_task_windows()

# Use this function in any of your print statements to make your messages impressive to your audience.
print(cf.show_emoji('yum'))
print(cf.show_emoji('heart_eyes'))
# You may refer to the emoji cheat sheet here: https://www.webfx.com/tools/emoji-cheat-sheet

# Use this function to pause the program.
print("Program is Paused for 5 seconds..")
cf.pause_program("5")
print("Program is Resumed.")

# Use this function to print colorful statements in random color order
cf.print_with_magic_color(strMsg="Welcome to RPA tool ClointFusion", magic=True)
```
# Logistic Regression with PaddlePaddle: Recognizing Cats

Welcome to this fun experiment! In this lab you will learn to use PaddlePaddle to implement a logistic regression model that solves the cat-recognition problem. Follow the steps to complete the training, deepen your understanding of the theory behind logistic regression, connect the individual concepts, and gain an overall grasp of neural networks and deep learning.

**You will learn to:**
- preprocess image data
- implement a logistic regression model with the PaddlePaddle framework

Before we start the experiment, here is a brief introduction to image processing.

**Image representation.** Since the cat-recognition problem involves image processing, here is a quick overview of how a computer stores an image: as three separate matrices, corresponding to the red, green and blue color channels shown in Figure 3-6. If the image is 64*64 pixels, there are three 64*64 matrices. To put these pixel values into a feature vector, we define a vector X that lists all pixel values from the three channels. For a 64*64 image, the total dimension of X is 64*64*3, i.e. 12288. This 12288-dimensional vector is one training example for the logistic regression model.

Now let's formally start the experiment!

## 1 - Libraries

First, load the libraries we will need:
- numpy: the fundamental Python library for scientific computing
- matplotlib.pyplot: for plotting; used when validating model accuracy and visualizing the cost trend
- h5py: for working with HDF5 data files
- PIL and scipy: for testing the trained model with your own image at the end
- lr_utils: defines the load_dataset() method used to load the data
- paddle.v2: the PaddlePaddle deep learning framework
```python
import sys
import numpy as np
import lr_utils
import matplotlib.pyplot as plt
import paddle.v2 as paddle

%matplotlib inline
```
*Source: BaiduOSS/PaddleTutorial, jupyter/3.logistic_regression/.ipynb_checkpoints/paddle_logistic-checkpoint.ipynb (Apache-2.0)*
## 2 - Data preprocessing

Here is a brief description of the dataset and its structure. The dataset is stored as an HDF5 file and contains:
- a training set of m_train images, labeled cat (y=1) or non-cat (y=0)
- a test set of m_test images, labeled in the same way

A single image is stored with shape (num_px, num_px, 3), where num_px is the height or width of the image (the dataset images are square) and 3 is the number of color channels (RGB).

One line of code reads the data; you do not need to understand the loading process yet, just call load_dataset() and store the five returned values for later use. Note that the "_orig" suffix marks raw data that will be processed further; data without the suffix will not be processed further.
```python
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = lr_utils.load_dataset()

# Example of a picture
index = 23
plt.imshow(train_set_x_orig[index])
print("y = " + str(train_set_y[:, index]) + ", it's a '" +
      classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
```
y = [0], it's a 'non-cat' picture.
```
After loading the data, the next step is to look at its basic properties: the number of training examples m_train, the number of test examples m_test, and the image height or width num_px, obtained with numpy.array.shape.

**Exercise:** look up the dataset information:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (image height or width)

`train_set_x_orig` is a numpy array of shape (m_train, num_px, num_px, 3). For example, you can get `m_train` with `train_set_x_orig.shape[0]`.
```python
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = test_set_x_orig.shape[1]
### END CODE HERE ###

print("Number of training examples: m_train = " + str(m_train))
print("Number of test examples: m_test = " + str(m_test))
print("Image height/width: num_px = " + str(num_px))
print("Image size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print("train_set_x shape: " + str(train_set_x_orig.shape))
print("train_set_y shape: " + str(train_set_y.shape))
print("test_set_x shape: " + str(test_set_x_orig.shape))
print("test_set_y shape: " + str(test_set_y.shape))
```
```
Number of training examples: m_train = 209
Number of test examples: m_test = 50
Image height/width: num_px = 64
Image size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
```
**Expected output:** m_train = 209, m_test = 50, num_px = 64

Next the data needs further processing. To make training easier, you can ignore the structural information of the images and compress the three-dimensional array holding height, width and channel information into a one-dimensional array, so that each image of shape (64, 64, 3) becomes a flat vector of 64 * 64 * 3 = 12288 values.

**Exercise:** flatten the image data, so that the whole dataset goes from shape (m, 64, 64, 3) to (m, 64 * 64 * 3), as the expected output below shows.

**Trick:** a matrix of shape (a, b, c, d) can be flattened to shape (a, b*c*d) with:

```python
X_flatten = X.reshape(X.shape[0], -1)
```
```python
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(m_train, -1)
test_set_x_flatten = test_set_x_orig.reshape(m_test, -1)
### END CODE HERE ###

print("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print("train_set_y shape: " + str(train_set_y.shape))
print("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print("test_set_y shape: " + str(test_set_y.shape))
```
```
train_set_x_flatten shape: (209, 12288)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (50, 12288)
test_set_y shape: (1, 50)
```
**Expected output:** train_set_x_flatten shape (209, 12288); train_set_y shape (1, 209); test_set_x_flatten shape (50, 12288); test_set_y shape (1, 50)

Before training starts, the data still needs to be normalized. Images represent color with red, green and blue channels, and each pixel of every channel stores a value between 0 and 255, so normalizing image data is very simple: just divide every value in the dataset by 255. Note that the result should be of float type; dividing directly by the integer 255 leads to wrong results here, while dividing by 255. converts the result to float. Now let's normalize the data!
```python
### START CODE HERE ### (≈ 2 lines of code)
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.
### END CODE HERE ###
```
To simplify the testing work later, we merge the data and the labels, using numpy.hstack to concatenate numpy arrays horizontally.
```python
train_set = np.hstack((train_set_x, train_set_y.T))
test_set = np.hstack((test_set_x, test_set_y.T))
```
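A quick shape check, added here as an aside: `train_set_x` has shape (209, 12288) and `train_set_y.T` has shape (209, 1), so `np.hstack` produces a (209, 12289) array in which each row holds the 12288 pixel features followed by the label in the last column. This layout is what lets the reader below split a row into `data[:-1]` (features) and `data[-1:]` (label).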
**From the experiment above, you should remember:** the usual steps of data preprocessing are:
- inspect the dimensions and shapes of the data (m_train, m_test, num_px, ...)
- reduce the dimensionality of the data, e.g. reshape (num_px, num_px, 3) into a flat vector of num_px \* num_px \* 3 values
- normalize the data

That completes the data preprocessing! In the next exercise we build a reader to read the data.

## 3 - Building a reader

We build a read_data() function that reads the training set train_set or the test set test_set. Internally, read_data() defines a reader() that uses the yield keyword, which turns reader() into a generator. Note that yield works much like the return keyword; the difference is that a function containing yield becomes a generator. Although we could simply build a list containing all the data, memory limits make an enormous list impossible, and often after creating a list with millions of entries we only use the first few items, which is very wasteful. A generator instead computes the next value on each iteration, producing subsequent elements on demand without materializing the full list, which saves memory.

**Exercise:** now let's use yield to build a reader()!
```python
# Read the training or test data
def read_data(data_set):
    """
    A reader factory.

    Args:
        data_set -- the dataset to read

    Return:
        reader -- a generator yielding the training data and its labels
    """
    def reader():
        """
        A reader.

        Yields:
            data[:-1], data[-1:] -- data[:-1] is the first n-1 elements (the features),
            data[-1:] is the last element (the corresponding label)
        """
        for data in data_set:
            ### START CODE HERE ### (≈ 2 lines of code)
            yield data[:-1], data[-1:]
            ### END CODE HERE ###

    return reader


test_array = [[1, 1, 1, 1, 0],
              [2, 2, 2, 2, 1],
              [3, 3, 3, 3, 0]]

print("test_array for read_data:")
for value in read_data(test_array)():
    print(value)
```
```
test_array for read_data:
([1, 1, 1, 1], [0])
([2, 2, 2, 2], [1])
([3, 3, 3, 3], [0])
```
**Expected output:**
([1, 1, 1, 1], [0])
([2, 2, 2, 2], [1])
([3, 3, 3, 3], [0])

## 4 - Training

Having finished the preprocessing and built read_data() to read the data, we now enter the training process, using PaddlePaddle to define and build a trainable logistic regression model. The key steps are:
- initialization
- configuring the network structure and setting parameters:
  - configure the network structure
  - define the cost function
  - create the parameters
  - define the optimizer
- model training
- model evaluation
- prediction
- plotting the learning curve

**(1) Initialization**

First perform the most basic initialization. In PaddlePaddle this is done with paddle.init(use_gpu=False, trainer_count=1):
- use_gpu=False means training does not use the GPU
- trainer_count=1 means only one trainer is used
```python
# Initialization
paddle.init(use_gpu=False, trainer_count=1)
```
**(2) Configuring the network structure and parameters**

**Network structure.** We know that a logistic regression model is equivalent to a neural network with a single neuron, as shown in the figure: it has only input data and an output layer, with no hidden layer, so we only need to configure an input layer (input), an output layer (predict) and a label layer (label).

**Exercise:** use the PaddlePaddle API to configure the simple network structure of the logistic regression model. Three layers are needed:

**Input layer:** image = paddle.layer.data(name="image", type=paddle.data_type.dense_vector(data_dim)) creates a data input layer named "image" whose data type is a data_dim-dimensional dense vector. Before defining the input layer, use the previously computed num_px to get the data dimension: data_dim = num_px \* num_px \* 3.

**Output layer:** predict = paddle.layer.fc(input=image, size=1, act=paddle.activation.Sigmoid()) creates a fully connected layer with input image, one neuron, and a Sigmoid() activation.

**Label layer:** label = paddle.layer.data(name="label", type=paddle.data_type.dense_vector(1)) creates a data layer named "label" of type 1-dimensional dense vector.
```python
# Configure the network structure

# The data layer needs the data dimension data_dim, computed from num_px
### START CODE HERE ### (≈ 2 lines of code)
data_dim = num_px * num_px * 3
### END CODE HERE ###

# Input layer, paddle.layer.data creates a data layer
### START CODE HERE ### (≈ 2 lines of code)
image = paddle.layer.data(
    name='image', type=paddle.data_type.dense_vector(data_dim))
### END CODE HERE ###

# Output layer, paddle.layer.fc creates a fully connected layer
### START CODE HERE ### (≈ 2 lines of code)
predict = paddle.layer.fc(
    input=image, size=1, act=paddle.activation.Sigmoid())
### END CODE HERE ###

# Label data layer, paddle.layer.data creates a data layer
### START CODE HERE ### (≈ 2 lines of code)
label = paddle.layer.data(
    name='label', type=paddle.data_type.dense_vector(1))
### END CODE HERE ###
```
**Define the cost function.** After configuring the network we need to define a cost function to compute gradients and optimize the parameters. Here we can use PaddlePaddle's cross-entropy cost: cost = paddle.layer.multi_binary_label_cross_entropy_cost(input=predict, label=label), which computes the cost from predict and label.
```python
# Cost function: cross entropy
### START CODE HERE ### (≈ 2 lines of code)
cost = paddle.layer.multi_binary_label_cross_entropy_cost(
    input=predict, label=label)
### END CODE HERE ###
```
**Create parameters.** PaddlePaddle provides paddle.parameters.create(cost) to create and initialize the parameters; passing cost means the parameters are created and initialized based on the cost function we just defined.
```python
# Create parameters
### START CODE HERE ### (≈ 2 lines of code)
parameters = paddle.parameters.create(cost)
### END CODE HERE ###
```
**Optimizer.** Once the parameters are created, define the optimizer: optimizer = paddle.optimizer.Momentum(momentum=0, learning_rate=0.00002), i.e. Momentum with the momentum set to zero and a learning rate of 0.00002. Note that you do not need to understand Momentum yet; just learn how to use it.
```python
# Create the optimizer
### START CODE HERE ### (≈ 2 lines of code)
optimizer = paddle.optimizer.Momentum(momentum=0, learning_rate=0.00002)
### END CODE HERE ###
```
**Other configuration.** feeding={'image': 0, 'label': 1} maps data layer names to array indices and is used to feed data during training. The costs array stores cost values to record how the cost changes. Finally, the function event_handler(event) handles training events; an event contains information such as batch_id, pass_id and cost, which we can print or use for other operations.
```python
# Mapping from data layer names to array indices, used to feed data to the trainer
feeding = {
    'image': 0,
    'label': 1}

# Record the cost
costs = []


# Event handler
def event_handler(event):
    """
    Event handler; can react to information produced during training.

    Args:
        event -- event object, containing event.pass_id, event.batch_id, event.cost, etc.

    Return:
    """
    if isinstance(event, paddle.event.EndIteration):
        if event.pass_id % 100 == 0:
            print("Pass %d, Batch %d, Cost %f" % (event.pass_id, event.batch_id, event.cost))
            costs.append(event.cost)
```
**Model training.** The steps above performed model initialization, configured the network structure and created the cost function, parameters and optimizer; next we use this configuration to train the model. First define a stochastic gradient descent trainer with three arguments, cost, parameters and update_equation, which are the cost function, the parameters and the update rule. Then call trainer.train() to start the actual training. The parameters can be set as follows:
- paddle.reader.shuffle(train(), buf_size=5000): the trainer reads buf_size=5000 items from the train() reader and shuffles them
- paddle.batch(reader(), batch_size=256): takes batch_size=256 items from the shuffled data for one training iteration
- feeding uses the feeding mapping defined earlier, routing the image and label data layers into the trainer, i.e. the source of the training data
- event_handler is the event handling mechanism; you can define your own event_handler and react to the event information
- num_passes=5000 means training stops after 5000 passes

**Exercise:** define the trainer and start training the model (you can choose buf_size, batch_size, num_passes yourself, but it is recommended to start with the settings above and come back to tune these parameters after completing the whole run, to see how the results change).
```python
# Construct the trainer
### START CODE HERE ### (≈ 2 lines of code)
trainer = paddle.trainer.SGD(
    cost=cost, parameters=parameters, update_equation=optimizer)
### END CODE HERE ###

# Train the model
### START CODE HERE ### (≈ 2 lines of code)
trainer.train(
    reader=paddle.batch(
        paddle.reader.shuffle(read_data(train_set), buf_size=5000),
        batch_size=256),
    feeding=feeding,
    event_handler=event_handler,
    num_passes=2000)
### END CODE HERE ###
```
```
Pass 0, Batch 0, Cost 0.718985
Pass 100, Batch 0, Cost 0.497979
Pass 200, Batch 0, Cost 0.431610
Pass 300, Batch 0, Cost 0.386631
Pass 400, Batch 0, Cost 0.352110
Pass 500, Batch 0, Cost 0.324237
Pass 600, Batch 0, Cost 0.301005
Pass 700, Batch 0, Cost 0.281201
Pass 800, Batch 0, Cost 0.264035
Pass 900, Batch 0, Cost 0.248960
Pass 1000, Batch 0, Cost 0.235580
Pass 1100, Batch 0, Cost 0.223603
Pass 1200, Batch 0, Cost 0.212803
Pass 1300, Batch 0, Cost 0.203003
Pass 1400, Batch 0, Cost 0.194064
Pass 1500, Batch 0, Cost 0.185871
Pass 1600, Batch 0, Cost 0.178331
Pass 1700, Batch 0, Cost 0.171367
Pass 1800, Batch 0, Cost 0.164913
Pass 1900, Batch 0, Cost 0.158914
```
**Model evaluation.** After training completes, we check the model's accuracy. Here we first define a function get_data() that obtains the data used to evaluate the accuracy; the source of that data is the training and test data returned by read_data().
```python
# Get data
def get_data(data_creator):
    """
    Use data_creator to obtain evaluation data.

    Args:
        data_creator -- data source, e.g. train() or test()

    Return:
        result -- python dict containing the evaluation data ("image") and labels ("label")
    """
    data_creator = data_creator
    data_image = []
    data_label = []

    for item in data_creator():
        data_image.append((item[0],))
        data_label.append(item[1])

    ### START CODE HERE ### (≈ 4 lines of code)
    result = {
        "image": data_image,
        "label": data_label
    }
    ### END CODE HERE ###

    return result
```
With the data in hand we can run paddle.infer() for prediction; the output_layer argument is the output layer, parameters is the model parameters, and input is the test data to predict on.

**Exercise:**
- use get_data() to obtain the test and training data
- use paddle.infer() for prediction
```python
# Get the test and training data, used to check the model's accuracy
### START CODE HERE ### (≈ 2 lines of code)
train_data = get_data(read_data(train_set))
test_data = get_data(read_data(test_set))
### END CODE HERE ###

# Predict from train_data and test_data; output_layer is the output layer,
# parameters the model parameters, input the test data
### START CODE HERE ### (≈ 6 lines of code)
probs_train = paddle.infer(
    output_layer=predict, parameters=parameters, input=train_data['image'])
probs_test = paddle.infer(
    output_layer=predict, parameters=parameters, input=test_data['image'])
### END CODE HERE ###
```
After obtaining the prediction results probs_train and probs_test, we convert them into binary classification results and count the correct predictions, defining train_accuracy and test_accuracy to compute the training and test accuracy respectively. Note that test_accuracy not only computes the accuracy but also takes a test_data_y list argument in which the predictions are stored, for convenient inspection later.
```python
# Training set accuracy
def train_accuracy(probs_train, train_data):
    """
    Compute the training accuracy train_accuracy on the training set.

    Args:
        probs_train -- predictions on the training set, obtained with paddle.infer()
        train_data -- the training set

    Return:
        train_accuracy -- the training accuracy
    """
    train_right = 0
    train_total = len(train_data['label'])
    for i in range(len(probs_train)):
        if float(probs_train[i][0]) > 0.5 and train_data['label'][i] == 1:
            train_right += 1
        elif float(probs_train[i][0]) < 0.5 and train_data['label'][i] == 0:
            train_right += 1
    train_accuracy = (float(train_right) / float(train_total)) * 100

    return train_accuracy


# Test set accuracy
def test_accuracy(probs_test, test_data, test_data_y):
    """
    Compute the test accuracy test_accuracy on the test set.

    Args:
        probs_test -- predictions on the test set, obtained with paddle.infer()
        test_data -- the test set
        test_data_y -- list in which the binary predictions are stored

    Return:
        test_accuracy -- the test accuracy
    """
    test_right = 0
    test_total = len(test_data['label'])
    for i in range(len(probs_test)):
        if float(probs_test[i][0]) > 0.5:
            test_data_y.append(1)
            if test_data['label'][i] == 1:
                test_right += 1
        elif float(probs_test[i][0]) < 0.5:
            test_data_y.append(0)
            if test_data['label'][i] == 0:
                test_right += 1
    test_accuracy = (float(test_right) / float(test_total)) * 100

    return test_accuracy
```
Call the two functions above and print the results.
```python
# Compute train_accuracy and test_accuracy
test_data_y = []
### START CODE HERE ### (≈ 6 lines of code)
print("train_accuracy: {} %".format(train_accuracy(probs_train, train_data)))
print("test_accuracy: {} %".format(test_accuracy(probs_test, test_data, test_data_y)))
### END CODE HERE ###
```
```
train_accuracy: 98.5645933014 %
test_accuracy: 70.0 %
```
**Expected output:** train_accuracy ≈ 98%, test accuracy ≈ 70%

Because of the limits of the dataset and of the logistic regression model, and since no other optimizations were added, a 70% test accuracy is already quite a good result. If you obtained a similar result, about 98% training accuracy and about 70% test accuracy, then congratulations, your work so far is solid: you have configured a decent model and parameters. Of course you can go back and adjust some parameters such as learning_rate/batch_size/num_passes, or consult the official [PaddlePaddle documentation](http://paddlepaddle.org/docs/develop/documentation/zh/getstarted/index_cn.html) to modify your model; both making mistakes and tuning parameters toward better results will help you become familiar with deep learning and with the PaddlePaddle framework!

**Learning curve.** Next we use the previously saved costs to plot how the cost changed and analyze the model through its learning curve.
```python
costs = np.squeeze(costs)
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate = 0.00002")
plt.show()
```
The plot shows the cost converging quickly at first, then more slowly as the number of iterations grows, finally settling at a small value. Next, use the saved predictions test_data_y to look at individual images in the test set: pick an image via index and check whether your model predicted it correctly!
```python
# Example of a picture that was wrongly classified.
index = 14
plt.imshow((np.array(test_data['image'][index])).reshape((64, 64, 3)))
print("y = " + str(test_data_y[index]) + ", you predicted that it is a \"" +
      classes[test_data_y[index]].decode("utf-8") + "\" picture.")
```
```
y = 0, you predicted that it is a "non-cat" picture.
```
Arithmetic
```python
3 + 5
'a' + 'b'
-5.2
50 - 243
'a' + 2    # TypeError: cannot add a str and an int
'a' + '2'

# multiplication
2 * 3
2 * 'aaaa'
```
*Source: piyushmoolchandani/Lab_codes, 2. ITW2/01_operator.ipynb (MIT)*
Power operator
```python
print(3 ** 4)
print('abc' ** 3)   # TypeError: ** is not defined for strings

# divide
print(12/5)

# modulo
print(int(16/3) % 2)
```
```
1
```
Relational Operator

Boolean: True / False
```python
print(5 < 3)
print(3 < 5)
print(3 < 5 < 7)
print(3 > 5 < 7)
print("piyusji" <= "piyusj")

x = 'str'; y = 'Str'
x == y
```
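One detail worth spelling out (an explanatory aside): string comparisons in Python are lexicographic, character by character. `"piyusji" <= "piyusj"` is False because the first six characters compare equal and the left string then has an extra character, which makes it the greater of the two.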
- Python is case sensitive
```python
x = 'Piyush'
y = "Piyush"
x == y
x != y

print(5 == 5.000)
print(5 == 5.3)
```
```
True
False
```
Logical Operator
- Short-circuit evaluation (demonstrated in the sketch after this cell)
```python
x = True
not x

x = False
print(x and not x)
print(x or not x)
```
```
True
```
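The heading above also mentions short-circuit evaluation, which the cell itself does not demonstrate; here is a minimal sketch, added for illustration, showing that `and` and `or` stop evaluating as soon as the result is known:

```python
def noisy(value):
    # Helper that reports when it is actually evaluated
    print("evaluated:", value)
    return value

# 'and' short-circuits on a falsy left operand: noisy() is never called
result = False and noisy(True)
print(result)                  # False, with no "evaluated:" line printed

# 'or' short-circuits on a truthy left operand: noisy() is never called
result = True or noisy(False)
print(result)                  # True, again without evaluating noisy()
```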
User Input and Output
```python
person_name = input('Enter your name: ')
print(person_name)

age_str = input('Enter your age: ')
age = int(age_str)
print('Age: ' + str(age))
print('Age: ' + age_str)

age_1 = int(input('Enter your age: '))
print(type(age_1))
print('Age: ' + str(age_1))
print('Age: ', age_1)
```
```
Enter your age: 19
<class 'int'>
Age: 19
Age: 19
```
Control Statements:
```python
a = 2
if a == 2:
    print("Condition is true")
    print("Sample Statement")
    if a > 3:
        print("This also satisfies double if")
print('Another Statement')

a = 1
if a == 0:
    print("I am in if block")
else:
    print('I am in else block')
    if (a == 3):
        print('if in else')
    else:
        print('else in else')
print("Another Statement")
```
```
I am in else block
else in else
Another Statement
```
Question
```python
a = int(input("Enter a number to check: "))
if (a % 2 == 0):
    print("Number is even")
else:
    print("Number is odd")

var = 100
if var < 200 and var >= 50:
    print("Expression is greater than 50 but less than 200")
    if var == 150:
        print("It is 150")
    elif var == 100:
        print("It is 100")
    elif var == 50:
        print("It is 50")
elif var < 50:
    print("Expression is less than 50")
else:
    print("Cannot find the true expression")
print("Good Bye")

a = int(input("Enter a number to check: "))
if a == 0:
    print("It is zero, which is a neutral number.")
else:
    if (a % 2 == 0):
        print("Number is even")
    else:
        print("Number is odd")
```
```
Enter a number to check: 9
Number is odd
```
Loop Structure:
```python
# help(range)
for i in range(17, 3, -2):
    print(i)

for i in range(-7, -2, 3):
    print(i)

# Question: sum of squares from 0 to n
final = 0
n = int(input("Enter the number: "))
for i in range(n+1):
    final += i**2
print(final)

# Cubes of odd numbers plus squares of even numbers, from 1 to n
final = 0
n = int(input("Enter your number: "))
for i in range(1, n+1):
    if i % 2:
        final += i**3
    else:
        final += i**2
print(final)
```
```
Enter your number: 34
173893
```
- While
```python
n = 5
i = 1
while (i <= n):
    print(i)
    i = i + 1

final = 0
n = int(input("Enter your number: "))
i = 0
while (i < n):
    i += 1
    if i % 2:
        final += i**3
    else:
        final += i**2
print(final)
```
```
Enter your number: 8
616
```
* Break and continue
```python
for letter in 'Python':
    if letter == 'h':
        break
    print("current letter: ", letter)
print('out of For')

for letter in "Django":
    if letter == 'D':
        continue
    print("current letter: ", letter)
```
```
current letter: j
current letter: a
current letter: n
current letter: g
current letter: o
```
Models

This is an introduction and overview on how to work with models in Gammapy.

The sub-package `~gammapy.modeling` contains all the functionality related to modeling and fitting data. This includes spectral, spatial and temporal model classes, as well as the fit and parameter API. We will cover the following topics in order:

1. [Spectral Models](#Spectral-Models)
1. [Spatial Models](#Spatial-Models)
1. [Temporal Models](#Temporal-Models)
1. [SkyModel](#SkyModel)
1. [Modifying model parameters](#Modifying-model-parameters)
1. [Model Lists and Serialisation](#Model-Lists-and-Serialisation)
1. [Models with shared parameter](#Models-with-shared-parameter)
1. [Implementing a Custom Model](#Implementing-a-Custom-Model)
1. [Energy dependent models](#Models-with-energy-dependent-morphology)

The models follow a naming scheme in which the category appears as a suffix to the class name. An overview of all the available models can be found in the [model gallery](../../modeling/gallery/index.rst#spectral-models).

Note that there are separate tutorials, [model_management](model_management.ipynb) and [fitting](fitting.ipynb), that explain the Gammapy modeling and fitting framework; you have to read those to learn how to work with models in order to analyse data.

Setup
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astropy import units as u
from gammapy.maps import Map, WcsGeom, MapAxis
```
*Source: aaguasca/gammapy, docs/tutorials/api/models.ipynb (BSD-3-Clause)*
Spectral models

All models are imported from the `~gammapy.modeling.models` namespace. Let's start with a `PowerLawSpectralModel`:
```python
from gammapy.modeling.models import PowerLawSpectralModel

pwl = PowerLawSpectralModel()
print(pwl)
```
To get a list of all available spectral models you can import and print the spectral model registry or take a look at the [model gallery](https://docs.gammapy.org/dev/modeling/gallery/index.htmlspectral-models):
```python
from gammapy.modeling.models import SPECTRAL_MODEL_REGISTRY

print(SPECTRAL_MODEL_REGISTRY)
```
Spectral models all come with default parameters. Different parameter values can be passed on creation of the model, either as a string defining the value and unit or as an `astropy.units.Quantity` object directly:
```python
amplitude = 1e-12 * u.Unit("TeV-1 cm-2 s-1")
pwl = PowerLawSpectralModel(amplitude=amplitude, index=2.2)
```
For convenience a `str` specifying the value and unit can be passed as well:
pwl = PowerLawSpectralModel(amplitude="2.7e-12 TeV-1 cm-2 s-1", index=2.2) print(pwl)
The model can be evaluated at given energies by calling the model instance:
```python
energy = [1, 3, 10, 30] * u.TeV
dnde = pwl(energy)
print(dnde)
```
The returned quantity is a differential photon flux. For spectral models you can additionally compute the integrated and energy flux in a given energy range:
```python
flux = pwl.integral(energy_min=1 * u.TeV, energy_max=10 * u.TeV)
print(flux)

eflux = pwl.energy_flux(energy_min=1 * u.TeV, energy_max=10 * u.TeV)
print(eflux)
```
This also works for a list or an array of integration boundaries:
```python
energy = [1, 3, 10, 30] * u.TeV
flux = pwl.integral(energy_min=energy[:-1], energy_max=energy[1:])
print(flux)
```
In some cases it can be useful to use the inverse of a spectral model, to find the energy at which a given flux is reached:
```python
dnde = 2.7e-12 * u.Unit("TeV-1 cm-2 s-1")
energy = pwl.inverse(dnde)
print(energy)
```
As a convenience you can also plot any spectral model in a given energy range:
```python
pwl.plot(energy_bounds=[1, 100] * u.TeV)
```
Norm Spectral Models

Normed spectral models are a special class of spectral models which have a dimensionless normalisation. These spectral models feature a norm parameter instead of amplitude and are named using the ``NormSpectralModel`` suffix. They **must** be used along with another spectral model, as a multiplicative correction factor according to their spectral shape. They can typically be used for adjusting template based models, or adding an EBL correction to some analytic model. To check whether a given `SpectralModel` is a norm model, you can simply look at the `is_norm_spectral_model` property.
```python
# To see the available norm models shipped with gammapy:
for model in SPECTRAL_MODEL_REGISTRY:
    if model.is_norm_spectral_model:
        print(model)
```
As an example, we see the `PowerLawNormSpectralModel`
```python
from gammapy.modeling.models import PowerLawNormSpectralModel

pwl_norm = PowerLawNormSpectralModel(tilt=0.1)
print(pwl_norm)
```
We can check the correction introduced at each energy
```python
energy = [0.3, 1, 3, 10, 30] * u.TeV
pwl_norm(energy)
```
A typical use case of a norm model would be in applying spectral correction to a `TemplateSpectralModel`. A template model is defined by custom tabular values provided at initialization.
```python
from gammapy.modeling.models import TemplateSpectralModel

energy = [0.3, 1, 3, 10, 30] * u.TeV
values = [40, 30, 20, 10, 1] * u.Unit("TeV-1 s-1 cm-2")
template = TemplateSpectralModel(energy, values)
template.plot(energy_bounds=[0.2, 50] * u.TeV, label="template model")

normed_template = template * pwl_norm
normed_template.plot(
    energy_bounds=[0.2, 50] * u.TeV, label="normed_template model"
)
plt.legend();
```
Compound Spectral Model

A `CompoundSpectralModel` is an arithmetic combination of two spectral models. The model `normed_template` created in the preceding example is an example of a `CompoundSpectralModel`:
```python
print(normed_template)
```
To create an additive model, you can simply do:
```python
model_add = pwl + template
print(model_add)
```
Spatial models

Spatial models are imported from the same `~gammapy.modeling.models` namespace. Let's start with a `GaussianSpatialModel`:
```python
from gammapy.modeling.models import GaussianSpatialModel

gauss = GaussianSpatialModel(lon_0="0 deg", lat_0="0 deg", sigma="0.2 deg")
print(gauss)
```
Again you can check the `SPATIAL_MODELS` registry to see which models are available or take a look at the [model gallery](https://docs.gammapy.org/dev/modeling/gallery/index.htmlspatial-models).
```python
from gammapy.modeling.models import SPATIAL_MODEL_REGISTRY

print(SPATIAL_MODEL_REGISTRY)
```
The default coordinate frame for all spatial models is ``"icrs"``, but the frame can be modified using the ``frame`` argument:
```python
gauss = GaussianSpatialModel(
    lon_0="0 deg", lat_0="0 deg", sigma="0.2 deg", frame="galactic"
)
```
You can specify any valid `astropy.coordinates` frame. The center position of the model can be retrieved as an `astropy.coordinates.SkyCoord` object using `SpatialModel.position`:
```python
print(gauss.position)
```
Spatial models can likewise be evaluated by calling the instance:
```python
lon = [0, 0.1] * u.deg
lat = [0, 0.1] * u.deg
flux_per_omega = gauss(lon, lat)
print(flux_per_omega)
```
The returned quantity corresponds to a surface brightness. Spatial models can also be evaluated using `~gammapy.maps.Map` and `~gammapy.maps.Geom` objects:
```python
m = Map.create(skydir=(0, 0), width=(1, 1), binsz=0.02, frame="galactic")
m.quantity = gauss.evaluate_geom(m.geom)
m.plot(add_cbar=True);
```
Again for convenience the model can be plotted directly:
```python
gauss.plot(add_cbar=True);
```
All spatial models have an associated sky region, e.g. to illustrate the extent of the model on a sky image. The returned object is a `regions.SkyRegion` object:
```python
print(gauss.to_region())
```
Now we can plot the region on a sky image:
```python
# create and plot the model
gauss_elongated = GaussianSpatialModel(
    lon_0="0 deg", lat_0="0 deg", sigma="0.2 deg", e=0.7, phi="45 deg"
)
ax = gauss_elongated.plot(add_cbar=True)

# add region illustration
region = gauss_elongated.to_region()
region_pix = region.to_pixel(ax.wcs)
ax.add_artist(region_pix.as_artist(ec="w", fc="None"));
```
The `.to_region()` method can also be useful to write e.g. ds9 region files using `write_ds9` from the `regions` package:
```python
from regions import write_ds9

regions = [gauss.to_region(), gauss_elongated.to_region()]

filename = "regions.reg"
write_ds9(
    regions,
    filename,
    coordsys="galactic",
    fmt=".4f",
    radunit="deg",
    overwrite=True,
)

!cat regions.reg
```
Temporal models

Temporal models are imported from the same `~gammapy.modeling.models` namespace. Let's start with a `GaussianTemporalModel`:
```python
from gammapy.modeling.models import GaussianTemporalModel

gauss_temp = GaussianTemporalModel(t_ref=59240.0 * u.d, sigma=2.0 * u.d)
print(gauss_temp)
```
Check the temporal model registry to see which models are available:
```python
from gammapy.modeling.models import TEMPORAL_MODEL_REGISTRY

print(TEMPORAL_MODEL_REGISTRY)
```
Temporal models can be evaluated on `astropy.time.Time` objects. The returned quantity is a dimensionless number
```python
from astropy.time import Time

time = Time("2021-01-29 00:00:00.000")
gauss_temp(time)
```
As for other models, they can be plotted in a given time range
```python
time = Time([59233.0, 59250], format="mjd")
gauss_temp.plot(time)
```
SkyModel

The `~gammapy.modeling.models.SkyModel` class combines a spectral model and, optionally, a spatial and a temporal model. It can be created from existing spectral, spatial and temporal model components:
```python
from gammapy.modeling.models import SkyModel

model = SkyModel(
    spectral_model=pwl,
    spatial_model=gauss,
    temporal_model=gauss_temp,
    name="my-source",
)
print(model)
```
It is good practice to specify a name for your sky model, so that you can access it later by name and have a meaningful identifier in serialisation. If you don't define a name, a unique random name is generated:
```python
model_without_name = SkyModel(spectral_model=pwl, spatial_model=gauss)
print(model_without_name.name)
```
The individual components of the source model can be accessed using `.spectral_model`, `.spatial_model` and `.temporal_model`:
```python
model.spectral_model
model.spatial_model
model.temporal_model
```
And they can be used as you have already seen above:
```python
model.spectral_model.plot(energy_bounds=[1, 10] * u.TeV);
```
Note that the gammapy fitting can interface only with a `SkyModel` and **not** its individual components. So, it is customary to work with `SkyModel` even if you are not doing a 3D fit. Since the amplitude parameter resides on the `SpectralModel`, specifying a spectral component is compulsory. The temporal and spatial components are optional. The temporal model needs to be specified only for timing analysis. In some cases (e.g. when doing a spectral analysis) there is no need for a spatial component either, and only a spectral model is associated with the source.
```python
model_spectrum = SkyModel(spectral_model=pwl, name="source-spectrum")
print(model_spectrum)
```
Additionally the spatial model of `~gammapy.modeling.models.SkyModel` can be used to represent source models based on templates, where the spatial and energy axes are correlated. It can be created e.g. from an existing FITS file:
```python
from gammapy.modeling.models import TemplateSpatialModel
from gammapy.modeling.models import PowerLawNormSpectralModel

diffuse_cube = TemplateSpatialModel.read(
    "$GAMMAPY_DATA/fermi-3fhl-gc/gll_iem_v06_gc.fits.gz", normalize=False
)

diffuse = SkyModel(PowerLawNormSpectralModel(), diffuse_cube)
print(diffuse)
```
Note that if the spatial model is not normalized over the sky it has to be combined with a normalized spectral model, for example `~gammapy.modeling.models.PowerLawNormSpectralModel`. This is the only case in `gammapy.models.SkyModel` where the unit is fully attached to the spatial model. Modifying model parametersModel parameters can be modified (eg: frozen, values changed, etc at any point), eg:
```python
# Freezing a parameter
model.spectral_model.index.frozen = True
# Making a parameter free
model.spectral_model.index.frozen = False

# Changing a value
model.spectral_model.index.value = 3

# Setting min and max ranges on parameters
model.spectral_model.index.min = 1.0
model.spectral_model.index.max = 5.0

# Visualise the model as a table
model.parameters.to_table().show_in_notebook()
```
You can use the interactive boxes to choose model parameters by name, type or other attributes mentioned in the column names.

Model lists and serialisation

In a typical analysis scenario a model consists of multiple model components, or a "catalog" or "source library". To handle this list of multiple model components, Gammapy has a `Models` class:
```python
from gammapy.modeling.models import Models

models = Models([model, diffuse])
print(models)
```
Individual model components in the list can be accessed by their name:
print(models["my-source"])
**Note:** To make the access by name unambiguous, models are required to have a unique name, otherwise an error will be thrown.

To see which models are available you can use the `.names` attribute:
```python
print(models.names)
```
Note that a `SkyModel` object can be evaluated for a given longitude, latitude, and energy, but the `Models` object cannot. This `Models` container object will be assigned to `Dataset` or `Datasets` together with the data to be fitted as explained in other analysis tutorials (see for example the [modeling](../analysis/2D/modeling_2D.ipynb) notebook).The `Models` class also has in place `.append()` and `.extend()` methods:
```python
model_copy = model.copy(name="my-source-copy")
models.append(model_copy)
```
This list of models can also be serialised to a custom YAML based format:
```python
models_yaml = models.to_yaml()
print(models_yaml)
```
The structure of the yaml files follows the structure of the python objects. The `components` listed correspond to the `SkyModel` and `SkyDiffuseCube` components of the `Models`. For each `SkyModel` we have information about its `name`, `type` (corresponding to the tag attribute) and sub-models (i.e. spectral model and eventually spatial model). Then the spatial and spectral models are defined by their type and parameters. The `parameters` keys name/value/unit are mandatory, while the keys min/max/frozen are optional (so you can prepare shorter files).

If you want to write this list of models to disk and read it back later you can use:
models.write("models.yaml", overwrite=True) models_read = Models.read("models.yaml")
Additionally the models can be exported and imported together with the data using the `Datasets.read()` and `Datasets.write()` methods as shown in the [analysis_mwl](../analysis/3D/analysis_mwl.ipynb) notebook.

Models with shared parameter

A model parameter can be shared with other models, for example we can define two power-law models with the same spectral index but different amplitudes:
```python
pwl2 = PowerLawSpectralModel()
pwl2.index = pwl.index
pwl.index.value = 2.3  # also updates pwl2, as the parameter object is now the same, as shown below
print(pwl.index)
print(pwl2.index)
```
In the YAML files the shared parameter is flagged by the additional `link` entry that follows the convention `parameter.name@unique_id`:
```python
models = Models(
    [SkyModel(pwl, name="source1"), SkyModel(pwl2, name="source2")]
)

models_yaml = models.to_yaml()
print(models_yaml)
```
Implementing a custom model

In order to add a user defined spectral model you have to create a SpectralModel subclass. This new model class should include:

- a tag used for serialization (it can be the same as the class name)
- an instantiation of each Parameter with their unit, default values and frozen status
- the evaluate function where the mathematical expression for the model is defined.

As an example we will use a PowerLawSpectralModel plus a Gaussian (with fixed width). First we define the new custom model class that we name `MyCustomSpectralModel`:
```python
from gammapy.modeling import Parameter
from gammapy.modeling.models import SpectralModel


class MyCustomSpectralModel(SpectralModel):
    """My custom spectral model, parametrising a power law plus a Gaussian spectral line.

    Parameters
    ----------
    amplitude : `astropy.units.Quantity`
        Amplitude of the spectral model.
    index : `astropy.units.Quantity`
        Spectral index of the model.
    reference : `astropy.units.Quantity`
        Reference energy of the power law.
    mean : `astropy.units.Quantity`
        Mean value of the Gaussian.
    width : `astropy.units.Quantity`
        Sigma width of the Gaussian line.
    """

    tag = "MyCustomSpectralModel"
    amplitude = Parameter("amplitude", "1e-12 cm-2 s-1 TeV-1", min=0)
    index = Parameter("index", 2, min=0)
    reference = Parameter("reference", "1 TeV", frozen=True)
    mean = Parameter("mean", "1 TeV", min=0)
    width = Parameter("width", "0.1 TeV", min=0, frozen=True)

    @staticmethod
    def evaluate(energy, index, amplitude, reference, mean, width):
        pwl = PowerLawSpectralModel.evaluate(
            energy=energy,
            index=index,
            amplitude=amplitude,
            reference=reference,
        )
        gauss = amplitude * np.exp(-((energy - mean) ** 2) / (2 * width ** 2))
        return pwl + gauss
```
It is good practice to also implement a docstring for the model, defining the parameters, and to define a `tag`, which specifies the name of the model for serialisation. Also note that gammapy assumes that all SpectralModel evaluate functions return a flux in units of `"cm-2 s-1 TeV-1"` (or equivalent dimensions).

This model can now be used as any other spectral model in Gammapy:
```python
my_custom_model = MyCustomSpectralModel(mean="3 TeV")
print(my_custom_model)

my_custom_model.integral(1 * u.TeV, 10 * u.TeV)

my_custom_model.plot(energy_bounds=[1, 10] * u.TeV)
```
As a next step we can also register the custom model in the `SPECTRAL_MODELS` registry, so that it becomes available for serialisation:
```python
SPECTRAL_MODEL_REGISTRY.append(MyCustomSpectralModel)

model = SkyModel(spectral_model=my_custom_model, name="my-source")
models = Models([model])
models.write("my-custom-models.yaml", overwrite=True)

!cat my-custom-models.yaml
```
Similarly you can also create custom spatial models and add them to the `SPATIAL_MODELS` registry. In that case gammapy assumes that the evaluate function returns a normalized quantity in "sr-1" such that the integral of the model over the whole sky is one.

Models with energy dependent morphology

A common science case in the study of extended sources is to probe for energy dependent morphology, e.g. in Supernova Remnants or Pulsar Wind Nebulae. Traditionally, this has been done by splitting the data into energy bands and doing individual fits of the morphology in these energy bands.

`SkyModel` offers a natural framework to simultaneously model the energy and morphology, e.g. spatial extent described by a parametric model expression with energy dependent parameters.

The models shipped within gammapy use a "factorised" representation of the source model, where the spatial ($l, b$), energy ($E$) and time ($t$) dependence are independent model components and not correlated:

$$f(l, b, E, t) = F(l, b) \cdot G(E) \cdot H(t)$$

To use full 3D models, i.e. $f(l, b, E) = F(l, b, E) \cdot G(E)$, you have to implement your own custom `SpatialModel`. Note that it is still necessary to multiply by a `SpectralModel`, $G(E)$, to be dimensionally consistent.

In this example, we create a Gaussian spatial model whose extension varies with energy. For simplicity, we assume a linear dependence on energy and parameterize this by specifying the extension at two energies. You can add more complex dependencies, probably motivated by physical models.
```python
from gammapy.modeling.models import SpatialModel
from astropy.coordinates.angle_utilities import angular_separation


class MyCustomGaussianModel(SpatialModel):
    """My custom energy-dependent Gaussian model.

    Parameters
    ----------
    lon_0, lat_0 : `~astropy.coordinates.Angle`
        Center position
    sigma_1TeV : `~astropy.coordinates.Angle`
        Width of the Gaussian at 1 TeV
    sigma_10TeV : `~astropy.coordinates.Angle`
        Width of the Gaussian at 10 TeV
    """

    tag = "MyCustomGaussianModel"
    is_energy_dependent = True
    lon_0 = Parameter("lon_0", "0 deg")
    lat_0 = Parameter("lat_0", "0 deg", min=-90, max=90)

    sigma_1TeV = Parameter("sigma_1TeV", "2.0 deg", min=0)
    sigma_10TeV = Parameter("sigma_10TeV", "0.2 deg", min=0)

    @staticmethod
    def evaluate(lon, lat, energy, lon_0, lat_0, sigma_1TeV, sigma_10TeV):
        sep = angular_separation(lon, lat, lon_0, lat_0)

        # Compute sigma for the given energy using linear interpolation in log energy
        sigma_nodes = u.Quantity([sigma_1TeV, sigma_10TeV])
        energy_nodes = [1, 10] * u.TeV
        log_s = np.log(sigma_nodes.to("deg").value)
        log_en = np.log(energy_nodes.to("TeV").value)
        log_e = np.log(energy.to("TeV").value)
        sigma = np.exp(np.interp(log_e, log_en, log_s)) * u.deg

        exponent = -0.5 * (sep / sigma) ** 2
        norm = 1 / (2 * np.pi * sigma ** 2)
        return norm * np.exp(exponent)
```
Serialisation of this model can be achieved as explained in the previous section. You can now use it as a standard `SpatialModel` in your analysis. Note that this is still a `SpatialModel`, and not a `SkyModel`, so it needs to be multiplied by a `SpectralModel` as before.
```python
spatial_model = MyCustomGaussianModel()
spectral_model = PowerLawSpectralModel()
sky_model = SkyModel(
    spatial_model=spatial_model, spectral_model=spectral_model
)

spatial_model.evaluation_radius
```
To visualise it, we evaluate it on a 3D geom.
```python
energy_axis = MapAxis.from_energy_bounds(
    energy_min=0.1 * u.TeV, energy_max=10.0 * u.TeV, nbin=3, name="energy_true"
)
geom = WcsGeom.create(
    skydir=(0, 0), width=5.0 * u.deg, binsz=0.1, axes=[energy_axis]
)

spatial_model.plot_grid(geom=geom, add_cbar=True, figsize=(14, 3));
```
For computational purposes, it is useful to specify an `evaluation_radius` for `SpatialModels`; this gives a size on which to compute the model. Though optional, it is highly recommended for custom spatial models. This can be done, for example, by defining the following function inside the above class:
```python
@property
def evaluation_radius(self):
    """Evaluation radius (`~astropy.coordinates.Angle`)."""
    return 5 * np.max([self.sigma_1TeV.value, self.sigma_10TeV.value]) * u.deg
```
TASK1: INTRODUCTION
```python
!pip install nn_utils

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Dense, Dropout, SimpleRNN, RepeatVector
from tensorflow.keras.callbacks import EarlyStopping, LambdaCallback
import nn_utils
from termcolor import colored
```
*Source: jaivanti/Simple-RNN-Model, Recurrent_neural_network.ipynb (MIT)*
TASK2: GENERATE DATA
```python
all_chars = '0123456789+'
num_feature = len(all_chars)
print('number of features:', num_feature)

char_to_index = dict((c, i) for i, c in enumerate(all_chars))
index_to_char = dict((i, c) for i, c in enumerate(all_chars))


def generate_data():
    first = np.random.randint(0, 100)
    second = np.random.randint(0, 100)
    example = str(first) + '+' + str(second)
    label = str(first + second)
    return example, label


generate_data()
```
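An aside on sizing, inferred from the generator rather than stated in the notebook: the longest example string it can produce is '99+99' (five characters) and the longest label is '198' (three characters), so padding every sequence to the max_time_steps = 5 used in the next cell is sufficient for any input or output this generator can emit.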
TASK3: CREATE THE MODEL

Consider two reviews:
review 1: This movie is not terrible at all.
review 2: This movie is pretty decent.
```python
hidden_units = 128
max_time_steps = 5

model = Sequential([
    SimpleRNN(hidden_units, input_shape=(None, num_feature)),
    RepeatVector(max_time_steps),
    SimpleRNN(hidden_units, return_sequences=True),
    TimeDistributed(Dense(num_feature, activation='softmax'))
])

model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
model.summary()
```
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= simple_rnn (SimpleRNN) (None, 128) 17920 _________________________________________________________________ repeat_vector (RepeatVector) (None, 5, 128) 0 _________________________________________________________________ simple_rnn_1 (SimpleRNN) (None, 5, 128) 32896 _________________________________________________________________ time_distributed (TimeDistri (None, 5, 11) 1419 ================================================================= Total params: 52,235 Trainable params: 52,235 Non-trainable params: 0 _________________________________________________________________
MIT
Recurrent_neural_network.ipynb
jaivanti/Simple-RNN-Model
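As a quick sanity check on these numbers (an explanatory aside, not part of the original notebook): a SimpleRNN layer has units × (units + input_dim + 1) weights, so the first layer has 128 × (128 + 11 + 1) = 17,920 parameters, the second 128 × (128 + 128 + 1) = 32,896, and the TimeDistributed Dense layer 11 × (128 + 1) = 1,419, which sums to the 52,235 trainable parameters reported above.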
TASK4: VECTORISE AND DEVECTORISE DATA
```python
def vectorize_example(example, label):
    x = np.zeros((max_time_steps, num_feature))
    y = np.zeros((max_time_steps, num_feature))

    diff_x = max_time_steps - len(example)
    diff_y = max_time_steps - len(label)

    for i, c in enumerate(example):
        x[i + diff_x, char_to_index[c]] = 1
    for i in range(diff_x):
        x[i, char_to_index['0']] = 1

    for i, c in enumerate(label):
        y[i + diff_y, char_to_index[c]] = 1
    for i in range(diff_y):
        y[i, char_to_index['0']] = 1

    return x, y


e, m = generate_data()
print(e, m)
x, y = vectorize_example(e, m)
print(x.shape, y.shape)


def devectorize_example(example):
    result = [index_to_char[np.argmax(vec)] for i, vec in enumerate(example)]
    return ''.join(result)


devectorize_example(x)
devectorize_example(y)
```
TASK5: CREATE DATASET
```python
def create_dataset(num_examples=2000):
    x = np.zeros((num_examples, max_time_steps, num_feature))
    y = np.zeros((num_examples, max_time_steps, num_feature))

    for i in range(num_examples):
        e, m = generate_data()
        e_v, m_v = vectorize_example(e, m)
        x[i] = e_v
        y[i] = m_v

    return x, y


x, y = create_dataset()
print(x.shape, y.shape)

devectorize_example(x[0])
devectorize_example(y[0])
```
TASK6: TRAINING THE MODEL
```python
m_cb = LambdaCallback(
    on_epoch_end=lambda e, m: print('{:.2f}'.format(m['val_accuracy']), end='-')
)

es_cb = EarlyStopping(monitor='val_loss', patience=10)

model.fit(x, y, epochs=500, batch_size=256, validation_split=0.2,
          verbose=False, callbacks=[es_cb, m_cb])

x_test, y_test = create_dataset(10)
preds = model.predict(x_test)

for i, pred in enumerate(preds):
    y = devectorize_example(y_test[i])
    y_hat = devectorize_example(pred)
    col = 'green'
    if y != y_hat:
        col = 'red'
    out = 'Input:' + devectorize_example(x_test[i]) + 'out:' + y + 'pred:' + y_hat
    print(colored(out, col))
```
Transformer Network Application: Question Answering

Welcome to Week 4's third, and the last lab of the course! Congratulations on making it this far. In this notebook you'll explore another application of the transformer architecture that you built.

**After this assignment you'll be able to**:
* Perform extractive Question Answering
* Fine-tune a pre-trained transformer model to a custom dataset
* Implement a QA model in TensorFlow and PyTorch

Table of Contents
- [1 - Extractive Question Answering](#1)
  - [1.1 - Data Preprocessing](#1-1)
  - [1.2 - Tokenize and Align Labels with 🤗 Library](#1-2)
- [2 - Training](#2)
  - [2.1 TensorFlow implementation](#2-1)
  - [2.2 PyTorch implementation](#2-2)

1 - Extractive Question Answering

Question answering (QA) is a task of natural language processing that aims to automatically answer questions. The goal of *extractive* QA is to identify the portion of the text that contains the answer to a question. For example, when tasked with answering the question 'When will Jane go to Africa?' given the text data 'Jane visits Africa in September', the question answering model will highlight 'September'.

* You will use a variation of the Transformer model you built in the last assignment to answer questions about stories.
* You will implement an extractive QA model in TensorFlow and in PyTorch.

**Recommendation:**
* If you are interested, check out [Course 4: Natural Language Processing with Attention Models](https://www.coursera.org/learn/attention-models-in-nlp/home/welcome) of our [Natural Language Processing Specialization](https://www.coursera.org/specializations/natural-language-processing?=), where you can learn how to build Transformers and perform QA using the [Trax](https://trax.readthedocs.io/en/latest/) library.

1.1 - Data Preprocessing

Run the following cell to load the [QA bAbI dataset](https://research.fb.com/downloads/babi/), which is one of the bAbI datasets generated by Facebook AI Research to advance natural language processing.
from datasets import load_from_disk

# Load a dataset and print the first example in the training set
babi_dataset = load_from_disk('data/')
print(babi_dataset['train'][0])
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
Take a look at the format of the data. For a given story, there are two sentences which serve as the context, and one question. Each of these phrases has an ID. There is also a supporting fact ID, which refers to the sentence in the story that helps answer the question. For example, for the question 'What is east of the hallway?', the supporting fact 'The bedroom is east of the hallway' has the ID '2'. There is also the answer to the question, 'bedroom'.
babi_dataset['train'][102]
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
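To see the mapping just described in action, the snippet below, a sketch that assumes the field layout printed by the cell above (`id`, `supporting_ids`, and `text` lists inside `story`), resolves the question's supporting-fact ID back to its sentence:

story = babi_dataset['train'][102]['story']

question_pos = 2  # per the description above, the third entry is the question
fact_id = story['supporting_ids'][question_pos][0]  # e.g. '2'
fact_pos = story['id'].index(fact_id)               # map the ID back to a sentence position

print('question:       ', story['text'][question_pos])
print('supporting fact:', story['text'][fact_pos])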
Check and see if the entire dataset of stories has this format.
type_set = set()
for story in babi_dataset['train']:
    # Record every distinct 'type' pattern found across the stories
    if str(story['story']['type']) not in type_set:
        type_set.add(str(story['story']['type']))
type_set
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
To make the data easier to work with, you will flatten the dataset to transform it from a dictionary structure to a table structure.
flattened_babi = babi_dataset.flatten()
flattened_babi

next(iter(flattened_babi['train']))
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
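If you are curious what `flatten()` does to the column names, it joins nested keys with a dot, conceptually the same as this plain-dictionary sketch (toy data, not the real dataset):

nested = {'story': {'text': ['The hallway is south of the garden.'],
                    'answer': ['garden']}}

# Nested keys become dotted column names, e.g. story['text'] -> 'story.text'
flat = {f'{outer}.{inner}': value
        for outer, inner_dict in nested.items()
        for inner, value in inner_dict.items()}
print(flat)  # {'story.text': [...], 'story.answer': ['garden']}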
Now it is much easier to access the information you need! You can now easily extract the answer, question, and facts from the story, and also join the facts into a single entry under 'sentences'.
def get_question_and_facts(story):
    # The first two entries of 'story.text' are context sentences;
    # the third entry is the question, and its answer sits at index 2.
    dic = {}
    dic['question'] = story['story.text'][2]
    dic['sentences'] = ' '.join([story['story.text'][0], story['story.text'][1]])
    dic['answer'] = story['story.answer'][2]
    return dic

processed = flattened_babi.map(get_question_and_facts)

processed['train'][2]
processed['test'][2]
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
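One thing worth noting about `Dataset.map`: it keeps the original columns and merely adds the new ones, which is why the tokenization step later ends with a `remove_columns(...)` call. You can confirm this by listing the column names:

# The flattened 'story.*' columns coexist with the new 'question',
# 'sentences', and 'answer' columns added by map.
print(processed['train'].column_names)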
The goal of extractive QA is to find the part of the text that contains the answer to the question. You will identify the position of the answer using the indices of the string. For example, if the answer to some question were 'September', you would need to find the start and end string indices of the word 'September' in the context sentence 'Jane visits Africa in September.'

Use this next function to get the start and end indices of the answer in each of the stories in your dataset.
def get_start_end_idx(story):
    # Locate the answer inside the joined context sentences.
    # Note: str.find returns -1 when the answer string is absent.
    str_idx = story['sentences'].find(story['answer'])
    end_idx = str_idx + len(story['answer'])
    return {'str_idx': str_idx, 'end_idx': end_idx}

processed = processed.map(get_start_end_idx)

num = 187
print(processed['test'][num])

start_idx = processed['test'][num]['str_idx']
end_idx = processed['test'][num]['end_idx']
print('answer:', processed['test'][num]['sentences'][start_idx:end_idx])
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
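Here is the same indexing applied to the worked example from the text above. Note that `str.find` returns -1 when the answer string is absent, which downstream code should treat as "no answer span":

context = 'Jane visits Africa in September.'
answer = 'September'

start = context.find(answer)            # 22
end = start + len(answer)               # 31
print(start, end, context[start:end])   # -> 22 31 September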
### 1.2 - Tokenize and Align Labels with 🤗 Library

Now you have all the data you need to train a Transformer model to perform Question Answering! You are ready for a task you may have already encountered in the Named-Entity Recognition lab: tokenizing and aligning your input. To feed text data to a Transformer model, you will need to tokenize your input using a [🤗 Transformer tokenizer](https://huggingface.co/transformers/main_classes/tokenizer.html). It is crucial that the tokenizer you use matches the Transformer model type you are using! In this exercise, you will use the 🤗 [DistilBERT fast tokenizer](https://huggingface.co/transformers/model_doc/distilbert.html), which standardizes the length of your sequence to 512 and pads with zeros.

Transformer models are often trained by tokenizers that split words into subwords. For instance, the word 'Africa' might get split into multiple subtokens. This can create some misalignment between the list of tags for the dataset and the list of labels generated by the tokenizer, since the tokenizer can split one word into several, or add special tokens. Before processing, it is important that you align the start and end indices of the answer with the tokens that contain the answer word, using a `tokenize_and_align()` function. Since you are interested in the start and end of the answer span, you want to map each character index in the sentence to the index of the token that covers it.
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained('tokenizer/')

def tokenize_align(example):
    # Tokenize the (context, question) pair, then convert the answer's
    # character positions into token positions with char_to_token.
    encoding = tokenizer(example['sentences'], example['question'],
                         truncation=True, padding=True,
                         max_length=tokenizer.model_max_length)

    start_positions = encoding.char_to_token(example['str_idx'])
    end_positions = encoding.char_to_token(example['end_idx'] - 1)

    # char_to_token returns None when the character was truncated away;
    # fall back to an out-of-range position in that case.
    if start_positions is None:
        start_positions = tokenizer.model_max_length
    if end_positions is None:
        end_positions = tokenizer.model_max_length

    return {'input_ids': encoding['input_ids'],
            'attention_mask': encoding['attention_mask'],
            'start_positions': start_positions,
            'end_positions': end_positions}

qa_dataset = processed.map(tokenize_align)
qa_dataset = qa_dataset.remove_columns(['story.answer', 'story.id',
                                        'story.supporting_ids', 'story.text', 'story.type'])

qa_dataset['train'][200]
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
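To see the character-to-token alignment in isolation, you can call `char_to_token` directly on a fresh encoding and decode the resulting token span. A small sketch reusing the example index from earlier:

example = processed['test'][187]
enc = tokenizer(example['sentences'], example['question'], truncation=True)

# Map the answer's character span to token positions
start_tok = enc.char_to_token(example['str_idx'])
end_tok = enc.char_to_token(example['end_idx'] - 1)

# Decode the token span back to text to confirm it covers the answer
print(start_tok, end_tok)
print(tokenizer.decode(enc['input_ids'][start_tok:end_tok + 1]))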
**What you should remember**:

- The goal of *extractive* QA is to identify the portion of the text that contains the answer to a question.
- Transformer models are often trained by tokenizers that split words into subwords.
- Before processing, it is important that you align the start and end indices with the tokens associated with the target answer word.

## 2 - Training

Now that you have finished tokenizing and aligning your data, you can feed it into a pre-trained 🤗 Transformer model! You will use a DistilBERT model, which matches the tokenizer you used to preprocess your data.
train_ds = qa_dataset['train']
test_ds = qa_dataset['test']

from transformers import TFDistilBertForQuestionAnswering

# return_dict=False makes the model return a (start_logits, end_logits) tuple
model = TFDistilBertForQuestionAnswering.from_pretrained("model/tensorflow", return_dict=False)
_____no_output_____
MIT
5-sequence-models/week4/QA_dataset.ipynb
alekshiidenhovi/Deep-Learning-Specialization
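The excerpt ends before the actual fine-tuning loop, so here is a minimal sketch of one way to train this model in TensorFlow. It assumes the tokenized inputs share a single fixed length (e.g. `padding='max_length'` in the tokenizer call, rather than the per-example `padding=True` above) and relies on `return_dict=False` making the model return a `(start_logits, end_logits)` tuple; the hyperparameters are illustrative, not the lab's actual values:

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Materialize the tokenized columns as arrays (assumes fixed-length inputs)
train_ds.set_format('numpy', columns=['input_ids', 'attention_mask',
                                      'start_positions', 'end_positions'])
features = {'input_ids': train_ds['input_ids'],
            'attention_mask': train_ds['attention_mask']}
labels = (train_ds['start_positions'], train_ds['end_positions'])
tf_train = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(8)

for epoch in range(3):
    epoch_loss = tf.keras.metrics.Mean()
    for batch, (starts, ends) in tf_train:
        with tf.GradientTape() as tape:
            # The QA head predicts a start logit and an end logit per token;
            # the loss is the sum of the two span-boundary cross-entropies.
            start_logits, end_logits = model(batch, training=True)
            loss = loss_fn(starts, start_logits) + loss_fn(ends, end_logits)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        epoch_loss.update_state(loss)
    print(f'epoch {epoch + 1}: mean loss {epoch_loss.result():.4f}')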