Columns: markdown, code, output, license, path, repo_name
Part 2. Make features

Calculate the number of cell phones per person, and add this column onto your dataframe. (You've calculated correctly if you get 1.220 cell phones per person in the United States in 2017.)
df.head()
df[df['country'] == 'United States']
df.dtypes
df['cell_phones_total'].value_counts().sum()
condition = (df['country'] == 'United States') & (df['time'] == 2017)
columns = ['country', 'time', 'cell_phones_total', 'population_total']
subset = df[condition][columns]
subset.shape
subset.head()
# Better: divide the columns directly to add the feature for every row
df['cell_phones_per_person'] = df['cell_phones_total'] / df['population_total']
df[(df.country=='United States') & (df.time==2017)]
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Modify the `geo` column to make the geo codes uppercase instead of lowercase.
df.head(1) df['geo'] = df['geo'].str.upper() df.head()
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 3. Process data

Use the describe function to describe your dataframe's numeric columns, and then its non-numeric columns. (You'll see the time period ranges from 1960 to 2017, and there are 195 unique countries represented.)
# numeric columns
df.describe()

# non-numeric columns
df.describe(exclude='number')
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
In 2017, what were the top 5 countries with the most cell phones total? Your list of countries should have these totals:

| country | cell phones total |
|:-------:|:-----------------:|
| ? | 1,474,097,000 |
| ? | 1,168,902,277 |
| ? | 458,923,202 |
| ? | 395,881,000 |
| ? | 236,488,548 |
# This optional code formats float numbers with comma separators
pd.options.display.float_format = '{:,}'.format

condition = (df['time'] == 2017)
columns = ['country', 'cell_phones_total']
subset = df[condition][columns]
subset.head()
subset = subset.sort_values(by=['cell_phones_total'], ascending=False)
subset.head(5)
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
2017 was the first year that China had more cell phones than people. What was the first year that the USA had more cell phones than people?
condition = (df['country'] == 'United States')
columns = ['time', 'country', 'cell_phones_total', 'population_total']
subset1 = df[condition][columns]
subset1.sort_values(by='time', ascending=False).head(5)
# The way to get the answer via code: first year with more than one phone per person
df[(df.geo=='USA') & (df.cell_phones_per_person > 1)].time.min()
# The first year the USA had more cell phones than people was 2014,
# when cell_phones_total = 355,500,000 vs population_total = 317,718,779.
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 4. Reshape data

*This part is not needed to pass the sprint challenge, only to get a 3! Only work on this after completing the other sections.*

Create a pivot table:
- Columns: Years 2007–2017
- Rows: China, India, United States, Indonesia, Brazil (order doesn't matter)
- Values: Cell Phones Total

The table's shape should be: (5, 11)
# Filter to the 5 countries and years 2007-2017, then pivot
countries = ['China', 'India', 'United States', 'Indonesia', 'Brazil']
condition = df['country'].isin(countries) & df['time'].between(2007, 2017)
columns = ['time', 'country', 'cell_phones_total']
subset2 = df[condition][columns]
subset2
pivot = subset2.pivot_table(index='country', columns='time', values='cell_phones_total')
pivot.shape  # should be (5, 11)
pivot
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Sort these 5 countries by biggest increase in cell phones from 2007 to 2017. Which country had 935,282,277 more cell phones in 2017 versus 2007? (A sketch of one approach follows below.)
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
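No solution cell was included for this prompt. A minimal sketch, assuming the `pivot` table built in Part 4 above:

# Increase in cell phones from 2007 to 2017 for each country, sorted descending
increase = (pivot[2017] - pivot[2007]).sort_values(ascending=False)
increase
# The country in the top row, with an increase of 935,282,277, answers the question.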
If you have the time and curiosity, what other questions can you ask and answer with this data?

Data Storytelling

In this part of the sprint challenge you'll work with a dataset from **FiveThirtyEight's article, [Every Guest Jon Stewart Ever Had On ‘The Daily Show’](https://fivethirtyeight.com/features/every-guest-jon-stewart-ever-had-on-the-daily-show/)**!

Part 0 — Run this starter code

You don't need to add or change anything here. Just run this cell and it loads the data for you, into a dataframe named `df`. (You can explore the data if you want, but it's not required to pass the Sprint Challenge.)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

url = 'https://raw.githubusercontent.com/fivethirtyeight/data/master/daily-show-guests/daily_show_guests.csv'
df = pd.read_csv(url).rename(columns={'YEAR': 'Year', 'Raw_Guest_List': 'Guest'})

def get_occupation(group):
    if group in ['Acting', 'Comedy', 'Musician']:
        return 'Acting, Comedy & Music'
    elif group in ['Media', 'media']:
        return 'Media'
    elif group in ['Government', 'Politician', 'Political Aide']:
        return 'Government and Politics'
    else:
        return 'Other'

df['Occupation'] = df['Group'].apply(get_occupation)
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 1 — What's the breakdown of guests' occupations per year?

For example, in 1999, what percentage of guests were actors, comedians, or musicians? What percentage were in the media? What percentage were in politics? What percentage were from another occupation? Then, what about in 2000? In 2001? And so on, up through 2015.

So, **for each year of _The Daily Show_, calculate the percentage of guests from each occupation:**
- Acting, Comedy & Music
- Government and Politics
- Media
- Other

Hints: You can make a crosstab. (See pandas documentation for examples, explanation, and parameters.) You'll know you've calculated correctly when the percentage of "Acting, Comedy & Music" guests is 90.36% in 1999, and 45% in 2015.
df.head() df.describe() df.describe(exclude='number') df1 = pd.crosstab(df['Year'], df['Occupation'], normalize='index') df1
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
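Because `normalize='index'` returns fractions rather than percentages, a quick check against the expected values can be done by scaling the crosstab; this assumes the `df1` built above:

(df1 * 100).round(2)  # 'Acting, Comedy & Music' should read 90.36 for 1999 and 45.0 for 2015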
Part 2 — Recreate this explanatory visualization:
from IPython.display import display, Image png = 'https://fivethirtyeight.com/wp-content/uploads/2015/08/hickey-datalab-dailyshow.png' example = Image(png, width=500) display(example)
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
**Hints:**
- You can choose any Python visualization library you want. I've verified the plot can be reproduced with matplotlib, pandas plot, or seaborn. I assume other libraries like altair or plotly would work too.
- If you choose to use seaborn, you may want to upgrade the version to 0.9.0.

**Expectations:** Your plot should include:
- 3 lines visualizing "occupation of guests, by year." The shapes of the lines should look roughly identical to 538's example. Each line should be a different color. (But you don't need to use the _same_ colors as 538.)
- Legend or labels for the lines. (But you don't need each label positioned next to its line or colored like 538.)
- Title in the upper left: _"Who Got To Be On 'The Daily Show'?"_ with more visual emphasis than the subtitle. (Bolder and/or larger font.)
- Subtitle underneath the title: _"Occupation of guests, by year"_

**Optional Bonus Challenge:**
- Give your plot polished aesthetics, with improved resemblance to the 538 example.
- Any visual element not specifically mentioned in the expectations is an optional bonus.
display(example)
df2 = df1.drop(['Other'], axis=1)
df2

plt.style.use('fivethirtyeight')
ax = df2.plot()
ax.patch.set_alpha(0.1)
plt.title("Who Got To Be On 'The Daily Show'?", fontsize=18, x=-0.1, y=1.1, loc='left', fontweight='bold');
subtitle_string = 'Occupation of guests, by year'
plt.suptitle(subtitle_string, fontsize=14, x=0.24, y=0.94)
plt.xlabel(' ')
plt.legend(bbox_to_anchor=[0.6, 0.75]);
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Example by Alex
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

x = np.arange(0, 10)
y = x**2

# Label every tick with a percent sign
plt.plot(x, y)
y_labels = [f'{i}%' for i in y]
plt.yticks(y, y_labels);

# Only the top tick (100) gets a percent sign
y_labels = [f'{i}' if i != 100 else f'{i}%' for i in range(0, 101, 10)]
plt.plot(x, y)
plt.yticks(range(0, 101, 10), y_labels);
plt.title('My plot')
plt.text(x=4, y=50, s='My text')

# Same data through pandas/seaborn (sns.replot was a typo for sns.relplot)
df = pd.DataFrame({'x': x, 'y': y})
sns.relplot(data=df, x='x', y='y')
plt.yticks(range(0, 101, 10), y_labels);
plt.title('My plot')
plt.text(x=4, y=50, s='My text')
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
IFTTT - Trigger workflow

**Tags:** ifttt automation nocode

Input: Import library
from naas_drivers import ifttt
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Variables
event = "myevent" key = "cl9U-VaeBu1**********" data = { "value1": "Bryan", "value2": "Helmig", "value3": 27 }
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Model: Connect to IFTTT
result = ifttt.connect(key)
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Output: Send the event and display the result
result = ifttt.send(event, data)
result
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Please enter the correct file path
train_dir = r'C:\Users\Ryukijano\Python_notebooks\Face_ Mask_ Dataset\Train'
validation_dir = r'C:\Users\Ryukijano\Python_notebooks\Face_ Mask_ Dataset\Validation'
test_dir = r'C:\Users\Ryukijano\Python_notebooks\Face_ Mask_ Dataset\Test'

from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(128, 128), batch_size=20, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_dir, target_size=(128, 128), batch_size=20, class_mode='binary')

# First model: plain CNN
from tensorflow.keras import layers
from tensorflow.keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

from tensorflow.keras import optimizers
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc'])
history = model.fit_generator(train_generator, steps_per_epoch=500, epochs=20, validation_data=validation_generator, validation_steps=40)
model.save("model_cnn_project_P1.h5")

from tensorflow.keras import backend as K
K.clear_session()
del model

# Second model: add data augmentation and dropout
train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(128, 128), batch_size=32, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_dir, target_size=(128, 128), batch_size=32, class_mode='binary')
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc'])
history = model.fit_generator(train_generator, steps_per_epoch=300, epochs=10, validation_data=validation_generator, validation_steps=25)

# Third model: transfer learning on VGG19
from tensorflow.keras.applications import VGG19
conv_base = VGG19(weights='imagenet', include_top=False, input_shape=(128, 128, 3))
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=2e-5), metrics=['acc'])
from tensorflow.keras.callbacks import ModelCheckpoint  # `keras` itself was never imported, so import the callback directly
checkpoint_cb = ModelCheckpoint("CNN_Final_Project_Model-{epoch:02d}.h5")
history = model.fit_generator(train_generator, steps_per_epoch=300, epochs=10, validation_data=validation_generator, validation_steps=25, callbacks=[checkpoint_cb])
test_generator = test_datagen.flow_from_directory(test_dir, target_size=(128, 128), batch_size=32, class_mode='binary')
model.evaluate_generator(test_generator, steps=31)
_____no_output_____
MIT
It-can-see-you.ipynb
Ryukijano/it-can-see-you
L15 - Model evaluation 2 (confidence intervals)

- Instructor: Dalcimar Casanova ([email protected])
- Course website: https://www.dalcimar.com/disciplinas/aprendizado-de-maquina
- Bibliography: based on lectures of Dr. Sebastian Raschka
import numpy as np import matplotlib.pyplot as plt
_____no_output_____
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
from mlxtend.data import iris_data
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = iris_data()
print(np.shape(y))
X_train_valid, X_test, y_train_valid, y_test = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_valid, y_train_valid, test_size=0.5, random_state=1)
print(np.shape(y_train))
print(np.shape(y_valid))
print(np.shape(y_test))

pip install hypopt
Requirement already satisfied: hypopt in /usr/local/lib/python3.6/dist-packages (1.0.9) Requirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from hypopt) (1.19.5) Requirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from hypopt) (0.22.2.post1) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->hypopt) (1.0.0) Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->hypopt) (1.4.1)
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
from hypopt import GridSearch
#from sklearn.model_selection import GridSearchCV

knn = KNeighborsClassifier()
param_grid = {'n_neighbors': [2, 3, 4, 5]}
grid = GridSearch(knn, param_grid=param_grid)
grid.fit(X_train, y_train, X_valid, y_valid)
100%|██████████| 4/4 [00:00<00:00, 265.79it/s]
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
print(grid.param_scores) print(grid.best_params) print(grid.best_score) print(grid.best_estimator_) clf = grid.best_estimator_ from sklearn.metrics import accuracy_score y_test_pred = clf.predict(X_test) acc_test = accuracy_score(y_test, y_test_pred) print(acc_test)
0.9333333333333333
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
clf.fit(X_train_valid, y_train_valid)
_____no_output_____
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
y_test_pred = clf.predict(X_test) acc_test = accuracy_score(y_test, y_test_pred) print(acc_test)
0.9777777777777777
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
Confidence interval (via normal approximation)
# 95% confidence interval (z = 1.96) via the normal approximation
ci_test = 1.96 * np.sqrt((acc_test * (1 - acc_test)) / y_test.shape[0])
test_lower = acc_test - ci_test
test_upper = acc_test + ci_test
print(test_lower, test_upper)

# 99% confidence interval (z = 2.58)
ci_test = 2.58 * np.sqrt((acc_test * (1 - acc_test)) / y_test.shape[0])
test_lower = acc_test - ci_test
test_upper = acc_test + ci_test
print(test_lower, test_upper)
0.921085060454202 1.0344704951013535
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
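Note that the 99% upper bound printed above exceeds 1, which is impossible for an accuracy: the normal approximation degrades near the boundaries of [0, 1]. A minimal guard is to clip the interval, as sketched here:

# Accuracies live in [0, 1], so clip the normal-approximation interval
test_lower = max(0.0, acc_test - ci_test)
test_upper = min(1.0, acc_test + ci_test)
print(test_lower, test_upper)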
Inventory

The Inventory is arguably the most important piece of nornir. Let's see how it works. To begin with, the [inventory](../api/nornir/core/inventory.html#module-nornir.core.inventory) is comprised of [hosts](../api/nornir/core/inventory.html#nornir.core.inventory.Hosts), [groups](../api/nornir/core/inventory.html#nornir.core.inventory.Groups) and [defaults](../api/nornir/core/inventory.html#nornir.core.inventory.Defaults). In this tutorial we are using the [SimpleInventory](../api/nornir/plugins/inventory/simple.html#nornir.plugins.inventory.simple.SimpleInventory) plugin. This inventory plugin stores all the relevant data in three files. Let's start by checking them:
# hosts file
%highlight_file inventory/hosts.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The hosts file is basically a map where the outermost key is the name of the host, and its value is a `Host` object. You can see the schema of the object by executing:
from nornir.core.inventory import Host import json print(json.dumps(Host.schema(), indent=4))
{ "name": "str", "connection_options": { "$connection_type": { "extras": { "$key": "$value" }, "hostname": "str", "port": "int", "username": "str", "password": "str", "platform": "str" } }, "groups": [ "$group_name" ], "data": { "$key": "$value" }, "hostname": "str", "port": "int", "username": "str", "password": "str", "platform": "str" }
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The `groups_file` follows the same rules as the `hosts_file`.
# groups file
%highlight_file inventory/groups.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Finally, the defaults file has the same schema as the `Host` we described before but without outer keys to denote individual elements. We will see how the data in the groups and defaults file is used later on in this tutorial.
# defaults file
%highlight_file inventory/defaults.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Accessing the inventory

You can access the [inventory](../api/nornir/core/inventory.html#module-nornir.core.inventory) with the `inventory` attribute:
from nornir import InitNornir nr = InitNornir(config_file="config.yaml") print(nr.inventory.hosts)
{'host1.cmh': Host: host1.cmh, 'host2.cmh': Host: host2.cmh, 'spine00.cmh': Host: spine00.cmh, 'spine01.cmh': Host: spine01.cmh, 'leaf00.cmh': Host: leaf00.cmh, 'leaf01.cmh': Host: leaf01.cmh, 'host1.bma': Host: host1.bma, 'host2.bma': Host: host2.bma, 'spine00.bma': Host: spine00.bma, 'spine01.bma': Host: spine01.bma, 'leaf00.bma': Host: leaf00.bma, 'leaf01.bma': Host: leaf01.bma}
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The inventory has two dict-like attributes `hosts` and `groups` that you can use to access the hosts and groups respectively:
nr.inventory.hosts nr.inventory.groups nr.inventory.hosts["leaf01.bma"]
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Hosts and groups are also dict-like objects:
host = nr.inventory.hosts["leaf01.bma"] host.keys() host["site"]
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Inheritance model

Let's see how the inheritance model works by example. Let's start by looking again at the groups file:
# groups file
%highlight_file inventory/groups.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The host `leaf01.bma` belongs to the group `bma` which in turn belongs to the groups `eu` and `global`. The host `spine00.cmh` belongs to the group `cmh` which doesn't belong to any other group. Data resolution works by iterating recursively over all the parent groups and trying to see if that parent group (or any of its parents) contains the data. For instance:
leaf01_bma = nr.inventory.hosts["leaf01.bma"]
leaf01_bma["domain"]  # comes from the group `global`
leaf01_bma["asn"]     # comes from group `eu`
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Values in `defaults` will be returned if neither the host nor the parents have a specific value for it.
leaf01_cmh = nr.inventory.hosts["leaf01.cmh"]
leaf01_cmh["domain"]  # comes from defaults
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
If nornir can't resolve the data you should get a KeyError as usual:
try:
    leaf01_cmh["non_existent"]
except KeyError as e:
    print(f"Couldn't find key: {e}")
Couldn't find key: 'non_existent'
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also try to access data without recursive resolution by using the `data` attribute. For example, if we try to access `leaf01_cmh.data["domain"]` we should get an error as the host itself doesn't have that data:
try:
    leaf01_cmh.data["domain"]
except KeyError as e:
    print(f"Couldn't find key: {e}")
Couldn't find key: 'domain'
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Filtering the inventory

So far we have seen that `nr.inventory.hosts` and `nr.inventory.groups` are dict-like objects that we can use to iterate over all the hosts and groups or to access any particular one directly. Now we are going to see how we can do some fancy filtering that will enable us to operate on groups of hosts based on their properties.

The simplest way of filtering hosts is by `<key, value>` pairs. For instance:
nr.filter(site="cmh").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also filter using multiple `<key, value>` pairs:
nr.filter(site="cmh", role="spine").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Filter is cumulative:
nr.filter(site="cmh").filter(role="spine").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Or:
cmh = nr.filter(site="cmh") cmh.filter(role="spine").inventory.hosts.keys() cmh.filter(role="leaf").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also grab the children of a group:
nr.inventory.children_of_group("eu")
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Advanced filtering

Sometimes you need more fancy filtering. For those cases you have two options:
1. Use a filter function.
2. Use a filter object.

Filter functions

The ``filter_func`` parameter lets you run your own code to filter the hosts. The function signature is as simple as ``my_func(host)`` where host is an object of type [Host](../api/nornir/core/inventory.html#nornir.core.inventory.Host) and it has to return either ``True`` or ``False`` to indicate whether you want the host or not.
def has_long_name(host):
    return len(host.name) == 11

nr.filter(filter_func=has_long_name).inventory.hosts.keys()

# Or a lambda function
nr.filter(filter_func=lambda h: len(h.name) == 9).inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Filter Object

You can also use filter objects to incrementally create complex queries. Let's see how it works by example:
# first you need to import the F object
from nornir.core.filter import F

# hosts in group cmh
cmh = nr.filter(F(groups__contains="cmh"))
print(cmh.inventory.hosts.keys())

# devices running either linux or eos
linux_or_eos = nr.filter(F(platform="linux") | F(platform="eos"))
print(linux_or_eos.inventory.hosts.keys())

# spines in cmh
cmh_and_spine = nr.filter(F(groups__contains="cmh") & F(role="spine"))
print(cmh_and_spine.inventory.hosts.keys())

# cmh devices that are not spines
cmh_and_not_spine = nr.filter(F(groups__contains="cmh") & ~F(role="spine"))
print(cmh_and_not_spine.inventory.hosts.keys())
dict_keys(['host1.cmh', 'host2.cmh', 'leaf00.cmh', 'leaf01.cmh'])
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also access nested data and even check if dicts/lists/strings contains elements. Again, let's see by example:
nested_string_asd = nr.filter(F(nested_data__a_string__contains="asd"))
print(nested_string_asd.inventory.hosts.keys())

a_dict_element_equals = nr.filter(F(nested_data__a_dict__c=3))
print(a_dict_element_equals.inventory.hosts.keys())

a_list_contains = nr.filter(F(nested_data__a_list__contains=2))
print(a_list_contains.inventory.hosts.keys())
dict_keys(['host1.cmh', 'host2.cmh'])
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Importing Libraries
# important packages
import pandas as pd              # data manipulation using dataframes
import numpy as np               # data statistical analysis
import seaborn as sns            # statistical data visualization
import cv2                       # image and video processing library
import matplotlib.pyplot as plt  # data visualisation
%matplotlib inline
pd.set_option('display.max_colwidth', 1000)

import re    # for regular expressions
import nltk  # for text manipulation
nltk.download('punkt')  # Punkt sentence tokenizer

import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)

from google.colab import drive
drive.mount('/content/drive')
!pwd
/content
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Importing Dataset
df = pd.read_csv("/content/drive/My Drive/P1: Twitter Sentiment Analysis/train.txt") df_test = pd.read_csv("/content/drive/My Drive/P1: Twitter Sentiment Analysis/test_samples.txt") df.head() df.shape df_test.head() df.info() df["sentiment"].unique()
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Data Visualization
sns.countplot(df['sentiment'], label = "Count")
positive = df[df["sentiment"] == 'positive']
negative = df[df["sentiment"] == 'negative']
neutral = df[df["sentiment"] == 'neutral']
positive_percentage = (positive.shape[0]/df.shape[0])*100
negative_percentage = (negative.shape[0]/df.shape[0])*100
neutral_percentage = (neutral.shape[0]/df.shape[0])*100
print(f"Positive Tweets = {positive_percentage:.2f}%\nNegative Tweets = {negative_percentage:.2f}%\nNeutral Tweets = {neutral_percentage:.2f}%")
Positive Tweets = 42.23%
Negative Tweets = 15.78%
Neutral Tweets = 41.99%
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Data Cleaning
df_test["sentiment"] = "NA" df_total = pd.concat((df, df_test), ignore_index=True) df_total.head() df_total.shape #### Removing Twitter Handles (@user) def remove_pattern(input_txt, pattern): r = re.findall(pattern, input_txt) for i in r: input_txt = re.sub(i, '', input_txt) return input_txt import re import nltk nltk.download('wordnet') nltk.download('stopwords') from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from nltk.stem.wordnet import WordNetLemmatizer corpus = [] for i in range(df_total.shape[0]): text = np.vectorize(remove_pattern)(df_total['tweet_text'][i], "@[\w]*") # Removing Twitter Handles (@user) text = str(text) text = re.sub('[^a-zA-Z]', ' ', text) # Removing Punctuations, Numbers, and Special Characters text = re.sub(r'\s+', ' ', text).strip() # Remove_trailing_spaces(input_txt) text = text.lower() # Convert to lower case text = text.split() # Split-data lemmatizer = WordNetLemmatizer() #WordNet Lemmatization #ps = PorterStemmer() #porter's stemmer all_stopwords = stopwords.words('english') # List of Stopwords all_stopwords.remove('not') text = [lemmatizer.lemmatize(word) for word in text if not word in set(all_stopwords)] #text = [ps.stem(word) for word in text if not word in set(all_stopwords)] text = ' '.join(text) corpus.append(text) len(corpus) ### to find no. of sentences with words >500 num = 0 for i in range(len(corpus)): if (len(corpus[i]) >= 500): num = num + 1 num
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Bag of Words Model CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer cv = CountVectorizer() X= cv.fit_transform(corpus).toarray() len(X[0])
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
TfidfVectorizer
#from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
#tv = TfidfVectorizer()
#X = tv.fit_transform(corpus).toarray()
len(X[0])
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Data Splitting
# The first 21465 rows are the original labeled training set; the rest are the test samples
X_train = X[:21465]
X_test = X[21465:]
y_train = df.iloc[:, 1].values
X_train.shape
y_train.shape

from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(X_train, y_train)
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Model Prediction
y_pred = classifier.predict(X_test)
print(y_pred)
print(y_pred.shape)

list1 = []
heading = ['tweet_id', 'sentiment']
list1.append(heading)
for i in range(len(y_pred)):
    sub = []
    sub.append(df_test["tweet_id"][i])
    sub.append(y_pred[i])
    list1.append(sub)
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Generate Submission File
import csv

with open('/content/drive/My Drive/P1: Twitter Sentiment Analysis/Models/NB_TV.csv', 'w', newline='') as fp:
    a = csv.writer(fp, delimiter=",")
    data = list1
    a.writerows(data)
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
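An equivalent, and arguably simpler, way to write the submission file is to let pandas handle the CSV formatting. A sketch, assuming `df_test` and `y_pred` from above line up row for row (as in the loop that built `list1`):

import pandas as pd

submission = pd.DataFrame({'tweet_id': df_test['tweet_id'], 'sentiment': y_pred})
submission.to_csv('/content/drive/My Drive/P1: Twitter Sentiment Analysis/Models/NB_TV.csv', index=False)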
Optical Flow test

This notebook experiments with optical flow. The initial code comes from: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_lucas_kanade.html
%matplotlib inline
import cv2
import os
import sys
import numpy as np
import glob
import matplotlib.pyplot as plt

samplehash = '10a278dc5ebd2b93e1572a136578f9dbe84d10157cc6cca178c339d9ca762c52'  # '7fafc640d446cab1872e4376b5c2649f8c67e658b3fc89d2bced3b47c929e608'
files = sorted(glob.glob("../data/train/data/" + samplehash + "/frame*.png"))
images = [cv2.imread(x, 1) for x in files]

frame1 = images[0]
prvs = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
hsv = np.zeros_like(frame1)
hsv[..., 1] = 255

index = 0
while(1):
    index += 1
    if index == 100:
        break
    frame2 = images[index]
    next = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames (Farneback)
    flow = cv2.calcOpticalFlowFarneback(prvs, next, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv[..., 0] = ang * 180 / np.pi / 2
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    # Note: cv2.imshow opens a native window and generally does not render inside a notebook
    cv2.imshow('frame2', bgr)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
    elif k == ord('s'):
        cv2.imwrite('opticalfb.png', frame2)
        cv2.imwrite('opticalhsv.png', bgr)
    prvs = next
cv2.destroyAllWindows()
_____no_output_____
MIT
analysis/optical flow.ipynb
hitennirmal/goucher
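Since `cv2.imshow` generally does not render inside a Jupyter notebook, here is a notebook-friendly sketch that displays one flow visualization inline with matplotlib, assuming the `images` list loaded above:

prvs = cv2.cvtColor(images[0], cv2.COLOR_BGR2GRAY)
nxt = cv2.cvtColor(images[1], cv2.COLOR_BGR2GRAY)
flow = cv2.calcOpticalFlowFarneback(prvs, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros_like(images[0])
hsv[..., 1] = 255
hsv[..., 0] = ang * 180 / np.pi / 2                              # hue encodes flow direction
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # value encodes flow magnitude
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
plt.imshow(rgb)
plt.axis('off')
plt.show()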
Counterintuitive Comparisons

Some comparisons in Python are not very intuitive the first time we see them, but they are used a lot, especially by more experienced programmers. It is good to know some examples and to always try to understand what a given comparison is checking.

Example 1: Suppose you are building a sales control system and need some information to compute the store's result at the end of a month.
faturamento = input('What was the store revenue this month?')
custo = input('What was the store cost this month?')

if faturamento and custo:  # guards against the user leaving either value empty
    lucro = int(faturamento) - int(custo)
    print("The store profit was {} reais".format(lucro))
else:  # if either value is empty, this message is shown
    print('Please fill in the revenue and cost correctly')
What was the store revenue this month?500
What was the store cost this month?
Please fill in the revenue and cost correctly
MIT
if-else/comparacoes-contraintuitivas.ipynb
amarelopiupiu/python
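The comparison `if faturamento and custo` is the counterintuitive part: `input` returns strings, and Python treats an empty string as falsy. A quick demonstration:

print(bool(''))      # False: an empty string is falsy
print(bool('500'))   # True: any non-empty string is truthy
print('' and '500')  # short-circuits to '' (falsy), so the if-branch would be skipped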
Conditional Statements in Python

A conditional statement controls the flow of execution depending on some condition.

Python conditions

Python supports the usual logical conditions from mathematics:

| **Condition** | **Expression** |
|----:|:----:|
| Equal | a == b |
| Not Equal | a != b |
| Less than | a < b |
| Less than or equal to | a <= b |
| Greater than | a > b |
| Greater than or equal to | a >= b |
a = 2
b = 5

# Equal
a == b
# Not equal
a != b
# Less than
a < b
# Less than or equal to
a <= b
# Greater than
a > b
# Greater than or equal to
a >= b
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Python Logical Operators:
- `and`: Returns True if both statements are true
- `or`: Returns True if one of the statements is true
- `not`: Reverses the result. Returns False if the result is true, and True if the result is false
a = 1
b = 2
c = 10

# True and True
a < c and b < c
# True and False
a < c and b > c
# True or False
a < c or b > c
# False or True
a > c or b < c
# True or True
a < c or b < c
# False or False
a > c or b > c
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Using `not` before a boolean expression inverts it:
print(not False)
not(a < c)
not(a > c)
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
If statements
a = 10
b = 20

if b > a:
    print("The condition is True")
    print('All these sentences are executed!')
The condition is True All these sentences are executed!
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Remember Python relies on indentation (whitespace at the beginning of a line) to define scope in the code. The same sentence, without indentation, raises an error.
if b > a:  # This will raise an error: the body lines below are not indented
print("The condition is True")
print('All these sentences are executed')
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
When the condition is False, the sentence is not executed.
a = 10
b = 20

if b < a:
    print("The condition is False")
    print('These sentences are NOT executed!')
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
The else keyword catches anything which isn't caught by the preceding conditions.
a = 5
b = 10

if b < a:
    print("The condition is True.")
else:
    print("The condition is False.")
The condition is False.
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
The elif keyword is Python's way of saying "if the previous conditions were not true, then try this condition".
# using elif
a = 3
b = 3
if b > a:
    print("b is greater than a")
elif a == b:
    print("a and b are equal")

# using else
a = 6
b = 4
if b > a:
    print("b is greater than a")
elif a == b:
    print("a and b are equal")
else:
    print("a is greater than b")
a is greater than b
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
An arbitrary number of `elif` clauses can be specified. The `else` clause is optional. If it is present, there can be only one, and it must be specified last.
name = 'Anna'
if name == 'Maria':
    print('Hello Maria!')
elif name == 'Sarah':
    print('Hello Sarah!')
elif name == 'Anna':
    print('Hello Anna!')
elif name == 'Sofia':
    print('Hello Sofia!')
else:
    print("I do not know who you are!")

name = 'Julia'
if name == 'Maria':
    print('Hello Maria!')
elif name == 'Sarah':
    print('Hello Sarah!')
elif name == 'Anna':
    print('Hello Anna!')
elif name == 'Sofia':
    print('Hello Sofia!')
else:
    print("I do not know who you are!")

# Processing user input
username = input('Enter username:')
print('Your name is', username)
age = input('Enter your age')
if int(age) < 18:
    print('You are a child!')
else:
    print('You are an adult!')

# Nested if
x = 14
if x > 10:
    print('Above 10,')
    if x > 20:
        print('and also above 20.')
    else:
        print('but not above 20.')

x = 35
if x > 10:
    print('Above 10,')
    if x > 20:
        print('and also above 20.')
    else:
        print('but not above 20.')
Above 10, and also above 20.
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
The `pass` Statement: if statements cannot be empty, but if you for some reason have an if statement with no content, put in the `pass` statement to avoid getting an error.
a = 33
b = 200

if b > a:
    pass
else:
    print('b <= a')
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Numbers In Pi [link](https://www.algoexpert.io/questions/Numbers%20In%20Pi)

My Solution
def numbersInPi(pi, numbers):
    # Write your code here.
    # Brute force: recursively try every favorite-number prefix
    d1 = {number: True for number in numbers}
    minSpaces = [float('inf')]
    numbersInPiHelper(pi, d1, 0, minSpaces, 0)
    return minSpaces[0] if minSpaces[0] != float('inf') else -1

def numbersInPiHelper(pi, d1, startIdx, minSpaces, numberOfSpaces):
    for endIdx in range(startIdx, len(pi)):
        cur = pi[startIdx:endIdx + 1]
        if cur in d1:
            if endIdx == len(pi) - 1 and numberOfSpaces < minSpaces[0]:
                minSpaces[0] = numberOfSpaces
                continue
            numbersInPiHelper(pi, d1, endIdx + 1, minSpaces, numberOfSpaces + 1)

def numbersInPi(pi, numbers):
    # Write your code here.
    # DP: O(n^3 + m) time | O(n + m) space
    # n - the number of digits in pi; m - length of the numbers list
    d1 = {number: True for number in numbers}
    opt = [-1 for i in range(len(pi))]
    for i in range(len(pi)):
        if pi[:i + 1] in d1:
            opt[i] = 0
        else:
            minValue = float('inf')
            for j in range(i):
                if opt[j] != -1 and pi[j + 1:i + 1] in d1:
                    minValue = min(opt[j], minValue)
            if minValue != float('inf'):
                opt[i] = minValue + 1
    return opt[-1]
_____no_output_____
MIT
algoExpert/numbers_in_pi/solution.ipynb
maple1eaf/learning_algorithm
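A quick sanity check of the solutions above, with illustrative inputs (the digit string and favorite numbers here are made up for demonstration):

pi = "3141592"
numbers = ["3141", "592", "31", "41592"]
print(numbersInPi(pi, numbers))  # 1 -> "3141 592" needs a single space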
Expert Solution
# O(n^3 + m) time | O(n + m) space, where n is the number of digits in Pi
# and m is the number of favorite numbers
# Recursive (memoized) solution
def numbersInPi(pi, numbers):
    numbersTable = {number: True for number in numbers}
    minSpaces = getMinSpaces(pi, numbersTable, {}, 0)
    return -1 if minSpaces == float("inf") else minSpaces

def getMinSpaces(pi, numbersTable, cache, idx):
    if idx == len(pi):
        return -1
    if idx in cache:
        return cache[idx]
    minSpaces = float("inf")
    for i in range(idx, len(pi)):
        prefix = pi[idx: i + 1]
        if prefix in numbersTable:
            minSpacesInSuffix = getMinSpaces(pi, numbersTable, cache, i + 1)
            minSpaces = min(minSpaces, minSpacesInSuffix + 1)
    cache[idx] = minSpaces
    return cache[idx]

# O(n^3 + m) time | O(n + m) space, where n is the number of digits in Pi
# and m is the number of favorite numbers
# Bottom-up variant: fill the cache from the end of the string
def numbersInPi(pi, numbers):
    numbersTable = {number: True for number in numbers}
    cache = {}
    for i in reversed(range(len(pi))):
        getMinSpaces(pi, numbersTable, cache, i)
    return -1 if cache[0] == float("inf") else cache[0]

def getMinSpaces(pi, numbersTable, cache, idx):
    if idx == len(pi):
        return -1
    if idx in cache:
        return cache[idx]
    minSpaces = float("inf")
    for i in range(idx, len(pi)):
        prefix = pi[idx : i + 1]
        if prefix in numbersTable:
            minSpacesInSuffix = getMinSpaces(pi, numbersTable, cache, i + 1)
            minSpaces = min(minSpaces, minSpacesInSuffix + 1)  # the original was missing the "+ 1" here
    cache[idx] = minSpaces
    return cache[idx]
_____no_output_____
MIT
algoExpert/numbers_in_pi/solution.ipynb
maple1eaf/learning_algorithm
Introduction to the Harmonic Oscillator

*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html

This week we are going to begin studying molecular dynamics, which uses classical mechanics to study molecular systems. Our "hydrogen atom" in this section will be the 1D harmonic oscillator.

![1D Harmonic Oscillator](ho.png)

The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:
$$F=-kx$$
The potential energy of this system is
$$V = {1 \over 2}k{x^2}$$
These are sometimes rewritten as
$$ F=- \omega_0^2 m x, \text{ } V(x) = {1 \over 2} \omega_0^2 m {x^2}$$
where $\omega_0 = \sqrt {{k \over m}} $.

In classical mechanics, our goal is to determine the equations of motion, $x(t),y(t)$, that describe our system. In this notebook we will use sympy to solve a second order, ordinary differential equation.

1. Solving differential equations with sympy

Solving differential equations can be tough, and there is not always a set plan on how to proceed. Luckily for us, the harmonic oscillator is the classic second order differential equation. Consider the following second order differential equation
$$ay(t)''+by(t)'=c$$
where $y(t)'' = {{{d^2}y} \over {dt^2}}$, and $y(t)' = {{{d}y} \over {dt}}$. We can rewrite this as a homogeneous linear differential equation
$$ay(t)''+by(t)'-c=0$$
The goal here is to find $y(t)$, similar to our classical mechanics problems. Let's use sympy to solve this equation.

Second order ordinary differential equation

First we import the sympy library
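For reference, this equation can also be solved by hand: the homogeneous part $ay''+by'=0$ has characteristic roots $r=0$ and $r=-b/a$, and a particular solution linear in $t$ works, so the general solution sympy should reproduce below is
$$y(t) = C_1 + C_2 e^{-bt/a} + \frac{c}{b}t$$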
import sympy as sym
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Next we initialize pretty printing
sym.init_printing()
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Next we will set our symbols
t,a,b,c=sym.symbols("t,a,b,c")
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Now for something new. We can define functions using `sym.Function("f")`
y=sym.Function("y") y(t)
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Now, if I want to define a first or second derivative, I can use `sym.diff`
sym.diff(y(t),(t,1)),sym.diff(y(t),(t,2))
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
My differential equation can be written as follows
dfeq = a*sym.diff(y(t),(t,2)) + b*sym.diff(y(t),(t,1)) - c
dfeq
sol = sym.dsolve(dfeq)
sol
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
The two constants $C_1$ and $C_2$ can be determined by setting boundary conditions. First, we can set the condition $y(t=0)=y_0$. The next initial condition we will set is $y'(t=0)=v_0$. To set up the equality we want to solve, we are using `sym.Eq`. This function sets up an equality between a lhs and rhs of an equation
# sym.Eq example
alpha, beta = sym.symbols("alpha,beta")
sym.Eq(alpha + 2, beta)
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Back to the actual problem
y0, v0 = sym.symbols("y_0,v_0")
ics = [sym.Eq(sol.args[1].subs(t, 0), y0),
       sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
We can use this result to first solve for $C_2$ and then solve for $C_1$.Or we can use sympy to solve this for us.
solved_ics=sym.solve(ics) solved_ics
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Substitute the result back into $y(t)$
full_sol = sol.subs(solved_ics[0]) full_sol
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
We can plot this result too. Assume that $a,b,c=1$. We will use two sample cases:
* case 1: initial position and initial velocity are both zero
* case 2: initial position is zero and initial velocity is nonzero
# Print plots
%matplotlib inline
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Initial velocity set to zero
case1 = sym.simplify(full_sol.subs({y0:0, v0:0, a:1, b:1, c:1}))
case1
sym.plot(case1.rhs)
sym.plot(case1.rhs, (t, -2, 2))
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Initial velocity set to one
case2 = sym.simplify(full_sol.subs({y0:0, v0:1, a:1, b:1, c:1}))
case2
sym.plot(case2.rhs, (t, -2, 2))  # plot the right-hand side (the lhs is just y(t))
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Calculate the phase space

As we will see in lecture, the states of our classical systems are defined as points in phase space, a hyperspace defined by ${{\bf{r}}^N},{{\bf{p}}^N}$. We will convert our sympy expression into a numerical function so that we can plot the path of $y(t)$ in phase space $y,y'$.
case1

# Import numpy library
import numpy as np

# Make numerical functions out of symbolic expressions
yfunc = sym.lambdify(t, case1.rhs, 'numpy')
vfunc = sym.lambdify(t, case1.rhs.diff(t), 'numpy')

# Make list of numbers
tlst = np.linspace(-2, 2, 100)

# Import pyplot
import matplotlib
import matplotlib.pyplot as plt

# Make plot
plt.plot(yfunc(tlst), vfunc(tlst))
plt.xlabel('$y$')
plt.ylabel("$y'$")
plt.show()
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Exercise 1.1 Change the initial starting conditions and see how that changes the plots. Make three different plots with different starting conditions
# Case 3: initial position 1, initial velocity 10, and a, b, c all set to 5
case3 = sym.simplify(full_sol.subs({y0:1, v0:10, a:5, b:5, c:5}))
case3
sym.plot(case3.rhs)
sym.plot(case3.rhs, (t, -2, 2))

# Case 4: y0=5, v0=2, a=3, b=4, c=5
case4 = sym.simplify(full_sol.subs({y0:5, v0:2, a:3, b:4, c:5}))
case4
sym.plot(case4.rhs)
sym.plot(case4.rhs, (t, -2, 2))

# Case 5: y0=10, v0=0, a=0, b=1, c=1
case5 = sym.simplify(full_sol.subs({y0:10, v0:0, a:0, b:1, c:1}))
case5
sym.plot(case5.rhs)
sym.plot(case5.rhs, (t, -2, 2))
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
2. Harmonic oscillator

Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation
$$ F = m a $$
$$ F= - \omega_0^2 m x $$
$$ a = - \omega_0^2 x $$
$$ x(t)'' = - \omega_0^2 x $$
The final expression can be rearranged into a second order homogeneous differential equation, and can be solved using the methods we used above.

Your goal is to determine and plot the equations of motion of a 1D harmonic oscillator.

Exercise 2.1
1. Use the methodology above to determine the equations of motion $x(t), v(t)$ for a harmonic oscillator
2. Solve for any constants by using the following initial conditions: $x(0)=x_0, v(0)=v_0$
3. Show expressions for and plot the equations of motion for the following cases:
   1. $x(0)=0, v(0)=0$
   2. $x(0)=0, v(0)>0$
   3. $x(0)>0, v(0)=0$
   4. $x(0)<0, v(0)=0$
4. Plot the phase-space diagram for the harmonic oscillator
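As a check on what the sympy derivation should produce, the well-known closed-form solution of $x'' = -\omega_0^2 x$ with these initial conditions is
$$x(t) = x_0 \cos(\omega_0 t) + \frac{v_0}{\omega_0}\sin(\omega_0 t), \qquad v(t) = -x_0\,\omega_0 \sin(\omega_0 t) + v_0 \cos(\omega_0 t)$$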
# Your code here
m, t, omega = sym.symbols("m,t,omega")
x = sym.Function("x")
x(t)
sym.diff(x(t), (t, 1)), sym.diff(x(t), (t, 2))

# x'' + omega^2 x = 0
dfeq1 = sym.diff(x(t), (t, 2)) + omega**2 * x(t)
dfeq1
sol1 = sym.dsolve(dfeq1)
sol1

x0, v0 = sym.symbols("x_0,v_0")
ics1 = [sym.Eq(sol1.args[1].subs(t, 0), x0),
        sym.Eq(sol1.args[1].diff(t).subs(t, 0), v0)]
ics1
solved_ics1 = sym.solve(ics1)
solved_ics1
full_sol1 = sol1.subs(solved_ics1[0])  # substitute into sol1, not the earlier `sol` from part 1
full_sol1

# Print plots
%matplotlib inline

# Case A: x(0)=0, v(0)=0
case_100 = sym.simplify(full_sol1.subs({x0: 0, v0: 0, omega: 1}))
case_100
sym.plot(case_100.rhs)
sym.plot(case_100.rhs, (t, -2, 2))

# Case B: x(0)=0, v(0)>0
case_101 = sym.simplify(full_sol1.subs({x0: 0, v0: 5, omega: 1}))
case_101
sym.plot(case_101.rhs)
sym.plot(case_101.rhs, (t, -2, 2))

# Case C: x(0)>0, v(0)=0
case_102 = sym.simplify(full_sol1.subs({x0: 5, v0: 0, omega: 1}))
case_102
sym.plot(case_102.rhs)
sym.plot(case_102.rhs, (t, -2, 2))

# Case D: x(0)<0, v(0)=0
case_103 = sym.simplify(full_sol1.subs({x0: -5, v0: 0, omega: 1}))
case_103
sym.plot(case_103.rhs)
sym.plot(case_103.rhs, (t, -2, 2))

case_100

# Phase-space plots
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

# Make numerical functions out of symbolic expressions
xfunc = sym.lambdify(t, case_100.rhs, 'numpy')
vfunc = sym.lambdify(t, case_100.rhs.diff(t), 'numpy')
tlst = np.linspace(-2, 2, 100)
plt.plot(xfunc(tlst), vfunc(tlst))
plt.xlabel('$x$')
plt.ylabel("$x'$")
plt.show()

xfunc = sym.lambdify(t, case_101.rhs, 'numpy')
vfunc = sym.lambdify(t, case_101.rhs.diff(t), 'numpy')
tlst = np.linspace(-5, 5, 100)
plt.plot(xfunc(tlst), vfunc(tlst))
plt.xlabel('$x$')
plt.ylabel("$x'$")
plt.show()

xfunc = sym.lambdify(t, case_102.rhs, 'numpy')
vfunc = sym.lambdify(t, case_102.rhs.diff(t), 'numpy')
tlst = np.linspace(-5, 5, 100)
plt.plot(xfunc(tlst), vfunc(tlst))
plt.xlabel('$x$')
plt.ylabel("$x'$")
plt.show()
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
_Python help: Running the notebook the first time, make sure to run all cells to be able to make changes in the notebook. Hit Shift+Enter to run the cell or click on the top menu: Kernel > Restart & Run All > Restart and Run All Cells to rerun the whole notebook. If you make any changes in a cell, rerun that cell._

Measured data plotting

In this notebook you can load radial velocity measurements of multiple galaxies from a prepared Python library. Plot them all in a single graph so you can compare them with each other. Plotting these measurements is the first step of producing a rotation curve for a galaxy and an indication of how much mass the galaxy contains.

Setting Newton's law of gravitation and the circular motion equation equal to each other, you can derive the equation for circular velocity in terms of enclosed mass and radius:
\begin{equation}v(r) = \sqrt{\frac{G M_{enc}(r)}{r}}\end{equation}
where:
$G$ = gravitational constant
$M_{enc}(r)$ = enclosed mass as a function of radius
$r$ = radius or distance from the center of the galaxy

Knowing the radial velocity of stars at different radii, you can estimate the mass that is enclosed in that radius. By measuring the brightnesses (photometric profile) of stars and the amount of gas, you can approximate the mass of "visible" matter. Compare it with the actual mass calculated from the radial velocities to get an idea of how much mass is "missing". The result is a ratio (mass-to-light ratio or M/L) that has been useful to describe the amount of dark matter in galaxies.

Vocabulary

__Rotation curve__: velocity of stars and gas at distances from the center of the galaxy, plotted as a curve
__Radial velocity__: how fast stars and gas are moving at different distances from the center of the galaxy
__NGC__: New General Catalogue of galaxies
__UGC__: Uppsala General Catalogue of galaxies
__kpc__: kiloparsec: 1 kpc = 3262 light years = 3.086e+19 meters = 1.917e+16 miles

Load data of multiple galaxies

Load the radii, velocities, and errors in velocities of multiple galaxies from a Python library.
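As a rough numerical illustration of the circular-velocity equation above (separate from the measured-data workflow; the enclosed mass and radius here are purely illustrative):

import numpy as np

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
M_enc = 1e41                # assumed enclosed mass in kg (roughly 5e10 solar masses)
r = 10 * 3.086e19           # 10 kpc expressed in meters
v = np.sqrt(G * M_enc / r)  # circular velocity in m/s
print(v / 1000, 'km/s')     # ~147 km/s, a plausible galactic rotation speed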
# Assumes an earlier (not shown) setup cell imported numpy as np, matplotlib.pyplot as plt,
# time (with startTime = time.time()), and the galaxy-data library `lg`.

# NGC 5533
r_NGC5533, v_NGC5533, v_err_NGC5533 = lg.NGC5533['m_radii'],lg.NGC5533['m_velocities'],lg.NGC5533['m_v_errors']
# NGC 891
r_NGC0891, v_NGC0891, v_err_NGC0891 = lg.NGC0891['m_radii'],lg.NGC0891['m_velocities'],lg.NGC0891['m_v_errors']
# NGC 7814
r_NGC7814, v_NGC7814, v_err_NGC7814 = lg.NGC7814['m_radii'],lg.NGC7814['m_velocities'],lg.NGC7814['m_v_errors']
# NGC 5005
r_NGC5005, v_NGC5005, v_err_NGC5005 = lg.NGC5005['m_radii'],lg.NGC5005['m_velocities'],lg.NGC5005['m_v_errors']
# NGC 3198
r_NGC3198, v_NGC3198, v_err_NGC3198 = lg.NGC3198['m_radii'],lg.NGC3198['m_velocities'],lg.NGC3198['m_v_errors']
# UGC 89
#r_UGC89, v_UGC89, v_err_UGC89 = lg.UGC89['m_radii'],lg.UGC89['m_velocities'],lg.UGC89['m_v_errors']
# UGC 477
r_UGC477, v_UGC477, v_err_UGC477 = lg.UGC477['m_radii'],lg.UGC477['m_velocities'],lg.UGC477['m_v_errors']
# UGC 1281
r_UGC1281, v_UGC1281, v_err_UGC1281 = lg.UGC1281['m_radii'],lg.UGC1281['m_velocities'],lg.UGC1281['m_v_errors']
# UGC 1437
r_UGC1437, v_UGC1437, v_err_UGC1437 = lg.UGC1437['m_radii'],lg.UGC1437['m_velocities'],lg.UGC1437['m_v_errors']
# UGC 2953
r_UGC2953, v_UGC2953, v_err_UGC2953 = lg.UGC2953['m_radii'],lg.UGC2953['m_velocities'],lg.UGC2953['m_v_errors']
# UGC 4325
r_UGC4325, v_UGC4325, v_err_UGC4325 = lg.UGC4325['m_radii'],lg.UGC4325['m_velocities'],lg.UGC4325['m_v_errors']
# UGC 5253
r_UGC5253, v_UGC5253, v_err_UGC5253 = lg.UGC5253['m_radii'],lg.UGC5253['m_velocities'],lg.UGC5253['m_v_errors']
# UGC 6787
r_UGC6787, v_UGC6787, v_err_UGC6787 = lg.UGC6787['m_radii'],lg.UGC6787['m_velocities'],lg.UGC6787['m_v_errors']
# UGC 10075
r_UGC10075, v_UGC10075, v_err_UGC10075 = lg.UGC10075['m_radii'],lg.UGC10075['m_velocities'],lg.UGC10075['m_v_errors']
_____no_output_____
MIT
binder/DM_workshop_082721/Interactive_Measured_Data_Plotting.ipynb
villano-lab/galactic-spin
Plot measured data with errorbars

Measured data points of 13 galaxies are plotted below.

1. __Change the limits of the x-axis to zoom in and out of the graph.__ _Python help: change the limits of the x-axis by modifying the two numbers (left and right) of the line: plt.xlim, then rerun the notebook or the cell._
2. __Finding supermassive black holes:__ A high velocity at a radius close to zero (close to the center of the galaxy) indicates that a supermassive black hole is present at the center of that galaxy, changing the velocities of the close-by stars only. The reason the black hole does not have that much effect on the motion of stars at larger distances is because it acts as a point mass with negligible radius and the velocity drops off as $1 / \sqrt r$. __Can you find the galaxies with a possible central supermassive black hole and hide the curves of the rest of the galaxies?__ _Python help: Turn off the display of all lines and go through them one by one. You can "turn off" the display of each galaxy by typing a `#` sign in front of the line "plt.errorbar"._ _Sneak peek: In the Interactive_\__Rotation_\__Curve_\__Plotting notebook you will be able to find out which stars are affected by the central supermassive black hole, or in other words, what we mean by "close-by" stars._
3. __What do you notice about the size of the error bars at radii close to the center and far from the center? What could be the reason?__
# Define radius for plotting
r = np.linspace(0, 100, 100)

# Plot
plt.figure(figsize=(10.0, 7.0))                               # size of the plot
plt.title('Measured data of multiple galaxies', fontsize=14)  # giving the plot a title
plt.xlabel('Radius (kpc)', fontsize=12)                       # labeling the x-axis
plt.ylabel('Velocity (km/s)', fontsize=12)                    # labeling the y-axis
plt.xlim(0, 20)                                               # limits of the x-axis (default from 0 to 20 kpc)
plt.ylim(0, 420)                                              # limits of the y-axis (default from 0 to 420 km/s)

# Plotting the measured data
plt.errorbar(r_NGC5533,v_NGC5533,yerr=v_err_NGC5533, label='NGC 5533', marker='o', markersize=6, linestyle='none', color='royalblue')
plt.errorbar(r_NGC0891,v_NGC0891,yerr=v_err_NGC0891, label='NGC 891', marker='o', markersize=6, linestyle='none', color='seagreen')
plt.errorbar(r_NGC7814,v_NGC7814,yerr=v_err_NGC7814, label='NGC 7814', marker='o', markersize=6, linestyle='none', color='m')
plt.errorbar(r_NGC5005,v_NGC5005,yerr=v_err_NGC5005, label='NGC 5005', marker='o', markersize=6, linestyle='none', color='red')
plt.errorbar(r_NGC3198,v_NGC3198,yerr=v_err_NGC3198, label='NGC 3198', marker='o', markersize=6, linestyle='none', color='gold')
plt.errorbar(r_UGC477,v_UGC477,yerr=v_err_UGC477, label='UGC 477', marker='o', markersize=6, linestyle='none', color='lightpink')
plt.errorbar(r_UGC1281,v_UGC1281,yerr=v_err_UGC1281, label='UGC 1281', marker='o', markersize=6, linestyle='none', color='aquamarine')
plt.errorbar(r_UGC1437,v_UGC1437,yerr=v_err_UGC1437, label='UGC 1437', marker='o', markersize=6, linestyle='none', color='peru')
plt.errorbar(r_UGC2953,v_UGC2953,yerr=v_err_UGC2953, label='UGC 2953', marker='o', markersize=6, linestyle='none', color='lightslategrey')
plt.errorbar(r_UGC4325,v_UGC4325,yerr=v_err_UGC4325, label='UGC 4325', marker='o', markersize=6, linestyle='none', color='darkorange')
plt.errorbar(r_UGC5253,v_UGC5253,yerr=v_err_UGC5253, label='UGC 5253', marker='o', markersize=6, linestyle='none', color='maroon')
plt.errorbar(r_UGC6787,v_UGC6787,yerr=v_err_UGC6787, label='UGC 6787', marker='o', markersize=6, linestyle='none', color='midnightblue')
plt.errorbar(r_UGC10075,v_UGC10075,yerr=v_err_UGC10075, label='UGC 10075', marker='o', markersize=6, linestyle='none', color='y')

plt.legend(loc='upper right')
plt.show()

# Time
executionTime = (time.time() - startTime)
ttt = executionTime / 60
print(f'Execution time: {ttt:.2f} minutes')
Execution time: 8.77 minutes
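To see why a central point mass only changes the velocities of nearby stars, here is a minimal sketch (not part of the original notebook) that plots the Keplerian curve $v(r) = \sqrt{GM/r}$; the black-hole mass and variable names are illustrative assumptions, not the notebook's values.

```python
import numpy as np
import matplotlib.pyplot as plt

G = 4.30e-6    # gravitational constant in kpc * (km/s)^2 / Msun
M_bh = 2.7e9   # assumed black-hole mass in solar masses (illustrative value)

r = np.linspace(0.01, 20, 500)   # radius in kpc (avoid r = 0, where v diverges)
v = np.sqrt(G * M_bh / r)        # point-mass (Keplerian) velocity, falls off as 1/sqrt(r)

plt.plot(r, v, label=r'point mass: $v \propto 1/\sqrt{r}$')
plt.xlabel('Radius (kpc)')
plt.ylabel('Velocity (km/s)')
plt.legend()
plt.show()
```

The curve is large only in the innermost fraction of a kiloparsec, which is why only the first few measured points of a galaxy can reveal a central supermassive black hole.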
MIT
binder/DM_workshop_082721/Interactive_Measured_Data_Plotting.ipynb
villano-lab/galactic-spin
Import packages
# import sklearn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Define two functions for visualization
from matplotlib.colors import ListedColormap

def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True):
    x1s = np.linspace(axes[0], axes[1], 100)
    x2s = np.linspace(axes[2], axes[3], 100)
    x1, x2 = np.meshgrid(x1s, x2s)
    X_new = np.c_[x1.ravel(), x2.ravel()]
    y_pred = clf.predict(X_new).reshape(x1.shape)
    custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
    plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
    if not iris:
        custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
        plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
    if plot_training:
        plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
        plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
        plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica")
    plt.axis(axes)
    if iris:
        plt.xlabel("Petal length", fontsize=14)
        plt.ylabel("Petal width", fontsize=14)
    else:
        plt.xlabel(r"$x_1$", fontsize=18)
        plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
    if legend:
        plt.legend(loc="lower right", fontsize=14)

def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"):
    x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1)
    y_pred = tree_reg.predict(x1)
    plt.axis(axes)
    plt.xlabel("$x_1$", fontsize=18)
    if ylabel:
        plt.ylabel(ylabel, fontsize=18, rotation=0)
    plt.plot(X, y, "b.")
    plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$")
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Load the Iris data from sklearn. Use petal length and width as the training input.
from sklearn.datasets import load_iris

iris = load_iris()
iris.feature_names
X = iris.data[:, 2:]
y = iris.target
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now fit a decision tree classifier. Set max depth at 2.
from sklearn.tree import DecisionTreeClassifier

clf_tree = DecisionTreeClassifier(max_depth=2)
clf_tree.fit(X, y)
_____no_output_____
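To inspect the split rules the fitted tree learned, one option (an addition, not in the original notebook) is scikit-learn's export_text helper, available in scikit-learn 0.21 and later; the feature-name slice matches the petal columns selected above.

```python
from sklearn.tree import export_text

# Print the learned decision rules as indented text
print(export_text(clf_tree, feature_names=iris.feature_names[2:]))
```

With max_depth=2 this shows a first split on petal length and a second on petal width, matching the decision boundaries plotted in the next cell.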
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now visualize the model with the plot_decision_boundary function.
plt.figure(figsize=(10, 6))
plot_decision_boundary(clf_tree, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Predict the probability of each class for [5, 1.5]
clf_tree.predict_proba([[5, 1.5]])
_____no_output_____
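These probabilities are simply the class fractions of training instances in the leaf node that [5, 1.5] reaches. A quick check (a sketch, not from the original notebook): look up the leaf with apply and normalize the per-class values stored in the fitted tree — normalizing makes the check work whether your scikit-learn version stores counts or fractions in tree_.value.

```python
leaf = clf_tree.apply([[5, 1.5]])[0]     # index of the leaf node this instance falls into
values = clf_tree.tree_.value[leaf][0]   # per-class training values recorded in that leaf
print(values / values.sum())             # should match predict_proba([[5, 1.5]])
```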
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Run the next cell to generate 100 moon data points with noise=0.25 and random_state=53.
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=100, noise=0.25, random_state=53)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now fit two decision tree models: one with no restriction, and another with min_samples_leaf=4.
clf_tree = DecisionTreeClassifier()
clf_tree_4 = DecisionTreeClassifier(min_samples_leaf=4)
clf_tree.fit(X, y)
clf_tree_4.fit(X, y)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now use the plot_decision_boundary function to visualize and compare these two models, and check for overfitting.
limit = [X[:, 0].min(), X[:, 0].max(), X[:, 1].min(), X[:, 1].max()]
plt.figure(figsize=(12, 6))
plt.subplot(121)
plot_decision_boundary(clf_tree, X, y, axes=limit, iris=False)
plt.title('no restriction')
plt.subplot(122)
plot_decision_boundary(clf_tree_4, X, y, axes=limit, iris=False)
plt.title('min_samples_leaf=4')
_____no_output_____
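To put numbers on the overfitting the plots suggest, a hedged sketch: refit both trees on a train/test split of the moons data and compare accuracies (the split parameters below are assumptions, not from the original notebook).

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for name, clf in [('no restriction', DecisionTreeClassifier(random_state=42)),
                  ('min_samples_leaf=4', DecisionTreeClassifier(min_samples_leaf=4, random_state=42))]:
    clf.fit(X_train, y_train)
    print(name,
          '- train:', accuracy_score(y_train, clf.predict(X_train)),
          '- test:', accuracy_score(y_test, clf.predict(X_test)))
```

The unrestricted tree typically scores 100% on the training set but noticeably lower on the test set, while the regularized tree's two scores are much closer together.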
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Regression

Run the next cell to generate synthetic data.
np.random.seed(42)
m = 200
X = np.random.rand(m, 1)
y = 4 * (X - 0.5) ** 2
y = y + np.random.randn(m, 1) / 10
y = y.ravel()
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Fit four regression trees. The first three have max_depth of 2, 3, and 5; the last one has no restriction.
from sklearn.tree import DecisionTreeRegressor

reg_tree_2 = DecisionTreeRegressor(max_depth=2)
reg_tree_3 = DecisionTreeRegressor(max_depth=3)
reg_tree_5 = DecisionTreeRegressor(max_depth=5)
reg_tree_none = DecisionTreeRegressor()
reg_tree_2.fit(X, y)
reg_tree_3.fit(X, y)
reg_tree_5.fit(X, y)
reg_tree_none.fit(X, y)
_____no_output_____
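Each extra level can double the number of leaves, and each leaf is one constant prediction segment. A small sketch (using get_depth and get_n_leaves, available in scikit-learn 0.21 and later) to compare the fitted trees' complexity:

```python
for reg in (reg_tree_2, reg_tree_3, reg_tree_5, reg_tree_none):
    # a tree of depth d has at most 2**d leaves
    print('depth', reg.get_depth(), '->', reg.get_n_leaves(), 'leaves')
```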
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now visualize these four trees with the plot_regression_predictions function.
limit = [X.min(), X.max(), y.min(), y.max()]
plt.figure(figsize=(10, 10))
plt.subplot(221)
plot_regression_predictions(reg_tree_2, X, y, axes=limit)
plt.subplot(222)
plot_regression_predictions(reg_tree_3, X, y, axes=limit)
plt.subplot(223)
plot_regression_predictions(reg_tree_5, X, y, axes=limit)
plt.subplot(224)
plot_regression_predictions(reg_tree_none, X, y, axes=limit)
_____no_output_____
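The unrestricted tree's steps trace the noise almost exactly. A sketch confirming this with the training error (training error only — a fair comparison would use a held-out set, as in the classification example above):

```python
from sklearn.metrics import mean_squared_error

for name, reg in [('max_depth=2', reg_tree_2), ('max_depth=3', reg_tree_3),
                  ('max_depth=5', reg_tree_5), ('no restriction', reg_tree_none)]:
    print(name, '- training MSE:', mean_squared_error(y, reg.predict(X)))
```

The unrestricted tree's training MSE is essentially zero, a hallmark of overfitting.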
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Read the actual Bird Data CSV

Convert Cornell Bird Migration CSV files to CZML. Trying out the CZML Python package, installed with pip until we can build our own conda package.
import glob

import pandas as pd
import datetime as dt
import numpy as np

# parser to convert integer yeardays to datetimes in 2015
def parse(day):
    date = dt.datetime(2015,1,1,0,0) + dt.timedelta(days=(day.astype(np.int32)-1))
    return date

def csv_to_position(file='Acadian_Flycatcher.csv'):
    df = pd.read_csv(file, parse_dates=True, date_parser=parse, index_col=0, na_values='NA')
    df.dropna(how="all", inplace=True)
    df['z'] = 0.0
    df['str'] = df.index.strftime('%Y-%m-%d')
    df2 = df.iloc[:, [3, 0, 1, 2]]   # .ix is deprecated; iloc gives the same positional selection
    a = df2.values.tolist()
    # flatten into the CZML cartographicDegrees layout: [time, lon, lat, height, time, ...]
    return {'cartographicDegrees': [val for sublist in a for val in sublist]}

csv_files = glob.glob('*.csv')
for csv_file in csv_files:
    bird = csv_file.split('.')[0]
    packet = czml.CZMLPacket(id=bird, availability="2015-01-01/2015-12-31")
    packet.point = point                 # `point` is a shared point style defined in an earlier cell
    pos = csv_to_position(file=csv_file)
    packet.position = pos
    desc = czml.Description(string=bird)
    packet.description = desc
    doc.packets.append(packet)           # `doc` is the CZML document created in an earlier cell

# inspect the last packet
packet.dumps()

# Write the CZML document to a file
filename = "all_birds.czml"
doc.write(filename)
_____no_output_____
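The cell above references doc and point, which are presumably created in an earlier cell of the notebook. A minimal sketch of what that setup might look like with the czml package — the class names follow the package's documented API, but the styling values and the exact earlier-cell code are assumptions:

```python
from czml import czml

# Document object that the loop above appends packets to (assumed earlier-cell setup)
doc = czml.CZML()

# A shared point style reused for every bird track (color/size values are illustrative)
point = czml.Point(show=True, pixelSize=8, color={'rgba': [0, 255, 127, 128]})
```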
CC0-1.0
birds/bird_csv_to_czml.ipynb
rsignell-usgs/CZML
CoreNLP running on a Python server
!echo "Downloading CoreNLP..." !wget "http://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip" -O corenlp.zip !unzip corenlp.zip !mv ./stanford-corenlp-full-2018-10-05 ./corenlp # Set the CORENLP_HOME environment variable to point to the installation location import os os.environ["CORENLP_HOME"] = "./corenlp" # Import client module from stanfordnlp.server import CoreNLPClient # Construct a CoreNLPClient with some basic annotators, a memory allocation of 4GB, and port number 9001 client = CoreNLPClient(annotators=['tokenize','ssplit', 'pos', 'lemma', 'ner'], memory='4G', endpoint='http://localhost:9001') print(client) # Start the background server and wait for some time # Note that in practice this is totally optional, as by default the server will be started when the first annotation is performed client.start() import time; time.sleep(10)
<stanfordnlp.server.client.CoreNLPClient object at 0x7f6ed0cf68d0>
Starting server with command: java -Xmx4G -cp ./corenlp/* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9001 -timeout 60000 -threads 5 -maxCharLength 100000 -quiet True -serverProperties corenlp_server-5f4e4d7044944e52.props -preload tokenize,ssplit,pos,lemma,ner
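With the server running, a short usage sketch (the sample sentence is an assumption, not from the original notebook): annotate a string and read token-level annotations from the protobuf document the client returns.

```python
# Annotate a sample sentence and inspect the first sentence's tokens
ann = client.annotate("Stanford University is located in California.")
for token in ann.sentence[0].token:
    print(token.word, token.pos, token.ner)

# Shut down the background server when finished
client.stop()
```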
MIT
Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb
gagan94/PreSumm