markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Flow Control Control flow statements let you structure your code, direct its execution, and introduce loops and conditional branches. If statements | price = -5;
if price <0:
print("Price is negative!")
elif price <1:
print("Price is too small!")
else:
print("Price is suitable.") | Price is negative!
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Especially in text mining, comparing strings is very important: | #Comparing strings
name1 = "edinburgh"
name2 = "Edinburgh"
if name1 == name2:
print("Equal")
else:
print("Not equal")
if name1.lower() == name2.lower():
print("Equal")
else:
print("Not equal") | Not equal
Equal
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
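In messier text data, strings often differ only by surrounding whitespace or case. A small sketch (not in the original notes) of a slightly more robust comparison:

# Hedged example: normalise both strings before comparing
name1 = "  Edinburgh \n"
name2 = "edinburgh"
print(name1.strip().casefold() == name2.strip().casefold())   # True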
Using multiple conditions: | number = 9
if number > 1 and not number > 9:
print("Number is between 1 and 10")
number = 9
name = 'johannes'
if number < 5 or 'j' in name:
print("Number is lower than 5 or the name contains a 'j'") | Number is between 1 and 10
Number is lower than 5 or the name contains a 'j'
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
While loops | number = 4
while number > 1:
print(number)
number = number -1 | 4
3
2
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
For loops For loops allow you to iterate over elements in a certain collection, for example a list: | # We'll look into lists in a minute
number_list = [1, 2, 3, 4]
for item in number_list:
print(item)
letters = ['a', 'b', 'c']  # avoid calling a variable 'list': that shadows the built-in list type
for item in letters:
print(item) | a
b
c
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Ranges are also useful. Note that the upper element is not included and we can adjust the step size: | for i in range(1,4):
print(i)
for i in range(30,100, 10):
print(i) | 30
40
50
60
70
80
90
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Indentation Please be very careful with indentation | number_1 = 3
number_2 = 5
print('No indent (no tabs used)')
if number_1 > 1:
print('\tNumber 1 higher than 1.')
if number_2 > 5:
print('\t\tnumber 2 higher than 5')
print('\tnumber 2 higher than 5')
number_1 = 3
number_2 = 6
print('No indent (no tabs used)')
if number_1 > 1:
print('\tNumber 1 higher than 1.')
if number_2 > 5:
print('\t\tnumber 2 higher than 5')
print('\tnumber 2 higher than 5') | No indent (no tabs used)
Number 1 higher than 1.
number 2 higher than 5
No indent (no tabs used)
Number 1 higher than 1.
number 2 higher than 5
number 2 higher than 5
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
List & Tuple Lists Lists are great for collecting anything. They can contain objects of different types. For example: | names = [5, "Giovanni", "Rose", "Yongzhe", "Luciana", "Imani"] | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Mixing types like that is not best practice, though. Let's start with a list of names: | names = ["Johannes", "Giovanni", "Rose", "Yongzhe", "Luciana", "Imani"]
# Loop names
for name in names:
print('Name: '+name)
# Get 'Giovanni' from list
# Lists start counting at 0
giovanni = names[1]
print(giovanni.upper())
# Get last item
name = names[-1]
print(name.upper())
# Get second to last item
name = names[-2]
print(name.upper())
print("First three: "+str(names[0:3]))
print("First four: "+str(names[:4]))
print("Up until the second to last one: "+str(names[:-2]))
print("Last two: "+str(names[-2:])) | Name: Johannes
Name: Giovanni
Name: Rose
Name: Yongzhe
Name: Luciana
Name: Imani
GIOVANNI
IMANI
LUCIANA
First three: ['Johannes', 'Giovanni', 'Rose']
First four: ['Johannes', 'Giovanni', 'Rose', 'Yongzhe']
Up until the second to last one: ['Johannes', 'Giovanni', 'Rose', 'Yongzhe']
Last two: ['Luciana', 'Imani']
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Enumeration We can enumerate collections/lists, which adds an index to every element: | for index, name in enumerate(names):
print(str(index) , " " , name, " is in the list.") | 0 Johannes is in the list.
1 Giovanni is in the list.
2 Rose is in the list.
3 Yongzhe is in the list.
4 Luciana is in the list.
5 Imani is in the list.
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Searching and editing | names = ["Johannes", "Giovanni", "Rose", "Yongzhe", "Luciana", "Imani"]
# Finding an element
print(names.index("Johannes"))
# Adding an element
names.append("Kumiko")
# Adding an element at a specific location
names.insert(2, "Roberta")
print(names)
#Removal
fruits = ["apple","orange","pear"]
del fruits[0]
fruits.remove("pear")
print('Fruits: ', fruits)
# Modifying an element
names[5] = "Tom"
print(names)
# Test whether an item is in the list (best do this before removing to avoid raising errors)
print("Tom" in names)
# Length of a list
print("Length of the list: " + str(len(names))) | 0
['Johannes', 'Giovanni', 'Roberta', 'Rose', 'Yongzhe', 'Luciana', 'Imani', 'Kumiko']
Fruits: ['orange']
['Johannes', 'Giovanni', 'Roberta', 'Rose', 'Yongzhe', 'Tom', 'Imani', 'Kumiko']
True
Length of the list: 8
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Python starts at 0!!! Sorting and copying | # Temporary sorting:
print(sorted(names))
print(names)
# Make changes permanent
names.sort()
print("Sorted names: " + str(names))
names.sort(reverse=True)
print("Reverse sorted names: " + str(names))
# Assigning a list to a new variable does not copy it: both names point to the same object in memory
namez = names
namez.remove("Johannes")
print(namez)
print(names)
# Now an actual copy with .copy() (for nested lists you would need copy.deepcopy)
print("After deep copy")
namez = names.copy()
namez.remove("Giovanni")
print(namez)
print(names)
#Alternative
namez = names[:]
print(namez) | ['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']
['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']
After deep copy
['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani']
['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']
['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Strings as lists Strings can be manipulated and used just like lists. This is especially handy in text mining: | course = "Predictive analytics"
print("Last nine letters: "+course[-9:])
print("Analytics in course title? " + str("analytics" in course))
print("Start location of 'analytics': " + str(course.find("analytics")))
print(course.replace("analytics","analysis"))
list_of_words = course.split(" ")
for index, word in enumerate(list_of_words):
print("Word ", index, ": "+word) | Last nine letters: analytics
Analytics in course title? True
Start location of 'analytics': 11
Predictive analysis
Word 0 : Predictive
Word 1 : analytics
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Sets Sets only contain unique elements. They can be created from an existing collection with set() and support operations such as intersection(): | name_set = set(names)
print(name_set)
# Add an element
name_set.add("Galina")
print(name_set)
# Discard an element
name_set.discard("Johannes")
print(name_set)
name_set2 = set(["Rose", "Tom"])
# Difference and intersection
difference = name_set - name_set2
print(difference)
intersection = name_set.intersection(name_set2)
print(intersection) | {'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Tom', 'Imani', 'Rose'}
{'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Tom', 'Imani', 'Galina', 'Rose'}
{'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Tom', 'Imani', 'Galina', 'Rose'}
{'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Imani', 'Galina'}
{'Tom', 'Rose'}
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
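Besides difference and intersection, sets also support union and symmetric difference. A quick sketch (these lines are not part of the original notebook):

set_a = {"Rose", "Tom", "Imani"}
set_b = {"Rose", "Galina"}
print(set_a | set_b)   # union: elements that appear in either set
print(set_a ^ set_b)   # symmetric difference: elements in exactly one of the two sets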
Dictionary & Function Dictionaries Dictionaries are a great way to store particular data as key-value pairs, which mimics the basic structure of a simple database. | courses = {"Johannes" : "Predictive analytics", "Kumiko" : "Prescriptive analytics", "Luciana" : "Descriptive analytics"}
for organizer in courses:
print(organizer + " teaches " + courses[organizer]) | Johannes teaches Predictive analytics
Kumiko teaches Prescriptive analytics
Luciana teaches Descriptive analytics
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
We can also write: | for organizer, course in courses.items():
print(organizer + " teaches " + course)
# Adding items
courses["Imani"] = "Other analytics"
print(courses)
# Overwrite
courses["Johannes"] = "Business analytics"
print(courses)
# Remove
del courses["Johannes"]
print(courses)
# Looping values
for course in courses.values():
print(course)
# Sorted output (on keys)
for organizer, course in sorted(courses.items()):
print(organizer +" teaches " + course) | Imani teaches Other analytics
Kumiko teaches Prescriptive analytics
Luciana teaches Descriptive analytics
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Functions Functions form the backbone of all code. You have already used some, like print(). They can be easily defined by yourself as well. | def my_function(a, b):
a = a.title()
b = b.upper()
print(a+ " "+b)
def my_function2(a, b):
a = a.title()
b = b.upper()
return a + " " + b
my_function("johannes","de smedt")
output = my_function2("johannes","de smedt")
print(output) | Johannes DE SMEDT
Johannes DE SMEDT
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Notice how the first function already prints, while the second returns a string we have to print ourselves. Python is dynamically typed, so a function can even return values of different types, like in this example: | # Different output type
def calculate_mean(a, b):
if (a>0):
return (a+b)/2
else:
return "a is negative"
output = calculate_mean(1,2)
print(output)
output = calculate_mean(0,1)
print(output) | 1.5
a is negative
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Comprehensions Comprehensions allow you to quickly/efficiently write lists/dictionaries: | # Finding even numbers
evens = [i for i in range(1,11) if i % 2 ==0]
print(evens) | [2, 4, 6, 8, 10]
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
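The same comprehension syntax also works for dictionaries and sets, which the text mentions but the notebook does not show. A small sketch (not part of the original notes):

squares = {i: i**2 for i in range(1, 6)}   # dictionary comprehension
print(squares)                             # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}
name_lengths = {len(name) for name in ["jamal", "maurizio", "johannes"]}   # set comprehension
print(name_lengths)                        # unique lengths 5 and 8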
In Python, you can easily make tuples such as pairs, like here: | # Double fun
pairs = [(x,y) for x in range(1,11) for y in range(5,11) if x>y]
print(pairs) | [(6, 5), (7, 5), (7, 6), (8, 5), (8, 6), (8, 7), (9, 5), (9, 6), (9, 7), (9, 8), (10, 5), (10, 6), (10, 7), (10, 8), (10, 9)]
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
They are also useful to perform some pre-processing, e.g., on strings: | # Operations
names = ["jamal", "maurizio", "johannes"]
titled_names = [name.title() for name in names]
print(titled_names)
j_s = [name.title() for name in names if name.lower()[0] == 'j']
print(j_s) | ['Jamal', 'Maurizio', 'Johannes']
['Jamal', 'Johannes']
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
IO & Library | # Download some datasets
# If you are using git, then you don't need to run the following.
!wget -q https://raw.githubusercontent.com/Magica-Chen/WebSNA-notes/main/Week0/data/DM_1.csv
!wget -q https://raw.githubusercontent.com/Magica-Chen/WebSNA-notes/main/Week0/data/DM_2.csv
!wget -q https://raw.githubusercontent.com/Magica-Chen/WebSNA-notes/main/Week0/data/ordered_amounts_per_person.csv
!mkdir data
!mv *.csv ./data | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Reading files In Python, we can easily open any file type. Naturally, it is most suitable for plainly-structured formats such as .txt, .csv, and so on. You can also open Excel files with appropriate packages, such as pandas (more on this later). Let's read in a .csv file: | # Open a file for reading ('r')
file = open('data/DM_1.csv','r')
for line in file:
print(line) | Name,Email,City,Salary
Brent Hopkins,[email protected],Mount Pearl,38363
Colt Bender,[email protected],Castle Douglas,21506
Arthur Hammond,[email protected],Biloxi,27511
Sean Warner,[email protected],Moere,25201
Tate Greene,[email protected],Ipswich,35052
Gavin Gibson,[email protected],Oordegem,37126
Kelly Garza,[email protected],Kukatpalle,39420
Zane Preston,[email protected],Neudšrfl,28553
Cole Cunningham,[email protected],Catemu,27972
Tarik Hendricks,[email protected],Newbury,39027
Elvis Collier,[email protected],Paradise,22568
Jackson Huber,[email protected],Veere,29922
Macaulay Cline,[email protected],Campobasso,24163
Elijah Chase,[email protected],Grantham,23881
Dennis Anthony,[email protected],Cedar Rapids,27969
Fulton Snyder,[email protected],San Pedro,21594
Leo Willis,[email protected],Kester,31203
Matthew Hooper,[email protected],Bellefontaine,33222
Todd Jones,[email protected],Toledo,24809
Palmer Byrd,[email protected],Bissegem,29045
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
We can store this information in objects and start using it: | # File is looped now, hence, reread file
file = open('data/DM_1.csv','r')
# ignore the header
next(file)
# Store names with amount (i.e. columns 1 & 2)
amount_per_person = {}
for line in file:
cells = line.split(",")
amount_per_person[cells[0]] = int(cells[3])
for person, amount in sorted(amount_per_person.items()):
if amount > 25000:
print(person , " has " , amount)
# Now we use 'w' for write
output_file = open('data/ordered_amounts_per_person.csv','w')
for person, amount in sorted(amount_per_person.items()):
    output_file.write(person.lower()+","+str(amount)+"\n")  # add a newline so each record is on its own line
output_file.close() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
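As a side note, a common idiom is to open files in a `with` block so they are closed automatically, even if an error occurs. A sketch (not in the original notebook) of the same write using a context manager:

with open('data/ordered_amounts_per_person.csv', 'w') as output_file:
    for person, amount in sorted(amount_per_person.items()):
        output_file.write(person.lower() + "," + str(amount) + "\n")
# No explicit close() needed: the file is closed when the block ends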
Libraries Libraries are imported by using `import`: | import numpy
import pandas
import sklearn | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
If you haven't installed sklearn, please install it with: | !pip install sklearn | Collecting sklearn
Downloading sklearn-0.0.tar.gz (1.1 kB)
Requirement already satisfied: scikit-learn in c:\users\zchen112\anaconda3\lib\site-packages (from sklearn) (0.24.1)
Requirement already satisfied: joblib>=0.11 in c:\users\zchen112\anaconda3\lib\site-packages (from scikit-learn->sklearn) (1.0.1)
Requirement already satisfied: scipy>=0.19.1 in c:\users\zchen112\anaconda3\lib\site-packages (from scikit-learn->sklearn) (1.6.2)
Requirement already satisfied: numpy>=1.13.3 in c:\users\zchen112\anaconda3\lib\site-packages (from scikit-learn->sklearn) (1.20.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\zchen112\anaconda3\lib\site-packages (from scikit-learn->sklearn) (2.1.0)
Building wheels for collected packages: sklearn
Building wheel for sklearn (setup.py): started
Building wheel for sklearn (setup.py): finished with status 'done'
Created wheel for sklearn: filename=sklearn-0.0-py2.py3-none-any.whl size=1316 sha256=de54cc32dd40e89c8b3d1fd541e221f659546c0f556371dadf408a133d078ca0
Stored in directory: c:\users\zchen112\appdata\local\pip\cache\wheels\22\0b\40\fd3f795caaa1fb4c6cb738bc1f56100be1e57da95849bfc897
Successfully built sklearn
Installing collected packages: sklearn
Successfully installed sklearn-0.0
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
We can import just a few bits using `from`, or create aliases using `as`: | import math as m
from math import pi
print(numpy.add(1, 2))
print(pi)
print(m.sin(1)) | 3
3.141592653589793
0.8414709848078965
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
In the next part, some basic procedures that exist in NumPy, pandas, and scikit-learn are covered. This only scratches the surface of the possibilities, and many other functions and code will be used later on. Make sure to explore what else is available yourself, and get a grasp of how the modules are called and used. Let's import them in this notebook to start with: | import numpy as np
import pandas as pd
import sklearn | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Numpy | # Create empty arrays/matrices
empty_array = np.zeros(5)
empty_matrix = np.zeros((5,2))
print('Empty array: \n',empty_array)
print('Empty matrix: \n',empty_matrix)
# Create matrices
mat = np.array([[1,2,3],[4,5,6]])
print('Matrix: \n', mat)
print('Transpose: \n', mat.T)
print('Item 2,2: ', mat[1,1])
print('Item 2,3: ', mat[1,2])
print('rows and columns: ', np.shape(mat))
print('Sum total matrix: ', np.sum(mat))
print('Sum row 1: ' , np.sum(mat[0]))
print('Sum row 2: ', np.sum(mat[1]))
print('Sum column 2: ', np.sum(mat,axis=0)[2]) | Matrix:
[[1 2 3]
[4 5 6]]
Transpose:
[[1 4]
[2 5]
[3 6]]
Item 2,2: 5
Item 2,3: 6
rows and columns: (2, 3)
Sum total matrix: 21
Sum row 1: 6
Sum row 2: 15
Sum column 2: 9
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
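NumPy also supports element-wise arithmetic, broadcasting, and axis-based statistics; a brief sketch (not in the original notes):

import numpy as np

mat = np.array([[1, 2, 3], [4, 5, 6]])
print(mat * 2)                          # element-wise multiplication
print(mat + np.array([10, 20, 30]))     # broadcasting a row vector over both rows
print(np.mean(mat, axis=1))             # mean of each row -> [2. 5.]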
pandas Creating dataframes pandas is great for reading and creating datasets, as well as performing basic operations on them. | # Creating a matrix with three rows of data
data = [['johannes',10], ['giovanni',2], ['john',3]]
# Creating and printing a pandas DataFrame object from the matrix
df = pd.DataFrame(data)
print(df)
# Adding columns to the DataFrame object
df.columns = ['names', 'years']
print(df)
df_2 = pd.DataFrame(data = data, columns = ['names', 'years'])
print(df_2)
# Taking out a single column and calculating its sum
# This also shows the type of the variable: a 64 bit integer (array)
print(df['years'])
print('Sum of all values in column: ', df['years'].sum())
# Creating a larger matrix
data = [['johannes',10], ['giovanni',2], ['john',3], ['giovanni',2], ['john',3], ['giovanni',2], ['john',3], ['giovanni',2], ['john',3], ['johannes',10]]
# Again, creating a DataFrame object, now with columns
df = pd.DataFrame(data, columns = ['names','years'])
# Print the 5 first (head) and 5 last (tail) observations
print(df.head())
print('\n')
print(df.tail()) | names years
0 johannes 10
1 giovanni 2
2 john 3
3 giovanni 2
4 john 3
names years
5 giovanni 2
6 john 3
7 giovanni 2
8 john 3
9 johannes 10
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Reading files You can read files: | dataset = pd.read_csv('data/DM_1.csv')
print(dataset.head()) | Name Email City \
0 Brent Hopkins [email protected] Mount Pearl
1 Colt Bender [email protected] Castle Douglas
2 Arthur Hammond [email protected] Biloxi
3 Sean Warner [email protected] Moere
4 Tate Greene [email protected] Ipswich
Salary
0 38363
1 21506
2 27511
3 25201
4 35052
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Using dataframes | # Print all unique values of the column names
print(df['names'].unique())
# Print all values and their frequency:
print(df['names'].value_counts())
print(df['years'].value_counts())
# Add a column names 'code' with all zeros
df['code'] = np.zeros(10)
print(df) | names years code
0 johannes 10 0.0
1 giovanni 2 0.0
2 john 3 0.0
3 giovanni 2 0.0
4 john 3 0.0
5 giovanni 2 0.0
6 john 3 0.0
7 giovanni 2 0.0
8 john 3 0.0
9 johannes 10 0.0
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
You can also easily find things in a DataFrame using `.loc`: | # Rows 2 to 5 and all columns:
print(df.loc[2:5, :])
# Looping columns
for variable in df.columns:
print(df[variable])
# Looping columns and obtaining the values (which returns an array)
for variable in df.columns:
print(df[variable].values) | ['johannes' 'giovanni' 'john' 'giovanni' 'john' 'giovanni' 'john'
'giovanni' 'john' 'johannes']
[10 2 3 2 3 2 3 2 3 10]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Preparing datasets | dataset_1 = pd.read_csv('data/DM_1.csv', encoding='latin1')
dataset_2 = pd.read_csv('data/DM_2.csv', encoding='latin1')
dataset_1
dataset_2
dataset_2.columns = ['First name', 'Last name', 'Days active']
dataset_2 | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
We can convert the second dataset to only have 1 column for names: | # .title() can be used to only make the first letter a capital
names = [dataset_2.loc[i,'First name'] + " " + dataset_2.loc[i,'Last name'].title() for i in range(0, len(dataset_2))]
# Make a new column for the name
dataset_2['Name'] = names
# Remove the old columns
dataset_2 = dataset_2.drop(['First name', 'Last name'], axis=1)
dataset_2 | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Bringing together the datasets Now that the datasets are compatible, we can merge them in a few different ways. | # A left join starts from the left dataset, in this case dataset_1, and for every row matches the value in the
# column used for joining. As you will see, the result has 22 rows since some names appear multiple times in
# the second dataset dataset_2.
both = pd.merge(dataset_1, dataset_2, on='Name', how='left')
both
# A right join does the opposite: now, dataset_2 is used to match all names with the corresponding
# observations in dataset_1. There are as many observations as there are in dataset_2, as the rows
# in dataset_1 are unique. The last row cannot be matched with any observation in dataset_1.
both = pd.merge(dataset_1, dataset_2, on='Name', how='right')
both
# Inner and outer join
# It is also possible to only retain the values that are matched in both tables, or match any value
# that matches. This is using an inner and outer join respectively.
both = pd.merge(dataset_1, dataset_2, on='Name', how='inner')
both | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Notice how observation 12 is missing, as there is no corresponding value in `dataset_1`. | both = pd.merge(dataset_1, dataset_2, on='Name', how='outer')
both | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
In the last table, we have 23 rows, as both matching and non-matching values are returned. Merging datasets can be really helpful, and this code should give you ample ideas on how to do it quickly yourself. As always, there are a number of ways of achieving the same result; don't hesitate to explore other solutions that might be quicker or easier. scikit-learn scikit-learn is great for performing all major data analysis operations. It also contains datasets. In this code, we will load a dataset and fit a simple linear regression. | from sklearn import datasets as ds
# Load the Boston Housing dataset
dataset = ds.load_boston()
# It is a dictionary, see the keys for details:
print(dataset.keys())
# The 'DESCR' key holds a description text for the whole dataset
print(dataset['DESCR'])
# The data (independent variables) are stored under the 'data' key
# The names of the independent variables are stored in the 'feature_names' key
# Let's use them to create a DataFrame object:
df = pd.DataFrame(data=dataset['data'], columns=dataset['feature_names'])
print(df.head())
# The dependent variable is stored separately
df_y = pd.DataFrame(data=dataset['target'], columns=['target'])
print(df_y.head())
# Now, let's build a linear regression model
from sklearn.linear_model import LinearRegression as LR
# First we create a linear regression object
regression = LR()
# Then, we fit the independent and dependent data
regression.fit(df, df_y)
# We can obtain the R^2 score (more on this later)
print(regression.score(df, df_y)) | 0.7406426641094095
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
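Note that the R^2 above is computed on the same data the model was fit on, which is optimistic. A minimal sketch (not part of the original notebook, using a synthetic dataset) of evaluating on a held-out split instead:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))   # R^2 on unseen data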
Very often, we need to perform an operation on a single observation. In that case, we have to reshape the data using numpy: | # Consider a single observation
so = df.loc[2, :]
print(so)
# Just the values of the observation without meta data
print(so.values)
# Reshaping yields a new matrix with one row with as many columns as the original observation (indicated by the -1)
print(np.reshape(so.values, (1, -1)))
# For two observations:
so_2 = df.loc[2:3, :]
print(np.reshape(so_2.values, (2, -1))) | [[2.7290e-02 0.0000e+00 7.0700e+00 0.0000e+00 4.6900e-01 7.1850e+00
6.1100e+01 4.9671e+00 2.0000e+00 2.4200e+02 1.7800e+01 3.9283e+02
4.0300e+00]
[3.2370e-02 0.0000e+00 2.1800e+00 0.0000e+00 4.5800e-01 6.9980e+00
4.5800e+01 6.0622e+00 3.0000e+00 2.2200e+02 1.8700e+01 3.9463e+02
2.9400e+00]]
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
This concludes our quick run-through of some basic functionality of the modules. Later on, we will use more and more specialized functions and objects, but for now this allows you to play around with data already. Visualisation Visualisations often require a few tricks and extra lines of code to make things look better. This can be confusing at first, but it becomes more intuitive once you get the hang of how the general ideas work. We will be working mostly with Matplotlib (often imported as plt), NumPy (np), and pandas (pd). Both Matplotlib and pandas often offer similar solutions, but one is sometimes more convenient than the other depending on the situation. Make sure to look up some of the alternatives, as they might also make more sense to you. | # First, we need to import our packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Pie and bar chart | # Data to plot
labels = 'classification', 'regression', 'time series'
sizes = [10, 22, 2]
colors = ['lightblue', 'lightgreen', 'pink']
# Allows us to highlight a certain piece of the pie chart
explode = (0.1, 0, 0)
# Plot a pie chart with the pie() function. Notice how various parameters are given for coloring, labels, etc.
# They should be relatively self-explanatory
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=90)
# This function makes the axes equal, so the circle is round
plt.axis('equal')
# Add a title to the plot
plt.title("Pie chart of modelling techniques")
# Finally, show the plot
plt.show() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Adding a legend: | patches, texts = plt.pie(sizes, colors=colors, shadow=True, startangle=90)
plt.legend(patches, labels, loc="best")
plt.axis('equal')
plt.title("Pie chart of modelling techniques")
plt.show()
# Bar charts are relatively similar. Here we use the bar() function
plt.bar(labels, sizes, align='center')
plt.xticks(labels)
plt.ylabel('#use cases')
plt.title('Bar chart of modelling technique')
plt.show() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Histogram | # This function plots a diagram with the 'data' object providing the data
# bins are calculated automatically, as indicated by the 'auto' option, which makes them relatively balanced and
# sets appropriate boundaries
# color sets the color of the bars
# the rwidth sets the bars to somewhat slightly less wide than the bins are wide to leave space between the bars
data = np.random.normal(10, 2, 1000)
plt.hist(x= data, bins='auto', color='#008000', rwidth=0.85)
# For more information on colour codes, please visit: https://htmlcolorcodes.com/
# Additionally, some options are added:
# This option sets the grid of the plot to follow the values on the y-axis
plt.grid(axis='y')
# Adds a label to the x-axis
plt.xlabel('Value')
# Adds a label to the y-axis
plt.ylabel('Frequency')
# Adds a title to the plot
plt.title('Histogram of x')
# Makes the plot visible in the program
plt.show()
# Here, a different color and manually-specified bins are used
plt.hist(x= data, bins=[0,1,2,3,4,5,6,7,8,9,10], color='olive', rwidth=0.85)
plt.grid(axis='y')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Histogram of x and y')
plt.show() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
See how we cut the tail off the distribution. | # Now, let's build a histogram with randomly generated data that follows a normal distribution
# Mean = 10, stddev = 15, sample size = 1,000
# More on random numbers will follow in module 2
s = np.random.normal(10, 15, 1000)
plt.hist(x=s, bins='auto', color='#008000', rwidth=0.85)
plt.grid(axis='y')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Histogram of x')
plt.show() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Boxplot | # Boxplots are even easier. We can just use the boxplot() function without many parameters
# We use the implementation of Pandas, which relies on Matplotlib in the background
# We now use subplots.
data = [3,8,3,4,1,7,5,3,8,2,7,3,1,6,10,10,3,6,5,10]
# Subplot with 1 row, 2 columns, here we add figure 1 of 2 (first row, first column)
plt.subplot(1,2,1)
plt.boxplot(data)
data_2 = [3,8,3,4,1,7,5,3,8,2,7,3,1,6,10,10,3,6,5,10, 99,87,45,-20]
# Here we add figure 2 of 2, hence it will be positioned in the second column of the first row
plt.subplot(1,2,2)
plt.boxplot(data_2)
plt.show() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Boxplot for multiple variables: | # Generate 4 columns with 10 observations
df = pd.DataFrame(data = np.random.random(size=(10,3)), columns = ['class.','reg.','time series'])
print(df)
boxplot = df.boxplot()
plt.title('Triple boxplot')
plt.show()
df = pd.DataFrame(data = np.random.random(size=(10,3)), columns = ['class.','reg.','time series'])
df['number_of_runs'] = [0,0,0,1,1,2,0,1,2,0]
boxplot = df.boxplot(by='number_of_runs')
plt.show() | class. reg. time series
0 0.402362 0.348025 0.893360
1 0.496534 0.454527 0.631422
2 0.268591 0.815153 0.371747
3 0.596372 0.121358 0.591864
4 0.575830 0.964928 0.908575
5 0.380839 0.435604 0.488436
6 0.788519 0.562830 0.303210
7 0.424057 0.888664 0.476388
8 0.699300 0.380225 0.776302
9 0.463731 0.239730 0.686004
| MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Scatterplot | # We load the data gain
x = [3,8,3,4,1,7,5,3,8,2,7,3,1,6,10,10,3,6,5,10]
y = [10,7,2,7,5,4,2,3,4,1,5,7,8,4,10,2,3,4,5,6]
# Here, we build a simple scatterplot of the two variables
plt.scatter(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Simple scatterplot')
plt.show() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
Hard to tell which variable is what, but it gives an overall impression of the data. | # A simple line plot
# We use the plot function for this. 'o-' indicates we want to use circles for markers and connect them with lines
plt.plot(x,'o-',color='blue',)
# Here we use 'x--' for cross-shaped markers connected with intermittent lines
plt.plot(y,'x--',color='red')
plt.xlabel('Time')
plt.ylabel('Value')
plt.title("x and y over time")
# This function sets the range limits for the x axis at 0 and 20
plt.xlim(0,20)
# Adding a grid
plt.grid(True)
# Adding tick markers on the x and y axes. We start at zero and go up to 20 (x) and 10 (y);
# the upper bound is not included, hence we use 21 and 11.
# We use steps of 4 for the x axis and 2 for the y axis
plt.xticks(range(0,21,4))
plt.yticks(range(0,11,2))
plt.show() | _____no_output_____ | MIT | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes |
---_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._--- Assignment 4 - Document Similarity & Topic Modelling Part 1 - Document SimilarityFor the first part of this assignment, you will complete the functions `doc_to_synsets` and `similarity_score` which will be used by `document_path_similarity` to find the path similarity between two documents.The following functions are provided:* **`convert_tag:`** converts the tag given by `nltk.pos_tag` to a tag used by `wordnet.synsets`. You will need to use this function in `doc_to_synsets`.* **`document_path_similarity:`** computes the symmetrical path similarity between two documents by finding the synsets in each document using `doc_to_synsets`, then computing similarities using `similarity_score`.You will need to finish writing the following functions:* **`doc_to_synsets:`** returns a list of synsets in document. This function should first tokenize and part of speech tag the document using `nltk.word_tokenize` and `nltk.pos_tag`. Then it should find each tokens corresponding synset using `wn.synsets(token, wordnet_tag)`. The first synset match should be used. If there is no match, that token is skipped.* **`similarity_score:`** returns the normalized similarity score of a list of synsets (s1) onto a second list of synsets (s2). For each synset in s1, find the synset in s2 with the largest similarity value. Sum all of the largest similarity values together and normalize this value by dividing it by the number of largest similarity values found. Be careful with data types, which should be floats. Missing values should be ignored.Once `doc_to_synsets` and `similarity_score` have been completed, submit to the autograder which will run `test_document_path_similarity` to test that these functions are running correctly. *Do not modify the functions `convert_tag`, `document_path_similarity`, and `test_document_path_similarity`.* | import numpy as np
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
from nltk.corpus import wordnet as wn
import pandas as pd
def convert_tag(tag):
"""Convert the tag given by nltk.pos_tag to the tag used by wordnet.synsets"""
tag_dict = {'N': 'n', 'J': 'a', 'R': 'r', 'V': 'v'}
try:
return tag_dict[tag[0]]
except KeyError:
return None
def doc_to_synsets(doc):
"""
Returns a list of synsets in document.
Tokenizes and tags the words in the document doc.
Then finds the first synset for each word/tag combination.
If a synset is not found for that combination it is skipped.
Args:
doc: string to be converted
Returns:
list of synsets
Example:
doc_to_synsets('Fish are nvqjp friends.')
Out: [Synset('fish.n.01'), Synset('be.v.01'), Synset('friend.n.01')]
"""
# Your Code Here
tokens = nltk.word_tokenize(doc)
tags = [tag[1] for tag in nltk.pos_tag(tokens)]
wordnet_tags = [convert_tag(tag) for tag in tags]
synsets = [wn.synsets(token, wordnet_tag) for token, wordnet_tag in list(zip(tokens, wordnet_tags))]
answer = [i[0] for i in synsets if len(i) > 0]
return answer # Your Answer Here
def similarity_score(s1, s2):
"""
Calculate the normalized similarity score of s1 onto s2
For each synset in s1, finds the synset in s2 with the largest similarity value.
Sum of all of the largest similarity values and normalize this value by dividing it by the
number of largest similarity values found.
Args:
s1, s2: list of synsets from doc_to_synsets
Returns:
normalized similarity score of s1 onto s2
Example:
synsets1 = doc_to_synsets('I like cats')
synsets2 = doc_to_synsets('I like dogs')
similarity_score(synsets1, synsets2)
Out: 0.73333333333333339
"""
# Your Code Here
lvs = [] # largest similarity values
for i1 in s1:
scores=[x for x in [i1.path_similarity(i2) for i2 in s2] if x is not None]
if scores:
lvs.append(max(scores))
return sum(lvs) / len(lvs)# Your Answer Here
def document_path_similarity(doc1, doc2):
"""Finds the symmetrical similarity between doc1 and doc2"""
synsets1 = doc_to_synsets(doc1)
synsets2 = doc_to_synsets(doc2)
return (similarity_score(synsets1, synsets2) + similarity_score(synsets2, synsets1)) / 2 | [nltk_data] Downloading package punkt to /home/jovyan/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /home/jovyan/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
[nltk_data] Downloading package wordnet to /home/jovyan/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
| MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
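To make the synset machinery more concrete, here is a tiny sketch (not part of the assignment) of what `wn.synsets` and `path_similarity` return, assuming the WordNet data downloaded above:

dog = wn.synsets('dog', 'n')[0]    # first noun synset for 'dog'
cat = wn.synsets('cat', 'n')[0]    # first noun synset for 'cat'
print(dog, cat)
print(dog.path_similarity(cat))    # a value in (0, 1]; higher means closer in the WordNet hierarchy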
test_document_path_similarityUse this function to check if doc_to_synsets and similarity_score are correct.*This function should return the similarity score as a float.* | def test_document_path_similarity():
doc1 = 'This is a function to test document_path_similarity.'
doc2 = 'Use this function to see if your code in doc_to_synsets \
and similarity_score is correct!'
return document_path_similarity(doc1, doc2)
test_document_path_similarity() | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
___`paraphrases` is a DataFrame which contains the following columns: `Quality`, `D1`, and `D2`.`Quality` is an indicator variable which indicates if the two documents `D1` and `D2` are paraphrases of one another (1 for paraphrase, 0 for not paraphrase). | # Use this dataframe for questions most_similar_docs and label_accuracy
paraphrases = pd.read_csv('paraphrases.csv')
paraphrases.head() | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
___ most_similar_docsUsing `document_path_similarity`, find the pair of documents in paraphrases which has the maximum similarity score.*This function should return a tuple `(D1, D2, similarity_score)`* | def most_similar_docs():
    # Your Code Here
    # Compute the similarity for every pair, then return the most similar pair together with its score
    scores = [document_path_similarity(d1, d2) for d1, d2 in zip(paraphrases['D1'], paraphrases['D2'])]
    idx = int(np.argmax(scores))
    return (paraphrases.loc[idx, 'D1'], paraphrases.loc[idx, 'D2'], scores[idx]) # Your Answer Here
most_similar_docs() | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
label_accuracyProvide labels for the twenty pairs of documents by computing the similarity for each pair using `document_path_similarity`. Let the classifier rule be that if the score is greater than 0.75, label is paraphrase (1), else label is not paraphrase (0). Report accuracy of the classifier using scikit-learn's accuracy_score.*This function should return a float.* | def label_accuracy():
from sklearn.metrics import accuracy_score
paraphrases['labels'] = [1 if i > 0.75 else 0 for i in map(document_path_similarity, paraphrases['D1'], paraphrases['D2'])] # Your Code Here
return accuracy_score(paraphrases['Quality'], paraphrases['labels']) # Your Answer Here
label_accuracy() | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
Part 2 - Topic ModellingFor the second part of this assignment, you will use Gensim's LDA (Latent Dirichlet Allocation) model to model topics in `newsgroup_data`. You will first need to finish the code in the cell below by using gensim.models.ldamodel.LdaModel constructor to estimate LDA model parameters on the corpus, and save to the variable `ldamodel`. Extract 10 topics using `corpus` and `id_map`, and with `passes=25` and `random_state=34`. | import pickle
import gensim
from sklearn.feature_extraction.text import CountVectorizer
# Load the list of documents
with open('newsgroups', 'rb') as f:
newsgroup_data = pickle.load(f)
# Use CountVectorizor to find three letter tokens, remove stop_words,
# remove tokens that don't appear in at least 20 documents,
# remove tokens that appear in more than 20% of the documents
vect = CountVectorizer(min_df=20, max_df=0.2, stop_words='english',
token_pattern='(?u)\\b\\w\\w\\w+\\b')
# Fit and transform
X = vect.fit_transform(newsgroup_data)
# Convert sparse matrix to gensim corpus.
corpus = gensim.matutils.Sparse2Corpus(X, documents_columns=False)
# Mapping from word IDs to words (To be used in LdaModel's id2word parameter)
id_map = dict((v, k) for k, v in vect.vocabulary_.items())
# Use the gensim.models.ldamodel.LdaModel constructor to estimate
# LDA model parameters on the corpus, and save to the variable `ldamodel`
# Your code here:
ldamodel = gensim.models.ldamodel.LdaModel(corpus=corpus, num_topics=10, id2word=id_map, passes=25, random_state=34) | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
lda_topicsUsing `ldamodel`, find a list of the 10 topics and the most significant 10 words in each topic. This should be structured as a list of 10 tuples where each tuple takes on the form:`(9, '0.068*"space" + 0.036*"nasa" + 0.021*"science" + 0.020*"edu" + 0.019*"data" + 0.017*"shuttle" + 0.015*"launch" + 0.015*"available" + 0.014*"center" + 0.014*"sci"')`for example.*This function should return a list of tuples.* | def lda_topics():
# Your Code Here
return ldamodel.print_topics(num_topics=10, num_words=10) # Your Answer Here
lda_topics() | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
topic_distributionFor the new document `new_doc`, find the topic distribution. Remember to use vect.transform on the the new doc, and Sparse2Corpus to convert the sparse matrix to gensim corpus.*This function should return a list of tuples, where each tuple is `(topic, probability)`* | new_doc = ["\n\nIt's my understanding that the freezing will start to occur because \
of the\ngrowing distance of Pluto and Charon from the Sun, due to it's\nelliptical orbit. \
It is not due to shadowing effects. \n\n\nPluto can shadow Charon, and vice-versa.\n\nGeorge \
Krumins\n-- "]
def topic_distribution():
# Your Code Here
# Transform
X = vect.transform(new_doc)
# Convert sparse matrix to gensim corpus.
corpus = gensim.matutils.Sparse2Corpus(X, documents_columns=False)
return list(ldamodel[corpus])[0] # Your Answer Here
topic_distribution() | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
topic_namesFrom the list of the following given topics, assign topic names to the topics you found. If none of these names best matches the topics you found, create a new 1-3 word "title" for the topic.Topics: Health, Science, Automobiles, Politics, Government, Travel, Computers & IT, Sports, Business, Society & Lifestyle, Religion, Education.*This function should return a list of 10 strings.* | def topic_names():
# Your Code Here
return ['Automobiles', 'Health', 'Science',
'Politics',
'Sports',
'Business', 'Society & Lifestyle',
'Religion', 'Education', 'Computers & IT'] # Your Answer Here
topic_names() | _____no_output_____ | MIT | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan |
NOAA extreme weather eventsThe [National Oceanic and Atmospheric Administration](https://en.wikipedia.org/wiki/National_Oceanic_and_Atmospheric_Administration) has a database of extreme weather events that contains lots of detail for every year ([Link](https://www.climate.gov/maps-data/dataset/severe-storms-and-extreme-events-data-table)). In this notebook I will create map files for individual weather events, mapped to their coordinates. | import pandas as pd
import numpy as np
import random
import geopandas
import matplotlib.pyplot as plt
pd.set_option('display.max_columns', None) # Unlimited columns
# Custom function for displaying the shape and head of a dataframe
def display(df, n=5):
print(df.shape)
return df.head(n) | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
Get map of US counties | # Import a shape file with all the counties in the US.
# Note how it doesn't include all the same territories as the
# quake contour map.
counties = geopandas.read_file('../data_input/1_USCounties/')
# Turn state codes from strings to integers
for col in ['STATE_FIPS', 'CNTY_FIPS', 'FIPS']:
counties[col] = counties[col].astype(int) | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
Process NOAA data for one year onlyAs a starting point that I'll generalize later. | # Get NOAA extreme weather event data for one year
df1 = pd.read_csv('../data_local/NOAA/StormEvents_details-ftp_v1.0_d2018_c20190422.csv')
print(df1.shape)
print(df1.columns)
df1.head(2)
# Extract only a few useful columns
df2 = df1[['TOR_F_SCALE','EVENT_TYPE','BEGIN_LAT','BEGIN_LON']].copy()
# Remove any rows with null coordinates
df2 = df2.dropna(subset=['BEGIN_LAT','BEGIN_LON'])
# Create geoDF of all the points
df3 = geopandas.GeoDataFrame(
df2, geometry=geopandas.points_from_xy(df2.BEGIN_LON, df2.BEGIN_LAT))
# Trim the list of events to only include those that happened within one of our official counties.
df4 = geopandas.sjoin(df3, counties, how='left', op='within').dropna(subset=['FIPS'])
# Drop useless columns
df4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry']]
# Add new columns for event categories
flood_types =['Flood','Flash Flood','Coastal Flood',
'Storm Surge/Tide','Lakeshore Flood','Debris Flow']
df4['Flood'] = df4['EVENT_TYPE'].isin(flood_types)
storm_types = ['Thunderstorm Wind','Marine Thunderstorm Wind','Marine High Wind',
'High Wind','Funnel Cloud','Dust Storm',
'Strong Wind','Dust Devil','Tropical Depression','Lightning',
'Tropical Storm','High Surf','Heavy Rain','Hail','Marine Hail',
'Marine Strong Wind','Waterspout']
df4['Storm'] = df4['EVENT_TYPE'].isin(storm_types)
df4['Tornado'] = df4['EVENT_TYPE'].isin(['Tornado'])
# Reorganize columns
type_columns = ['Storm','Flood','Tornado']
df4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry'] + type_columns]
display(df4)
# Plot over a map of US counties
fig, ax = plt.subplots(figsize=(20,20))
counties.plot(ax=ax, color='white', edgecolor='black');
df4.plot(ax=ax, marker='o')
# ax.set_xlim(-125,-114)
ax.set_ylim(15,75)
plt.show() | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
NOAA file processing functionGeneralize the previous operations so they can apply to the data for any year | def process_noaa(filepath):
"""
Process one year of NOAA Extreme weather events. Requires
the list of official counties and the list of official weather
event types.
Inputs
------
filepath (string) : file path for the list of events from one year.
Outputs
-------
result (pandas.DataFrame) : Dataframe each event for that year, with
boolean columns for each event category.
"""
df1 = pd.read_csv(filepath)
# Extract only a few useful columns
df2 = df1[['TOR_F_SCALE','EVENT_TYPE','BEGIN_LAT','BEGIN_LON']].copy()
# Remove any rows with null coordinates
df2 = df2.dropna(subset=['BEGIN_LAT','BEGIN_LON'])
# Create geoDF of all the points
df3 = geopandas.GeoDataFrame(
df2, geometry=geopandas.points_from_xy(df2.BEGIN_LON, df2.BEGIN_LAT))
# Trim the list of events to only include those that happened within one of our official counties.
df4 = geopandas.sjoin(df3, counties, how='left', op='within').dropna(subset=['FIPS'])
# Drop useless columns
df4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry']]
# Add new columns for event categories
flood_types =['Flood','Flash Flood','Coastal Flood',
'Storm Surge/Tide','Lakeshore Flood','Debris Flow']
df4['Flood'] = df4['EVENT_TYPE'].isin(flood_types)
storm_types = ['Thunderstorm Wind','Marine Thunderstorm Wind','Marine High Wind',
'High Wind','Funnel Cloud','Dust Storm',
'Strong Wind','Dust Devil','Tropical Depression','Lightning',
'Tropical Storm','High Surf','Heavy Rain','Hail','Marine Hail',
'Marine Strong Wind','Waterspout']
df4['Storm'] = df4['EVENT_TYPE'].isin(storm_types)
df4['Tornado'] = df4['EVENT_TYPE'].isin(['Tornado'])
# Reorganize columns
type_columns = ['Storm','Flood','Tornado']
df4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry'] + type_columns]
# Add a column for the year of this file
year = int(filepath[49:53])
df4['year'] = year
return df4
# Example
test_2018 = process_noaa('../data_local/NOAA/StormEvents_details-ftp_v1.0_d2018_c20190422.csv')
display(test_2018)
# These are the extreme weather events recorded in 2018
test_2018[type_columns].sum().sort_values(ascending=False) | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
Process all the available data | import glob
import os
# Read the CSV files for each year going back to 1996 (the first year
# when many of these event types started being recorded)
path = '../data_local/NOAA/'
filenames = sorted(glob.glob(os.path.join(path, '*.csv')))
layers = []
# Aggregate the dataframes in a list
for name in filenames:
year = int(name[49:53])
print(f'Processing {year}')
layers.append(process_noaa(name))
# Concatenate all these dataframes into a single dataframe
noaa = pd.concat(layers)
display(noaa)
# total events per type
noaa[type_columns].sum()
# Aggregate event types into different geopandas dataframes.
storms = noaa[noaa['Storm']][['EVENT_TYPE','year','geometry']].reset_index(drop=True)
floods = noaa[noaa['Flood']][['EVENT_TYPE','year','geometry']].reset_index(drop=True)
tornadoes = noaa[noaa['Tornado']][['TOR_F_SCALE','year','geometry']].reset_index(drop=True)
storms.shape, floods.shape, tornadoes.shape | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
Process tornado dataIn 2007, the National Weather Service (NWS) switched their scale for measuring tornado intensity, from the Fujita (F) scale to the Enhanced Fujita (EF) scale. I will lump them together here and just make a note for the user that the scale means something slightly different before and after 2007. Also, I'll cast unknown magnitudes (EFU) as if they were EF0. | # Tornadoes by magnitude, using the NWS's original labels.
# Notice the two different scales and also a label for 'unknown'
tornadoes.TOR_F_SCALE.value_counts()
# Function that extracts the scale level and sets unkwnown to zero.
def process_fujita(x):
if x[-1] == 'U':
return 0
else:
return int(x[-1])
tornadoes['intensity'] = tornadoes['TOR_F_SCALE'].apply(process_fujita)
tornadoes = tornadoes.drop(columns='TOR_F_SCALE')
display(tornadoes)
# Distribution of tornado intensities.
tornadoes.intensity.hist(); | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
Visualizing the data | # Sample of 2000 storms in the Lower48
fig, ax = plt.subplots(figsize=(20,20))
counties.plot(ax=ax, color='white', edgecolor='black');
storms.sample(2000).plot(ax=ax, marker='o')
ax.set_xlim(-125.0011,-66.9326)
ax.set_ylim(24.9493, 49.5904)
plt.show() | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
Floods and tornadoes show basically the same distribution, so I won't plot them separately. For reference, this is what the dataframes that we're about to export look like. | display(storms)
display(floods)
display(tornadoes) | (30898, 3)
| MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
Export! | storms.to_file("../data_output/5__NOAA/storms.geojson",
driver='GeoJSON')
floods.to_file("../data_output/5__NOAA/floods.geojson",
driver='GeoJSON')
tornadoes.to_file("../data_output/5__NOAA/tornadoes.geojson",
driver='GeoJSON') | _____no_output_____ | MIT | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data |
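To check the exported layers, they can be read back with geopandas. A quick sketch (not in the original notebook), assuming the files were written to the paths above:

storms_check = geopandas.read_file("../data_output/5__NOAA/storms.geojson")
print(storms_check.shape)
print(storms_check.head())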
**1D Convolutional Neural Networks**

"A Convolutional Neural Network (ConvNet/CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image and be able to differentiate one from the other. The pre-processing required in a ConvNet is much lower as compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, ConvNets have the ability to learn these filters/characteristics." [4]

"The architecture of a ConvNet is analogous to that of the connectivity pattern of Neurons in the Human Brain and was inspired by the organization of the Visual Cortex. Individual neurons respond to stimuli only in a restricted region of the visual field known as the Receptive Field. A collection of such fields overlap to cover the entire visual area." [4]

"Convolutional neural network models were developed for image classification problems, where the model learns an internal representation of a two-dimensional input, in a process referred to as feature learning." [1]

"This same process can be harnessed on one-dimensional sequences of data, such as in the case of acceleration and gyroscopic data for human activity recognition. The model learns to extract features from sequences of observations and how to map the internal features to different activity types." [1]

"The benefit of using CNNs for sequence classification is that they can learn from the raw time series data directly, and in turn do not require domain expertise to manually engineer input features. The model can learn an internal representation of the time series data and ideally achieve comparable performance to models fit on a version of the dataset with engineered features." [1]

**Convolutional Neural Network Architecture**

"A CNN typically has three layers: a convolutional layer, a pooling layer, and a fully connected layer." [5]

**Convolution Layer**

"The convolution layer is the core building block of the CNN. It carries the main portion of the network’s computational load." [5]

"This layer performs a dot product between two matrices, where one matrix is the set of learnable parameters otherwise known as a kernel, and the other matrix is the restricted portion of the receptive field. The kernel is spatially smaller than an image but is more in-depth. This means that, if the image is composed of three (RGB) channels, the kernel height and width will be spatially small, but the depth extends up to all three channels." [5]

"During the forward pass, the kernel slides across the height and width of the image, producing the image representation of that receptive region. This produces a two-dimensional representation of the image known as an activation map that gives the response of the kernel at each spatial position of the image. The sliding size of the kernel is called a stride. If we have an input of size W x W x D and Dout number of kernels with a spatial size of F with stride S and amount of padding P, then the size of output volume can be determined by the following formula:" [5]

Wout = (W - F + 2P) / S + 1

**Pooling Layer**

"The pooling layer replaces the output of the network at certain locations by deriving a summary statistic of the nearby outputs. This helps in reducing the spatial size of the representation, which decreases the required amount of computation and weights. The pooling operation is processed on every slice of the representation individually." [5]

"There are several pooling functions such as the average of the rectangular neighborhood, L2 norm of the rectangular neighborhood, and a weighted average based on the distance from the central pixel. However, the most popular process is max pooling, which reports the maximum output from the neighborhood." [5]

"If we have an activation map of size W x W x D, a pooling kernel of spatial size F, and stride S, then the size of output volume can be determined by the following formula:" [5]

Wout = (W - F) / S + 1

"This will yield an output volume of size Wout x Wout x D. In all cases, pooling provides some translation invariance which means that an object would be recognizable regardless of where it appears on the frame." [5]

**Fully Connected Layer**

"Neurons in this layer have full connectivity with all neurons in the preceding and succeeding layer as seen in regular FCNN. This is why it can be computed as usual by a matrix multiplication followed by a bias effect." [5]

"The FC layer helps to map the representation between the input and the output." [5]

**Non-Linearity Layers**

"Since convolution is a linear operation and images are far from linear, non-linearity layers are often placed directly after the convolutional layer to introduce non-linearity to the activation map." [5]

"There are several types of non-linear operations, the popular ones being:" [5]

1. Sigmoid
2. Tanh
3. ReLU

**Advantages:**

1. Speed vs. other types of neural networks [2]
2. Capacity to extract the most important features automatically [2]

**Disadvantages:**

1. Classification of similar objects with different positions [3]
2. Vulnerable to adversarial examples [3]
3. Coordinate Frame [3]
4. Other minor disadvantages like performance [3]

**References:**

1. https://machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/
2. https://cai.tools.sap/blog/ml-spotlight-cnn/
3. https://iq.opengenus.org/disadvantages-of-cnn/
4. https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
5. https://towardsdatascience.com/convolutional-neural-networks-explained-9cc5188c4939 | pip install tensorflow
!pip install fsspec
# cnn model
from numpy import mean
from numpy import std
from numpy import dstack
from pandas import read_csv
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
#from keras.utils import to_categorical
from tensorflow.keras.utils import to_categorical # previous commented out line does not work
# load a single file as a numpy array
def load_file(filepath):
dataframe = read_csv(filepath, header=None, delim_whitespace=True)
return dataframe.values
# load a list of files and return as a 3d numpy array
def load_group(filenames, prefix=''):
loaded = list()
for name in filenames:
data = load_file(prefix + name)
loaded.append(data)
# stack group so that features are the 3rd dimension
loaded = dstack(loaded)
return loaded
# load a dataset group, such as train or test
def load_dataset_group(group, prefix=''):
#filepath = prefix + group + '/Inertial Signals/'
filepath = 'https://raw.githubusercontent.com/iotanalytics/IoTTutorial/main/data/UCI%20HAR%20Dataset/' + group + '/Inertial%20Signals/'
# load all 9 files as a single array
filenames = list()
# total acceleration
filenames += ['total_acc_x_'+group+'.txt', 'total_acc_y_'+group+'.txt', 'total_acc_z_'+group+'.txt']
# body acceleration
filenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']
# body gyroscope
filenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']
# load input data
X = load_group(filenames, filepath)
# load class output
#y = load_file(prefix + group + '/y_'+group+'.txt')
y = load_file('https://raw.githubusercontent.com/iotanalytics/IoTTutorial/main/data/UCI%20HAR%20Dataset/'+group+'/y_'+group+'.txt')
return X, y
# load the dataset, returns train and test X and y elements
def load_dataset(prefix=''):
# load all train
#trainX, trainy = load_dataset_group('train', prefix + 'HARDataset/')
trainX, trainy = load_dataset_group('train', prefix)
print(trainX.shape, trainy.shape)
# load all test
#testX, testy = load_dataset_group('test', prefix + 'HARDataset/')
testX, testy = load_dataset_group('test', prefix)
print(testX.shape, testy.shape)
# zero-offset class values
trainy = trainy - 1
testy = testy - 1
# one hot encode y
trainy = to_categorical(trainy)
testy = to_categorical(testy)
print(trainX.shape, trainy.shape, testX.shape, testy.shape)
return trainX, trainy, testX, testy
# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
verbose, epochs, batch_size = 0, 10, 32
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
# evaluate model
_, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
return accuracy
# summarize scores
def summarize_results(scores):
print(scores)
m, s = mean(scores), std(scores)
print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))
# run an experiment
def run_experiment(repeats=10):
# load data
trainX, trainy, testX, testy = load_dataset()
# repeat experiment
scores = list()
for r in range(repeats):
score = evaluate_model(trainX, trainy, testX, testy)
score = score * 100.0
print('>#%d: %.3f' % (r+1, score))
scores.append(score)
# summarize results
summarize_results(scores)
# run the experiment
run_experiment() | (7352, 128, 9) (7352, 1)
(2947, 128, 9) (2947, 1)
(7352, 128, 9) (7352, 6) (2947, 128, 9) (2947, 6)
>#1: 90.363
>#2: 88.157
>#3: 92.467
>#4: 90.601
>#5: 90.227
>#6: 90.058
>#7: 91.992
>#8: 90.363
>#9: 89.786
>#10: 91.211
[90.3630793094635, 88.15745115280151, 92.46691465377808, 90.60060977935791, 90.22734761238098, 90.05768299102783, 91.99185371398926, 90.3630793094635, 89.78622555732727, 91.21140241622925]
Accuracy: 90.523% (+/-1.136)
| MIT | code/clustering_and_classification/1D_CNN.ipynb | iotanalytics/IoTTutorial |
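As a quick check of the output-size formulas quoted above, the short sketch below applies them to the Conv1D/MaxPooling1D stack used in `evaluate_model`. This is an illustrative calculation, not part of the original notebook; it assumes 'valid' padding and stride 1, which are the Keras defaults used in the model above.

# Illustrative sketch: apply Wout = (W - F + 2P) / S + 1 to the 1D model above
def conv1d_out(w, f, s=1, p=0):
    return (w - f + 2 * p) // s + 1

def pool1d_out(w, f, s=None):
    s = f if s is None else s          # Keras MaxPooling1D defaults strides to pool_size
    return (w - f) // s + 1

steps = 128                            # n_timesteps in the UCI HAR windows
steps = conv1d_out(steps, f=3)         # first Conv1D(kernel_size=3)  -> 126
steps = conv1d_out(steps, f=3)         # second Conv1D(kernel_size=3) -> 124
steps = pool1d_out(steps, f=2)         # MaxPooling1D(pool_size=2)    -> 62
print(steps, steps * 64)               # 62 timesteps x 64 filters = 3968 units after Flatten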
Pymaceuticals Inc.--- Analysis

* Overall, it is clear that Capomulin is a viable drug regimen to reduce tumor growth.
* Capomulin had the largest number of mice complete the study; with the exception of Ramicane, all other regimens saw a number of mouse deaths over the duration of the study.
* There is a strong correlation between mouse weight and tumor volume, indicating that mouse weight may be contributing to the effectiveness of any drug regimen.
* There was one potential outlier within the Infubinol regimen: while most mice showed an increase in tumor volume, one mouse had a reduction in tumor growth during the study. | # Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
# Checking the number of mice.
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Checking the number of mice in the clean DataFrame.
| _____no_output_____ | ADSL | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib |
Summary Statistics | # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
| _____no_output_____ | ADSL | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib |
Bar and Pie Charts | # Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# Generate a bar plot showing the total number of measurements taken on each drug regimen using using pyplot.
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Generate a pie plot showing the distribution of female versus male mice using pyplot
| _____no_output_____ | ADSL | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib |
Quartiles, Outliers and Boxplots | # Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
| _____no_output_____ | ADSL | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib |
Line and Scatter Plots | # Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
| _____no_output_____ | ADSL | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib |
Correlation and Regression | # Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
| The correlation between mouse weight and the average tumor volume is 0.84
| ADSL | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib |
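The correlation and regression cell above is only a placeholder in this starter notebook. A minimal sketch of how the reported 0.84 correlation and the regression line could be computed is shown below; the DataFrame name `capomulin_avg` and the column labels are assumptions, since the starter cells leave the intermediate DataFrames unnamed.

# Sketch only (assumed names): correlation and regression for the Capomulin regimen.
# capomulin_avg is assumed to have one row per mouse with 'Weight (g)' and average 'Tumor Volume (mm3)'.
corr = st.pearsonr(capomulin_avg['Weight (g)'], capomulin_avg['Tumor Volume (mm3)'])[0]
print(f"The correlation between mouse weight and the average tumor volume is {corr:.2f}")

slope, intercept, rvalue, pvalue, stderr = st.linregress(
    capomulin_avg['Weight (g)'], capomulin_avg['Tumor Volume (mm3)'])
regress_values = capomulin_avg['Weight (g)'] * slope + intercept

plt.scatter(capomulin_avg['Weight (g)'], capomulin_avg['Tumor Volume (mm3)'])
plt.plot(capomulin_avg['Weight (g)'], regress_values, color='red')
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()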
Microsoft Insights Module Example Notebook | %run /OEA_py
%run /NEW_Insights_py
# 0) Initialize the OEA framework and Insights module class notebook.
oea = OEA()
insights = Insights()
insights.ingest() | _____no_output_____ | CC-BY-4.0 | modules/Microsoft_Data/Microsoft_Education_Insights_Premium/notebook/Insights_module_ingestion.ipynb | ahalabi/OpenEduAnalytics |
WeatherPy---- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. | w_api = 'f85af5acc7275a9eb032d03a3cca5913'
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
# from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180) | _____no_output_____ | ADSL | starter_code/old/WeatherPy.ipynb | rbvancleave/python-api-challenge |
Generate Cities List | # List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities) | _____no_output_____ | ADSL | starter_code/old/WeatherPy.ipynb | rbvancleave/python-api-challenge |
Perform API Calls

* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).

Convert Raw Data to DataFrame

* Export the city data into a .csv.
* Display the DataFrame

Inspect the data and remove the cities where the humidity > 100%.----Skip this step if there are no cities that have humidity > 100%. | # Get the indices of cities that have humidity over 100%.
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
| _____no_output_____ | ADSL | starter_code/old/WeatherPy.ipynb | rbvancleave/python-api-challenge |
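The "Perform API Calls" step of the WeatherPy section above is left as an exercise in this starter notebook. The loop below is one possible sketch of it; the OpenWeatherMap endpoint and response fields are assumptions about which weather API this starter targets, and it reuses the `w_api`, `cities`, and `output_data_file` names defined earlier.

# Sketch only: collect weather data for the generated cities via OpenWeatherMap (assumed endpoint/fields)
base_url = "http://api.openweathermap.org/data/2.5/weather"
city_data = []
for i, city in enumerate(cities):
    params = {"q": city, "appid": w_api, "units": "imperial"}
    print(f"Processing record {i + 1} | {city}")
    try:
        response = requests.get(base_url, params=params).json()
        city_data.append({
            "City": city,
            "Lat": response["coord"]["lat"],
            "Lng": response["coord"]["lon"],
            "Max Temp": response["main"]["temp_max"],
            "Humidity": response["main"]["humidity"],
            "Cloudiness": response["clouds"]["all"],
            "Wind Speed": response["wind"]["speed"],
            "Country": response["sys"]["country"],
            "Date": response["dt"],
        })
    except (KeyError, IndexError):
        print("City not found. Skipping...")

city_data_df = pd.DataFrame(city_data)
city_data_df.to_csv(output_data_file, index=False)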
Matplotlib Applied **Aim: SWBAT create a figure with 4 subplots of varying graph types.** | import matplotlib.pyplot as plt
import numpy as np
from numpy.random import seed, randint
seed(100)
# Create Figure and Subplots
fig, axes = plt.subplots(2,2, figsize=(10,6), sharex=True, sharey=True, dpi=100)
# Define the colors and markers to use
colors = {0:'g', 1:'b', 2:'r', 3:'y'}
markers = {0:'o', 1:'x', 2:'*', 3:'p'}
# Plot each axes
for i, ax in enumerate(axes.ravel()):
ax.plot(sorted(randint(0,10,10)), sorted(randint(0,10,10)), marker=markers[i], color=colors[i])
ax.set_title('Ax: ' + str(i))
ax.yaxis.set_ticks_position('right')
plt.suptitle('Four Subplots in One Figure', verticalalignment='bottom', fontsize=16)
plt.tight_layout()
# plt.show() | _____no_output_____ | MIT | Phase_1/ds-data_visualization-main/Matplotlib_Applied.ipynb | BenJMcCarty/ds-east-042621-lectures |
Go through and play with the code above to try to answer the questions below:

- What do you think `sharex` and `sharey` do?
- What does the `dpi` argument control?
- What does `numpy.ravel()` do, and why do they call it here?
- What does `yaxis.set_ticks_position()` do?
- How do they use the `colors` and `markers` dictionaries?

Your turn:

- Create a figure that has 4 subplots on it.
- Plot 1: a blue line graph (`.plot()`) using data `x` and `y`
- Plot 2: a scatter plot (`.scatter()`) using data `x2` and `y2` with red markers that are non-filled circles.
- Plot 3: a plot that has both a line graph (x and y data) and a scatterplot (x2, y2) that only use 1 y axis
- Plot 4: a plot that is similar to plot 3 except the scatterplot has its own axis on the right hand side.
- Put titles on each subplot.
- Create a title for the entire figure.
- Save the figure as a png. | from numpy.random import seed, randint
seed(100)
x = sorted(randint(0,10,10))
x2 = sorted(randint(0,20,10))
y = sorted(randint(0,10,10))
y2 = sorted(randint(0,20,10)) | _____no_output_____ | MIT | Phase_1/ds-data_visualization-main/Matplotlib_Applied.ipynb | BenJMcCarty/ds-east-042621-lectures |
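One possible sketch of the "Your turn" exercise above, using the `x`, `y`, `x2`, `y2` arrays just defined (this is not the official answer key; the output filename is arbitrary):

# Sketch of one possible solution to the exercise above
fig, axes = plt.subplots(2, 2, figsize=(10, 6), dpi=100)

axes[0, 0].plot(x, y, color='blue')                               # Plot 1: blue line graph
axes[0, 0].set_title('Line plot')

axes[0, 1].scatter(x2, y2, facecolors='none', edgecolors='red')   # Plot 2: open red circles
axes[0, 1].set_title('Scatter plot')

axes[1, 0].plot(x, y)                                             # Plot 3: line + scatter, one y-axis
axes[1, 0].scatter(x2, y2)
axes[1, 0].set_title('Line + scatter (shared y-axis)')

axes[1, 1].plot(x, y)                                             # Plot 4: scatter gets its own right-hand axis
ax4_right = axes[1, 1].twinx()
ax4_right.scatter(x2, y2, color='green')
axes[1, 1].set_title('Line + scatter (twin y-axes)')

plt.suptitle('Four Subplots in One Figure')
plt.tight_layout()
plt.savefig('four_subplots.png')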
Great tutorial on matplotlib: https://www.machinelearningplus.com/plots/matplotlib-tutorial-complete-guide-python-plot-examples/ | fig | _____no_output_____ | MIT | Phase_1/ds-data_visualization-main/Matplotlib_Applied.ipynb | BenJMcCarty/ds-east-042621-lectures
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium' and then whether they need color matching and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99``` Step 1: Problem AnalysisInputs:Outputs:Algorithm (Steps in Program): | # Step 2: Write code here
choices = ["everyday", "select", "premium"]
colorChoices = ["y", "n"]
quality = input("Which paint quality do you require ['everyday', 'select', 'premium'] ?")
if quality in choices:
    # price per gallon for the selected quality
    if quality == "everyday":
        price = 19.99
    elif quality == "select":
        price = 24.99
    elif quality == "premium":
        price = 32.99
    colorMatching = input("Do you require color matching [y/n] ?")
    if colorMatching in colorChoices:
        if colorMatching == "y":
            final = price + 4.99  # color matching adds $4.99 per gallon
            print("Total price of %s paint with color matching is $%.2f" % (quality, final))
        else:
            final = price
            print("Total price of %s paint without color matching is $%.2f" % (quality, final))
    else:
        print("You must enter y or n")
else:
    print("That is not a paint quality")
| _____no_output_____ | MIT | content/lessons/04/Now-You-Code/NYC2-Paint-Matching.ipynb | MahopacHS/spring2019-rizzenM |
Kaggle ML and Data Science Survey Analysis Data 512, Final Project Plan - Zicong Liang

Project Motivation

This project is an analysis of a survey about Machine Learning and Data Science. Recently, lots of people are talking about machine learning and data science. In addition, more and more companies hire data science talent and invest in data science in order to make their business more data-driven. According to Wikipedia, data science, also known as data-driven science, is an interdisciplinary field that combines mathematics, statistics and computer science to extract insights from data. On the one hand, I remember Oliver commented that nobody has really done a proper survey related to Data Science, and I think this is a great opportunity to do some research in this field because Kaggle has released survey responses about Data Science and Machine Learning. On the other hand, there are a few reasons that drive me to learn more about the data science and machine learning field:

1. I am a graduate student in the Master of Science in Data Science program at the University of Washington.
2. I'd like to get a job in the data science field after graduating from this program.
3. I'd like to know how people in this field think about data science and machine learning, and then analyze their responses to draw the conclusions I am looking for.

As a beginner in Data Science, it is better to know what kinds of skills we need to be well equipped with in order to be ready to step into this field. That's why I am planning to do this analysis.

Throughout this project, we are going to conduct analyses of Data Science and Machine Learning along various dimensions, such as gender, age, programming skills, algorithms, education degree, salary and so on. This helps us gain new insights into Data Science that differ from what we have learned in school. In addition, the analysis in this project can be a reference for people pursuing a career in Data Science, not only those switching jobs but also those looking for their first job in this field.

Project Dataset

The data I am going to use for this project is from Kaggle. It is one of the most popular datasets on Kaggle recently. To be more specific, the data comes from an industry-wide survey conducted by Kaggle to establish a comprehensive view of the state of Data Science and Machine Learning. This survey received more than 16,000 responses. From this data, we would be able to learn a lot about who is working in the Data Science field, what's happening at the cutting edge of machine learning across industries, and how new data scientists can best break into the field.

Here is the link to the main page of [Kaggle ML and Data Science Survey, 2017](https://www.kaggle.com/kaggle/kaggle-survey-2017).

The dataset consists of 5 files:

1. **schema.csv**: a CSV file with the survey schema. This schema includes the questions that correspond to each column name in both **multipleChoiceResponses.csv** and **freeformResponses.csv**.
2. **multipleChoiceResponses.csv**: Respondents' answers to multiple choice and ranking questions.
3. **freeformResponses.csv**: Respondents' freeform answers to Kaggle's survey questions.
4. **conversionRates.csv**: Currency conversion rates to USD (accessed from the R package "quantmod" on September 14, 2017)
5. **RespondentTypeREADME.txt**: This is a schema for decoding the responses in the "Asked" column of the **schema.csv** file.

Click [here](https://www.kaggle.com/kaggle/kaggle-survey-2017/data) to go to the data page.

Based on the description above, I am going to use two files from the data:

1. **multipleChoiceResponses.csv**
2. **conversionRates.csv**

The **conversionRates.csv** data is pretty straightforward, so let's look at **multipleChoiceResponses.csv** before we start any analysis. | import pandas as pd
my_data = pd.read_csv("multipleChoiceResponses.csv", encoding='ISO-8859-1', delimiter=',', low_memory=False)
my_data.head()
my_data.shape | _____no_output_____ | MIT | Final Project Plan.ipynb | lzctony/data-512-finalproject |
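Since the plan above is to combine the responses with **conversionRates.csv** (for example, to compare salaries in USD), a rough sketch of that step could look like the following. The column names `CompensationAmount`, `CompensationCurrency`, `originCountry`, and `exchangeRate` are assumptions about the survey schema and should be verified against **schema.csv** and the conversion-rates file before use.

# Sketch only: normalize reported compensation to USD using conversionRates.csv
# (column names below are assumptions; verify them against schema.csv first)
rates = pd.read_csv("conversionRates.csv")

salary = my_data[["CompensationAmount", "CompensationCurrency"]].dropna()
salary = salary.merge(rates, left_on="CompensationCurrency", right_on="originCountry", how="left")
salary["CompensationAmount"] = pd.to_numeric(
    salary["CompensationAmount"].astype(str).str.replace(",", ""), errors="coerce")
salary["CompensationUSD"] = salary["CompensationAmount"] * salary["exchangeRate"]
print(salary["CompensationUSD"].describe())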
Continuous training pipeline with Kubeflow Pipeline and AI Platform **Learning Objectives:**

1. Learn how to use Kubeflow Pipelines (KFP) pre-built components (BigQuery, AI Platform training and predictions)
1. Learn how to use KFP lightweight Python components
1. Learn how to build a KFP with these components
1. Learn how to compile, upload, and run a KFP with the command line

In this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **AI Platform** services to train, tune, and deploy a **scikit-learn** model.

Understanding the pipeline design

The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables. | #!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
!pip list | grep kfp | kfp 1.0.0
kfp-pipeline-spec 0.1.7
kfp-server-api 1.5.0
| Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
The pipeline uses a mix of custom and pre-built components.

- Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution:
  - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query)
  - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train)
  - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy)
- Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file:
  - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job.
  - **Evaluate Model**. This component evaluates a *sklearn* trained model using a provided metric and a testing dataset. | %%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
    description='The pipeline training and deploying the Covertype classifier'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa')) | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
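For reference, the pipeline above imports `retrieve_best_run` and `evaluate_model` from `helper_components.py`, which is not reproduced in this notebook. The condensed sketch below shows what such helpers could look like: the function signatures and output names follow the pipeline code above, while internals such as the `model.pkl` filename, the `Cover_Type` label column, and the assumption that the best hypertune trial is returned first are not confirmed by this notebook.

# Sketch of helper_components.py (signatures follow the pipeline above; internals are assumptions)
from typing import NamedTuple

def retrieve_best_run(project_id: str, job_id: str) -> NamedTuple(
        'Outputs', [('metric_value', float), ('alpha', float), ('max_iter', int)]):
    """Retrieves the tuning metric and hyperparameters of the best hypertune trial."""
    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')
    job_name = 'projects/{}/jobs/{}'.format(project_id, job_id)
    response = ml.projects().jobs().get(name=job_name).execute()

    # Assumption: the best trial is first in the returned list of trials
    best_trial = response['trainingOutput']['trials'][0]
    metric_value = best_trial['finalMetric']['objectiveValue']
    alpha = float(best_trial['hyperparameters']['alpha'])
    max_iter = int(best_trial['hyperparameters']['max_iter'])
    return (metric_value, alpha, max_iter)

def evaluate_model(dataset_path: str, model_path: str, metric_name: str) -> NamedTuple(
        'Outputs', [('metric_name', str), ('metric_value', float)]):
    """Evaluates a trained sklearn model on the testing split."""
    import pickle
    import subprocess
    import sys
    import pandas as pd
    from sklearn.metrics import accuracy_score

    df_test = pd.read_csv(dataset_path)
    X_test = df_test.drop('Cover_Type', axis=1)   # assumed label column
    y_test = df_test['Cover_Type']

    # Copy the trained model from the training job's GCS job_dir (assumed filename)
    subprocess.check_call(
        ['gsutil', 'cp', '{}/model.pkl'.format(model_path), 'model.pkl'], stderr=sys.stdout)
    with open('model.pkl', 'rb') as model_file:
        model = pickle.load(model_file)

    y_hat = model.predict(X_test)
    if metric_name == 'accuracy':
        metric_value = accuracy_score(y_test, y_hat)
    else:
        metric_name, metric_value = 'N/A', 0.0
    return (metric_name, metric_value)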
The custom components execute in a container image defined in `base_image/Dockerfile`. | !cat base_image/Dockerfile | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
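The output of the `!cat` above was not captured in this copy of the notebook; however, the Cloud Build log further below (Steps 1/3 through 3/3) records the instructions of this image, so the Dockerfile is effectively:

FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire scikit-learn==0.20.4 pandas==0.24.2 kfp==1.0.0
RUN pip list | grep kfp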
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`. | !cat trainer_image/Dockerfile | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp
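The contents of `trainer_image/Dockerfile` are likewise not captured in this copy and are not echoed in any build log here, so the following is only a hedged sketch of a typical layout for such a training container; the exact packages, the `train.py` filename, and the entrypoint are assumptions, not the file from the repo:

FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]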
Building and deploying the pipeline

Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML.

Configure environment settings

Update the below constants with the settings reflecting your lab environment.

- `REGION` - the compute region for AI Platform Training and Prediction
- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default`.
- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.
  1. Open the **SETTINGS** for your instance
  2. Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window.

Run `gsutil ls` without URLs to list all of the Cloud Storage buckets under your default project ID. | !gsutil ls | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp
**HINT:** For **ENDPOINT**, use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window.For **ARTIFACT_STORE_URI**, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like **'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'** | REGION = 'us-central1'
ENDPOINT = '627be4a1d4049ed3-dot-us-central1.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT
ARTIFACT_STORE_URI = 'gs://dna-gcp-data-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0] | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
Build the trainer image | IMAGE_NAME='trainer_image'
TAG='test'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG) | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
**Note**: Please ignore any **incompatibility ERROR** that may appear for the package `visions`, as it will not affect the lab's functionality. | !gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp
Build the base image for custom components | IMAGE_NAME='base_image'
TAG='test2'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!pwd
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image | Creating temporary tarball archive of 2 file(s) totalling 290 bytes before compression.
Uploading tarball of [base_image] to [gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/dna-gcp-data/locations/global/builds/d2e1016b-599c-4537-b03f-3a8e0039c2fc].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/d2e1016b-599c-4537-b03f-3a8e0039c2fc?project=1011566672334].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "d2e1016b-599c-4537-b03f-3a8e0039c2fc"
FETCHSOURCE
Fetching storage object: gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz#1621581960754721
Copying gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz#1621581960754721...
/ [1 files][ 299.0 B/ 299.0 B]
Operation completed over 1 objects/299.0 B.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 3.584kB
Step 1/3 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
01bf7da0a88c: Pulling fs layer
f3b4a5f15c7a: Pulling fs layer
57ffbe87baa1: Pulling fs layer
424e7c9d5d89: Pulling fs layer
9b397537aef0: Pulling fs layer
2bd5028f4b85: Pulling fs layer
4f4fb700ef54: Pulling fs layer
b2ed56b85d3a: Pulling fs layer
8bfb788e9874: Pulling fs layer
0618fb353339: Pulling fs layer
42045a665612: Pulling fs layer
031d8d7b75f7: Pulling fs layer
5780cc9addac: Pulling fs layer
8fbe78107b3d: Pulling fs layer
eee173fc570a: Pulling fs layer
424e7c9d5d89: Waiting
9b397537aef0: Waiting
2bd5028f4b85: Waiting
4f4fb700ef54: Waiting
b2ed56b85d3a: Waiting
8bfb788e9874: Waiting
0618fb353339: Waiting
42045a665612: Waiting
031d8d7b75f7: Waiting
5780cc9addac: Waiting
8fbe78107b3d: Waiting
eee173fc570a: Waiting
9334ecc802d5: Pulling fs layer
c631c38965fd: Pulling fs layer
9334ecc802d5: Waiting
c631c38965fd: Waiting
1aada407354a: Pulling fs layer
151efcb7d3c3: Pulling fs layer
151efcb7d3c3: Waiting
1aada407354a: Waiting
57ffbe87baa1: Verifying Checksum
57ffbe87baa1: Download complete
f3b4a5f15c7a: Verifying Checksum
f3b4a5f15c7a: Download complete
424e7c9d5d89: Verifying Checksum
424e7c9d5d89: Download complete
01bf7da0a88c: Verifying Checksum
01bf7da0a88c: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
b2ed56b85d3a: Verifying Checksum
b2ed56b85d3a: Download complete
2bd5028f4b85: Verifying Checksum
2bd5028f4b85: Download complete
0618fb353339: Download complete
42045a665612: Verifying Checksum
42045a665612: Download complete
031d8d7b75f7: Verifying Checksum
031d8d7b75f7: Download complete
5780cc9addac: Download complete
8fbe78107b3d: Verifying Checksum
8fbe78107b3d: Download complete
eee173fc570a: Verifying Checksum
eee173fc570a: Download complete
9334ecc802d5: Verifying Checksum
9334ecc802d5: Download complete
c631c38965fd: Verifying Checksum
c631c38965fd: Download complete
9b397537aef0: Verifying Checksum
9b397537aef0: Download complete
8bfb788e9874: Verifying Checksum
8bfb788e9874: Download complete
151efcb7d3c3: Verifying Checksum
151efcb7d3c3: Download complete
01bf7da0a88c: Pull complete
f3b4a5f15c7a: Pull complete
57ffbe87baa1: Pull complete
424e7c9d5d89: Pull complete
1aada407354a: Verifying Checksum
1aada407354a: Download complete
9b397537aef0: Pull complete
2bd5028f4b85: Pull complete
4f4fb700ef54: Pull complete
b2ed56b85d3a: Pull complete
8bfb788e9874: Pull complete
0618fb353339: Pull complete
42045a665612: Pull complete
031d8d7b75f7: Pull complete
5780cc9addac: Pull complete
8fbe78107b3d: Pull complete
eee173fc570a: Pull complete
9334ecc802d5: Pull complete
c631c38965fd: Pull complete
1aada407354a: Pull complete
151efcb7d3c3: Pull complete
Digest: sha256:76ee9c0261dbcfb75e201ce21fd666f61127fe6b9ff74e6cf78b6ef09751de95
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> c1e1d5999dc3
Step 2/3 : RUN pip install -U fire scikit-learn==0.20.4 pandas==0.24.2 kfp==1.0.0
---> Running in c4bf640bbcee
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Collecting kfp==1.0.0
Downloading kfp-1.0.0.tar.gz (116 kB)
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.6.3)
Requirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.1)
Requirement already satisfied: PyYAML in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (5.4.1)
Requirement already satisfied: google-cloud-storage>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (1.38.0)
Collecting kubernetes<12.0.0,>=8.0.0
Downloading kubernetes-11.0.0-py3-none-any.whl (1.5 MB)
Requirement already satisfied: google-auth>=1.6.1 in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (1.30.0)
Collecting requests_toolbelt>=0.8.0
Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)
Requirement already satisfied: cloudpickle in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (1.6.0)
Collecting kfp-server-api<2.0.0,>=0.2.5
Downloading kfp-server-api-1.5.0.tar.gz (50 kB)
Requirement already satisfied: jsonschema>=3.0.1 in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (3.2.0)
Collecting tabulate
Downloading tabulate-0.8.9-py3-none-any.whl (25 kB)
Requirement already satisfied: click in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (7.1.2)
Collecting Deprecated
Downloading Deprecated-1.2.12-py2.py3-none-any.whl (9.5 kB)
Collecting strip-hints
Downloading strip-hints-0.1.9.tar.gz (30 kB)
Requirement already satisfied: setuptools>=40.3.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (49.6.0.post20210108)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (4.2.2)
Requirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (0.2.7)
Requirement already satisfied: six>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (1.16.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==1.0.0) (2.25.1)
Requirement already satisfied: google-resumable-media<2.0dev,>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==1.0.0) (1.2.0)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.4.1 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==1.0.0) (1.6.0)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.21.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.26.3)
Requirement already satisfied: packaging>=14.3 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (20.9)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.53.0)
Requirement already satisfied: protobuf>=3.12.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (3.16.0)
Requirement already satisfied: google-crc32c<2.0dev,>=1.0 in /opt/conda/lib/python3.7/site-packages (from google-resumable-media<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.1.2)
Requirement already satisfied: cffi>=1.0.0 in /opt/conda/lib/python3.7/site-packages (from google-crc32c<2.0dev,>=1.0->google-resumable-media<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.14.5)
Requirement already satisfied: pycparser in /opt/conda/lib/python3.7/site-packages (from cffi>=1.0.0->google-crc32c<2.0dev,>=1.0->google-resumable-media<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (2.20)
Requirement already satisfied: attrs>=17.4.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==1.0.0) (21.2.0)
Requirement already satisfied: importlib-metadata in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==1.0.0) (4.0.1)
Requirement already satisfied: pyrsistent>=0.14.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==1.0.0) (0.17.3)
Requirement already satisfied: urllib3>=1.15 in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.0.0) (1.26.4)
Requirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.0.0) (2020.12.5)
Requirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from kubernetes<12.0.0,>=8.0.0->kfp==1.0.0) (0.57.0)
Requirement already satisfied: requests-oauthlib in /opt/conda/lib/python3.7/site-packages (from kubernetes<12.0.0,>=8.0.0->kfp==1.0.0) (1.3.0)
Requirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging>=14.3->google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (2.4.7)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.6.1->kfp==1.0.0) (0.4.8)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (4.0.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Requirement already satisfied: wrapt<2,>=1.10 in /opt/conda/lib/python3.7/site-packages (from Deprecated->kfp==1.0.0) (1.12.1)
Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema>=3.0.1->kfp==1.0.0) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema>=3.0.1->kfp==1.0.0) (3.7.4.3)
Requirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib->kubernetes<12.0.0,>=8.0.0->kfp==1.0.0) (3.0.1)
Requirement already satisfied: wheel in /opt/conda/lib/python3.7/site-packages (from strip-hints->kfp==1.0.0) (0.36.2)
Building wheels for collected packages: kfp, kfp-server-api, fire, strip-hints, termcolor
Building wheel for kfp (setup.py): started
Building wheel for kfp (setup.py): finished with status 'done'
Created wheel for kfp: filename=kfp-1.0.0-py3-none-any.whl size=159769 sha256=74a8b1d2b91b9957b95f2f878e91ba0a2b847ba6b479ba6090b199fee94f8de1
Stored in directory: /root/.cache/pip/wheels/81/39/f2/ee01d785a5bd135e42e7721fedb05857badf763fc465a4e822
Building wheel for kfp-server-api (setup.py): started
Building wheel for kfp-server-api (setup.py): finished with status 'done'
Created wheel for kfp-server-api: filename=kfp_server_api-1.5.0-py3-none-any.whl size=92524 sha256=b672a04ca3bcf1257f2981061c5a9b460c20f5e816ede414eb22892649d86973
Stored in directory: /root/.cache/pip/wheels/1e/ab/eb/1608f904a1a3f2a28696129c6dbd3cac00bea2cdad226ee60e
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115928 sha256=040e97679ac4b63443c3b3127e2f2b0afb4d70d927adcc11efa1f94be0084ba3
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for strip-hints (setup.py): started
Building wheel for strip-hints (setup.py): finished with status 'done'
Created wheel for strip-hints: filename=strip_hints-0.1.9-py2.py3-none-any.whl size=20993 sha256=9481cd3b4b52c0e9713db3553c23c1bda587a1075bcd59d71e99110b6f1b6533
Stored in directory: /root/.cache/pip/wheels/2d/b8/4e/a3ec111d2db63cec88121bd7c0ab1a123bce3b55dd19dda5c1
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4829 sha256=b67e2ecf8cbb39455760408250cf67d6ddf4f873c94b486c5e753b4e84f4c8db
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built kfp kfp-server-api fire strip-hints termcolor
Installing collected packages: termcolor, tabulate, strip-hints, requests-toolbelt, kubernetes, kfp-server-api, Deprecated, scikit-learn, pandas, kfp, fire
Attempting uninstall: kubernetes
Found existing installation: kubernetes 12.0.1
Uninstalling kubernetes-12.0.1:
Successfully uninstalled kubernetes-12.0.1
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.24.2
Uninstalling scikit-learn-0.24.2:
Successfully uninstalled scikit-learn-0.24.2
Attempting uninstall: pandas
Found existing installation: pandas 1.2.4
Uninstalling pandas-1.2.4:
Successfully uninstalled pandas-1.2.4
[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.1 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
phik 0.11.2 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
[0m[91mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
[0mSuccessfully installed Deprecated-1.2.12 fire-0.4.0 kfp-1.0.0 kfp-server-api-1.5.0 kubernetes-11.0.0 pandas-0.24.2 requests-toolbelt-0.9.1 scikit-learn-0.20.4 strip-hints-0.1.9 tabulate-0.8.9 termcolor-1.1.0
Removing intermediate container c4bf640bbcee
---> 94e41aa9f2e0
Step 3/3 : RUN pip list | grep kfp
---> Running in cf3f27276384
kfp 1.0.0
kfp-server-api 1.5.0
Removing intermediate container cf3f27276384
---> e31322a505ba
Successfully built e31322a505ba
Successfully tagged gcr.io/dna-gcp-data/base_image:test2
PUSH
Pushing gcr.io/dna-gcp-data/base_image:test2
The push refers to repository [gcr.io/dna-gcp-data/base_image]
ce2e56091c30: Preparing
fed09d378d72: Preparing
896901ec1a67: Preparing
06a5bf49b163: Preparing
b34dae69fc5d: Preparing
0ffb7465dde9: Preparing
e2563d1ada9a: Preparing
42b027d1e826: Preparing
636a7c2e7d03: Preparing
1ba1158adf89: Preparing
96e46d1341e8: Preparing
954f6dc3f7f5: Preparing
8760a171b659: Preparing
5f70bf18a086: Preparing
a0710233fd2d: Preparing
05449afa4be9: Preparing
5b9e34b5cf74: Preparing
8cafc6d2db45: Preparing
a5d4bacb0351: Preparing
5153e1acaabc: Preparing
96e46d1341e8: Waiting
954f6dc3f7f5: Waiting
8760a171b659: Waiting
5f70bf18a086: Waiting
a0710233fd2d: Waiting
05449afa4be9: Waiting
5b9e34b5cf74: Waiting
8cafc6d2db45: Waiting
0ffb7465dde9: Waiting
e2563d1ada9a: Waiting
42b027d1e826: Waiting
636a7c2e7d03: Waiting
1ba1158adf89: Waiting
5153e1acaabc: Waiting
896901ec1a67: Layer already exists
fed09d378d72: Layer already exists
b34dae69fc5d: Layer already exists
06a5bf49b163: Layer already exists
0ffb7465dde9: Layer already exists
e2563d1ada9a: Layer already exists
42b027d1e826: Layer already exists
636a7c2e7d03: Layer already exists
1ba1158adf89: Layer already exists
954f6dc3f7f5: Layer already exists
8760a171b659: Layer already exists
5f70bf18a086: Layer already exists
96e46d1341e8: Layer already exists
8cafc6d2db45: Layer already exists
5b9e34b5cf74: Layer already exists
05449afa4be9: Layer already exists
a0710233fd2d: Layer already exists
a5d4bacb0351: Layer already exists
5153e1acaabc: Layer already exists
ce2e56091c30: Pushed
test2: digest: sha256:09e11684e6de0ac905077560c50a24b94b8b3d22afa167ccb7f2b92345068511 size: 4499
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
d2e1016b-599c-4537-b03f-3a8e0039c2fc 2021-05-21T07:26:00+00:00 2M25S gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz gcr.io/dna-gcp-data/base_image:test2 SUCCESS
| Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
Compile the pipeline

You can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler. Here we compile the pipeline DSL using the **KFP** compiler.

Set the pipeline's compile time settings

The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting KFP. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`.

Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret. | USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
ENDPOINT='https://627be4a1d4049ed3-dot-us-central1.pipelines.googleusercontent.com'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env ENDPOINT={ENDPOINT} | env: USE_KFP_SA=False
env: BASE_IMAGE=gcr.io/dna-gcp-data/base_image:test2
env: TRAINER_IMAGE=gcr.io/dna-gcp-data/trainer_image:test
env: COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/
env: RUNTIME_VERSION=1.15
env: PYTHON_VERSION=3.7
env: ENDPOINT=https://627be4a1d4049ed3-dot-us-central1.pipelines.googleusercontent.com
| Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
Use the CLI compiler to compile the pipeline | !dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
The result is the `covertype_training_pipeline.yaml` file. | !head covertype_training_pipeline.yaml | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
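The `!head` output was not captured in this copy. Since the compiled package is an Argo Workflow manifest (as noted in the "Building and deploying the pipeline" section above), its first lines look roughly like the illustrative snippet below; the exact annotations and field order depend on the KFP SDK version used to compile it.

# illustrative only; the actual file is produced by the dsl-compile command above
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: covertype-classifier-training-
  annotations:
    pipelines.kubeflow.org/kfp_sdk_version: 1.0.0
spec:
  entrypoint: covertype-classifier-training
  serviceAccountName: pipeline-runner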
Deploy the pipeline package | PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml | Pipeline 7eda6268-681e-41eb-8f65-a9c853030888 has been submitted
Pipeline Details
------------------
ID 7eda6268-681e-41eb-8f65-a9c853030888
Name covertype_continuous_training
Description
Uploaded at 2021-05-21T08:50:00+00:00
+--------------------------+--------------------------------------------------+
| Parameter Name | Default Value |
+==========================+==================================================+
| project_id | |
+--------------------------+--------------------------------------------------+
| region | |
+--------------------------+--------------------------------------------------+
| source_table_name | |
+--------------------------+--------------------------------------------------+
| gcs_root | |
+--------------------------+--------------------------------------------------+
| dataset_id | |
+--------------------------+--------------------------------------------------+
| evaluation_metric_name | |
+--------------------------+--------------------------------------------------+
| model_id | |
+--------------------------+--------------------------------------------------+
| version_id | |
+--------------------------+--------------------------------------------------+
| replace_existing_version | |
+--------------------------+--------------------------------------------------+
| experiment_id | |
+--------------------------+--------------------------------------------------+
| hypertune_settings | { |
| | "hyperparameters": { |
| | "goal": "MAXIMIZE", |
| | "maxTrials": 6, |
| | "maxParallelTrials": 3, |
| | "hyperparameterMetricTag": "accuracy", |
| | "enableTrialEarlyStopping": True, |
| | "params": [ |
| | { |
| | "parameterName": "max_iter", |
| | "type": "DISCRETE", |
| | "discreteValues": [500, 1000] |
| | }, |
| | { |
| | "parameterName": "alpha", |
| | "type": "DOUBLE", |
| | "minValue": 0.0001, |
| | "maxValue": 0.001, |
| | "scaleType": "UNIT_LINEAR_SCALE" |
| | } |
| | ] |
| | } |
| | } |
+--------------------------+--------------------------------------------------+
| dataset_location | US |
+--------------------------+--------------------------------------------------+
| Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
Submitting pipeline runs

You can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI. To submit the run using the KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.

List the experiments in AI Platform Pipelines | !kfp --endpoint $ENDPOINT experiment list | +--------------------------------------+-------------------------------+---------------------------+
| Experiment ID | Name | Created at |
+======================================+===============================+===========================+
| 889c1532-fee9-4b06-bc2b-10b1cd332c9a | Covertype_Classifier_Training | 2021-05-19T12:54:04+00:00 |
+--------------------------------------+-------------------------------+---------------------------+
| 3794c159-24a7-41e0-89be-f23152971870 | helloworld-dev | 2021-05-06T16:07:23+00:00 |
+--------------------------------------+-------------------------------+---------------------------+
| 821a36b0-8db9-4604-9e65-035b8f70c77d | my_pipeline | 2021-05-06T10:35:19+00:00 |
+--------------------------------------+-------------------------------+---------------------------+
| 6587995a-9b11-4a8e-a2fc-d0b80534dfe8 | Default | 2021-05-04T02:29:12+00:00 |
+--------------------------------------+-------------------------------+---------------------------+
| Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
Submit a runFind the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID` . | PIPELINE_ID='7eda6268-681e-41eb-8f65-a9c853030888' # TO DO: REPLACE WITH YOUR PIPELINE ID
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'covertype_dataset'
EVALUATION_METRIC = 'accuracy'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
EXPERIMENT_ID = '889c1532-fee9-4b06-bc2b-10b1cd332c9a'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI) | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp |
Run the pipeline using the `kfp` command line by retrieving the variables from the environment to pass to the pipeline, where:

- EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command
- RUN_ID is the name of the run. You can use an arbitrary name
- PIPELINE_ID is the id of your pipeline. Use the value retrieved by the `kfp pipeline list` command
- GCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the `staging` folder in your artifact store.
- REGION is a compute region for AI Platform Training and Prediction.

You should already be familiar with these and other parameters passed to the command. If not, go back and review the pipeline code. | !kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION \
experiment_id=$EXPERIMENT_ID
#!kfp --endpoint $ENDPOINT experiment list
from typing import NamedTuple

def get_previous_run_metric(ENDPOINT: str, experiment_id: str) -> NamedTuple('Outputs', [('run_id', str), ('accuracy', float)]):
    """Returns the id and accuracy metric of the most recent successful run in the experiment."""
    import kfp as kfp
    # List the runs in the experiment, newest first
    runs_details = kfp.Client(host=ENDPOINT).list_runs(experiment_id=experiment_id, sort_by='created_at desc').to_dict()
    run_id, accuracy = None, None
    for run in runs_details['runs']:
        if run['status'] == 'Succeeded':
            run_id = run['id']
            accuracy = run['metrics'][0]['number_value']
            break
    print("run_id={}, accuracy={}".format(run_id, accuracy))
    return (run_id, accuracy)

a = get_previous_run_metric(ENDPOINT, EXPERIMENT_ID)
print(a)
import kfp as kfp

# Find the most recent successful run in the experiment and read back its accuracy metric
runs_details = kfp.Client(host=ENDPOINT).list_runs(experiment_id=EXPERIMENT_ID, sort_by='created_at desc').to_dict()
latest_success_run_details = None
for run in runs_details['runs']:
    if run['status'] == 'Succeeded':
        latest_success_run_details = run
        break

run_id = latest_success_run_details['id']
run_id_details = kfp.Client(host=ENDPOINT).get_run(run_id=run_id).to_dict()
print(run_id_details)
accuracy = run_id_details['run']['metrics'][0]['number_value']
print(accuracy)
from googleapiclient import discovery

# Look up an AI Platform Training job through the Cloud ML Engine ('ml', 'v1') REST API client
ml = discovery.build('ml', 'v1')
job_name = 'projects/{}/jobs/{}'.format('dna-gcp-data', 'job_1dae51e7dd77989943e0aaf271f1effd')
request = ml.projects().jobs().get(name=job_name)
print(type(request.execute())) | _____no_output_____ | Apache-2.0 | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp
Feature: TF-IDF Distances Create TF-IDF vectors from question texts and compute vector distances between them. Imports This utility package imports `numpy`, `pandas`, `matplotlib` and a helper `kg` module into the root namespace. | from pygoose import *
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances, euclidean_distances | _____no_output_____ | MIT | notebooks/feature-tfidf.ipynb | MinuteswithMetrics/kaggle-quora-question-pairs |
Config Automatically discover the paths to various data folders and compose the project structure. | project = kg.Project.discover() | _____no_output_____ | MIT | notebooks/feature-tfidf.ipynb | MinuteswithMetrics/kaggle-quora-question-pairs |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.