Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars).
Provided is a buggy for loop that tries to accumulate some values out of some dictionaries. Insert a try/except so that the code passes.
di = [{"Puppies": 17, 'Kittens': 9, "Birds": 23, 'Fish': 90, "Hamsters": 49}, {"Puppies": 23, "Birds": 29, "Fish": 20, "Mice": 20, "Snakes": 7}, {"Fish": 203, "Hamsters": 93, "Snakes": 25, "Kittens": 89}, {"Birds": 20, "Puppies": 90, "Snakes": 21, "Fish": 10, "Kittens": 67}] total = 0 for diction in di: try: diction.keys == "Puppies" total = total + diction['Puppies'] except: pass print("Total number of puppies:", total)
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
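As an aside (not part of the original exercise), the same accumulation can be written without exception handling by using dict.get with a default value; a minimal sketch reusing a shortened version of the data above:

```python
# Shortened copies of the dictionaries above, for illustration only
di = [{"Puppies": 17, 'Kittens': 9}, {"Fish": 203, "Hamsters": 93}]

# dict.get returns the default (0 here) when the key is missing,
# so no KeyError is raised and no try/except is needed.
total = sum(d.get("Puppies", 0) for d in di)
print("Total number of puppies:", total)
```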
The code below takes the list country and, for each item, checks whether it is in the dictionary gold, which records how many golds some countries won during the Olympics. However, this code currently does not work. Correctly add a try/except clause in the code so that it will correctly populate the list, country_gold, with either the number of golds won or the string “Did not get gold”.
gold = {"US":46, "Fiji":1, "Great Britain":27, "Cuba":5, "Thailand":2, "China":26, "France":10} country = ["Fiji", "Chile", "Mexico", "France", "Norway", "US"] country_gold = [] print(gold.keys()) for x in country: try: x in gold.keys() country_gold.append(gold[x]) except KeyError: country_gold.append("Did not get gold") print(country_gold)
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
The list, numb, contains integers. Write code that populates the list remainder with the remainder of 36 divided by each number in numb. For example, the first element should be 0, because 36/6 has no remainder. If there is an error, have the string “Error” appear in the remainder.
numb = [6, 0, 36, 8, 2, 36, 0, 12, 60, 0, 45, 0, 3, 23]
remainder = []
for i in numb:
    if i == 0:
        remainder.append("Error")
    elif 36 % i:
        remainder.append(36 % i)
    else:
        remainder.append(0)
print(remainder)
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
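For comparison, the same result can be obtained with the try/except pattern used in the surrounding exercises; a minimal sketch that catches the ZeroDivisionError raised by 36 % 0:

```python
numb = [6, 0, 36, 8, 2, 36, 0, 12, 60, 0, 45, 0, 3, 23]
remainder = []
for i in numb:
    try:
        remainder.append(36 % i)      # raises ZeroDivisionError when i == 0
    except ZeroDivisionError:
        remainder.append("Error")
print(remainder)
```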
Provided is buggy code; insert a try/except so that the code passes.
lst = [2, 4, 10, 42, 12, 0, 4, 7, 21, 4, 83, 8, 5, 6, 8, 234, 5, 6, 523, 42, 34, 0, 234, 1, 435, 465, 56, 7, 3, 43, 23]
lst_three = []
for num in lst:
    try:
        if 3 % num == 0:
            lst_three.append(num)
    except ZeroDivisionError:
        pass
print(lst_three)
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
Write code so that the buggy code provided works using a try/except. When the code in the try block does not work, have it append the string “Error” to the list attempt.
full_lst = ["ab", 'cde', 'fgh', 'i', 'jkml', 'nop', 'qr', 's', 'tv', 'wxy', 'z']
attempt = []
for elem in full_lst:
    try:
        attempt.append(elem[1])
    except:
        attempt.append("Error")
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
The following code tries to append the third element of each list in conts to the new list third_countries. Currently, the code does not work. Add a try/except clause so the code runs without errors, and the string 'Continent does not have 3 countries' is appended to third_countries instead of producing an error.
conts = [['Spain', 'France', 'Greece', 'Portugal', 'Romania', 'Germany'],
         ['USA', 'Mexico', 'Canada'],
         ['Japan', 'China', 'Korea', 'Vietnam', 'Cambodia'],
         ['Argentina', 'Chile', 'Brazil', 'Ecuador', 'Uruguay', 'Venezuela'],
         ['Australia'],
         ['Zimbabwe', 'Morocco', 'Kenya', 'Ethiopa', 'South Africa'],
         ['Antarctica']]
third_countries = []
for c in conts:
    try:
        third_countries.append(c[2])
    except IndexError:
        third_countries.append("Continent does not have 3 countries")
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
The buggy code below prints, for each sport in the list sport, its value in the dictionary ppl_play. Use try/except so that the code will run properly. If the sport is not in the dictionary ppl_play, add it with the value of 1.
sport = ["hockey", "basketball", "soccer", "tennis", "football", "baseball"] ppl_play = {"hockey":4, "soccer": 10, "football": 15, "tennis": 8} for x in sport: try: print(ppl_play[x]) except KeyError: ppl_play[x] = 1
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
Provided is a buggy for loop that tries to accumulate some values out of some dictionaries. Insert a try/except so that the code passes. If the key is not there, initialize it in the dictionary and set the value to zero.
di = [{"Puppies": 17, 'Kittens': 9, "Birds": 23, 'Fish': 90, "Hamsters": 49}, {"Puppies": 23, "Birds": 29, "Fish": 20, "Mice": 20, "Snakes": 7}, {"Fish": 203, "Hamsters": 93, "Snakes": 25, "Kittens": 89}, {"Birds": 20, "Puppies": 90, "Snakes": 21, "Fish": 10, "Kittens": 67}] total = 0 for diction in di: try: diction.keys() == "Puppies" total = total + diction['Puppies'] except : pass if("Puppies" not in diction.keys()): diction["Puppies"] = 0 print("Total number of puppies:", total) VOWEL_COST = 250 LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' VOWELS = 'AEIOU' # Write the WOFPlayer class definition (part A) here class WOFPlayer: prizeMoney = 0 prizes = [] def __init__(self, name): self.name = name def addMoney(self, amt): self.prizeMoney += amt def goBankrupt(self): self.prizeMoney = 0 def addPrize(prizes,prize): prizes.append(prize) def __str__(self): return "%s (%s)" % (self.name, self.prize) # Write the WOFHumanPlayer class definition (part B) here class WOFHumanPlayer(WOFPlayer): def getMove(self, category, obscuredPhrase, guesse): input( "{%s} has ${%s}\n" "Category: {%s}\n" "Phrase: {%s}\n" "Guessed: {%s}\n" "Guess a letter, phrase, or type 'exit' or 'pass':\n") % ( self.name, self.prizeMoney, category, obscuredPhrase, guesse) return ("%s") % (guesse) # Write the WOFComputerPlayer class definition (part C) here class WOFComputerPlayer(WOFPlayer): SORTED_FREQUENCIES = "ZQXJKVBPYGFWMUCLDRHSNIOATE" VOWEL_COST = 250 VOWELS = "AEIOU" def __init__(self, difficulty): self.difficulty = difficulty def smartCoinFlip(self): random_num = random.randint(1, 10) if random_num > self.difficulty: return True else: return False def getPossibleLetters(self, guessed): return guessed.upper() def getMove(category, obscuredPhrase, guessed): return getPossibleLetters(guessed)
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
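A quick, hypothetical sanity check of the corrected base class above (the player name and amounts are made up for illustration, assuming the fixed definitions in the previous cell):

```python
player = WOFPlayer("Pat")
player.addMoney(500)
player.addMoney(250)
player.addPrize("Trip to Hawaii")
print(player)             # prints something like: Pat ($750)
player.goBankrupt()
print(player.prizeMoney)  # 0 -- going bankrupt clears the money ...
print(player.prizes)      # ['Trip to Hawaii'] -- ... but keeps the prizes
```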
Disaggregation
from __future__ import print_function, division import time from matplotlib import rcParams import matplotlib.pyplot as plt import pandas as pd import numpy as np from six import iteritems from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore from nilmtk.disaggregate import CombinatorialOptimisation, FHMM import nilmtk.utils %matplotlib inline rcParams['figure.figsize'] = (13, 6)
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Dividing data into train and test set
train = DataSet('/data/redd.h5') test = DataSet('/data/redd.h5')
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Let us use building 1 for demo purposes
building = 1
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Let's split data at April 30th
train.set_window(end="2011-04-30") test.set_window(start="2011-04-30") train_elec = train.buildings[1].elec test_elec = test.buildings[1].elec
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Visualizing the data
train_elec.plot() test_elec.mains().plot()
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
The REDD data set has appliance-level data sampled every 3 or 4 seconds and mains data sampled every 1 second. Let us verify this.
fridge_meter = train_elec['fridge'] fridge_df = next(fridge_meter.load()) fridge_df.head() mains = train_elec.mains() mains_df = next(mains.load()) mains_df.head()
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Since the two are sampled at different frequencies, we will downsample both to 1-minute resolution. We will also select the top-5 appliances in terms of energy consumption and use them for training our FHMM and CO models. Selecting top-5 appliances
top_5_train_elec = train_elec.submeters().select_top_k(k=5) top_5_train_elec
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Training and disaggregation A function to disaggregate the mains data to constituent appliances and return the predictions
def predict(clf, test_elec, sample_period, timezone): pred = {} gt= {} # "ac_type" varies according to the dataset used. # Make sure to use the correct ac_type before using the default parameters in this code. for i, chunk in enumerate(test_elec.mains().load(physical_quantity = 'power', ac_type = 'apparent', sample_period=sample_period)): chunk_drop_na = chunk.dropna() pred[i] = clf.disaggregate_chunk(chunk_drop_na) gt[i]={} for meter in test_elec.submeters().meters: # Only use the meters that we trained on (this saves time!) gt[i][meter] = next(meter.load(physical_quantity = 'power', ac_type = 'active', sample_period=sample_period)) gt[i] = pd.DataFrame({k:v.squeeze() for k,v in iteritems(gt[i]) if len(v)}, index=next(iter(gt[i].values())).index).dropna() # If everything can fit in memory gt_overall = pd.concat(gt) gt_overall.index = gt_overall.index.droplevel() pred_overall = pd.concat(pred) pred_overall.index = pred_overall.index.droplevel() # Having the same order of columns gt_overall = gt_overall[pred_overall.columns] #Intersection of index gt_index_utc = gt_overall.index.tz_convert("UTC") pred_index_utc = pred_overall.index.tz_convert("UTC") common_index_utc = gt_index_utc.intersection(pred_index_utc) common_index_local = common_index_utc.tz_convert(timezone) gt_overall = gt_overall.loc[common_index_local] pred_overall = pred_overall.loc[common_index_local] appliance_labels = [m for m in gt_overall.columns.values] gt_overall.columns = appliance_labels pred_overall.columns = appliance_labels return gt_overall, pred_overall
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Train using 2 benchmarking algorithms - Combinatorial Optimisation (CO) and Factorial Hidden Markov Model (FHMM)
classifiers = {'CO':CombinatorialOptimisation(), 'FHMM':FHMM()} predictions = {} sample_period = 120 for clf_name, clf in classifiers.items(): print("*"*20) print(clf_name) print("*" *20) start = time.time() # Note that we have given the sample period to downsample the data to 1 minute. # If instead of top_5 we wanted to train on all appliance, we would write # fhmm.train(train_elec, sample_period=60) clf.train(top_5_train_elec, sample_period=sample_period) end = time.time() print("Runtime =", end-start, "seconds.") gt, predictions[clf_name] = predict(clf, test_elec, sample_period, train.metadata['timezone'])
******************** CO ******************** Training model for submeter 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])' Training model for submeter 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])' Training model for submeter 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])' Training model for submeter 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])' Training model for submeter 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])' Done training! Runtime = 1.8285462856292725 seconds. Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD') Done loading data all meters for this chunk. Estimating power demand for 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])' Estimating power demand for 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])' Estimating power demand for 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])' Estimating power demand for 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])' Estimating power demand for 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])' Loading data for meter ElecMeterID(instance=4, building=1, dataset='REDD') Done loading data all meters for this chunk. Loading data for meter ElecMeterID(instance=20, building=1, dataset='REDD') Done loading data all meters for this chunk. ******************** FHMM ******************** Training model for submeter 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])' Training model for submeter 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])' Training model for submeter 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])' Training model for submeter 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])' Training model for submeter 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])' Runtime = 2.4450082778930664 seconds. Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD') Done loading data all meters for this chunk. Loading data for meter ElecMeterID(instance=4, building=1, dataset='REDD') Done loading data all meters for this chunk. Loading data for meter ElecMeterID(instance=20, building=1, dataset='REDD') Done loading data all meters for this chunk.
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Using prettier labels!
appliance_labels = [m.label() for m in gt.columns.values] gt.columns = appliance_labels predictions['CO'].columns = appliance_labels predictions['FHMM'].columns = appliance_labels
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Taking a look at the ground truth of top 5 appliance power consumption
gt.head() predictions['CO'].head() predictions['FHMM'].head()
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Plotting the predictions against the actual usage
predictions['CO']['Fridge'].head(300).plot(label="Pred") gt['Fridge'].head(300).plot(label="GT") plt.legend() predictions['FHMM']['Fridge'].head(300).plot(label="Pred") gt['Fridge'].head(300).plot(label="GT") plt.legend()
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Comparing NILM algorithms (CO vs FHMM) `nilmtk.utils.compute_rmse` is an extended version of the following, handling both missing values and labels better:

```python
def compute_rmse(gt, pred):
    from sklearn.metrics import mean_squared_error
    rms_error = {}
    for appliance in gt.columns:
        rms_error[appliance] = np.sqrt(mean_squared_error(gt[appliance], pred[appliance]))
    return pd.Series(rms_error)
```
? nilmtk.utils.compute_rmse rmse = {} for clf_name in classifiers.keys(): rmse[clf_name] = nilmtk.utils.compute_rmse(gt, predictions[clf_name]) rmse = pd.DataFrame(rmse) rmse
_____no_output_____
Apache-2.0
docs/manual/user_guide/disaggregation_and_metrics.ipynb
Ming-er/nilmtk
Write and Save Files in Python Estimated time needed: **25** minutes Objectives After completing this lab you will be able to:* Write to files using Python libraries Table of Contents Writing Files Appending Files Additional File modes Copy a File Writing Files We can open a file in write mode and use its write() method to save text to it. To write to a file, the mode argument must be set to **w**. Let’s write a file **Example2.txt** with the line: **“This is line A”**
# Write line to file exmp2 = '/resources/data/Example2.txt' with open(exmp2, 'w') as writefile: writefile.write("This is line A")
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
We can read the file to see if it worked:
# Read file with open(exmp2, 'r') as testwritefile: print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
We can write multiple lines:
# Write lines to file with open(exmp2, 'w') as writefile: writefile.write("This is line A\n") writefile.write("This is line B\n")
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
The method .write() works similarly to the method .readline(), except instead of reading a new line it writes a new line. The process is illustrated in the figure: the different colour coding of the grid represents a new line added to the file after each method call. You can check the file to see if your results are correct.
# Check whether write to file with open(exmp2, 'r') as testwritefile: print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
We write a list to a **.txt** file as follows:
# Sample list of text Lines = ["This is line A\n", "This is line B\n", "This is line C\n"] Lines # Write the strings in the list to text file with open('Example2.txt', 'w') as writefile: for line in Lines: print(line) writefile.write(line)
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
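As a brief aside (not part of the original lab), the loop above can also be replaced with the file object's writelines() method, which writes every string in the list in a single call; note that it does not add newlines, so each string must already end with \n:

```python
Lines = ["This is line A\n", "This is line B\n", "This is line C\n"]

# writelines() writes the strings back-to-back, exactly as given
with open('Example2.txt', 'w') as writefile:
    writefile.writelines(Lines)
```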
We can verify the file is written by reading it and printing out the values:
# Verify if writing to file is successfully executed with open('Example2.txt', 'r') as testwritefile: print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
However, note that setting the mode to **w** overwrites all the existing data in the file.
with open('Example2.txt', 'w') as writefile: writefile.write("Overwrite\n") with open('Example2.txt', 'r') as testwritefile: print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
Appending Files We can write to files without losing any of the existing data by setting the mode argument to append, **a**. You can append a new line as follows:
# Write a new line to text file with open('Example2.txt', 'a') as testwritefile: testwritefile.write("This is line C\n") testwritefile.write("This is line D\n") testwritefile.write("This is line E\n")
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
You can verify the file has changed by running the following cell:
# Verify if the new line is in the text file with open('Example2.txt', 'r') as testwritefile: print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
Additional modes It's fairly inefficient to open the file in **a** or **w** and then reopen it in **r** to read any lines. Luckily we can access the file in the following modes:* **r+** : Reading and writing. Cannot truncate the file.* **w+** : Writing and reading. Truncates the file.* **a+** : Appending and reading. Creates a new file, if none exists. You don't have to dwell on the specifics of each mode for this lab. Let's try out the **a+** mode:
with open('Example2.txt', 'a+') as testwritefile: testwritefile.write("This is line E\n") print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
There were no errors, but read() also did not output anything. This is because of our location in the file. Most of the file methods we've looked at work at a certain location in the file: .write() writes at a certain location, .read() reads at a certain location, and so on. You can think of this as moving your pointer around in a notepad to make changes at a specific location. Opening the file in **w** is akin to opening the .txt file, moving your cursor to the beginning of the text file, writing new text and deleting everything that follows. Whereas opening the file in **a** is similar to opening the .txt file, moving your cursor to the very end and then adding the new pieces of text. It is often very useful to know where the 'cursor' is in a file and be able to control it. The following methods allow us to do precisely this -* .tell() - returns the current position in bytes* .seek(offset,from) - changes the position by 'offset' bytes with respect to 'from'; 'from' can take the value 0, 1 or 2, corresponding to the beginning of the file, the current position, and the end of the file. Now let's revisit **a+**
with open('Example2.txt', 'a+') as testwritefile: print("Initial Location: {}".format(testwritefile.tell())) data = testwritefile.read() if (not data): #empty strings return false in python print('Read nothing') else: print(testwritefile.read()) testwritefile.seek(0,0) # move 0 bytes from beginning. print("\nNew Location : {}".format(testwritefile.tell())) data = testwritefile.read() if (not data): print('Read nothing') else: print(data) print("Location after read: {}".format(testwritefile.tell()) )
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
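To make the whence ('from') argument concrete, here is a small sketch that jumps to the end of the file with seek(0, 2) and then back to the beginning; in text mode only seeks relative to the start of the file (plus seek(0, 2)) are permitted:

```python
with open('Example2.txt', 'r') as testwritefile:
    testwritefile.seek(0, 2)   # whence=2: position relative to the end of the file
    print("Position at end of file:", testwritefile.tell())
    testwritefile.seek(0, 0)   # whence=0: back to the very beginning
    print("First line:", testwritefile.readline(), end="")
```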
Finally, a note on the difference between **w+** and **r+**. Both of these modes allow access to the read and write methods; however, opening a file in **w+** overwrites it and deletes all pre-existing data. To work with a file's existing data, use **r+** and **a+**. While using **r+**, it can be useful to call the .truncate() method after writing your data; this will cut the file down to your data and delete everything that follows. In the following code block, run the code as it is first, and then run it again with .truncate() uncommented.
with open('Example2.txt', 'r+') as testwritefile: data = testwritefile.readlines() testwritefile.seek(0,0) #write at beginning of file testwritefile.write("Line 1" + "\n") testwritefile.write("Line 2" + "\n") testwritefile.write("Line 3" + "\n") testwritefile.write("finished\n") #Uncomment the line below #testwritefile.truncate() testwritefile.seek(0,0) print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
Copy a File Let's copy the file **Example2.txt** to the file **Example3.txt**:
# Copy file to another with open('Example2.txt','r') as readfile: with open('Example3.txt','w') as writefile: for line in readfile: writefile.write(line)
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
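For completeness, the standard library can do the same copy in one call via shutil (an alternative not used in this lab):

```python
import shutil

# copyfile copies the file contents only (no metadata such as permissions)
shutil.copyfile('Example2.txt', 'Example3.txt')
```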
We can read the file to see if everything works:
# Verify if the copy is successfully executed with open('Example3.txt','r') as testwritefile: print(testwritefile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
After reading files, we can also write data into files and save them in different file formats like **.txt, .csv, .xls (for excel files) etc**. You will come across these in further examples. Now go to the directory to ensure the **.txt** file exists and contains the summary data that we wrote. Exercise Your local university's Raptors fan club maintains a register of its active members in a .txt document. Every month they update the file by removing the members who are not active. You have been tasked with automating this with your Python skills. Given the file currentMem, remove each member with a 'no' in their Active column. Keep track of each of the removed members and append them to the exMem file. Make sure the format of the original files is preserved. (*Hint: do this by reading/writing whole lines and ensuring the header remains.*) Run the code block below prior to starting the exercise. The skeleton code has been provided for you; edit only the cleanFiles function.
#Run this prior to starting the exercise from random import randint as rnd memReg = 'members.txt' exReg = 'inactive.txt' fee =('yes','no') def genFiles(current,old): with open(current,'w+') as writefile: writefile.write('Membership No Date Joined Active \n') data = "{:^13} {:<11} {:<6}\n" for rowno in range(20): date = str(rnd(2015,2020))+ '-' + str(rnd(1,12))+'-'+str(rnd(1,25)) writefile.write(data.format(rnd(10000,99999),date,fee[rnd(0,1)])) with open(old,'w+') as writefile: writefile.write('Membership No Date Joined Active \n') data = "{:^13} {:<11} {:<6}\n" for rowno in range(3): date = str(rnd(2015,2020))+ '-' + str(rnd(1,12))+'-'+str(rnd(1,25)) writefile.write(data.format(rnd(10000,99999),date,fee[1])) genFiles(memReg,exReg)
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
Start your solution below:
def cleanFiles(currentMem,exMem): ''' currentMem: File containing list of current members exMem: File containing list of old members Removes all rows from currentMem containing 'no' and appends them to exMem ''' pass # Code to help you see the files # Leave as is memReg = 'members.txt' exReg = 'inactive.txt' cleanFiles(memReg,exReg) headers = "Membership No Date Joined Active \n" with open(memReg,'r') as readFile: print("Active Members: \n\n") print(readFile.read()) with open(exReg,'r') as readFile: print("Inactive Members: \n\n") print(readFile.read())
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
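If you get stuck, below is one possible implementation of cleanFiles — a sketch, not the official solution — assuming the header/row layout produced by genFiles above. It keeps the header, writes the active rows back to currentMem, appends the inactive rows to exMem, and truncates the leftover tail:

```python
def cleanFiles(currentMem, exMem):
    '''
    currentMem: file containing the list of current members
    exMem:      file containing the list of inactive members
    Removes all rows from currentMem containing 'no' and appends them to exMem
    '''
    with open(currentMem, 'r+') as writeFile:
        with open(exMem, 'a+') as appendFile:
            writeFile.seek(0)
            members = writeFile.readlines()
            header = members[0]
            members.pop(0)                     # keep the header out of the data rows
            inactive = [m for m in members if 'no' in m]

            writeFile.seek(0)
            writeFile.write(header)
            for member in members:
                if member in inactive:
                    appendFile.write(member)   # move inactive members to exMem
                else:
                    writeFile.write(member)    # keep active members in currentMem
            writeFile.truncate()               # drop whatever is left of the old file
```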
Run the following to verify your code:
def testMsg(passed): if passed: return 'Test Passed' else : return 'Test Failed' testWrite = "testWrite.txt" testAppend = "testAppend.txt" passed = True genFiles(testWrite,testAppend) with open(testWrite,'r') as file: ogWrite = file.readlines() with open(testAppend,'r') as file: ogAppend = file.readlines() try: cleanFiles(testWrite,testAppend) except: print('Error') with open(testWrite,'r') as file: clWrite = file.readlines() with open(testAppend,'r') as file: clAppend = file.readlines() # checking if total no of rows is same, including headers if (len(ogWrite) + len(ogAppend) != len(clWrite) + len(clAppend)): print("The number of rows do not add up. Make sure your final files have the same header and format.") passed = False for line in clWrite: if 'no' in line: passed = False print("Inactive members in file") break else: if line not in ogWrite: print("Data in file does not match original file") passed = False print ("{}".format(testMsg(passed)))
_____no_output_____
MIT
Python for Data Science, AI & Development/4. Working with Data in Python/Writing Files with Open.ipynb
aqafridi/Data-Analytics
Predict house price in America> Analyse a house price dataset and predict sale prices.- toc: true - badges: true- comments: true- categories: [self-taught]- image: images/chart-preview.png Introduction
import pandas as pd pd.options.display.max_columns = 999 import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import KFold from sklearn.metrics import mean_squared_error from sklearn import linear_model from sklearn.model_selection import KFold df = pd.read_csv("AmesHousing.tsv", delimiter="\t") def transform_features(df): return df def select_features(df): return df[["Gr Liv Area", "SalePrice"]] def train_and_test(df): train = df[:1460] test = df[1460:] ## You can use `pd.DataFrame.select_dtypes()` to specify column types ## and return only those columns as a data frame. numeric_train = train.select_dtypes(include=['integer', 'float']) numeric_test = test.select_dtypes(include=['integer', 'float']) ## You can use `pd.Series.drop()` to drop a value. features = numeric_train.columns.drop("SalePrice") lr = linear_model.LinearRegression() lr.fit(train[features], train["SalePrice"]) predictions = lr.predict(test[features]) mse = mean_squared_error(test["SalePrice"], predictions) rmse = np.sqrt(mse) return rmse transform_df = transform_features(df) filtered_df = select_features(transform_df) rmse = train_and_test(filtered_df) rmse
_____no_output_____
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
Feature Engineering Handle missing values: All columns: Drop any with 5% or more missing values for now. Text columns: Drop any with 1 or more missing values for now. Numerical columns: For columns with missing values, fill in with the most common value in that column 1: All columns: Drop any with 5% or more missing values for now.
## Series object: column name -> number of missing values num_missing = df.isnull().sum() # Filter Series to columns containing >5% missing values drop_missing_cols = num_missing[(num_missing > len(df)/20)].sort_values() # Drop those columns from the data frame. Note the use of the .index accessor df = df.drop(drop_missing_cols.index, axis=1) ## Series object: column name -> number of missing values text_mv_counts = df.select_dtypes(include=['object']).isnull().sum().sort_values(ascending=False) ## Filter Series to columns containing *any* missing values drop_missing_cols_2 = text_mv_counts[text_mv_counts > 0] df = df.drop(drop_missing_cols_2.index, axis=1) ## Compute column-wise missing value counts num_missing = df.select_dtypes(include=['int', 'float']).isnull().sum() fixable_numeric_cols = num_missing[(num_missing < len(df)/20) & (num_missing > 0)].sort_values() fixable_numeric_cols ## Compute the most common value for each column in `fixable_nmeric_missing_cols`. replacement_values_dict = df[fixable_numeric_cols.index].mode().to_dict(orient='records')[0] replacement_values_dict ## Use `pd.DataFrame.fillna()` to replace missing values. df = df.fillna(replacement_values_dict) ## Verify that every column has 0 missing values df.isnull().sum().value_counts() years_sold = df['Yr Sold'] - df['Year Built'] years_sold[years_sold < 0] years_since_remod = df['Yr Sold'] - df['Year Remod/Add'] years_since_remod[years_since_remod < 0] ## Create new columns df['Years Before Sale'] = years_sold df['Years Since Remod'] = years_since_remod ## Drop rows with negative values for both of these new features df = df.drop([1702, 2180, 2181], axis=0) ## No longer need original year columns df = df.drop(["Year Built", "Year Remod/Add"], axis = 1)
_____no_output_____
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
Drop columns that: a. that aren't useful for ML b. leak data about the final sale
## Drop columns that aren't useful for ML df = df.drop(["PID", "Order"], axis=1) ## Drop columns that leak info about the final sale df = df.drop(["Mo Sold", "Sale Condition", "Sale Type", "Yr Sold"], axis=1)
_____no_output_____
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
Let's update transform_features()
def transform_features(df): num_missing = df.isnull().sum() drop_missing_cols = num_missing[(num_missing > len(df)/20)].sort_values() df = df.drop(drop_missing_cols.index, axis=1) text_mv_counts = df.select_dtypes(include=['object']).isnull().sum().sort_values(ascending=False) drop_missing_cols_2 = text_mv_counts[text_mv_counts > 0] df = df.drop(drop_missing_cols_2.index, axis=1) num_missing = df.select_dtypes(include=['int', 'float']).isnull().sum() fixable_numeric_cols = num_missing[(num_missing < len(df)/20) & (num_missing > 0)].sort_values() replacement_values_dict = df[fixable_numeric_cols.index].mode().to_dict(orient='records')[0] df = df.fillna(replacement_values_dict) years_sold = df['Yr Sold'] - df['Year Built'] years_since_remod = df['Yr Sold'] - df['Year Remod/Add'] df['Years Before Sale'] = years_sold df['Years Since Remod'] = years_since_remod df = df.drop([1702, 2180, 2181], axis=0) df = df.drop(["PID", "Order", "Mo Sold", "Sale Condition", "Sale Type", "Year Built", "Year Remod/Add"], axis=1) return df def select_features(df): return df[["Gr Liv Area", "SalePrice"]] def train_and_test(df): train = df[:1460] test = df[1460:] ## You can use `pd.DataFrame.select_dtypes()` to specify column types ## and return only those columns as a data frame. numeric_train = train.select_dtypes(include=['integer', 'float']) numeric_test = test.select_dtypes(include=['integer', 'float']) ## You can use `pd.Series.drop()` to drop a value. features = numeric_train.columns.drop("SalePrice") lr = linear_model.LinearRegression() lr.fit(train[features], train["SalePrice"]) predictions = lr.predict(test[features]) mse = mean_squared_error(test["SalePrice"], predictions) rmse = np.sqrt(mse) return rmse df = pd.read_csv("AmesHousing.tsv", delimiter="\t") transform_df = transform_features(df) filtered_df = select_features(transform_df) rmse = train_and_test(filtered_df) rmse
_____no_output_____
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
Feature Selection
numerical_df = transform_df.select_dtypes(include=['int', 'float']) numerical_df abs_corr_coeffs = numerical_df.corr()['SalePrice'].abs().sort_values() abs_corr_coeffs ## Let's only keep columns with a correlation coefficient of larger than 0.4 (arbitrary, worth experimenting later!) abs_corr_coeffs[abs_corr_coeffs > 0.4] ## Drop columns with less than 0.4 correlation with SalePrice transform_df = transform_df.drop(abs_corr_coeffs[abs_corr_coeffs < 0.4].index, axis=1)
_____no_output_____
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
Which categorical columns should we keep?
## Create a list of column names from documentation that are *meant* to be categorical nominal_features = ["PID", "MS SubClass", "MS Zoning", "Street", "Alley", "Land Contour", "Lot Config", "Neighborhood", "Condition 1", "Condition 2", "Bldg Type", "House Style", "Roof Style", "Roof Matl", "Exterior 1st", "Exterior 2nd", "Mas Vnr Type", "Foundation", "Heating", "Central Air", "Garage Type", "Misc Feature", "Sale Type", "Sale Condition"]
_____no_output_____
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
Which columns are currently numerical but need to be encoded as categorical instead (because the numbers don't have any semantic meaning)? If a categorical column has hundreds of unique values (or categories), should we keep it? When we dummy code this column, hundreds of columns will need to be added back to the data frame.
## Which categorical columns have we still carried with us? We'll test tehse transform_cat_cols = [] for col in nominal_features: if col in transform_df.columns: transform_cat_cols.append(col) ## How many unique values in each categorical column? uniqueness_counts = transform_df[transform_cat_cols].apply(lambda col: len(col.value_counts())).sort_values() ## Aribtrary cutoff of 10 unique values (worth experimenting) drop_nonuniq_cols = uniqueness_counts[uniqueness_counts > 10].index transform_df = transform_df.drop(drop_nonuniq_cols, axis=1) ## Select just the remaining text columns and convert to categorical text_cols = transform_df.select_dtypes(include=['object']) for col in text_cols: transform_df[col] = transform_df[col].astype('category') ## Create dummy columns and add back to the dataframe! transform_df = pd.concat([ transform_df, pd.get_dummies(transform_df.select_dtypes(include=['category'])) ], axis=1)
_____no_output_____
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
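To make the concern about high-cardinality columns concrete, here is a tiny standalone illustration of pd.get_dummies on made-up data (not from the Ames set): every distinct category becomes its own indicator column, so a column with hundreds of unique values would add hundreds of columns.

```python
import pandas as pd

toy = pd.DataFrame({'Neighborhood': ['NAmes', 'CollgCr', 'NAmes', 'OldTown']})

# One 0/1 indicator column per unique value of 'Neighborhood'
dummies = pd.get_dummies(toy['Neighborhood'])
print(dummies.shape[1], "columns:", list(dummies.columns))
```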
Update select_features()
def transform_features(df): num_missing = df.isnull().sum() drop_missing_cols = num_missing[(num_missing > len(df)/20)].sort_values() df = df.drop(drop_missing_cols.index, axis=1) text_mv_counts = df.select_dtypes(include=['object']).isnull().sum().sort_values(ascending=False) drop_missing_cols_2 = text_mv_counts[text_mv_counts > 0] df = df.drop(drop_missing_cols_2.index, axis=1) num_missing = df.select_dtypes(include=['int', 'float']).isnull().sum() fixable_numeric_cols = num_missing[(num_missing < len(df)/20) & (num_missing > 0)].sort_values() replacement_values_dict = df[fixable_numeric_cols.index].mode().to_dict(orient='records')[0] df = df.fillna(replacement_values_dict) years_sold = df['Yr Sold'] - df['Year Built'] years_since_remod = df['Yr Sold'] - df['Year Remod/Add'] df['Years Before Sale'] = years_sold df['Years Since Remod'] = years_since_remod df = df.drop([1702, 2180, 2181], axis=0) df = df.drop(["PID", "Order", "Mo Sold", "Sale Condition", "Sale Type", "Year Built", "Year Remod/Add"], axis=1) return df def select_features(df, coeff_threshold=0.4, uniq_threshold=10): numerical_df = df.select_dtypes(include=['int', 'float']) abs_corr_coeffs = numerical_df.corr()['SalePrice'].abs().sort_values() df = df.drop(abs_corr_coeffs[abs_corr_coeffs < coeff_threshold].index, axis=1) nominal_features = ["PID", "MS SubClass", "MS Zoning", "Street", "Alley", "Land Contour", "Lot Config", "Neighborhood", "Condition 1", "Condition 2", "Bldg Type", "House Style", "Roof Style", "Roof Matl", "Exterior 1st", "Exterior 2nd", "Mas Vnr Type", "Foundation", "Heating", "Central Air", "Garage Type", "Misc Feature", "Sale Type", "Sale Condition"] transform_cat_cols = [] for col in nominal_features: if col in df.columns: transform_cat_cols.append(col) uniqueness_counts = df[transform_cat_cols].apply(lambda col: len(col.value_counts())).sort_values() drop_nonuniq_cols = uniqueness_counts[uniqueness_counts > 10].index df = df.drop(drop_nonuniq_cols, axis=1) text_cols = df.select_dtypes(include=['object']) for col in text_cols: df[col] = df[col].astype('category') df = pd.concat([df, pd.get_dummies(df.select_dtypes(include=['category']))], axis=1) return df def train_and_test(df, k=0): numeric_df = df.select_dtypes(include=['integer', 'float']) features = numeric_df.columns.drop("SalePrice") lr = linear_model.LinearRegression() if k == 0: train = df[:1460] test = df[1460:] lr.fit(train[features], train["SalePrice"]) predictions = lr.predict(test[features]) mse = mean_squared_error(test["SalePrice"], predictions) rmse = np.sqrt(mse) return rmse if k == 1: # Randomize *all* rows (frac=1) from `df` and return shuffled_df = df.sample(frac=1, ) train = df[:1460] test = df[1460:] lr.fit(train[features], train["SalePrice"]) predictions_one = lr.predict(test[features]) mse_one = mean_squared_error(test["SalePrice"], predictions_one) rmse_one = np.sqrt(mse_one) lr.fit(test[features], test["SalePrice"]) predictions_two = lr.predict(train[features]) mse_two = mean_squared_error(train["SalePrice"], predictions_two) rmse_two = np.sqrt(mse_two) avg_rmse = np.mean([rmse_one, rmse_two]) print(rmse_one) print(rmse_two) return avg_rmse else: kf = KFold(n_splits=k, shuffle=True) rmse_values = [] for train_index, test_index, in kf.split(df): train = df.iloc[train_index] test = df.iloc[test_index] lr.fit(train[features], train["SalePrice"]) predictions = lr.predict(test[features]) mse = mean_squared_error(test["SalePrice"], predictions) rmse = np.sqrt(mse) rmse_values.append(rmse) print(rmse_values) avg_rmse = 
np.mean(rmse_values) return avg_rmse df = pd.read_csv("AmesHousing.tsv", delimiter="\t") transform_df = transform_features(df) filtered_df = select_features(transform_df) rmse = train_and_test(filtered_df, k=4) rmse
[25761.875549560471, 36527.812968130842, 24956.485193881424, 28486.738135675929]
Apache-2.0
_notebooks/2017-09-15-predict-house-price.ipynb
phucnsp/blog
FastPitch: Voice Modification with Pre-defined Pitch Transformations The [FastPitch](https://arxiv.org/abs/2006.06873) model is based on the [FastSpeech](https://arxiv.org/abs/1905.09263) model. Similarly to [FastSpeech2](https://arxiv.org/abs/2006.04558), which was developed concurrently, it learns to predict the pitch contour and conditions the generation on that contour. The simple mechanism of predicting the pitch at grapheme level (rather than frame level, as FastSpeech2 does) makes it easy to alter the pitch during synthesis. FastPitch can thus change the perceived emotional state of the speaker, or slightly emphasise certain lexical units. Requirements Run the notebook inside the container. By default the container forwards port `8888`.

```bash
scripts/docker/interactive.sh
# inside the container
cd notebooks
jupyter notebook --ip='*' --port=8888
```

Please refer to the Requirements section in `README.md` for more details and for running outside the container.
import os assert os.getcwd().split('/')[-1] == 'notebooks'
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
Generate audio samples Training a FastPitch model from scratch takes 3 to 27 hours depending on the type and number of GPUs; performance numbers can be found in the section "Training performance results" in `README.md`. Therefore, to save the time of running this notebook, we recommend downloading the pretrained FastPitch checkpoints from NGC for inference. You can find the FP32 checkpoint at [NGC](https://ngc.nvidia.com/catalog/models/nvidia:fastpitch_pyt_fp32_ckpt_v1/files), and the AMP (Automatic Mixed Precision) checkpoint at [NGC](https://ngc.nvidia.com/catalog/models/nvidia:fastpitch_pyt_amp_ckpt_v1/files). To synthesize audio, you will need a WaveGlow model, which generates waveforms based on mel-spectrograms generated by FastPitch. You can download a pre-trained WaveGlow AMP model at [NGC](https://ngc.nvidia.com/catalog/models/nvidia:waveglow256pyt_fp16).
! mkdir -p output ! MODEL_DIR='../pretrained_models' ../scripts/download_fastpitch.sh ! MODEL_DIR='../pretrained_models' ../scripts/download_waveglow.sh
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
You can perform inference using the respective checkpoints that are passed as the `--fastpitch` and `--waveglow` arguments. Next, you will use the FastPitch model to generate audio samples for input text, including the basic version and variations in pace, fade-out, pitch transforms, etc.
import IPython # store paths in aux variables fastp = '../pretrained_models/fastpitch/nvidia_fastpitch_200518.pt' waveg = '../pretrained_models/waveglow/waveglow_1076430_14000_amp.pt' flags = f'--cuda --fastpitch {fastp} --waveglow {waveg} --wn-channels 256'
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
1. Basic speech synthesis You need to create an input file with some text, or just input the text in the below cell:
%%writefile text.txt The forms of printed letters should be beautiful, and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves.
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
Run the script below to generate audio from the input text file:
# basic synthesis
!python ../inference.py {flags} -i text.txt -o output/original > /dev/null
IPython.display.Audio("output/original/audio_0.wav")
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
2. Add variations to the generated speech FastPitch allows us to exert additional control over the synthesized utterances, the key parameters are the pace, fade out, and pitch transforms in particular. 2.1 Pace FastPitch allows you to linearly adjust the pace of synthesized speech, similar to [FastSpeech](https://arxiv.org/abs/1905.09263) model. For instance, pass --pace 0.5 for a twofold decrease in speed, --pace 1.0 = unchanged.
# Slow the speech down twofold with --pace 0.5
# (1.0 = unchanged)
!python ../inference.py {flags} -i text.txt -o output/pace --pace 0.5 > /dev/null
IPython.display.Audio("output/pace/audio_0.wav")
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
2.2 Raise or lower the pitch For every input character, the model predicts a pitch cue - an average pitch over a character in Hz. Pitch can be adjusted by transforming those pitch cues. A few simple examples are provided below.
# Raise/lower pitch by --pitch-transform-shift <Hz> # Synthesize with a -50 Hz shift !python ../inference.py {flags} -i text.txt -o output/riselowpitch --pitch-transform-shift -50 > /dev/null IPython.display.Audio("output/riselowpitch/audio_0.wav")
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
2.3 Flatten the pitch
# Flatten the pitch to a constant value with --pitch-transform-flatten !python ../inference.py {flags} -i text.txt -o output/flattenpitch --pitch-transform-flatten > /dev/null IPython.display.Audio("output/flattenpitch/audio_0.wav")
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
2.4 Invert the pitch
# Invert pitch wrt. to the mean pitch with --pitch-transform-invert !python ../inference.py {flags} -i text.txt -o output/invertpitch --pitch-transform-invert > /dev/null IPython.display.Audio("output/invertpitch/audio_0.wav")
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
2.5 Amplify the pitch
# Amplify pitch wrt. to the mean pitch with --pitch-transform-amplify 2.0 # values in the (1.0, 3.0) range work the best !python ../inference.py {flags} -i text.txt -o output/amplifypitch --pitch-transform-amplify 2.0 > /dev/null IPython.display.Audio("output/amplifypitch/audio_0.wav")
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
2.6 Combine the flags The flags can be combined. You can find all the available options by calling python inference.py --help.
!python ../inference.py --help
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
Below example shows how to generate an audio with a combination of the flags --pace --pitch-transform-flatten --pitch-transform-shift --pitch-transform-invert --pitch-transform-amplify
# Double the speed and combine multiple transformations
!python ../inference.py {flags} -i text.txt -o output/combine \
    --pace 2.0 --pitch-transform-flatten --pitch-transform-shift 50 \
    --pitch-transform-invert --pitch-transform-amplify 1.5 > /dev/null
IPython.display.Audio("output/combine/audio_0.wav")
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
3. Inference performance benchmark
# Benchmark inference using AMP !python ../inference.py {flags} \ --include-warmup --batch-size 8 --repeats 100 --torchscript --amp \ -i ../phrases/benchmark_8_128.tsv -o output/benchmark
_____no_output_____
MIT
SpeechSynthesis/FastPitch/notebooks/FastPitch_voice_modification.ipynb
eba472/mongolian_tts
k-NN, Function Expectation, Density Estimation
from experiment_framework.helpers import build_convergence_curve_pipeline from empirical_privacy.one_bit_sum import GenSampleOneBitSum # from empirical_privacy import one_bit_sum_joblib as one_bit_sum # from empirical_privacy import lsdd # reload(one_bit_sum) def B_pmf(k, n, p): return binom(n, p).pmf(k) def B0_pmf(k, n, p): return B_pmf(k, n-1, p) def B1_pmf(k, n, p): return B_pmf(k-1, n-1, p) def sd(N, P): return 0.5*np.sum(abs(B0_pmf(i, N, P) - B1_pmf(i, N, P)) for i in range(N+1)) def optimal_correctness(n, p): return 0.5 + 0.5*sd(n, p) n_max = 2**10 ntri=30 n=7 p=0.5 sd(n,p) B0 = [B0_pmf(i, n, p) for i in range(n+1)] B1 = [B1_pmf(i, n, p) for i in range(n+1)] dif = np.abs(np.array(B0)-np.array(B1)) sdv = 0.5*np.sum(dif) pc = 0.5+0.5*sdv print(f'n={n} coin flips p={p} probability of heads'\ '\nB0 has first outcome=0, B1 has first outcome=1') print(f'Statistic is the total number of heads sum') print(f'N_heads=\t{" ".join(np.arange(n+1).astype(str))}') print(f'PMF of B0=\t{B0}\nPMF of B1=\t{B1}') print(f'|B0-B1|=\t{dif}') print(f'sd = 0.5 * sum(|B0-B1|) = {sdv}') print(f'P(Correct) = 0.5 + 0.5*sd = {pc}') ccc_kwargs = { 'confidence_interval_width':10, 'n_max':2**13, 'dataset_settings' : { 'n_trials':n, 'prob_success':p, 'gen_distr_type':'binom' }, 'validation_set_size' : 2000 } CCCs = [] Fits = ['knn', 'density', 'expectation'] for fit in Fits: CCCs.append(build_convergence_curve_pipeline( GenSampleOneBitSum, gensample_kwargs = {'generate_in_batch':True}, fitter=fit, fitter_kwargs={} if fit=='knn' else {'statistic_column':0} )(**ccc_kwargs) ) luigi.build(CCCs, local_scheduler=True, workers=4, log_level='ERROR') colors = cm.Accent(np.linspace(0,1,len(CCCs)+1)) ax = plt.figure(figsize=(10,5)) ax = plt.gca() leg_handles = [] for (i, CC) in enumerate(CCCs): with CC.output().open() as f: res = pickle.load(f) handle=sns.tsplot(res['sd_matrix'], ci='sd', color=colors[i], ax=ax, legend=False, time=res['training_set_sizes']) j=0 for i in range(len(CCCs), 2*len(CCCs)): handle.get_children()[i].set_label('{}'.format(Fits[j])) j+=1 plt.semilogx() plt.axhline(optimal_correctness(n, p), linestyle='--', color='r', label='_nolegend_') plt.axhline(0.5, linestyle='-', color='b', label='_nolegend_') plt.title('n={n} p={p} $\delta$={d:.3f}'.format(n=n, p=p, d=sd(n,p)), fontsize=20) plt.xlabel('num samples') plt.ylabel('Correctness Rate') plt.legend(loc=(0,1.1))
/opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning) /opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning) /opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning)
MIT
Notebooks/development/1-bit sum.ipynb
maksimt/empirical_privacy
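For reference, the quantity computed by sd(n, p) in the cell above is the total variation (statistical) distance between the two conditional distributions of the statistic, and optimal_correctness is the corresponding Bayes-optimal accuracy. In the notation of the code:

$$\mathrm{sd}(n,p) = \frac{1}{2}\sum_{k=0}^{n}\bigl|\,P(B_0=k)-P(B_1=k)\,\bigr|, \qquad P(\text{correct}) = \frac{1}{2} + \frac{1}{2}\,\mathrm{sd}(n,p),$$

where $B_0 \sim \mathrm{Binom}(n-1,p)$ (first bit fixed to 0) and $B_1 \sim 1+\mathrm{Binom}(n-1,p)$ (first bit fixed to 1).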
Repeat the above using joblib to make sure the luigi implementation is correct
from math import ceil, log one_bit_sum.n_jobs=1 N = int(ceil(log(n_max) / log(2))) N_samples = np.logspace(4,N,num=N-3, base=2).astype(np.int) ax = plt.figure(figsize=(10,5)) ax = plt.gca() AlgArg = namedtuple('AlgArg', field_names=['f_handle', 'f_kwargs']) algs = [ AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'sqrt'}), AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'sqrt_random_tiebreak'}), AlgArg(one_bit_sum.get_density_est_correctness_rate_cached, {'bandwidth_method':None}), AlgArg(one_bit_sum.get_expectation_correctness_rate_cached, {'bandwidth_method':None}), AlgArg(one_bit_sum.get_lsdd_correctness_rate_cached, {}) #AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'cv'}) ] colors = cm.Accent(np.linspace(0,1,len(algs)+1)) leg_handles = [] for (i,alg) in enumerate(algs): res = one_bit_sum.get_res(n,p,ntri, alg.f_handle, alg.f_kwargs, n_max=n_max) handle=sns.tsplot(res, ci='sd', color=colors[i], ax=ax, legend=False, time=N_samples) # f, coef = get_fit(res, N_samples) # print alg, coef # lim = coef[0] # plt.plot(N_samples, f(N_samples), linewidth=3) # plt.text(N_samples[-1], lim, '{:.3f}'.format(lim),fontsize=16) j=0 for i in range(len(algs), 2*len(algs)): #print i, i/2-1 if i%2==0 else (i)/2 handle.get_children()[i].set_label('{} {}'.format(algs[j].f_handle.func.__name__, algs[j].f_kwargs)) j+=1 plt.semilogx() plt.axhline(optimal_correctness(n, p), linestyle='--', color='r', label='_nolegend_') plt.axhline(0.5, linestyle='-', color='b', label='_nolegend_') plt.title('n={n} p={p} $\delta$={d:.3f}'.format(n=n, p=p, d=sd(n,p)), fontsize=20) plt.xlabel('num samples') plt.ylabel('Correctness Rate') plt.legend(loc=(0,1.1)) #print ax.get_legend_handles_labels()
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s remaining: 0.0s [Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 1.3s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.0s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.0s finished /opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning) [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s remaining: 0.0s [Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 1.3s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.1s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.1s finished /opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning) [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.9s remaining: 0.0s [Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 3.5s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 5.1s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 5.1s finished /opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning) [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s remaining: 0.0s [Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 1.4s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.0s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.0s finished /opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning) [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 7.4s remaining: 0.0s [Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 14.3s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 22.8s remaining: 0.0s [Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 22.8s finished /opt/conda/lib/python3.6/site-packages/seaborn/timeseries.py:183: UserWarning: The `tsplot` function is deprecated and will be removed in a future release. Please update your code to use the new `lineplot` function. warnings.warn(msg, UserWarning)
MIT
Notebooks/development/1-bit sum.ipynb
maksimt/empirical_privacy
Timing GenSamples Without halving: 7.5 sec. With halving: 8.1 sec (i.e. not much overhead)
from luigi_utils.sampling_framework import GenSamples import time class GS(GenSamples(GenSampleOneBitSum, generate_in_batch=True)): pass GSi = GS(dataset_settings = ccc_kwargs['dataset_settings'], random_seed='0', generate_positive_samples=True, num_samples=2**15) start = time.time() luigi.build([GSi], local_scheduler=True, workers=8, log_level='ERROR') cputime = time.time() - start print(cputime) res['training_set_sizes'].shape np.concatenate((np.array([]), np.array([1,2,3])))
_____no_output_____
MIT
Notebooks/development/1-bit sum.ipynb
maksimt/empirical_privacy
More experiments
def get_fit(res, N_samples): ntri, nsamp = res.shape sqrt2 = np.sqrt(2) Xlsq = np.hstack((np.ones((nsamp,1)), sqrt2/(N_samples.astype(np.float)**0.25)[:, np.newaxis])) y = 1.0 - res.reshape((nsamp*ntri, 1)) Xlsq = reduce(lambda x,y: np.vstack((x,y)), [Xlsq]*ntri) coef = np.linalg.lstsq(Xlsq, y)[0].ravel() f = lambda n: 1.0 - coef[0] - coef[1]*sqrt2/n.astype(np.float)**0.25, coef return f trial=0 num_samples=2**11 bandwidth_method=None from scipy.stats import gaussian_kde X0, X1, y0, y1 = one_bit_sum.gen_data(n, p, num_samples, trial) X0 = X0.ravel() X1 = X1.ravel() bw = None if hasattr(bandwidth_method, '__call__'): bw = float(bandwidth_method(num_samples)) / num_samples # eg log if type(bandwidth_method) == float: bw = num_samples**(1-bandwidth_method) f0 = gaussian_kde(X0, bw_method = bw) f1 = gaussian_kde(X1, bw_method = bw) #Omega = np.unique(np.concatenate((X0, X1))) _min = 0 _max = n x = np.linspace(_min, _max, num=10*num_samples) print('difference of densities=',0.5 + 0.5 * 0.5 * np.mean(np.abs(f0(x)-f1(x)))) denom = f0(x)+f1(x) numer = np.abs(f0(x)-f1(x)) print('expectation = ',0.5 + 0.5*np.mean(numer/denom))
_____no_output_____
MIT
Notebooks/development/1-bit sum.ipynb
maksimt/empirical_privacy
Uniformly distributed random variables

$$g_0 = U[0,0.5]+\sum_{i=1}^{n-1} U[0,1]$$
$$g_1 = U[0.5,1.0]+\sum_{i=1}^{n-1} U[0,1]$$

Let $\mu_n = \frac{n-1}{2}$ and $\sigma_n = \sqrt{\frac{n-0.75}{12}}$. By the CLT, $g_0\sim N(\mu_n+0.25, \sigma_n)$ and $g_1\sim N(\mu_n+0.75, \sigma_n)$.
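Under this normal approximation (and ignoring that the sums are truncated to $[0, n]$), the total variation distance between two equal-variance Gaussians whose means differ by $\tfrac{1}{2}$ has a closed form, which is what the CDF-based helpers in the next cell approximate:

$$\mathrm{TV}(g_0, g_1) = 2\,\Phi\!\left(\frac{1}{4\sigma_n}\right) - 1, \qquad \text{optimal correctness} = \frac{1}{2} + \frac{1}{2}\,\mathrm{TV}(g_0, g_1) = \Phi\!\left(\frac{1}{4\sigma_n}\right),$$

where $\Phi$ is the standard normal CDF.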
from math import sqrt

n = 3
x = np.linspace(n/2.0 - sqrt(n), n/2.0 + sqrt(n))
sigma = sqrt((n - 0.75) / 12.0)
sqrt2 = sqrt(2)
mu = (n - 1.0) / 2

def g0_pdf(x):
    return norm.pdf(x, loc=mu + 0.25, scale=sigma)

def g1_pdf(x):
    return norm.pdf(x, loc=mu + 0.75, scale=sigma)

def d_pdf(x):
    return norm.pdf(x, loc=-0.5, scale=sigma * sqrt2)

def g_int(n):
    sigma = sqrt((n - 0.75) / 12.0)
    mu = (n - 1.0) / 2
    N0 = norm(loc=mu + 0.25, scale=sigma)
    N1 = norm(loc=mu + 0.75, scale=sigma)
    I0 = N0.cdf(n * 0.5) - N0.cdf(0)
    I1 = N1.cdf(n * 0.5) - N1.cdf(0)
    return 2 * (I0 - I1)

def g_stat_dist(n):
    return 0.5 * g_int(n)

def g_optimal_correctness(n):
    return 0.5 + 0.5 * g_stat_dist(n)

plt.plot(x, g0_pdf(x), label='$g_0$')
plt.plot(x, g1_pdf(x), label='$g_1$')
# plt.plot(x, d_pdf(x), label='$d$')
plt.axvline(x=n/2.0, color='r')
assert g0_pdf(n/2.0) == g1_pdf(n/2.0)
plt.legend()
print(g_optimal_correctness(n))

from math import ceil, log

if n_max >= 2**13:
    one_bit_sum.n_jobs = 1
else:
    one_bit_sum.n_jobs = -1
N = int(ceil(log(n_max) / log(2)))
N_samples = np.logspace(4, N, num=N-3, base=2).astype(np.int)

ax = plt.figure(figsize=(10, 5))
ax = plt.gca()
AlgArg = namedtuple('AlgArg', field_names=['f_handle', 'f_kwargs'])
algs = [
    AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method': 'sqrt'}),
    AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method': 'sqrt_random_tiebreak'}),
    AlgArg(one_bit_sum.get_density_est_correctness_rate_cached, {'bandwidth_method': None}),
    AlgArg(one_bit_sum.get_expectation_correctness_rate_cached, {'bandwidth_method': None}),
    AlgArg(one_bit_sum.get_lsdd_correctness_rate_cached, {})
    # AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method': 'cv'})
]
for A in algs:
    A.f_kwargs['type'] = 'norm'
colors = cm.Accent(np.linspace(0, 1, len(algs) + 1))
leg_handles = []
for (i, alg) in enumerate(algs):
    res = one_bit_sum.get_res(n, p, ntri, alg.f_handle, alg.f_kwargs, n_max=n_max)
    handle = sns.tsplot(res, ci='sd', color=colors[i], ax=ax, legend=False, time=N_samples)
    # f, coef = get_fit(res, N_samples)
    # print alg, coef
    # lim = coef[0]
    # plt.plot(N_samples, f(N_samples), linewidth=3)
    # plt.text(N_samples[-1], lim, '{:.3f}'.format(lim), fontsize=16)
j = 0
for i in range(len(algs), 2 * len(algs)):
    # print i, i/2-1 if i%2==0 else (i)/2
    handle.get_children()[i].set_label(algs[j].f_handle.func.__name__)
    j += 1
    # print handle.get_children()[i].get_label()
plt.semilogx()
plt.axhline(g_optimal_correctness(n), linestyle='--', color='r', label='_nolegend_')
plt.axhline(0.5, linestyle='-', color='b', label='_nolegend_')
plt.title('n={n} $\delta$={d:.3f}'.format(n=n, d=g_stat_dist(n)), fontsize=20)
plt.xlabel('num samples')
plt.ylabel('Correctness Rate')
plt.legend(loc=(1.1, 0))
# print ax.get_legend_handles_labels()

true_value = g_optimal_correctness(n)
print(true_value)

trial = 0
num_samples = 2**15
bandwidth_method = None

from scipy.stats import gaussian_kde

X0, X1, y0, y1 = one_bit_sum.gen_data(n, p, num_samples, trial, type='norm')
X0 = X0.ravel()
X1 = X1.ravel()
bw = None
if hasattr(bandwidth_method, '__call__'):
    bw = float(bandwidth_method(num_samples)) / num_samples  # e.g. log
if type(bandwidth_method) == float:
    bw = num_samples**(1 - bandwidth_method)
f0 = gaussian_kde(X0, bw_method=bw)
f1 = gaussian_kde(X1, bw_method=bw)
# Omega = np.unique(np.concatenate((X0, X1)))
_min = 0
_max = n
x = np.linspace(_min, _max, num=num_samples)
print('difference of densities=',
      0.5 + 0.5 * 0.5 * integrate.quad(lambda x: np.abs(f0(x) - f1(x)), -np.inf, np.inf)[0])
X = np.concatenate((X0, X1))
f0x = f0(X)
f1x = f1(X)
denom = (f0x + f1x + np.spacing(1))
numer = np.abs(f0x - f1x)
print('expectation = ', 0.5 + 0.5 * np.mean(numer / denom))
print('exact=', g_optimal_correctness(n))
plt.plot(x, f0(x), label='$\hat g_0$', linestyle='--')
plt.plot(x, f1(x), label='$\hat g_1$', linestyle='--')
plt.plot(x, g0_pdf(x), label='$g_0$')
plt.plot(x, g1_pdf(x), label='$g_1$')
plt.legend(loc=(1.05, 0))
_____no_output_____
MIT
Notebooks/development/1-bit sum.ipynb
maksimt/empirical_privacy
Comparing different numerical integration techniques
to_int = [f0, f1]
print('Quad')
# for (i, f) in enumerate(to_int):
#     intr = integrate.quad(f, -np.inf, np.inf)
#     print('func={0} err={1:.3e}'.format(i, abs(1 - intr[0])))
g_int(n) - integrate.quad(lambda x: np.abs(f0(x) - f1(x)), -np.inf, np.inf)[0]

to_int = [f0, f1]
print('Quad')
g_int(n) - integrate.quad(lambda x: np.abs(f0(x) - f1(x)), -np.inf, np.inf)[0]

g_int(n)

print('Simps')
def delta(x):
    return np.abs(f0(x) - f1(x))
X = np.unique(np.concatenate((X0, X1)))
y = delta(X)
g_int(n) - integrate.simps(y, X)

# import the lsdd module so it can be referenced as `lsdd` below
from empirical_privacy import lsdd

rtv = lsdd.lsdd(X0[np.newaxis, :], X1[np.newaxis, :])
plt.hist(rtv[1])
np.mean(rtv[1])
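As a quick, self-contained sanity check (not part of the original notebook), both integration routines should recover ~1 when applied to a standard normal density over a wide interval; this is one way to gauge their agreement before trusting them on the KDE difference:

# Toy cross-check of quad vs. simps on a known density (illustrative only):
import numpy as np
from scipy import integrate
from scipy.stats import norm

quad_val, quad_err = integrate.quad(norm.pdf, -10, 10)
xs = np.linspace(-10, 10, 10001)
simps_val = integrate.simps(norm.pdf(xs), xs)
print(quad_val, simps_val)  # both should be ~1.0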
_____no_output_____
MIT
Notebooks/development/1-bit sum.ipynb
maksimt/empirical_privacy
Sympy-based analysis
import sympy as sy n,k = sy.symbols('n k', integer=True) #k = sy.Integer(k) p = sy.symbols('p', real=True) q=1-p def binom_pmf(k, n, p): return sy.binomial(n,k)*(p**k)*(q**(n-k)) def binom_cdf(x, n, p): return sy.Sum([binom_pmf(j, n, p) for j in sy.Range(x+1)]) B0 = binom_pmf(k, n-1, p) B1 = binom_pmf(k-1, n-1, p) def stat_dist(N,P): return 0.5*sum([sy.Abs(B0.subs([(n,N),(p,P), (k,i)])-B1.subs([(n,N),(p,P), (k,i)])) for i in range(N+1)]) def sd(N, P): return 0.5*np.sum(abs(B0(i, N, P) - B1(i, N, P)) for i in range(N+1)) stat_dist(50,0.5) sd(5000,0.5) N=2 terms =[(B0.subs([(n,N), (k,i)]).simplify(),B1.subs([(n,N), (k,i)]).simplify()) for i in range(N+1)] print terms 0.5*sum(map(lambda t: sy.Abs(t[0]-t[1]), terms)).subs([(p,0.5)]) stat_dist(4,0.5)
_____no_output_____
MIT
Notebooks/development/1-bit sum.ipynb
maksimt/empirical_privacy
Regression Week 2: Multiple Regression (gradient descent)

In the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.

In this notebook we will cover estimating multiple regression weights via gradient descent. You will:
* Add a constant column of 1's to a graphlab SFrame to account for the intercept
* Convert an SFrame into a Numpy array
* Write a predict_output() function using Numpy
* Write a numpy function to compute the derivative of the regression weights with respect to a single feature
* Write a gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance
* Use the gradient descent function to estimate regression weights for multiple features

Fire up graphlab create

Make sure you have the latest version of graphlab (>= 1.7).
import graphlab
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Load in house sales data

Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
sales = graphlab.SFrame('kc_house_data.gl/')
[INFO] 1449884188 : INFO: (initialize_globals_from_environment:282): Setting configuration variable GRAPHLAB_FILEIO_ALTERNATIVE_SSL_CERT_FILE to /home/nitin/anaconda/lib/python2.7/site-packages/certifi/cacert.pem 1449884188 : INFO: (initialize_globals_from_environment:282): Setting configuration variable GRAPHLAB_FILEIO_ALTERNATIVE_SSL_CERT_DIR to This non-commercial license of GraphLab Create is assigned to [email protected] and will expire on October 14, 2016. For commercial licensing options, visit https://dato.com/buy/. [INFO] Start server at: ipc:///tmp/graphlab_server-4201 - Server binary: /home/nitin/anaconda/lib/python2.7/site-packages/graphlab/unity_server - Server log: /tmp/graphlab_server_1449884188.log [INFO] GraphLab Server Version: 1.7.1
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.

Convert to Numpy Array

Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions), in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").

Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for *all* the observations can be computed by right multiplying the "feature matrix" by the "weight vector".

First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built-in .to_dataframe(), which converts the SFrame into a Pandas (another python library) dataframe. We can then use Pandas' .as_matrix() to convert the dataframe into a numpy matrix.
import numpy as np # note this allows us to refer to numpy as np instead
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature (e.g. 'price') and will return two things:
* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')
* A numpy array containing the values of the output

With this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates).

**Please note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!**
def get_numpy_data(data_sframe, features, output):
    data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
    # add the column 'constant' to the front of the features list so that we can extract it along with the others:
    features = ['constant'] + features # this is how you combine two lists
    # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):

    # the following line will convert the features_SFrame into a numpy matrix:
    feature_matrix = features_sframe.to_numpy()
    # assign the column of data_sframe associated with the output to the SArray output_sarray

    # the following will convert the SArray into a numpy array by first converting it to a list
    output_array = output_sarray.to_numpy()
    return(feature_matrix, output_array)
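For reference, one possible way to fill in the two blanks above (a sketch — the cell is intentionally left incomplete as an exercise; the column selection below uses standard SFrame indexing):

# Possible completion (sketch):
features_sframe = data_sframe[features]   # SFrame containing just the selected feature columns
output_sarray = data_sframe[output]       # SArray containing the target column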
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Predicting output given regression weights

Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0\*1.0 + 1.0\*1180.0 = 1181.0; this is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations are just the RIGHT (as in weights on the right) dot product between the features *matrix* and the weights *vector*. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:
def predict_output(feature_matrix, weights):
    # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
    # create the predictions vector by using np.dot()

    return(predictions)
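One possible completion (a sketch; the blank is intentional in the assignment):

# Possible completion (sketch):
def predict_output(feature_matrix, weights):
    # predictions[i] is the dot product of row i of the feature matrix with the weight vector
    predictions = np.dot(feature_matrix, weights)
    return(predictions)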
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
If you want to test your code run the following cell:
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Computing the Derivative

We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.

Since the derivative of a sum is the sum of the derivatives, we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:

(w[0]\*[CONSTANT] + w[1]\*[feature_1] + ... + w[i]\*[feature_i] + ... + w[k]\*[feature_k] - output)^2

Where we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:

2\*(w[0]\*[CONSTANT] + w[1]\*[feature_1] + ... + w[i]\*[feature_i] + ... + w[k]\*[feature_k] - output)\*[feature_i]

The term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as:

2\*error\*[feature_i]

That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant this is just twice the sum of the errors!

Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors.

With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).
def feature_derivative(errors, feature):
    # Assume that errors and feature are both numpy arrays of the same length (number of data points)
    # compute twice the dot product of these vectors as 'derivative' and return the value

    return(derivative)
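One possible completion, following the derivation above (a sketch; the blank is intentional in the assignment):

# Possible completion (sketch):
def feature_derivative(errors, feature):
    # twice the dot product between the error vector and the feature values
    derivative = 2 * np.dot(errors, feature)
    return(derivative)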
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
To test your feature derivative run the following:
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Gradient Descent

Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease*, and we're trying to *minimize* a cost function.

The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector be smaller than a fixed 'tolerance'.

With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criterion.
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)

def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    weights = np.array(initial_weights) # make sure it's a numpy array
    while not converged:
        # compute the predictions based on feature_matrix and weights using your predict_output() function

        # compute the errors as predictions - output

        gradient_sum_squares = 0 # initialize the gradient sum of squares
        # while we haven't reached the tolerance yet, update each feature's weight
        for i in range(len(weights)): # loop over each weight
            # Recall that feature_matrix[:, i] is the feature column associated with weights[i]
            # compute the derivative for weight[i]:

            # add the squared value of the derivative to the gradient magnitude (for assessing convergence)

            # subtract the step size times the derivative from the current weight

        # compute the square-root of the gradient sum of squares to get the gradient magnitude:
        gradient_magnitude = sqrt(gradient_sum_squares)
        if gradient_magnitude < tolerance:
            converged = True
    return(weights)
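For reference, a completed version might look like the following sketch (the blanks above are intentionally left for the exercise, and this relies on the predict_output and feature_derivative helpers defined earlier):

# Possible completion (sketch):
from math import sqrt

def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    weights = np.array(initial_weights)
    while not converged:
        # predictions and errors for the current weights
        predictions = predict_output(feature_matrix, weights)
        errors = predictions - output
        gradient_sum_squares = 0
        for i in range(len(weights)):
            # partial derivative of the cost with respect to weights[i]
            derivative = feature_derivative(errors, feature_matrix[:, i])
            gradient_sum_squares += derivative ** 2
            weights[i] = weights[i] - step_size * derivative
        gradient_magnitude = sqrt(gradient_sum_squares)
        if gradient_magnitude < tolerance:
            converged = True
    return(weights)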
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature, the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features. For similar reasons the step size will be much smaller than you might expect, but this is because the gradient has such large values.

Running the Gradient Descent as Simple Regression

First let's split the data into training and test data.
train_data,test_data = sales.random_split(.8,seed=0)
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Although the gradient descent is designed for multiple regression, since the constant is now a feature we can use the gradient descent function to estimate the parameters of the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Next run your gradient descent with the above parameters. How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?

**Quiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?**

Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first):
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
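For reference, one possible way to compute the test-set predictions and RSS is sketched below; `simple_weights` is assumed to be the result of running `regression_gradient_descent` with the parameters above.

# Sketch, assuming simple_weights was produced by the gradient descent above, e.g.
# simple_weights = regression_gradient_descent(simple_feature_matrix, output,
#                                              initial_weights, step_size, tolerance)
test_predictions = predict_output(test_simple_feature_matrix, simple_weights)
test_errors = test_predictions - test_output
rss = (test_errors ** 2).sum()
print(rss)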
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Now compute your predictions using test_simple_feature_matrix and your weights from above.

**Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?**

Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).

Running a multiple regression

Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
_____no_output_____
MIT
Machine_Learning_Regression/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
nkmah2/ML_Uni_Washington_Coursera
Data can be found at: https://data-seattlecitygis.opendata.arcgis.com/search?collection=Dataset&modified=2021-01-01%2C2021-10-13
import geopandas as gpd

gpd.datasets.available

world = gpd.read_file(
    gpd.datasets.get_path('naturalearth_lowres')
)

seattle_zoning_2035 = gpd.read_file('Future_Land_Use__2035.shp')
seattle_zoning_2035.head()

education_centers = gpd.read_file('data/Environmental_Education_Centers.shp')
education_centers
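A minimal plotting sketch (an assumption-laden example, not from the original notebook: it presumes both layers loaded successfully and have a defined CRS):

# Sketch: overlay the education centers on the 2035 land-use polygons
import matplotlib.pyplot as plt

ax = seattle_zoning_2035.plot(figsize=(8, 10), color='lightgrey', edgecolor='white')
education_centers.to_crs(seattle_zoning_2035.crs).plot(ax=ax, color='red', markersize=15)
ax.set_title('Environmental education centers over 2035 future land use')
plt.show()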
_____no_output_____
Apache-2.0
_notebooks/public_schools/EDA.ipynb
Hevia/blog
Advanced Logistic Regression in TensorFlow 2.0

Learning Objectives
1. Load a CSV file using Pandas
2. Create train, validation, and test sets
3. Define and train a model using Keras (including setting class weights)
4. Evaluate the model using various metrics (including precision and recall)
5. Try common techniques for dealing with imbalanced data: class weighting and oversampling

Introduction

This lab shows how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data.

PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook.

Start by importing the necessary libraries for this lab.
import tensorflow as tf
from tensorflow import keras

import os
import tempfile

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

print("TensorFlow version: ",tf.version.VERSION)
TensorFlow version: 2.1.0
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Data processing and exploration

Download the Kaggle Credit Card Fraud data set

Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.

Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and on the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project.
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Now, let's view the statistics of the raw dataframe.
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Examine the class label imbalance

Let's look at the dataset imbalance:
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
    total, pos, 100 * pos / total))
Examples: Total: 284807 Positive: 492 (0.17% of total)
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
This shows the small fraction of positive samples.

Clean, split and normalize the data

The raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
cleaned_df = raw_df.copy()

# You don't want the `Time` column.
cleaned_df.pop('Time')

# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount') + eps)
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)

# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))

train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Normalize the input features using the sklearn StandardScaler. This will set the mean to 0 and standard deviation to 1.

Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)

val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)

train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)

print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)

print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
Training labels shape: (182276,) Validation labels shape: (45569,) Test labels shape: (56962,) Training features shape: (182276, 29) Validation features shape: (45569, 29) Test features shape: (56962, 29)
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.

Look at the data distribution

Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
* Do these distributions make sense?
  * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.
* Can you see the difference between the distributions?
  * Yes, the positive examples contain a much higher rate of extreme values.
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)

sns.jointplot(pos_df['V5'], pos_df['V6'],
              kind='hex', xlim=(-5, 5), ylim=(-5, 5))
plt.suptitle("Positive distribution")

sns.jointplot(neg_df['V5'], neg_df['V6'],
              kind='hex', xlim=(-5, 5), ylim=(-5, 5))
_ = plt.suptitle("Negative distribution")
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Define the model and metrics

Define a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
METRICS = [
    keras.metrics.TruePositives(name='tp'),
    keras.metrics.FalsePositives(name='fp'),
    keras.metrics.TrueNegatives(name='tn'),
    keras.metrics.FalseNegatives(name='fn'),
    keras.metrics.BinaryAccuracy(name='accuracy'),
    keras.metrics.Precision(name='precision'),
    keras.metrics.Recall(name='recall'),
    keras.metrics.AUC(name='auc'),
]

def make_model(metrics = METRICS, output_bias=None):
    if output_bias is not None:
        output_bias = tf.keras.initializers.Constant(output_bias)
    model = keras.Sequential([
        keras.layers.Dense(
            16, activation='relu',
            input_shape=(train_features.shape[-1],)),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(1, activation='sigmoid',
                           bias_initializer=output_bias),
    ])

    model.compile(
        optimizer=keras.optimizers.Adam(lr=1e-3),
        loss=keras.losses.BinaryCrossentropy(),
        metrics=metrics)

    return model
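As a quick numeric illustration of what these metrics report (hypothetical confusion-matrix counts, purely for intuition — the definitions themselves are discussed in the next section):

# Hypothetical counts, not produced by this model:
tp, fp, tn, fn = 30, 20, 900, 50
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.93 -- high even though most positives are missed
precision = tp / (tp + fp)                   # 0.60
recall = tp / (tp + fn)                      # 0.375
print(accuracy, precision, recall)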
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Understanding useful metrics

Notice that there are a few metrics defined above that can be computed by the model and that will be helpful when evaluating the performance.

* **False** negatives and **false** positives are samples that were **incorrectly** classified
* **True** negatives and **true** positives are samples that were **correctly** classified
* **Accuracy** is the percentage of examples correctly classified
> $\frac{\text{true samples}}{\text{total samples}}$
* **Precision** is the percentage of **predicted** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false positives}}$
* **Recall** is the percentage of **actual** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false negatives}}$
* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.

Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time.

Read more:
* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)

Baseline model

Build the model

Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, many batches would likely have no fraudulent transactions to learn from.

Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
EPOCHS = 100
BATCH_SIZE = 2048

early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_auc',
    verbose=1,
    patience=10,
    mode='max',
    restore_best_weights=True)

model = make_model()
model.summary()
Model: "sequential_8" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_16 (Dense) (None, 16) 480 _________________________________________________________________ dropout_8 (Dropout) (None, 16) 0 _________________________________________________________________ dense_17 (Dense) (None, 1) 17 ================================================================= Total params: 497 Trainable params: 497 Non-trainable params: 0 _________________________________________________________________
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Test run the model:
model.predict(train_features[:10])
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Optional: Set the correct initial bias.

These initial guesses are not great. You know the dataset is imbalanced, so set the output layer's bias to reflect that (see: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence.

With the default bias initialization the loss should be about `math.log(2) = 0.69314`.
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
Loss: 1.7441
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
The correct bias to set can be derived from:

$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -\log_e(1/p_0 - 1) $$
$$ b_0 = \log_e(pos/neg) $$
initial_bias = np.log([pos/neg])
initial_bias
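As a quick sanity check (a sketch added for reference, not part of the original lab), the sigmoid of this bias should recover the positive rate:

# sigmoid(log(p0/(1-p0))) == p0, so both printed values should be ~0.0017
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(initial_bias), pos / total)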
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
With this initialization the initial loss should be approximately:

$$-p_0\log(p_0)-(1-p_0)\log(1-p_0) = 0.01317$$
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
Loss: 0.0275
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
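For reference, the theoretical value quoted above can be computed directly from the class counts (a sketch using the `pos`, `neg`, and `total` variables from earlier) and compared with the evaluated loss printed above:

# Binary cross-entropy of always predicting the base rate p_0 (sketch):
p_0 = pos / total
expected_loss = -p_0 * np.log(p_0) - (1 - p_0) * np.log(1 - p_0)
print("Expected initial loss: {:0.4f}".format(expected_loss))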
This initial loss is about 50 times less than it would have been with naive initialization.

This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.

Checkpoint the initial weights

To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst
Confirm that the bias fix helps

Before moving on, confirm quickly that the careful bias initialization actually helped.

Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=20,
    validation_data=(val_features, val_labels),
    verbose=0)

model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=20,
    validation_data=(val_features, val_labels),
    verbose=0)

def plot_loss(history, label, n):
    # Use a log scale to show the wide range of values.
    plt.semilogy(history.epoch, history.history['loss'],
                 color=colors[n], label='Train ' + label)
    plt.semilogy(history.epoch, history.history['val_loss'],
                 color=colors[n], label='Val ' + label,
                 linestyle="--")
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()

plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
Best-Cloud-Practice-for-Data-Science/training-data-analyst