Script Functions

SageMaker invokes the main function defined within your training script for training. When deploying your trained model to an endpoint, the model_fn() is called to determine how to load your trained model. The model_fn(), along with the other functions listed below, is called to enable predictions on SageMaker. [Predicting Functions](https://github.com/aws/sagemaker-pytorch-containers/blob/master/src/sagemaker_pytorch_container/serving.py)

* model_fn(model_dir) - loads your trained model.
* input_fn(serialized_input_data, content_type) - deserializes the request payload and passes it to predict_fn.
* output_fn(prediction_output, accept) - serializes the predictions returned by predict_fn.
* predict_fn(input_data, model) - calls the model on the data deserialized by input_fn.

The model_fn() is the only one of these functions without a default implementation; you must provide it to use PyTorch on SageMaker (a minimal sketch follows below).

Create a training job using the sagemaker.PyTorch estimator

The `PyTorch` class allows us to run our training function on SageMaker. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. For local training with a GPU, we could set this to "local_gpu". In this case, `instance_type` was set above based on whether you're running on a GPU instance.

After we've constructed our `PyTorch` object, we fit it using the data we uploaded to S3. Even though we're in local mode, using S3 as our data source makes sense because it maintains consistency with how SageMaker's distributed, managed training ingests data.
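Because model_fn() has no default implementation, a concrete example helps. The sketch below is a minimal, hedged illustration, not code from this notebook: the `Net` class, the `cifar10` module import, and the `model.pth` filename are assumptions about what the training script saves, so adjust them to match your own script. The actual training cell follows.

```python
# Hedged sketch of a model_fn for the SageMaker PyTorch serving container.
# Assumptions (not taken from this notebook): the training script saved its
# weights as 'model.pth' inside model_dir, and Net is the network class
# defined in source/cifar10.py.
import os
import torch

def model_fn(model_dir):
    """Load the trained model from model_dir so the endpoint can serve it."""
    from cifar10 import Net  # hypothetical import of the model class from the entry-point script
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = Net()
    state_dict = torch.load(os.path.join(model_dir, "model.pth"), map_location=device)
    model.load_state_dict(state_dict)
    model.to(device)
    model.eval()
    return model
```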
from sagemaker.pytorch import PyTorch

cifar10_estimator = PyTorch(entry_point='source/cifar10.py',
                            role=role,
                            framework_version='1.4.0',
                            train_instance_count=1,
                            train_instance_type=instance_type)

cifar10_estimator.fit(inputs)
_____no_output_____
Apache-2.0
sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb
nigenda-amazon/amazon-sagemaker-examples
Deploy the trained model to prepare for predictions

The deploy() method creates an endpoint (in this case locally) which serves prediction requests in real time.
from sagemaker.pytorch import PyTorchModel

cifar10_predictor = cifar10_estimator.deploy(initial_instance_count=1,
                                             instance_type=instance_type)
_____no_output_____
Apache-2.0
sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb
nigenda-amazon/amazon-sagemaker-examples
Invoking the endpoint
# get some test images
dataiter = iter(testloader)
images, labels = next(dataiter)  # next(dataiter) works across PyTorch versions

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%4s' % classes[labels[j]] for j in range(4)))

outputs = cifar10_predictor.predict(images.numpy())

_, predicted = torch.max(torch.from_numpy(np.array(outputs)), 1)

print('Predicted: ', ' '.join('%4s' % classes[predicted[j]] for j in range(4)))
_____no_output_____
Apache-2.0
sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb
nigenda-amazon/amazon-sagemaker-examples
Clean-up

Deleting the local endpoint when you're finished is important, since you can only run one local endpoint at a time.
cifar10_estimator.delete_endpoint()
_____no_output_____
Apache-2.0
sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb
nigenda-amazon/amazon-sagemaker-examples
Procedural programming in Python

Topics
* Tuples, lists and dictionaries
* Flow control, part 1
  * If
  * For
  * range() function
* Some hacky hack time
* Flow control, part 2
  * Functions

Tuples

Let's begin by creating a tuple called `my_tuple` that contains three elements.
my_tuple = ('I', 'like', 'cake')
my_tuple
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Tuples are simple containers for data. They are ordered, meaning the order the elements had when the tuple was created is preserved. We can get values from our tuple by using array indexing, similar to what we were doing with pandas.
my_tuple[0]
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Recall that Python indexing starts at 0. So the first element in a tuple is at index 0 and the last is at index length - 1. You can also address from the `end` to the `front` by using negative (`-`) indexes, e.g.
my_tuple[-1]
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
You can also access a range of elements, e.g. the first two, the first three, by using the `:` to expand a range. This is called ``slicing``.
my_tuple[0:2]
my_tuple[0:3]
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
What do you notice about how the upper bound is referenced? Without either end, the ``:`` expands to the entire list.
my_tuple[1:]
my_tuple[:-1]
my_tuple[:]
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Tuples have a key feature that distinguishes them from other types of object containers in Python. They are _immutable_. This means that once the values are set, they cannot change.
my_tuple[2]
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
So what happens if I decide that I really prefer pie over cake?
# my_tuple[2] = 'pie'   # uncommenting this raises TypeError: 'tuple' object does not support item assignment
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Facts about tuples:
* You can't add elements to a tuple. Tuples have no append or extend method.
* You can't remove elements from a tuple. Tuples have no remove or pop method.
* You can use the in operator to check if an element exists in the tuple.

So then, what are the use cases of tuples?
* Speed
* `Write-protects` data that other pieces of code should not alter

You can rebind a tuple variable, e.g. point it at a different tuple, but you can't modify the tuple itself.
my_tuple
my_tuple = ('I', 'love', 'pie')
my_tuple
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
There is a really handy operator ``in`` that can be used with tuples that will return `True` if an element is present in a tuple and `False` otherwise.
'love' in my_tuple
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Finally, tuples can contain different types of data, not just strings.
import math

my_second_tuple = (42, 'Elephants', 'ate', math.pi)
my_second_tuple
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Numerical operators work... sort of. What happens when you add? ``my_second_tuple + 'plus'`` Not what you expected? (A quick sketch of what actually happens appears after the next cell.) What about adding two tuples?
my_second_tuple + my_tuple
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
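Following up on the question above: adding a string to a tuple raises a TypeError, because `+` on tuples only concatenates tuples. A small hedged sketch (the variable mirrors the one created earlier in this notebook):

```python
import math

my_second_tuple = (42, 'Elephants', 'ate', math.pi)

try:
    my_second_tuple + 'plus'           # mixing a tuple and a string is not allowed
except TypeError as err:
    print('TypeError:', err)           # can only concatenate tuple (not "str") to tuple

print(my_second_tuple + ('plus',))     # wrapping the string in a 1-tuple works
```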
Other operators to try: `-`, `/`, `*`.

Questions about tuples before we move on?

Lists

Let's begin by creating a list called `my_list` that contains three elements.
my_list = ['I', 'like', 'cake']
my_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
At first glance, tuples and lists look pretty similar. Notice that lists use '[' and ']' instead of '(' and ')'. But indexing, with the first entry at 0 and the last at -1, still works the same.
my_list[0]
my_list[-1]
my_list[0:3]
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Lists, however, unlike tuples, are mutable.
my_list[2] = 'pie'
my_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Multiple elements in the list can even be changed at once!
my_list[1:] = ['love', 'puppies']
my_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
You can still use the `in` operator.
'puppies' in my_list
'kittens' in my_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
So when should you use a tuple and when should you use a list?
* Use a list when you will modify it after it is created.

Ways to modify a list? You have already seen modification by index. Let's start with an empty list.
my_new_list = []
my_new_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
We can add to the list using the append method on it.
my_new_list.append('Now')
my_new_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
We can use the `+` operator to create a longer list by adding the contents of two lists together.
my_new_list + my_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
One of the useful things to know about a list is how many elements are in it. This can be found with the `len` function.
len(my_list)
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Some other handy functions with lists:
* max
* min
* cmp (Python 2 only; the built-in was removed in Python 3)

Sometimes you have a tuple and you need to make it a list. You can `cast` the tuple to a list with ``list(my_tuple)``.
list(my_tuple)
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
What in the above told us it was a list? You can also use the ``type`` function to figure out the type.
type(my_tuple)          # the original object is a tuple
type(list(my_tuple))    # the cast produces a list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
There are other useful methods on lists, including:

| methods | description |
|---|---|
| list.append(obj) | Appends object obj to list |
| list.count(obj) | Returns count of how many times obj occurs in list |
| list.extend(seq) | Appends the contents of seq to list |
| list.index(obj) | Returns the lowest index in list that obj appears |
| list.insert(index, obj) | Inserts object obj into list at offset index |
| list.pop(obj=list[-1]) | Removes and returns last object or obj from list |
| list.remove(obj) | Removes object obj from list |
| list.reverse() | Reverses objects of list in place |
| list.sort([func]) | Sort objects of list, use compare func, if given |

Try some of them now.
```
my_list.count('I')
my_list
my_list.append('I')
my_list
my_list.count('I')
my_list
my_list.index(42)
my_list.index('puppies')
my_list
my_list.insert(my_list.index('puppies'), 'furry')
my_list
```
my_list.count('I')
my_list
my_list.append('I')
my_list
my_list.count('I')
my_list
# my_list.index(42)   # raises ValueError: 42 is not in list
my_list.index('puppies')
my_list
my_list.insert(my_list.index('puppies'), 'furry')
my_list
my_list.pop()
my_list
my_list.remove('puppies')
my_list
my_list.append('cabbages')
my_list
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Any questions about lists before we move on?

Dictionaries

Dictionaries are similar to tuples and lists in that they hold a collection of objects. Dictionaries, however, allow an additional indexing mode: keys. Think of a real dictionary where the elements in it are the definitions of the words and the keys to retrieve the entries are the words themselves.

| word | definition |
|------|------------|
| tuple | An immutable collection of ordered objects |
| list | A mutable collection of ordered objects |
| dictionary | A mutable collection of named objects |

Let's create this data structure now. Dictionaries, like tuples and lists, use a unique referencing method: '{' and its evil twin '}'.
my_dict = { 'tuple' : 'An immutable collection of ordered objects',
            'list' : 'A mutable collection of ordered objects',
            'dictionary' : 'A mutable collection of objects' }
my_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
We access items in the dictionary by name, e.g.
my_dict['dictionary']
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Since the dictionary is mutable, you can change the entries.
my_dict['dictionary'] = 'A mutable collection of named objects'
my_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Notice that, historically, ordering was not preserved in dictionaries! As of Python 3.7 the ordering is guaranteed to be insertion order, but that does not mean alphabetical or otherwise sorted.

And we can add new items to the dictionary.
my_dict['cabbage'] = 'Green leafy plant in the Brassica family'
my_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
To delete an entry, we can't just set it to ``None``
my_dict['cabbage'] = None
my_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
To delete it properly, we need to pop that specific entry (the `del` statement is another option; see the sketch after the next cell).
my_dict.pop('cabbage', None)
my_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
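As an aside not in the original notebook: `del` removes a key directly, while `dict.pop(key, default)` is handy when the key might be missing. A small hedged sketch:

```python
d = {'cabbage': 'Green leafy plant in the Brassica family',
     'tuple': 'An immutable collection of ordered objects'}

del d['cabbage']               # removes the key; raises KeyError if it is absent
print(d)

print(d.pop('cabbage', None))  # safe removal: returns None because the key is already gone
```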
You can use other objects as keys, but that is a topic for another time. You can mix and match key types, e.g.
my_new_dict = {}
my_new_dict[1] = 'One'
my_new_dict['42'] = 42
my_new_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
You can get a list of keys in the dictionary by using the ``keys`` method.
my_dict.keys()
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Similarly the contents of the dictionary with the ``items`` method.
my_dict.items()
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
We can use the keys list for fun stuff, e.g. with the ``in`` operator.
'dictionary' in my_dict.keys()
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
This is a synonym for `in my_dict`
'dictionary' in my_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Notice that it doesn't work for the values, only the keys.
'A mutable collection of ordered objects' in my_dict
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
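To check the values instead of the keys you have to be explicit. The small sketch below is an aside, not part of the original notebook; it also previews `dict.get` from the table that follows.

```python
my_dict = {'tuple': 'An immutable collection of ordered objects',
           'list': 'A mutable collection of ordered objects',
           'dictionary': 'A mutable collection of named objects'}

# membership against values requires .values() explicitly
print('A mutable collection of ordered objects' in my_dict.values())   # True

# .get returns a default instead of raising KeyError for a missing key
print(my_dict.get('set', 'no such entry'))
```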
Other dictionary methods:

| methods | description |
|---|---|
| dict.clear() | Removes all elements from dict |
| dict.get(key, default=None) | For ``key`` key, returns value or ``default`` if key doesn't exist in dict |
| dict.items() | Returns the dictionary's (key, value) pairs |
| dict.keys() | Returns the dictionary's keys |
| dict.setdefault(key, default=None) | Similar to get, but sets the value of key if it doesn't exist in dict |
| dict.update(dict2) | Add the key / value pairs in dict2 to dict |
| dict.values() | Returns the dictionary's values |

Feel free to experiment...

Flow control

(Flow control figure)

Flow control refers to how programs do loops, conditional execution, and ordering of operations. Let's start with conditionals, or the venerable ``if`` statement.

Let's start with a simple list of instructors for these classes.
instructors = ['Dave', 'Jim', 'Dorkus the Clown']
instructors
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
If

If statements can be used to execute a line or block of code if a particular condition is satisfied, e.g. let's print something based on the entries in the list.
if 'Dorkus the Clown' in instructors: print('#fakeinstructor')
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Usually we want conditional logic on both sides of a binary condition, e.g. some action when ``True`` and some when ``False``
if 'Dorkus the Clown' in instructors:
    print('There are fake names for class instructors in your list!')
else:
    print("Nothing to see here")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
There is a special do nothing word: `pass` that skips over some arm of a conditional, e.g.
if 'Jim' in instructors:
    print("Congratulations! Jim is teaching, your class won't stink!")
else:
    pass
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
_Note_: what have you noticed in this session about quotes? What is the difference between ``'`` and ``"``?Another simple example:
if True is False:
    print("I'm so confused")
else:
    print("Everything is right with the world")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
It is always good practice to handle all cases explicitly. `Conditional fall through` is a common source of bugs.

Sometimes we wish to test multiple conditions. Use `if`, `elif`, and `else`.
my_favorite = 'pie'

# use == (value equality), not 'is' (identity), to compare strings
if my_favorite == 'cake':
    print("He likes cake! I'll start making a double chocolate velvet cake right now!")
elif my_favorite == 'pie':
    print("He likes pie! I'll start making a cherry pie right now!")
else:
    print("He likes " + my_favorite + ". I don't know how to make that.")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Conditionals can take ``and`` and ``or`` and ``not``. E.g.
my_favorite = 'pie'

if my_favorite == 'cake' or my_favorite == 'pie':
    print(my_favorite + " : I have a recipe for that!")
else:
    print("Ew! Who eats that?")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
For

For loops are the standard loop, though `while` is also common. For has the general form:
```
for items in list:
    do stuff
```
For loops and collections like tuples, lists and dictionaries are natural friends.
for instructor in instructors: print(instructor)
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
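Since `while` was mentioned above but isn't demonstrated in this notebook, here is a small hedged sketch of the same loop written with `while` instead of `for`:

```python
instructors = ['Dave', 'Jim', 'Dorkus the Clown']  # repeated so the sketch is self-contained

i = 0
while i < len(instructors):   # keep looping while the condition holds
    print(instructors[i])
    i += 1
```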
You can combine loops and conditionals:
for instructor in instructors:
    if instructor.endswith('Clown'):
        print(instructor + " doesn't sound like a real instructor name!")
    else:
        print(instructor + " is so smart... all those gooey brains!")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Dictionaries can use the `keys` method for iterating.
for key in my_dict.keys():
    if len(key) > 5:
        print(my_dict[key])
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
range()

Since for operates over lists, it is common to want to do something like:
```
NOTE: C-like
for (i = 0; i < 3; ++i) {
    print(i);
}
```
The Python equivalent is:
```
for i in [0, 1, 2]:
    do something with i
```
What happens when the range you want to sample is big, e.g.
```
NOTE: C-like
for (i = 0; i < 1000000000; ++i) {
    print(i);
}
```
That would be a real pain in the rear to have to write out the entire list from 0 to 999999999.

Enter, the `range()` function. E.g. ```range(3) is [0, 1, 2]```
range(3)
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Notice that Python (in the newest versions, e.g. 3+) has an object type that is a range. This saves memory and speeds up calculations vs. an explicit representation of a range as a list - but it can be automagically converted to a list on the fly by Python. To show the contents as a `list` we can use the type cast like with the tuple above.

Sometimes, in older Python docs, you will see `xrange`. In Python 2, `xrange` behaved like today's range object, while `range` returned an actual list. Beware of this!
list(range(3))
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Remember earlier with slicing, the syntax `:3` meant `[0, 1, 2]`? Well, the same upper bound philosophy applies here.
for index in range(3):
    instructor = instructors[index]
    if instructor.endswith('Clown'):
        print(instructor + " doesn't sound like a real instructor name!")
    else:
        print(instructor + " is so smart... all those gooey brains!")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
This would probably be better written as
for index in range(len(instructors)):
    instructor = instructors[index]
    if instructor.endswith('Clown'):
        print(instructor + " doesn't sound like a real instructor name!")
    else:
        print(instructor + " is so smart... all those gooey brains!")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
But in all, it isn't very Pythonic to use indexes like that (unless you have another reason in the loop) and you would opt instead for the `instructor in instructors` form; if you really need the index and the element together, `enumerate` is the idiomatic tool (a small sketch follows the next example). More often, you are doing something with the numbers that requires them to be integers, e.g. math.
sum = 0
for i in range(10):
    sum += i
print(sum)
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
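As promised above, a small hedged sketch of `enumerate`, which yields the index and the element together without manual indexing (an aside, not in the original notebook):

```python
instructors = ['Dave', 'Jim', 'Dorkus the Clown']  # repeated so the sketch is self-contained

for index, instructor in enumerate(instructors):
    print('%d: %s' % (index, instructor))
```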
For loops can be nested_Note_: for more on formatting strings, see: [https://pyformat.info](https://pyformat.info)
for i in range(1, 4):
    for j in range(1, 4):
        print('%d * %d = %d' % (i, j, i*j))  # Note string formatting here, %d means an integer
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
You can exit loops early if a condition is met:
for i in range(10):
    if i == 4:
        break
i
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
You can skip stuff in a loop with `continue`
sum = 0
for i in range(10):
    if (i == 5):
        continue
    else:
        sum += i
print(sum)
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
There is a unique language feature called ``for...else``
sum = 0
for i in range(10):
    sum += i
else:
    print('final i = %d, and sum = %d' % (i, sum))
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
You can iterate over letters in a string
my_string = "DIRECT"
for c in my_string:
    print(c)
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Hacky Hack Time with Ifs, Fors, Lists, and imports!

Objective: Replace the `bash magic` bits for downloading the HCEPDB data and uncompressing it with Python code. Since the download is big, check if the zip file exists first before downloading it again. Then load it into a pandas dataframe.

Notes:
* The `os` package has tools for checking if a file exists: ``os.path.exists``
```
import os
filename = 'HCEPDB_moldata.zip'
if os.path.exists(filename):
    print("wahoo!")
```
* Use the `requests` package to get the file given a url (got this from the requests docs)
```
import requests
url = 'http://faculty.washington.edu/dacb/HCEPDB_moldata.zip'
req = requests.get(url)
assert req.status_code == 200  # if the download failed, this line will generate an error
with open(filename, 'wb') as f:
    f.write(req.content)
```
* Use the `zipfile` package to decompress the file while reading it into `pandas`
```
import pandas as pd
import zipfile
csv_filename = 'HCEPDB_moldata.csv'
zf = zipfile.ZipFile(filename)
data = pd.read_csv(zf.open(csv_filename))
```

Now, use your code from above for the following URLs and filenames

| URL | filename | csv_filename |
|-----|----------|--------------|
| http://faculty.washington.edu/dacb/HCEPDB_moldata_set1.zip | HCEPDB_moldata_set1.zip | HCEPDB_moldata_set1.csv |
| http://faculty.washington.edu/dacb/HCEPDB_moldata_set2.zip | HCEPDB_moldata_set2.zip | HCEPDB_moldata_set2.csv |
| http://faculty.washington.edu/dacb/HCEPDB_moldata_set3.zip | HCEPDB_moldata_set3.zip | HCEPDB_moldata_set3.csv |

What pieces of the data structures and flow control that we talked about earlier can you use? How did you solve this problem?

Functions

For loops let you repeat some code for every item in a list. Functions are similar in that they run the same lines of code for new values of some variable. They are different in that functions are not limited to looping over items.

Functions are a critical part of writing easy to read, reusable code.

Create a function like:
```
def function_name (parameters):
    """
    optional docstring
    """
    function expressions
    return [variable]
```
_Note:_ Sometimes I use the word argument in place of parameter.

Here is a simple example. It prints a string that was passed in and returns nothing.
def print_string(str):
    """This prints out a string passed as the parameter."""
    print(str)
    return
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
To call the function, use:```print_string("Dave is awesome!")```_Note:_ The function has to be defined before you can call it!
print_string("Dave is awesome!")
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
If you don't provide an argument, or provide too many, you get an error.

Parameters (or arguments) in Python are passed by object reference (sometimes called "pass by assignment"). This means that if you mutate a mutable parameter inside the function, the change is visible outside of the function.

See the following example:
```
def change_list(my_list):
   """This changes a passed list into this function"""
   my_list.append('four');
   print('list inside the function: ', my_list)
   return

my_list = [1, 2, 3];
print('list before the function: ', my_list)
change_list(my_list);
print('list after the function: ', my_list)
```
def change_list(my_list):
    """This changes a passed list into this function"""
    my_list.append('four')
    print('list inside the function: ', my_list)
    return

my_list = [1, 2, 3]
print('list before the function: ', my_list)
change_list(my_list)
print('list after the function: ', my_list)
list before the function: [1, 2, 3] list inside the function: [1, 2, 3, 'four'] list after the function: [1, 2, 3, 'four']
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Variables have scope: `global` and `local`

In a function, new variables that you create are not saved when the function returns - these are `local` variables. Variables defined outside of the function can be accessed but not changed - these are `global` variables. _Note_ there is a way to do this with the `global` keyword. Generally, the use of `global` variables is not encouraged; instead use parameters.
```
my_global_1 = 'bad idea'
my_global_2 = 'another bad one'
my_global_3 = 'better idea'

def my_function():
    print(my_global_1)
    my_global_2 = 'broke your global, man!'
    global my_global_3
    my_global_3 = 'still a better idea'
    return

my_function()
print(my_global_2)
print(my_global_3)
```
In general, you want to use parameters to provide data to a function and return a result with the `return`. E.g.
```
def sum(x, y):
    my_sum = x + y
    return my_sum
```
If you are going to return multiple objects, what data structure that we talked about can be used? Give an example below (a small sketch follows the table).

Parameters come in several types:

| type | behavior |
|------|----------|
| required | positional, must be present or error, e.g. `my_func(first_name, last_name)` |
| keyword | position independent, e.g. `my_func(first_name, last_name)` can be called `my_func(first_name='Dave', last_name='Beck')` or `my_func(last_name='Beck', first_name='Dave')` |
| default | keyword params that default to a value if not provided |
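Answering the question above with a small hedged sketch (not from the original notebook): returning multiple objects is conventionally done with a tuple, which the caller can unpack. A default-parameter example follows in the next cell.

```python
def min_max(values):
    """Return both the smallest and largest element as a tuple."""
    return min(values), max(values)    # a tuple is built implicitly by the comma

lo, hi = min_max([3, 1, 4, 1, 5, 9])   # tuple unpacking on the caller's side
print(lo, hi)
```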
def print_name(first, last='the Clown'):
    print('Your name is %s %s' % (first, last))
    return
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Play around with the above function.

Functions can contain any code that you put anywhere else including:
* if...elif...else
* for...else
* while
* other function calls
def print_name_age(first, last, age):
    print_name(first, last)
    print('Your age is %d' % (age))
    if age > 35:
        print('You are really old.')
    return

print_name_age(age=40, last='Beck', first='Dave')
_____no_output_____
BSD-3-Clause
Wi20_content/SEDS/L5.Procedural_Python.ipynb
ShahResearchGroup/UWDIRECT.github.io
Lomb-Scargle Example Dataset

The Data

For simplicity, we download the data here and save locally
import pandas as pd

def get_LINEAR_lightcurve(lcid):
    from astroML.datasets import fetch_LINEAR_sample
    LINEAR_sample = fetch_LINEAR_sample()
    data = pd.DataFrame(LINEAR_sample[lcid],
                        columns=['t', 'mag', 'magerr'])
    data.to_csv('LINEAR_{0}.csv'.format(lcid), index=False)

# Uncomment to download the data
# get_LINEAR_lightcurve(lcid=11375941)

data = pd.read_csv('LINEAR_11375941.csv')
data.head()
data.shape
(data.t.max() - data.t.min()) / 365.
_____no_output_____
CC-BY-4.0
figures/LINEAR_Example.ipynb
spencerwplovie/Lomb-Scargle-Copied
Visualizing the Data
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

plt.style.use('seaborn-whitegrid')

fig, ax = plt.subplots(figsize=(8, 3))
ax.errorbar(data.t, data.mag, data.magerr, fmt='.k', ecolor='gray', capsize=0)
ax.set(xlabel='time (MJD)', ylabel='magnitude', title='LINEAR object 11375941')
ax.invert_yaxis()
fig.savefig('fig01_LINEAR_data.pdf');

from astropy.stats import LombScargle

ls = LombScargle(data.t, data.mag, data.magerr)
frequency, power = ls.autopower(nyquist_factor=500, minimum_frequency=0.2)
period_days = 1. / frequency
period_hours = period_days * 24

best_period = period_days[np.argmax(power)]
phase = (data.t / best_period) % 1

print("Best period: {0:.2f} hours".format(24 * best_period))

fig, ax = plt.subplots(1, 2, figsize=(8, 3))

# PSD has a _LOT_ of elements. Rasterize it so it can be displayed as PDF
ax[0].plot(period_days, power, '-k', rasterized=True)
ax[0].set(xlim=(0, 2.5), ylim=(0, 0.8),
          xlabel='Period (days)', ylabel='Lomb-Scargle Power',
          title='Lomb-Scargle Periodogram')

ax[1].errorbar(phase, data.mag, data.magerr, fmt='.k', ecolor='gray', capsize=0)
ax[1].set(xlabel='phase', ylabel='magnitude', title='Phased Data')
ax[1].invert_yaxis()
ax[1].text(0.02, 0.03, "Period = {0:.2f} hours".format(24 * best_period),
           transform=ax[1].transAxes)

inset = fig.add_axes([0.25, 0.6, 0.2, 0.25])
inset.plot(period_hours, power, '-k', rasterized=True)
inset.xaxis.set_major_locator(plt.MultipleLocator(1))
inset.yaxis.set_major_locator(plt.MultipleLocator(0.2))
inset.set(xlim=(1, 5), xlabel='Period (hours)', ylabel='power')

fig.savefig('fig02_LINEAR_PSD.pdf');
_____no_output_____
CC-BY-4.0
figures/LINEAR_Example.ipynb
spencerwplovie/Lomb-Scargle-Copied
Peak Precision

Estimate peak precision by plotting the Bayesian periodogram peak and fitting a Gaussian to the peak (for simplicity, just do it by-eye):
f, P = ls.autopower(nyquist_factor=500,
                    minimum_frequency=9.3,
                    maximum_frequency=9.31,
                    samples_per_peak=20,
                    normalization='psd')
P = np.exp(P)
P /= P.max()
h = 24. / f

plt.plot(h, P, '-k')
plt.fill(h, np.exp(-0.5 * (h - 2.58014) ** 2 / 0.00004 ** 2),
         color='gray', alpha=0.3)
plt.xlim(2.58, 2.5803)
/Users/jakevdp/anaconda/envs/python3.5/lib/python3.5/site-packages/ipykernel/__main__.py:6: RuntimeWarning: overflow encountered in exp /Users/jakevdp/anaconda/envs/python3.5/lib/python3.5/site-packages/ipykernel/__main__.py:7: RuntimeWarning: invalid value encountered in true_divide
CC-BY-4.0
figures/LINEAR_Example.ipynb
spencerwplovie/Lomb-Scargle-Copied
Looks like $2.58023 \pm 0.00006$ hours
fig, ax = plt.subplots(figsize=(10, 3))

phase_model = np.linspace(-0.5, 1.5, 100)
best_frequency = frequency[np.argmax(power)]
mag_model = ls.model(phase_model / best_frequency, best_frequency)

for offset in [-1, 0, 1]:
    ax.errorbar(phase + offset, data.mag, data.magerr, fmt='.',
                color='gray', ecolor='lightgray', capsize=0);

ax.plot(phase_model, mag_model, '-k', lw=2)
ax.set(xlim=(-0.5, 1.5), xlabel='phase', ylabel='mag')
ax.invert_yaxis()
fig.savefig('fig18_ls_model.pdf')

period_hours_bad = np.linspace(1, 6, 10001)
frequency_bad = 24 / period_hours_bad
power_bad = ls.power(frequency_bad)

mask = (period_hours > 1) & (period_hours < 6)

fig, ax = plt.subplots(figsize=(10, 3))
ax.plot(period_hours[mask], power[mask], '-', color='lightgray', rasterized=True,
        label='Well-motivated frequency grid')
ax.plot(period_hours_bad, power_bad, '-k', rasterized=True,
        label='10,000 equally-spaced periods')
ax.grid(False)
ax.legend()
ax.set(xlabel='period (hours)', ylabel='Lomb-Scargle Power',
       title='LINEAR object 11375941')
fig.savefig('fig19_LINEAR_coarse_grid.pdf')
_____no_output_____
CC-BY-4.0
figures/LINEAR_Example.ipynb
spencerwplovie/Lomb-Scargle-Copied
Required Grid Spacing
!head LINEAR_11375941.csv

n_digits = 6
f_ny = 0.5 * 10 ** n_digits
T = (data.t.max() - data.t.min())
n_o = 5
delta_f = 1. / n_o / T

print("f_ny =", f_ny)
print("T =", T)
print("n_grid =", f_ny / delta_f)
f_ny = 500000.0 T = 1961.847365 n_grid = 4904618412.5
CC-BY-4.0
figures/LINEAR_Example.ipynb
spencerwplovie/Lomb-Scargle-Copied
Building your Deep Neural Network: Step by Step

Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!

- In this notebook, you will implement all the functions required to build a deep neural network.
- In the next assignment, you will use these functions to build a deep neural network for image classification.

**After this assignment you will be able to:**
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class

**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
    - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
    - Example: $x^{(i)}$ is the $i^{th}$ training example.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
    - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.

Let's get started!

1 - Packages

Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v4 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)
C:\Users\korra\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
2 - Outline of the AssignmentTo build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:- Initialize the parameters for a two-layer network and for an $L$-layer neural network.- Implement the forward propagation module (shown in purple in the figure below). - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). - We give you the ACTIVATION function (relu/sigmoid). - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.- Compute the loss.- Implement the backward propagation module (denoted in red in the figure below). - Complete the LINEAR part of a layer's backward propagation step. - We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function. - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function- Finally update the parameters. **Figure 1****Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. 3 - InitializationYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers. 3.1 - 2-layer Neural Network**Exercise**: Create and initialize the parameters of the 2-layer neural network.**Instructions**:- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. - Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.- Use zero initialization for the biases. Use `np.zeros(shape)`.
# GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: parameters -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) """ np.random.seed(1) ### START CODE HERE ### (≈ 4 lines of code) W1 = np.random.randn(n_h, n_x)*0.01 b1 = np.zeros((n_h, 1)) W2 = np.random.randn(n_y, n_h)*0.01 b2 = np.zeros((n_y, 1)) ### END CODE HERE ### assert(W1.shape == (n_h, n_x)) assert(b1.shape == (n_h, 1)) assert(W2.shape == (n_y, n_h)) assert(b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters = initialize_parameters(3,2,1) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
W1 = [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]] b1 = [[0.] [0.]] W2 = [[ 0.01744812 -0.00761207]] b2 = [[0.]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**Expected output**: **W1** [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]] **b1** [[ 0.] [ 0.]] **W2** [[ 0.01744812 -0.00761207]] **b2** [[ 0.]] 3.2 - L-layer Neural NetworkThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then: **Shape of W** **Shape of b** **Activation** **Shape of Activation** **Layer 1** $(n^{[1]},12288)$ $(n^{[1]},1)$ $Z^{[1]} = W^{[1]} X + b^{[1]} $ $(n^{[1]},209)$ **Layer 2** $(n^{[2]}, n^{[1]})$ $(n^{[2]},1)$ $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ $(n^{[2]}, 209)$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ **Layer L-1** $(n^{[L-1]}, n^{[L-2]})$ $(n^{[L-1]}, 1)$ $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ $(n^{[L-1]}, 209)$ **Layer L** $(n^{[L]}, n^{[L-1]})$ $(n^{[L]}, 1)$ $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ $(n^{[L]}, 209)$ Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \\ t \\ u\end{bmatrix}\tag{2}$$Then $WX + b$ will be:$$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\end{bmatrix}\tag{3} $$ **Exercise**: Implement initialization for an L-layer Neural Network. **Instructions**:- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.- Use zeros initialization for the biases. Use `np.zeros(shape)`.- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).```python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))```
# GRADED FUNCTION: initialize_parameters_deep def initialize_parameters_deep(layer_dims): """ Arguments: layer_dims -- python array (list) containing the dimensions of each layer in our network Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1]) bl -- bias vector of shape (layer_dims[l], 1) """ np.random.seed(3) parameters = {} L = len(layer_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01 parameters['b' + str(l)] = np.zeros((layer_dims[l], 1)) ### END CODE HERE ### assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1])) assert(parameters['b' + str(l)].shape == (layer_dims[l], 1)) return parameters parameters = initialize_parameters_deep([5,4,3]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]] b1 = [[0.] [0.] [0.] [0.]] W2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]] b2 = [[0.] [0.] [0.]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
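A quick hedged numerical check of the broadcasting behavior described in Section 3.2 above (an aside, not part of the graded assignment): adding a column vector `b` to `W X` adds it to every column.

```python
import numpy as np

W = np.arange(9.).reshape(3, 3)        # stand-in for a (3, 3) weight matrix
X = np.ones((3, 4))                    # 4 examples with 3 features each
b = np.array([[10.], [20.], [30.]])    # (3, 1) bias column

Z = np.dot(W, X) + b                   # b is broadcast across all 4 columns
print(Z.shape)                         # (3, 4)
print(Z)                               # each row's bias has been added to every column
```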
**Expected output**: **W1** [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]] **b2** [[ 0.] [ 0.] [ 0.]] 4 - Forward propagation module 4.1 - Linear Forward Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:- LINEAR- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)The linear forward module (vectorized over all the examples) computes the following equations:$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$where $A^{[0]} = X$. **Exercise**: Build the linear part of forward propagation.**Reminder**:The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
# GRADED FUNCTION: linear_forward def linear_forward(A, W, b): """ Implement the linear part of a layer's forward propagation. Arguments: A -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) Returns: Z -- the input of the activation function, also called pre-activation parameter cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently """ ### START CODE HERE ### (≈ 1 line of code) Z = np.dot(W, A) + b ### END CODE HERE ### assert(Z.shape == (W.shape[0], A.shape[1])) cache = (A, W, b) return Z, cache A, W, b = linear_forward_test_case() Z, linear_cache = linear_forward(A, W, b) print("Z = " + str(Z))
Z = [[ 3.26295337 -1.23429987]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**Expected output**: **Z** [[ 3.26295337 -1.23429987]] 4.2 - Linear-Activation ForwardIn this notebook, you will use two activation functions:- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` pythonA, activation_cache = sigmoid(Z)```- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:``` pythonA, activation_cache = relu(Z)``` For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
# GRADED FUNCTION: linear_activation_forward def linear_activation_forward(A_prev, W, b, activation): """ Implement the forward propagation for the LINEAR->ACTIVATION layer Arguments: A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: A -- the output of the activation function, also called the post-activation value cache -- a python dictionary containing "linear_cache" and "activation_cache"; stored for computing the backward pass efficiently """ if activation == "sigmoid": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". ### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = sigmoid(Z) ### END CODE HERE ### elif activation == "relu": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". ### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = relu(Z) ### END CODE HERE ### assert (A.shape == (W.shape[0], A_prev.shape[1])) cache = (linear_cache, activation_cache) return A, cache A_prev, W, b = linear_activation_forward_test_case() A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid") print("With sigmoid: A = " + str(A)) A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu") print("With ReLU: A = " + str(A))
With sigmoid: A = [[0.96890023 0.11013289]] With ReLU: A = [[3.43896131 0. ]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**Expected output**: **With sigmoid: A ** [[ 0.96890023 0.11013289]] **With ReLU: A ** [[ 3.43896131 0. ]] **Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. d) L-Layer Model For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID. **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model**Exercise**: Implement the forward propagation of the above model.**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.) **Tips**:- Use the functions you had previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
# GRADED FUNCTION: L_model_forward def L_model_forward(X, parameters): """ Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation Arguments: X -- data, numpy array of shape (input size, number of examples) parameters -- output of initialize_parameters_deep() Returns: AL -- last post-activation value caches -- list of caches containing: every cache of linear_activation_forward() (there are L-1 of them, indexed from 0 to L-1) """ caches = [] A = X L = len(parameters) // 2 # number of layers in the neural network # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list. for l in range(1, L): A_prev = A ### START CODE HERE ### (≈ 2 lines of code) A, cache = linear_activation_forward(A_prev, parameters['W'+str(l)], parameters['b'+str(l)], activation='relu') caches.append(cache) ### END CODE HERE ### # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list. ### START CODE HERE ### (≈ 2 lines of code) AL, cache = linear_activation_forward(A, parameters['W'+str(L)], parameters['b'+str(L)], activation='sigmoid') caches.append(cache) ### END CODE HERE ### assert(AL.shape == (1,X.shape[1])) return AL, caches X, parameters = L_model_forward_test_case_2hidden() AL, caches = L_model_forward(X, parameters) print("AL = " + str(AL)) print("Length of caches list = " + str(len(caches)))
AL = [[0.03921668 0.70498921 0.19734387 0.04728177]] Length of caches list = 3
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**AL** [[ 0.03921668 0.70498921 0.19734387 0.04728177]] **Length of caches list ** 3 Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. 5 - Cost functionNow you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
# GRADED FUNCTION: compute_cost def compute_cost(AL, Y): """ Implement the cost function defined by equation (7). Arguments: AL -- probability vector corresponding to your label predictions, shape (1, number of examples) Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples) Returns: cost -- cross-entropy cost """ m = Y.shape[1] # Compute loss from aL and y. ### START CODE HERE ### (≈ 1 lines of code) cost = -(1/m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1-Y), np.log(1-AL))) ### END CODE HERE ### cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17). assert(cost.shape == ()) return cost Y, AL = compute_cost_test_case() print("cost = " + str(compute_cost(AL, Y)))
cost = 0.41493159961539694
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**Expected Output**: **cost** 0.41493159961539694 6 - Backward propagation moduleJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. **Reminder**: **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.This is why we talk about **backpropagation**.!-->Now, similar to forward propagation, you are going to build the backward propagation in three steps:- LINEAR backward- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) 6.1 - Linear backwardFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$. **Figure 4** The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$ **Exercise**: Use the 3 formulas above to implement linear_backward().
# GRADED FUNCTION: linear_backward def linear_backward(dZ, cache): """ Implement the linear portion of backward propagation for a single layer (layer l) Arguments: dZ -- Gradient of the cost with respect to the linear output (of current layer l) cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ A_prev, W, b = cache m = A_prev.shape[1] ### START CODE HERE ### (≈ 3 lines of code) dW = (1/m) * np.dot(dZ, A_prev.T) db = (1/m) * np.sum(dZ, axis=1, keepdims=True) dA_prev = np.dot(W.T, dZ) ### END CODE HERE ### assert (dA_prev.shape == A_prev.shape) assert (dW.shape == W.shape) assert (db.shape == b.shape) return dA_prev, dW, db # Set up some test inputs dZ, linear_cache = linear_backward_test_case() dA_prev, dW, db = linear_backward(dZ, linear_cache) print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db))
dA_prev = [[ 0.51822968 -0.19517421] [-0.40506361 0.15255393] [ 2.37496825 -0.89445391]] dW = [[-0.10076895 1.40685096 1.64992505]] db = [[0.50629448]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**Expected Output**: **dA_prev** [[ 0.51822968 -0.19517421] [-0.40506361 0.15255393] [ 2.37496825 -0.89445391]] **dW** [[-0.10076895 1.40685096 1.64992505]] **db** [[ 0.50629448]] 6.2 - Linear-Activation backwardNext, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**. To help you implement `linear_activation_backward`, we provided two backward functions:- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:```pythondZ = sigmoid_backward(dA, activation_cache)```- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:```pythondZ = relu_backward(dA, activation_cache)```If $g(.)$ is the activation function, `sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$. **Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
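The two helpers are provided for you, so you do not have to implement them; still, a rough sketch of what they typically compute (assuming, as in this assignment, that the activation cache holds the pre-activation `Z`) makes formula (11) concrete. The `_sketch` suffix is mine; these are not the provided functions themselves:
```python
import numpy as np

def relu_backward_sketch(dA, activation_cache):
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0            # g'(Z) is 1 where Z > 0 and 0 elsewhere
    return dZ

def sigmoid_backward_sketch(dA, activation_cache):
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)   # g'(Z) = sigma(Z) * (1 - sigma(Z))
```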
# GRADED FUNCTION: linear_activation_backward def linear_activation_backward(dA, cache, activation): """ Implement the backward propagation for the LINEAR->ACTIVATION layer. Arguments: dA -- post-activation gradient for current layer l cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ linear_cache, activation_cache = cache if activation == "relu": ### START CODE HERE ### (≈ 2 lines of code) dZ = relu_backward(dA, activation_cache) dA_prev, dW, db = linear_backward(dZ, linear_cache) ### END CODE HERE ### elif activation == "sigmoid": ### START CODE HERE ### (≈ 2 lines of code) dZ = sigmoid_backward(dA, activation_cache) dA_prev, dW, db = linear_backward(dZ, linear_cache) ### END CODE HERE ### return dA_prev, dW, db dAL, linear_activation_cache = linear_activation_backward_test_case() dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid") print ("sigmoid:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db) + "\n") dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu") print ("relu:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db))
sigmoid: dA_prev = [[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] dW = [[ 0.10266786 0.09778551 -0.01968084]] db = [[-0.05729622]] relu: dA_prev = [[ 0.44090989 -0. ] [ 0.37883606 -0. ] [-0.2298228 0. ]] dW = [[ 0.44513824 0.37371418 -0.10478989]] db = [[-0.20837892]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**Expected output with sigmoid:** dA_prev [[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] dW [[ 0.10266786 0.09778551 -0.01968084]] db [[-0.05729622]] **Expected output with relu:** dA_prev [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. ]] dW [[ 0.44513824 0.37371418 -0.10478989]] db [[-0.20837892]] 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. **Figure 5** : Backward pass ** Initializing backpropagation**:To backpropagate through this network, we know that the output is, $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):```pythondAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) derivative of cost with respect to AL```You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
# GRADED FUNCTION: L_model_backward def L_model_backward(AL, Y, caches): """ Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group Arguments: AL -- probability vector, output of the forward propagation (L_model_forward()) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) caches -- list of caches containing: every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2) the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1]) Returns: grads -- A dictionary with the gradients grads["dA" + str(l)] = ... grads["dW" + str(l)] = ... grads["db" + str(l)] = ... """ grads = {} L = len(caches) # the number of layers m = AL.shape[1] Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL # Initializing the backpropagation ### START CODE HERE ### (1 line of code) dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) ### END CODE HERE ### # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"] ### START CODE HERE ### (approx. 2 lines) current_cache = caches[L-1] grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache,'sigmoid') ### END CODE HERE ### # Loop from l=L-2 to l=0 for l in reversed(range(L-1)): # lth layer: (RELU -> LINEAR) gradients. # Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)] ### START CODE HERE ### (approx. 5 lines) current_cache = caches[l] dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads['dA'+str(l+1)], current_cache, 'relu') grads["dA" + str(l)] = dA_prev_temp grads["dW" + str(l + 1)] = dW_temp grads["db" + str(l + 1)] = db_temp ### END CODE HERE ### return grads AL, Y_assess, caches = L_model_backward_test_case() grads = L_model_backward(AL, Y_assess, caches) print_grads(grads)
dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167] [0. 0. 0. 0. ] [0.05283652 0.01005865 0.01777766 0.0135308 ]] db1 = [[-0.22007063] [ 0. ] [-0.02835349]] dA1 = [[ 0.12913162 -0.44014127] [-0.14175655 0.48317296] [ 0.01663708 -0.05670698]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
**Expected Output** dW1 [[ 0.41010002 0.07807203 0.13798444 0.10502167] [ 0. 0. 0. 0. ] [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] db1 [[-0.22007063] [ 0. ] [-0.02835349]] dA1 [[ 0.12913162 -0.44014127] [-0.14175655 0.48317296] [ 0.01663708 -0.05670698]] 6.4 - Update ParametersIn this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. **Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.**Instructions**:Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
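Because the keys follow the `W1/b1/...` and `dW1/db1/...` naming convention, the same update can also be written as a single dictionary comprehension. This is just an equivalent sketch; the graded cell below uses an explicit `for` loop as instructed:
```python
def update_parameters_compact(parameters, grads, learning_rate):
    # For every key "W1", "b1", ... subtract learning_rate times the matching gradient "dW1", "db1", ...
    return {key: value - learning_rate * grads["d" + key] for key, value in parameters.items()}
```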
# GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate): """ Update parameters using gradient descent Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients, output of L_model_backward Returns: parameters -- python dictionary containing your updated parameters parameters["W" + str(l)] = ... parameters["b" + str(l)] = ... """ L = len(parameters) // 2 # number of layers in the neural network # Update rule for each parameter. Use a for loop. ### START CODE HERE ### (≈ 3 lines of code) for l in range(L): parameters["W" + str(l+1)] -= learning_rate * grads["dW" + str(l+1)] parameters["b" + str(l+1)] -= learning_rate * grads["db" + str(l+1)] ### END CODE HERE ### return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads, 0.1) print ("W1 = "+ str(parameters["W1"])) print ("b1 = "+ str(parameters["b1"])) print ("W2 = "+ str(parameters["W2"])) print ("b2 = "+ str(parameters["b2"]))
W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008] [-1.76569676 -0.80627147 0.51115557 -1.18258802] [-1.0535704 -0.86128581 0.68284052 2.20374577]] b1 = [[-0.04659241] [-1.28888275] [ 0.53405496]] W2 = [[-0.55569196 0.0354055 1.32964895]] b2 = [[-0.84610769]]
MIT
Course1_week4_Building your Deep Neural Network - Step by Step v8.ipynb
korra0501/deeplearning.ai-coursera
Data Structures and Algorithms Andres Castellano June 11, 2020 Mini Project 1
from google.colab import drive drive.mount('/content/gdrive/') %cd /content/gdrive/My\ Drive/Mini-Project 1
_____no_output_____
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Step 1)** Construct a text file that follows the input specifications of the problem, i.e. it can serve as a sample input. Specifically, you should give an input file representing a 10x10 patch. The patch should contain two or three islands, according to your choice. The shape of the islands can be arbitrary, but try to be creative. The text file should be of the form firstname-lastname.txt. Notice that each cell in the patch is characterized by its coordinates. The top left coordinate is (0,0) and coordinate (i,j) is for the cell in the i-th row and j-th column. ![alt text](andres-castellano.png "Input File") **Step 2)** Write a function that reads an input file with the given specifications and returns the list of the coordinates of the land points, i.e. the list of coordinates for the ‘X’ points.
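In case the screenshot above does not render, here is a hypothetical example of the layout my parser below expects: the first line gives the number of test cases, each case starts with its dimensions, and 'x' marks a land cell (I use '.' for water here, but the parser only checks for 'x', so any other character would do). It is a small 5x5 case for brevity and is not my actual submission file:
```python
# Hypothetical example of the input layout (not the submitted andres-castellano.txt).
sample = (
    "1\n"      # number of test cases
    "5 5\n"    # rows and columns for this case
    ".....\n"
    ".xx..\n"
    ".xx..\n"
    "....x\n"
    "....x\n"
)
with open("sample-input.txt", "w") as out:
    out.write(sample)
```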
f = open('andres-castellano.txt', 'r') def Coordinates(file): ''' This function will return a list, where each element of the list is a list itself of coordinates for each test case in the file. In the case where the input file only contains 1 test case, the function will return a list of length 1.''' import re f.seek(0) coor = [] num_cases = int(f.readlines(1)[0][0]) print("Number of test cases is: {} \n".format(num_cases)) for case in range(num_cases): print('For test case {} ... '.format(case+1)) dims = re.findall('\d+', f.readlines(1)[0]) i = int(dims[0]) j = int(dims[1]) case_coor = [] print('Dimensions are {0} by {1} \n'.format(i,j)) for ith in range(i): line = f.readlines(1)[0] for jth in range(j): if line[jth] == "x": #print(ith+1,jth+1) case_coor.append((ith+1,jth+1)) #print(case_coor) coor.append(case_coor) return coor Coordinates(f)
Number of test cases is: 2 For test case 1 ... Dimensions are 10 by 10 For test case 2 ... Dimensions are 20 by 20
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Step 3)** Write a function CoordinateToNumber(i, j, m, n) that takes a coordinate (i, j) and maps it to a unique number t in [0, mn − 1], which is then returned by the function. I'm assuming this step is directing us to create a function that takes a list of coordinates and generates unique identifiers for each coordinate within the list.
def CoordinateToNumber(coordinate_list=list,m=int,n=int): '''Returns a list for a list of coordinates as input, returns a tuple for a coordinate as input''' global T if len(coordinate_list) > m*n: raise Exception('Stop, too many coordinates for mXn') if type(coordinate_list) is tuple: T = {0 : coordinate_list} else: for value in enumerate(coordinate_list): T = dict(val for val in enumerate(coordinate_list)) return T CoordinateToNumber(Coordinates(f)[0],10,10)
Number of test cases is: 2 For test case 1 ... Dimensions are 10 by 10 For test case 2 ... Dimensions are 20 by 20
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Step 4)** Write a function NumberToCoordinate(t, m, n) that takes a number t and returns the corresponding coordinate. This function must be the inverse of CoordinateToNumber. That is, for all i, j, m, n we must have NumberToCoordinate(CoordinateToNumber(i, j, m, n), m, n) = (i, j). The two steps above mean that besides its coordinates, each cell has its own unique identity number in [0, mn − 1].
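For reference, the mapping these two steps describe is usually plain row-major arithmetic. The sketch below is independent of the dictionary-based enumeration I use in my own implementation, and assumes 0-indexed (i, j) as in the problem statement:
```python
def coordinate_to_number(i, j, m, n):
    # Row-major numbering: cell (i, j) on an m x n grid gets number i*n + j in [0, m*n - 1].
    # m is unused here; bounds checking is omitted for brevity.
    return i * n + j

def number_to_coordinate(t, m, n):
    # Inverse of the mapping above: divmod gives (row, column).
    return divmod(t, n)

assert number_to_coordinate(coordinate_to_number(2, 7, 10, 10), 10, 10) == (2, 7)
```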
def NumberToCoordinate(L=list,m=int,n=int): '''Returns a list size n of tuples for a list of inputs of size n''' coordinate_list = [] for key in L: coordinate_list.append(T.get(key)) return coordinate_list NumberToCoordinate(CoordinateToNumber(Coordinates(f)[0],10,10),10,10)
Number of test cases is: 2 For test case 1 ... Dimensions are 10 by 10 For test case 2 ... Dimensions are 20 by 20
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
Note 1: The functions as defined above take iterable objects as inputs, as seen above. However, all functions can also be called on specific coordinates. For example, the code below will call the identifier and coordinates of one single cell from test case 1.
Coordinates(f)[0][5] # Gets the 6th coordinate of test case 1. CoordinateToNumber(Coordinates(f)[0][5],10,10) # Gets the identifier and coordinates for the 6th coordinate of case 1. NumberToCoordinate(CoordinateToNumber(Coordinates(f)[0][5],10,10),10,10)
Number of test cases is: 2 For test case 1 ... Dimensions are 10 by 10 For test case 2 ... Dimensions are 20 by 20
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
The last line of code gets the coordinate from the corresponding identifier given to the 6th coordinate of test case 1. Note that Coordinates(f)[0][5] and NumberToCoordinate(CoordinateToNumber(Coordinates(f)[0][5])) should return the same coordinate, and it does. **Step 5)** Write a function Distance(t1, t2), where t1 and t2 are the identity numbers of two cells, and the output is the distance between them. The distance is the minimum number of connected cells that one has to traverse to go from t1 to t2. (Hint: Use function NumberToCoordinate for this)
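My implementation below interprets this as the Manhattan distance between the two cells' coordinates. A quick hand check on two made-up coordinates:
```python
# Manhattan distance between two (i, j) coordinates -- a hand check with made-up points.
p, q = (3, 10), (5, 13)
print(abs(p[0] - q[0]) + abs(p[1] - q[1]))  # 2 + 3 = 5
```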
CoordinateToNumber(Coordinates(f)[0],10,10) def Distance(t1=int,t2=int): '''Returns a scalar representing the manhattan distance between two coordinates representing input identifiers t1 and t2. ''' t1 = T[t1] t2 = T[t2] distance = abs(t2[1]-t1[1]) + abs(t2[0]-t1[0]) return distance Distance(0,1) == abs(8-3)+abs(2-2)
_____no_output_____
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
Recall that in **Step 2** we wrote a function for finding the list of land cells. Let’s call this function **FindLandCells**, and its output **LandCell_List**. This list of land cells can look like this: LandCell_List = [10, 11, 25, 12, 50, 51, 80, 81, 82] (this is only an example, it does not correspond to some specific input).
FindLandCells = CoordinateToNumber LandCell_List = CoordinateToNumber(Coordinates(f)[1],20,20) LandCell_List.keys()
Number of test cases is: 2 For test case 1 ... Dimensions are 10 by 10 For test case 2 ... Dimensions are 20 by 20
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
Now this list can be further broken into islands. So, we have something that looks like this: Island_List = [[10, 11, 12], [25], [50, 51], [80, 81, 82]] You see how all the cells from the original list appear in the second data structure, which is a list of lists, with each list being an island. Observe how cells belonging to the same island (e.g. cell 12) can be mixed up with other islands in LandCell_List. In other words, one island’s cells do not have to be in contiguous positions in LandCell_List. In this section we will write functions to help find the list of islands. **Step 6)** Write a function GenerateNeighbors(t1, n, m) that takes one cell number t1 (and also the dimensions), and returns the numbers for the neighbors of t1 in the grid. Notice that t1 can have 2, 3 or 4 neighbors.
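Note that my implementation below returns the *count* of land neighbors (which is what the landlocked-cell check in Step 9 consumes) rather than the neighbor identifiers themselves. For the version the step literally describes, here is a minimal sketch that assumes row-major cell numbering (my cells are actually numbered by enumeration order, so this is illustrative only):
```python
def generate_neighbor_numbers(t, m, n):
    # Neighbors of cell t under row-major numbering on an m x n grid (2, 3 or 4 of them).
    i, j = divmod(t, n)
    candidates = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [r * n + c for r, c in candidates if 0 <= r < m and 0 <= c < n]
```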
# Build the identifier dictionary T for test case 1 (the 10x10 patch)
CoordinateToNumber(Coordinates(f)[0], 10, 10)

def GenerateNeighbors(t1=int, m=int, n=int):
    '''Return the number of grid neighbours of cell t1 that are land cells (i.e. appear in T).'''
    coordinates = T[t1]
    candidates = []
    # Horizontal neighbours: the cells immediately left and right of t1
    row_candidates = [(coordinates[0], coordinates[1] - 1), (coordinates[0], coordinates[1] + 1)]
    [candidates.append(x) for x in row_candidates]
    # Vertical neighbours: the cells immediately above and below t1
    col_candidates = [(coordinates[0] - 1, coordinates[1]), (coordinates[0] + 1, coordinates[1])]
    [candidates.append(x) for x in col_candidates]
    # Count how many of the candidate neighbours are land
    return sum([x in T.values() for x in candidates])

GenerateNeighbors(2, 10, 10)
_____no_output_____
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Step 7)** Write a function ExploreIsland(t1, n, m). This function should start from cell t1, and construct a list of cells that are in the same island as t1. (Hint: t1 can add itself to a dictionary representing the island, and also its neighbors, then the neighbors should recursively do the same. But when new neighbors are inserted in the dictionary, we should first check if they are already in it. The process should terminate when it’s not possible to add more cells to the dictionary, meaning that we found the island. Finally the function should return a list with the cells on the island.)
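My implementation below follows the hint and recurses; an equivalent iterative flood fill avoids Python's recursion limit on very large islands. This sketch works on coordinate tuples (e.g. `set(Coordinates(f)[0])`) rather than on the identifier dictionary:
```python
def explore_island_iterative(start, land_coords):
    # Flood fill from `start`, an (i, j) tuple, over the set `land_coords` of land coordinates.
    island, stack = {start}, [start]
    while stack:
        i, j = stack.pop()
        for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if nb in land_coords and nb not in island:
                island.add(nb)
                stack.append(nb)
    return island
```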
FindLandCells = CoordinateToNumber LandCell_List = CoordinateToNumber(Coordinates(f)[0],10,10) LandCell_List.keys() def ExploreIsland(t1=int,m=int,n=int, neighbors=[]): coordinates = T[t1] candidates = [] if neighbors == []: neighbors.append(t1) row_candidates = [(coordinates[0],coordinates[1]-1),(coordinates[0],coordinates[1]+1)] [candidates.append(x) for x in row_candidates] col_candidates = [(coordinates[0]-1,coordinates[1]),(coordinates[0]+1,coordinates[1])] [candidates.append(x) for x in col_candidates] print("\nFor Land {0} with coordinates {1}, Candidates are {2} ".format(t1, coordinates, candidates)) for x in candidates: print(" ...Checking coordinates {} for land {}".format(x,t1)) if x in T.values(): for key, value in T.items(): if value == x and key in neighbors: print(" Land {} already on Island with land {}! ".format(key,t1)) if value == x and key not in neighbors: print(" ...Adding land {} with coordinates {} to land {} ".format(key,x,t1)) neighbors.append(key) print("\nExploring land {}____________\n".format(key)) ExploreIsland(key,m,n,neighbors) #print("Island consists of Lands {}".format(neighbors)) return neighbors #neighbors.append() ExploreIsland(10,10,10,neighbors=[]) FindLandCells = CoordinateToNumber LandCell_List = FindLandCells(Coordinates(f)[1],20,20) LandCell_List.keys() ExploreIsland(0,20,20,neighbors=[])
For Land 0 with coordinates (3, 10), Candidates are [(3, 9), (3, 11), (2, 10), (4, 10)] ...Checking coordinates (3, 9) for land 0 ...Checking coordinates (3, 11) for land 0 ...Adding land 1 with coordinates (3, 11) to land 0 Exploring land 1____________ For Land 1 with coordinates (3, 11), Candidates are [(3, 10), (3, 12), (2, 11), (4, 11)] ...Checking coordinates (3, 10) for land 1 Land 0 already on Island with land 1! ...Checking coordinates (3, 12) for land 1 ...Adding land 2 with coordinates (3, 12) to land 1 Exploring land 2____________ For Land 2 with coordinates (3, 12), Candidates are [(3, 11), (3, 13), (2, 12), (4, 12)] ...Checking coordinates (3, 11) for land 2 Land 1 already on Island with land 2! ...Checking coordinates (3, 13) for land 2 ...Adding land 3 with coordinates (3, 13) to land 2 Exploring land 3____________ For Land 3 with coordinates (3, 13), Candidates are [(3, 12), (3, 14), (2, 13), (4, 13)] ...Checking coordinates (3, 12) for land 3 Land 2 already on Island with land 3! ...Checking coordinates (3, 14) for land 3 ...Checking coordinates (2, 13) for land 3 ...Checking coordinates (4, 13) for land 3 ...Adding land 7 with coordinates (4, 13) to land 3 Exploring land 7____________ For Land 7 with coordinates (4, 13), Candidates are [(4, 12), (4, 14), (3, 13), (5, 13)] ...Checking coordinates (4, 12) for land 7 ...Adding land 6 with coordinates (4, 12) to land 7 Exploring land 6____________ For Land 6 with coordinates (4, 12), Candidates are [(4, 11), (4, 13), (3, 12), (5, 12)] ...Checking coordinates (4, 11) for land 6 ...Adding land 5 with coordinates (4, 11) to land 6 Exploring land 5____________ For Land 5 with coordinates (4, 11), Candidates are [(4, 10), (4, 12), (3, 11), (5, 11)] ...Checking coordinates (4, 10) for land 5 ...Adding land 4 with coordinates (4, 10) to land 5 Exploring land 4____________ For Land 4 with coordinates (4, 10), Candidates are [(4, 9), (4, 11), (3, 10), (5, 10)] ...Checking coordinates (4, 9) for land 4 ...Checking coordinates (4, 11) for land 4 Land 5 already on Island with land 4! ...Checking coordinates (3, 10) for land 4 Land 0 already on Island with land 4! ...Checking coordinates (5, 10) for land 4 ...Checking coordinates (4, 12) for land 5 Land 6 already on Island with land 5! ...Checking coordinates (3, 11) for land 5 Land 1 already on Island with land 5! ...Checking coordinates (5, 11) for land 5 ...Checking coordinates (4, 13) for land 6 Land 7 already on Island with land 6! ...Checking coordinates (3, 12) for land 6 Land 2 already on Island with land 6! ...Checking coordinates (5, 12) for land 6 ...Adding land 8 with coordinates (5, 12) to land 6 Exploring land 8____________ For Land 8 with coordinates (5, 12), Candidates are [(5, 11), (5, 13), (4, 12), (6, 12)] ...Checking coordinates (5, 11) for land 8 ...Checking coordinates (5, 13) for land 8 ...Adding land 9 with coordinates (5, 13) to land 8 Exploring land 9____________ For Land 9 with coordinates (5, 13), Candidates are [(5, 12), (5, 14), (4, 13), (6, 13)] ...Checking coordinates (5, 12) for land 9 Land 8 already on Island with land 9! ...Checking coordinates (5, 14) for land 9 ...Checking coordinates (4, 13) for land 9 Land 7 already on Island with land 9! 
...Checking coordinates (6, 13) for land 9 ...Adding land 11 with coordinates (6, 13) to land 9 Exploring land 11____________ For Land 11 with coordinates (6, 13), Candidates are [(6, 12), (6, 14), (5, 13), (7, 13)] ...Checking coordinates (6, 12) for land 11 ...Adding land 10 with coordinates (6, 12) to land 11 Exploring land 10____________ For Land 10 with coordinates (6, 12), Candidates are [(6, 11), (6, 13), (5, 12), (7, 12)] ...Checking coordinates (6, 11) for land 10 ...Checking coordinates (6, 13) for land 10 Land 11 already on Island with land 10! ...Checking coordinates (5, 12) for land 10 Land 8 already on Island with land 10! ...Checking coordinates (7, 12) for land 10 ...Adding land 14 with coordinates (7, 12) to land 10 Exploring land 14____________ For Land 14 with coordinates (7, 12), Candidates are [(7, 11), (7, 13), (6, 12), (8, 12)] ...Checking coordinates (7, 11) for land 14 ...Checking coordinates (7, 13) for land 14 ...Adding land 15 with coordinates (7, 13) to land 14 Exploring land 15____________ For Land 15 with coordinates (7, 13), Candidates are [(7, 12), (7, 14), (6, 13), (8, 13)] ...Checking coordinates (7, 12) for land 15 Land 14 already on Island with land 15! ...Checking coordinates (7, 14) for land 15 ...Checking coordinates (6, 13) for land 15 Land 11 already on Island with land 15! ...Checking coordinates (8, 13) for land 15 ...Adding land 19 with coordinates (8, 13) to land 15 Exploring land 19____________ For Land 19 with coordinates (8, 13), Candidates are [(8, 12), (8, 14), (7, 13), (9, 13)] ...Checking coordinates (8, 12) for land 19 ...Adding land 18 with coordinates (8, 12) to land 19 Exploring land 18____________ For Land 18 with coordinates (8, 12), Candidates are [(8, 11), (8, 13), (7, 12), (9, 12)] ...Checking coordinates (8, 11) for land 18 ...Checking coordinates (8, 13) for land 18 Land 19 already on Island with land 18! ...Checking coordinates (7, 12) for land 18 Land 14 already on Island with land 18! ...Checking coordinates (9, 12) for land 18 ...Checking coordinates (8, 14) for land 19 ...Checking coordinates (7, 13) for land 19 Land 15 already on Island with land 19! ...Checking coordinates (9, 13) for land 19 ...Checking coordinates (6, 12) for land 14 Land 10 already on Island with land 14! ...Checking coordinates (8, 12) for land 14 Land 18 already on Island with land 14! ...Checking coordinates (6, 14) for land 11 ...Checking coordinates (5, 13) for land 11 Land 9 already on Island with land 11! ...Checking coordinates (7, 13) for land 11 Land 15 already on Island with land 11! ...Checking coordinates (4, 12) for land 8 Land 6 already on Island with land 8! ...Checking coordinates (6, 12) for land 8 Land 10 already on Island with land 8! ...Checking coordinates (4, 14) for land 7 ...Checking coordinates (3, 13) for land 7 Land 3 already on Island with land 7! ...Checking coordinates (5, 13) for land 7 Land 9 already on Island with land 7! ...Checking coordinates (2, 12) for land 2 ...Checking coordinates (4, 12) for land 2 Land 6 already on Island with land 2! ...Checking coordinates (2, 11) for land 1 ...Checking coordinates (4, 11) for land 1 Land 5 already on Island with land 1! ...Checking coordinates (2, 10) for land 0 ...Checking coordinates (4, 10) for land 0 Land 4 already on Island with land 0!
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Step 8)** Write a function FindIslands that reads the list LandCell_List and converts it to Island_List as explained above. The idea for this step is to scan the list of land cells, and repeatedly call the ExploreIsland function.
def FindIslands(LandCell_List=list): Island_List = [] island = [] checked = [] for i in LandCell_List.keys(): if i not in checked: print("Finding Islands Connected to Land {}".format(i)) island = ExploreIsland(i,20,20,neighbors=[]) [checked.append(x) for x in island] print("Explored island {}, consists of {}:".format(i, island)) if len(island) < 1: Island_List.append([i]) if island is not None: Island_List.append(island) else: next return Island_List Island_List = FindIslands(LandCell_List) Island_List
_____no_output_____
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Step 9)** Write a function Island_Distance(isl1, isl2), which takes two lists of cells representing two islands, and finds the distance between these two islands. For this you will need the Distance function from Milestone 1.
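Conceptually the island distance is just the minimum pairwise cell distance; my implementation below also skips landlocked cells (cells with four land neighbors), since the closest pair always lies on the islands' boundaries. A brute-force sketch of the same idea, using the Distance function from Step 5:
```python
def island_distance_bruteforce(isl1, isl2):
    # Minimum distance over every pair of cells, one from each island.
    return min(Distance(t1, t2) for t1 in isl1 for t2 in isl2)
```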
def Island_Distance(isl1=list,isl2=list): x0 = 'nothing' for i in isl1: if GenerateNeighbors(i,m=int,n=int) > 3: #Landlocked Land next else: #print(" Checking land {}".format(i)) for j in isl2: if GenerateNeighbors(j,m=int,n=int) >3: next else: #print("Measuring land {} to land {}".format(i,j)) x1 = Distance(i,j) if x0 == 'nothing': x0 = x1 if x1 < x0: x0 = x1 #print("\nNew Shortest Lenght is {}".format(x0)) #print("\nShortest distance is {}\n".format(x0)) return x0 Island_Distance(Island_List[0],Island_List[1])
_____no_output_____
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Step 10)** We will now construct a graph of islands. Consider an example for this. Suppose Island_List contains 3 islands. We will assign to each island a unique number in [0, 3). Then Island_Graph will be a list of the following form: [[0, 1, d(0, 1)], [0, 2, d(0, 2)], [1, 2, d(1, 2)]]. Here d(i, j) is the distance between islands i and j, as computed with the function in Step 9. In other words, for each pair of islands, we have a triple consisting of the identities of the pair and their distance. This is a complete weighted graph. The goal of this step is to write a function Island_Graph that outputs this list.
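Equivalently, the complete weighted graph can be built with `itertools.combinations`, which avoids the manual `skip` bookkeeping. A sketch that assumes `Island_Distance` from Step 9:
```python
from itertools import combinations

def island_graph_sketch(island_list):
    # One [i, j, distance] triple per unordered pair of islands.
    return [[i, j, Island_Distance(island_list[i], island_list[j])]
            for i, j in combinations(range(len(island_list)), 2)]
```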
def Island_Graph(Island_List=list):
    # For n islands in Island_List, enumerate the islands on [0, n) and build one
    # [i, j, distance] triple per unordered pair of islands.
    output = []
    skip = []
    global dums
    dums = dict(val for val in enumerate(Island_List))
    for i in dums.keys():
        skip.append(i)
        for j in dums.keys():
            if i == j or j in skip:
                # Skip self-pairs and pairs already handled in the opposite order
                continue
            else:
                y0 = [i, j, Island_Distance(dums[i], dums[j])]
                output.append(y0)
    print("\nLength of output list is {}, ".format(len(output)),
          "which makes sense for a list of size {}, ".format(len(Island_List)),
          "since {} times {} divided by 2 is {}.".format(len(Island_List), len(Island_List) - 1,
                                                         int(len(Island_List) * (len(Island_List) - 1)) / 2))
    return output

fun = Island_Graph(Island_List)
fun
Length of output list is 21, which makes sense for a list of size 7, since 7 times 6 divided by 2 is 21.0.
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
**Final Step** We now have a data structure which is the adjacency list (with weights) of the island graph. To connect all islands, we need to find a **minimum weight spanning tree** for this graph. I have seen one algorithm for computing such a tree in class. However, for this project, I will use the Python library **networkx**. Here is the documentation of a function that computes minimum weight spanning trees: <a href="https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.mst.minimum_spanning_tree.html">networkx.algorithms.mst.minimum_spanning_tree</a>
import networkx as nx G = nx.Graph() G.clear() for i in range(len(fun)): print(fun[i][0],fun[i][1],fun[i][2]) G.add_edge(fun[i][0],fun[i][1], weight=fun[i][2]) mst = nx.tree.minimum_spanning_edges(G, algorithm='kruskal', data=False) edgelist = list(mst) sorted(sorted(e) for e in edgelist) arbol = sorted(sorted(e) for e in edgelist) nx.draw(G, with_labels=True, font_weight='bold') EDGY = [] for item in edgelist: #print(item[0],item[1]) i = item[0] j = item[1] #print(i) k = Island_Distance(Island_List[i],Island_List[j]) EDGY.append(k) length = sum(EDGY) print(length) print("The minimum cost to build the bridges required to connect all the islands is {}.".format(length))
The minimum cost to build the bridges required to connect all the islands is 23.
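As a cross-check, the total bridge length can also be read directly off the spanning tree object, assuming the weighted graph `G` built above is still in memory:
```python
import networkx as nx

# Sum of the edge weights of the minimum spanning tree -- should match the total of 23 computed above.
mst_graph = nx.minimum_spanning_tree(G, weight="weight")
print(mst_graph.size(weight="weight"))
```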
MIT
Connect The Islands.ipynb
ac547/Connect-The-Islands
from google.colab import drive drive.mount('/content/drive')
Mounted at /content/drive
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Dataset Exploration1. Head of dataset2. Check for null values (absent)3. Check for class imbalance (present - remove it by upsampling)
# Read the csv file import pandas as pd df=pd.read_csv('/content/drive/My Drive/Diabetes/diabetes.csv') df.head() # Print the null values of every column df.isna().sum() # Print class count to check for imbalance df['Outcome'].value_counts() from sklearn.utils import resample df_majority = df[df.Outcome==0] df_minority = df[df.Outcome==1] # Upsample minority class df_minority_upsampled = resample(df_minority, replace=True, # sample with replacement n_samples=500, # to match majority class random_state=42) # reproducible results # Combine majority class with upsampled minority class df = pd.concat([df_majority, df_minority_upsampled]) print(df['Outcome'].value_counts())
1 500 0 500 Name: Outcome, dtype: int64
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Stratified K Folds Cross Validation
# Add a "kfolds" column which will indicate the validation set number df['kfolds']=-1 # Shuffle all the rows and then reset the index df=df.sample(frac=1,random_state=42).reset_index(drop=True) df.head() from sklearn import model_selection # Create 5 sets of training,validation sets strat_kf=model_selection.StratifiedKFold(n_splits=5) # .split() returns a list of 5 lists (corresponding to the n_splits value) # Each of these 5 lists consists of 2 lists. 1st one contains training set indices and 2nd one contains validation set indices # In a dataset of 10 data points, data 1 and 2 will be the validation in 1st fold, data 3 and 4 in the second fold and so on # 1st iteration of the for loop : trn_ = 3,4,5,6,7,8,9,10 and val_ = 1,2 and fold : 0 # Assign 1st and 2nd row's kfolds value as 0 representing that they will be the validation points for 1st (0th) fold for fold,(trn_,val_) in enumerate(strat_kf.split(X=df,y=df['Outcome'])): df.loc[val_,'kfolds']=fold df.head()
_____no_output_____
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Scale the features using StandardScaler
from sklearn.preprocessing import StandardScaler scaler=StandardScaler() df_2=pd.DataFrame(scaler.fit_transform(df),index=df.index,columns=df.columns) # Target column and kfolds column dont need to be scaled df_2['Outcome']=df['Outcome'] df_2['kfolds']=df['kfolds'] df_2.head()
_____no_output_____
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Feature Selection1. KNN2. Naive Bayes
from sklearn import metrics import matplotlib.pyplot as plt def run(fold,df,models,print_details=False): # Training and validation sets df_train=df[df['kfolds']!=fold].reset_index(drop=True) df_valid=df[df['kfolds']==fold].reset_index(drop=True) # x and y of training dataset x_train=df_train.drop('Outcome',axis=1).values y_train=df_train.Outcome.values # x and y of validation dataset x_valid=df_valid.drop('Outcome',axis=1).values y_valid=df_valid.Outcome.values # accuracy => will store accuracies of the models (same for confusion_matrices) accuracy=[] confusion_matrices=[] classification_report=[] for model_name,model_constructor in list(models.items()): clf=model_constructor clf.fit(x_train,y_train) # preds_train, preds_valid => predictions when training and validation x are fed into the trained model preds_train=clf.predict(x_train) preds_valid=clf.predict(x_valid) acc_train=metrics.accuracy_score(y_train,preds_train) acc_valid=metrics.accuracy_score(y_valid,preds_valid) conf_matrix=metrics.confusion_matrix(y_valid,preds_valid) class_report=metrics.classification_report(y_valid,preds_valid) accuracy.append(acc_valid) confusion_matrices.append(conf_matrix) classification_report.append(class_report) if(print_details==True): print(f'Model => {model_name} => Fold = {fold} => Training Accuracy = {acc_train} => Validation Accuracy = {acc_valid}') if(print_details==True): print('\n--------------------------------------------------------------------------------------------\n') return accuracy,confusion_matrices,classification_report
_____no_output_____
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Greedy Feature Selection
def greedy_feature_selection(fold,df,models,target_name): # target_index => stores the index of the target variable in the dataset # kfolds_index => stores the index of kfolds column in the dataset target_index=df.columns.get_loc(target_name) kfolds_index=df.columns.get_loc('kfolds') # good_features => stores the indices of all the optimal features # best_scores => keeps track of the best scores good_features=[] best_scores=[] # df has X and y and a kfolds column. # no of features (no of columns in X) => total columns in df - 1 (there's 1 y) - 1 (there's 1 kfolds) num_features=df.shape[1]-2 while True: # this_feature => the feature added to the already selected features to measure the effect of the former on the model # best_score => keeps track of the best score achieved while selecting features 1 at a time and checking its effect on the model this_feature=None best_score=0 for feature in range(num_features): # if the feature is already in the good_features list, ignore and move ahead if feature in good_features: continue # add the currently selected feature to the already discovered good features selected_features=good_features+[feature] # all the selected features + target and kfolds column df_train=df.iloc[:, selected_features + [target_index,kfolds_index]] # fit the selected dataset to a model accuracy,confusion_matrices,classification_report=run(fold,df_train,models) # if any improvement is observed over the previous set of features if(accuracy[0]>best_score): this_feature=feature best_score=accuracy[0] if(this_feature!=None): good_features.append(this_feature) best_scores.append(best_score) if(len(best_scores)>2): if(best_scores[-1]<best_scores[-2]): break return best_scores[:-1] , df.iloc[:, good_features[:-1] + [target_index,kfolds_index]]
_____no_output_____
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Recursive Feature Selection
from sklearn.feature_selection import RFE

def recursive_feature_selection(df,models,n_features_to_select,target_name):

  X=df.drop(labels=[target_name,'kfolds'],axis=1).values
  y=df[target_name]
  kfolds=df.kfolds.values

  model_name,model_constructor=list(models.items())[0]

  rfe=RFE(
      estimator=model_constructor,
      n_features_to_select=n_features_to_select
  )

  try:
    rfe.fit(X,y)
  except RuntimeError:
    print(f"{model_name} does not support feature importance... Returning original dataframe\n")
    return df
  else:
    # Keep only the features selected by RFE, then re-attach the target and kfolds columns
    X_transformed = rfe.transform(X)
    selected_columns = df.drop(labels=[target_name,'kfolds'],axis=1).columns[rfe.support_]
    df_optimal = pd.DataFrame(data=X_transformed, columns=selected_columns, index=df.index)
    df_optimal[target_name] = y.values
    df_optimal['kfolds'] = kfolds
    return df_optimal

from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB

print('Greedy Feature Selection : ')
print('\n')

models={'KNN': KNeighborsClassifier()}
best_scores,df_optimal_KNN=greedy_feature_selection(fold=4,df=df_2,models=models,target_name='Outcome')
print(df_optimal_KNN.head())
print('\n')

print("Recursive Feature Selection : ")
print('\n')

df_recursive_optimal_KNN=recursive_feature_selection(df=df_2,models=models,n_features_to_select=5,target_name='Outcome')
print(df_recursive_optimal_KNN.head())

models={'Naive Bayes' : GaussianNB()}
best_scores,df_optimal_NB=greedy_feature_selection(fold=4,df=df_2,models=models,target_name='Outcome')
print(df_optimal_NB.head())
print('\n')

df_recursive_optimal_NB=recursive_feature_selection(df=df_2,models=models,n_features_to_select=5,target_name='Outcome')
print(df_recursive_optimal_NB.head())
Glucose Pregnancies BMI Outcome kfolds 0 -0.230248 -0.053578 -0.363725 1 0 1 2.011697 0.787960 0.691889 1 0 2 -0.614581 1.068473 1.430819 1 0 3 -0.518498 1.629498 -0.007455 1 0 4 0.250169 -1.175629 -0.007455 0 0 Naive Bayes does not support feature importance... Returning original dataframe Pregnancies Glucose BloodPressure ... Age Outcome kfolds 0 -0.053578 -0.230248 -0.423269 ... -0.367793 1 0 1 0.787960 2.011697 -0.111124 ... 0.579018 1 0 2 1.068473 -0.614581 1.553653 ... 0.923314 1 0 3 1.629498 -0.518498 -0.215172 ... 0.665092 1 0 4 -1.175629 0.250169 0.409119 ... -1.142458 0 0 [5 rows x 10 columns]
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes