A Lightcurve can also be sliced to generate a new object.
lc_sliced = lc[100:200] len(lc_sliced.counts)
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
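As a small check (a sketch reusing the `lc` object defined earlier in the tutorial, and assuming slicing keeps the matching time stamps along with the counts):

lc_sliced = lc[100:200]
# The sliced object carries the corresponding time stamps with it
print(lc_sliced.time[0] == lc.time[100])
print(len(lc_sliced.time) == len(lc_sliced.counts))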
Methods Concatenation Two light curves can be combined into a single object using the `join` method. Note that both of them must not have overlapping time arrays.
lc_1 = lc lc_2 = Lightcurve(np.arange(1000, 2000), np.random.rand(1000)*1000, dt=1, skip_checks=True) lc_long = lc_1.join(lc_2, skip_checks=True) # Or vice-versa print(len(lc_long))
2000
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Truncation A light curve can also be truncated.
lc_cut = lc_long.truncate(start=0, stop=1000) len(lc_cut)
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
**Note**: By default, the `start` and `stop` parameters are interpreted as **indices** of the time array. However, they can also be given as time values in the same units as the time array by passing `method='time'`.
lc_cut = lc_long.truncate(start=500, stop=1500, method='time') lc_cut.time[0], lc_cut.time[-1]
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Re-binning The time resolution (`dt`) can also be changed to a larger value. **Note**: The new resolution need not be an integer multiple of the previous time resolution, but be aware that if it is not, the last bin will be cut short by the fraction left over after the integer division.
lc_rebinned = lc_long.rebin(2)
print("Old time resolution = " + str(lc_long.dt))
print("Number of data points = " + str(lc_long.n))
print("New time resolution = " + str(lc_rebinned.dt))
print("Number of data points = " + str(lc_rebinned.n))
Old time resolution = 1
Number of data points = 2000
New time resolution = 2
Number of data points = 1000
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
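As a hedged illustration of the note above (a sketch reusing the `lc_long` object and stingray's `rebin` method), rebinning to a resolution that is not an integer multiple of the old `dt` simply drops the leftover fraction at the end:

lc_rebinned_frac = lc_long.rebin(1.5)   # 1.5 s is not an integer multiple of the original 1 s
print("New time resolution = " + str(lc_rebinned_frac.dt))
print("Number of data points = " + str(lc_rebinned_frac.n))   # any trailing partial bin is discarded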
Sorting A light curve can be sorted using the `sort` method. This method sorts the `time` array, and the `counts` array is rearranged accordingly.
new_lc_long = lc_long[:] # Copying into a new object new_lc_long = new_lc_long.sort(reverse=True) new_lc_long.time[0] == max(lc_long.time)
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
You can sort by the `counts` array using the `sort_counts` method, which rearranges the `time` array accordingly:
new_lc = lc_long[:] new_lc = new_lc.sort_counts() new_lc.counts[-1] == max(lc_long.counts)
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Plotting A curve can be plotted with the `plot` method.
lc.plot()
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
A plot can also be customized using several keyword arguments.
lc.plot(labels=('Time', "Counts"),      # (xlabel, ylabel)
        axis=(0, 1000, -50, 150),       # (xmin, xmax, ymin, ymax)
        title="Randomly generated lightcurve",
        marker='c:')                    # 'c' is cyan and ':' is a dotted line style
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
The figure drawn can also be saved to a file using keyword arguments in the `plot` method itself.
lc.plot(marker = 'k', save=True, filename="lightcurve.png")
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
**Note** : See `utils.savefig` function for more options on saving a file. Sample Data Stingray also has a sample `Lightcurve` data which can be imported from within the library.
from stingray import sampledata lc = sampledata.sample_data() lc.plot()
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Checking the Light Curve for Irregularities You can perform checks on the behaviour of the light curve, similar to what's done when instantiating a `Lightcurve` object with `skip_checks=False`, by calling the relevant method:
time = np.hstack([np.arange(0, 10, 0.1), np.arange(10, 20, 0.3)])  # uneven time resolution
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, dt=1.0, skip_checks=True)
lc.check_lightcurve()
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Let's add some badly formatted GTIs:
gti = [(10, 100), (20, 30, 40), ((1, 2), (3, 4, (5, 6)))]  # not a well-behaved GTI
lc = Lightcurve(time, counts, dt=0.1, skip_checks=True, gti=gti)
lc.check_lightcurve()
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
MJDREF and Shifting TimesThe `mjdref` keyword argument defines a reference time in Modified Julian Date. Often, X-ray missions count their internal time in seconds from a given reference date and time (so that numbers don't become arbitrarily large). The data is then in the format of Mission Elapsed Time (MET), or seconds since that reference time. `mjdref` is generally passed into the `Lightcurve` object at instantiation, but it can be changed later:
mjdref = 91254 time = np.arange(1000) counts = np.random.poisson(100, size=len(time)) lc = Lightcurve(time, counts, dt=1, skip_checks=True, mjdref=mjdref) print(lc.mjdref) mjdref_new = 91254 + 20 lc_new = lc.change_mjdref(mjdref_new) print(lc_new.mjdref)
91274
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
This change only affects the *reference time*, not the values given in the `time` attribute. However, it is also possible to shift the *entire light curve*, along with its GTIs:
gti = [(0,500), (600, 1000)] lc.gti = gti print("first three time bins: " + str(lc.time[:3])) print("GTIs: " + str(lc.gti)) time_shift = 10.0 lc_shifted = lc.shift(time_shift) print("Shifted first three time bins: " + str(lc_shifted.time[:3])) print("Shifted GTIs: " + str(lc_shifted.gti))
Shifted first three time bins: [10. 11. 12.] Shifted GTIs: [[ 10. 510.] [ 610. 1010.]]
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Calculating a baseline **TODO**: Need to document this method. Working with GTIs and Splitting Light Curves It is possible to split light curves into multiple segments. In particular, it can be useful to split light curves with large gaps into individual contiguous segments without gaps.
# make a time array with a big gap and a small gap
time = np.array([1, 2, 3, 10, 11, 12, 13, 14, 17, 18, 19, 20])
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, skip_checks=True)
lc.gti
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
This light curve has uneven bins. It has a large gap between 3 and 10, and a smaller gap between 14 and 17. We can use the `split` method to split it into three contiguous segments:
lc_split = lc.split(min_gap=2*lc.dt) for lc_tmp in lc_split: print(lc_tmp.time)
[1 2 3] [10 11 12 13 14] [17 18 19 20]
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
This has split the light curve into three contiguous segments. You can adjust the tolerance for the size of gap that's acceptable via the `min_gap` attribute. You can also require a minimum number of data points in the output light curves. This is helpful when you're only interested in contiguous segments of a certain length:
lc_split = lc.split(min_gap=6.0) for lc_tmp in lc_split: print(lc_tmp.time)
[1 2 3] [10 11 12 13 14 17 18 19 20]
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
What if we only want the long segment?
lc_split = lc.split(min_gap=6.0, min_points=4) for lc_tmp in lc_split: print(lc_tmp.time)
[10 11 12 13 14 17 18 19 20]
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
A special case of splitting your light curve object is to split by GTIs. This can be helpful if you want to look at individual contiguous segments separately:
# make a time array with a big gap and a small gap time = np.arange(20) counts = np.random.poisson(100, size=len(time)) gti = [(0,8), (12,20)] lc = Lightcurve(time, counts, dt=1, skip_checks=True, gti=gti) lc_split = lc.split_by_gti() for lc_tmp in lc_split: print(lc_tmp.time)
[1 2 3 4 5 6 7] [13 14 15 16 17 18 19]
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Because I'd passed in GTIs that define the range from 0-8 and from 12-20 as good time intervals, the light curve will be split into two individual ones containing all data points falling within these ranges.You can also apply the GTIs *directly* to the original light curve, which will filter `time`, `counts`, `countrate`, `counts_err` and `countrate_err` to only fall within the bounds of the GTIs:
# make a time array with a big gap and a small gap time = np.arange(20) counts = np.random.poisson(100, size=len(time)) gti = [(0,8), (12,20)] lc = Lightcurve(time, counts, dt=1, skip_checks=True, gti=gti)
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
**Caution**: This is one of the few methods that change the original state of the object, rather than returning a new copy of it with the changes applied! So any events falling outside of the range of the GTIs will be lost:
# time array before applying GTIs:
lc.time

lc.apply_gtis()

# time array after applying GTIs
lc.time
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
As you can see, the time bins 8-12 have been dropped, since they fall outside of the GTIs. Analyzing Light Curve Segments There's some functionality in `stingray` aimed at making analysis of individual light curve segments (or chunks, as they're called throughout the code) efficient. One helpful function tells you the length that segments should have to satisfy two conditions: (1) a minimum number of time bins in each segment, and (2) a minimum total number of counts (or flux) in each segment. Let's give this a try with an example:
dt=1.0 time = np.arange(0, 100, dt) counts = np.random.poisson(100, size=len(time)) lc = Lightcurve(time, counts, dt=dt, skip_checks=True) min_total_counts = 300 min_total_bins = 2 estimated_chunk_length = lc.estimate_chunk_length(min_total_counts, min_total_bins) print("The estimated length of each segment in seconds to satisfy both conditions is: " + str(estimated_chunk_length))
The estimated length of each segment in seconds to satisfy both conditions is: 4.0
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
So we have time bins of 1-second time resolution, each with an average of 100 counts/bin. We require at least 2 time bins in each segment, and also a minimum number of total counts in the segment of 300. In theory, you'd expect to need 3 time bins (so 3-second segments) to satisfy the condition above. However, the Poisson distribution is quite variable, so we cannot guarantee that all 3-bin segments will have a total number of counts above 300. Hence, our segments need to be 4 seconds long. We can now use these segments to do some analysis, using the `analyze_lc_chunks` method. In the simplest case, we can use a standard `numpy` operation to learn something about the properties of each segment:
start_times, stop_times, lc_sums = lc.analyze_lc_chunks(chunk_length = 10.0, func=np.median) lc_sums
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
This splits the light curve into 10-second segments, and then finds the median number of counts/bin in each segment. For a flat light curve like the one we generated above, this isn't super interesting, but this method can be helpful for more complex analyses. Instead of `np.median`, you can also pass in your own function:
def myfunc(lc): """ Not a very interesting function """ return np.sum(lc.counts) * 10.0 start_times, stop_times, lc_result = lc.analyze_lc_chunks(chunk_length=10.0, func=myfunc) lc_result
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Compatibility with `Lightkurve` The [`Lightkurve` package](https://docs.lightkurve.org) provides a large amount of complementary functionality to stingray, in particular for data observed with Kepler and TESS, stars and exoplanets, and unevenly sampled data. We have implemented a conversion method that converts to/from `stingray`'s native `Lightcurve` object and `Lightkurve`'s native `LightCurve` object. Equivalent functionality exists in `Lightkurve`, too.
import lightkurve lc_new = lc.to_lightkurve() type(lc_new) lc_new.time lc_new.flux
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Let's do the roundtrip back to stingray:
lc_back = lc_new.to_stingray() lc_back.time lc_back.counts
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Similarly, we can transform `Lightcurve` objects to and from `astropy.TimeSeries` objects:
dt = 1.0
time = np.arange(0, 100, dt)
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, dt=dt, skip_checks=True)

# convert to an astropy.TimeSeries object
ts = lc.to_astropy_timeseries()
type(ts)
ts[:10]
_____no_output_____
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
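A sketch of the reverse direction, assuming stingray exposes a matching `Lightcurve.from_astropy_timeseries` constructor (check the documentation of your installed version if this name differs):

# Hypothetical round-trip back from the TimeSeries object created above
lc_back = Lightcurve.from_astropy_timeseries(ts)
print(lc_back.time[:5])
print(lc_back.counts[:5])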
SUMMARY
- Evaluate the readability, complexity and performance of a function.
- Write docstrings for functions following the NumPy/SciPy format.
- Write comments within a function to improve readability.
- Write and design functions with default arguments.
- Explain the importance of scoping and environments in Python as they relate to functions.
- Formulate test cases to prove a function design specification.
- Use assert statements to formulate a test case to prove a function design specification.
- Use test-driven development principles to define a function that accepts parameters, returns values and passes all tests.
- Handle errors gracefully via exception handling.

In the last module, we were introduced to the DRY principle and how creating functions helps comply with it. Let's do a little bit of a recap. DRY stands for Don't Repeat Yourself. We can avoid writing repetitive code by creating a function that takes in arguments, performs some operations, and returns the results. The example in Module 5 converted code that creates a list of squared elements from an existing list of numbers into a function.
# example loop
numbers = [2, 3, 5]
squared = list()
for number in numbers:
    squared.append(number ** 2)
squared

# ex1: the loop as a function
def squares_a_list(numerical_list):   # function name and argument
    new_squared_list = list()         # initialize output list
    for number in numerical_list:
        new_squared_list.append(number ** 2)
    return new_squared_list

squares_a_list(numbers)   # function call
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
This function gave us the ability to do the same operation for multiple lists without having to rewrite any code and just calling the function.
larger_numbers = [5, 44, 55, 23, 11] promoted_numbers = [73, 84, 95] executive_numbers = [100, 121, 250, 103, 183, 222, 214] squares_a_list(larger_numbers) squares_a_list(promoted_numbers) squares_a_list(executive_numbers)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
It’s important to know what exactly is going on inside and outside of a function.In our function squares_a_list() we saw that we created a variable named new_squared_list.We can print this variable and watch all the elements be appended to it as we loop through the input list.But what happens if we try and print this variable outside of the function?Yikes! Where did new_squared_list go?It doesn’t seem to exist! That’s not entirely true.In Python, new_squared_list is something we call a local variable.Local variables are any objects that have been created within a function and only exist inside the function where they are made.Code within a function is described as a local environment.Since we called new_squared_list outside of the function’s body, Python fails to recognize it.
def squares_a_list(numerical_list):
    new_squared_list = list()
    for number in numerical_list:
        new_squared_list.append(number ** 2)
        print(new_squared_list)
    return new_squared_list

squares_a_list(numbers)
new_squared_list
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
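A minimal sketch of the scoping rules described above (the `describe_squares` helper and the `prefix` variable are invented for illustration): names defined at the top level can be read inside a function, while names created inside the function stay local and vanish once the call returns.

prefix = "squared: "          # global variable, readable inside the function

def describe_squares(numerical_list):
    local_result = [n ** 2 for n in numerical_list]   # local variable
    return prefix + str(local_result)                 # reading the global name is fine

print(describe_squares([2, 3, 5]))
# print(local_result)   # would raise a NameError: the local name does not exist out here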
Let’s talk more about function arguments.Arguments play a paramount role when it comes to adhering to the DRY principle as well as adding flexibility to your code.Let’s bring back the function we made named squares_a_list().The reason we made this function in the first place was to DRY out our code and avoid repeating the same for loop for any additional list we wished to operate on.What happens now if we no longer wanted to square a number but calculate a specified exponential of each element, perhaps (n^3), or (n^4)?Would we need a new function?We could make a similar new function for cubing the numbers.But this feels repetitive.A better solution that adheres to the DRY principle is to tweak our original function but add an additional argument.Take a look at exponent_a_list() which now takes 2 arguments; the original numerical_list, and now a new argument named exponent.This gives us a choice of the exponent. We could use the same function now for any exponent we want instead of making a new function for each.This makes sense to do if we foresee needing this versatility, else the additional argument isn’t necessary.
def exponent_a_list(numerical_list, exponent):
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

numbers = [2, 3, 5]
exponent_a_list(numbers, 3)   # the 2nd argument lets us specify an exponent value
exponent_a_list(numbers, 5)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Functions can have any number of arguments and any number of optional arguments, but we must be careful with the order of the arguments.When we define our arguments in a function, all arguments with default values (aka optional arguments) need to be placed after required arguments.If any required arguments follow any arguments with default values, an error will occur.Let’s take our original function exponent_a_list() and re-order it so the optional exponent argument is defined first.We will see Python throw an error.
def exponent_a_list(exponent=2, numerical_list): new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Up to this point, we have been calling functions with multiple arguments in a single way.When we call our function, we have been ordering the arguments in the order the function defined them in.So, in exponent_a_list(), the argument numerical_list is defined first, followed by the argument exponent.Naturally, we have been calling our function with the arguments in this order as well.
def exponent_a_list(numerical_list, exponent=2): new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list exponent_a_list([2, 3, 5], 5)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
We showed earlier that we could also call the function by specifying exponent=5.Another way of calling this would be to also specify any of the argument names that do not have default values, in this case, numerical_list.What happens if we switch up the order of the arguments and put exponent=5 followed by numerical_list=numbers?It still works!
exponent_a_list(numerical_list=[2, 3, 5], exponent=5) exponent_a_list(exponent=5, numerical_list=[2, 3, 5])
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
What about if we switch up the ordering of the arguments without specifying any of the argument names.Our function doesn’t recognize the input arguments, and an error occurs because the two arguments are being swapped - it thinks 5 is the list, and [2, 3, 5] is the exponent.It’s important to take care when ordering and calling a function.The rule of thumb to remember is if you are going to call a function where the arguments are in a different order from how they were defined, you need to assign the argument name to the value when you call the function.
exponent_a_list(5, [2, 3, 5])   # this won't work because it thinks that 5 is the list
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Functions can get very complicated, so it is not always obvious what they do just from looking at the name, arguments, or code. Therefore, people like to explain what the function does. The standard format for doing this is called a docstring. A docstring is a literal string that comes directly after the function `def` and documents the function's purpose and usage. Writing a docstring documents what your code does so that collaborators (and you in 6 months' time!) are not struggling to decipher and reuse your code. In the last section we had our function squares_a_list(). Although our function name is quite descriptive, it could mean various things. How do we know what data type it takes in and returns? Having documentation for it can be useful in answering these questions. Here is the code for a function from the pandas package called truncate(). You can view the complete code here: https://github.com/pandas-dev/pandas/blob/v1.1.0/pandas/core/generic.py#L9258 I think we can all agree that it would take a bit of time to figure out what the function is doing, the expected input variable types, and what the function is returning. Luckily pandas provides detailed documentation to explain the function's code. Ah. This documentation gives us a much clearer idea of what the function is doing and how to use it. We can see what it requires as input arguments and what it returns. It also explains the expectations of the function. Reading this instead of the code saved us some time and definitely some potential confusion. There are several styles of docstrings; the one we'll be using is called the NumPy style. All docstrings, not just the NumPy-formatted ones, are contained within 3 sets of quotation marks (`"""`). We discussed in Module 4 that this is one of the ways to implement string values. Adding this additional string to our function has no effect on our code, and the sole purpose of the docstring is for human consumption. The NumPy format includes 4 main sections:
- A brief description of the function
- Explaining the input Parameters
- What the function Returns
- Examples
string1 = """This is a string""" type(string1)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Writing documentation for squares_a_list() using the NumPy style takes the following format.We can identify the brief description of the function at the top, the parameters that it takes in, and what object type they should be, as well as what to expect as an output.Here we can even see examples of how to run it and what is returned.
def squares_a_list(numerical_list): """ Squared every element in a list. Parameters ---------- numerical_list : list The list from which to calculate squared values Returns ------- list A new list containing the squared value of each of the elements from the input list Examples -------- >>> squares_a_list([1, 2, 3, 4]) [1, 4, 9, 16] """ new_squared_list = list() for number in numerical_list: new_squared_list.append(number ** 2) return new_squared_list
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
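As an aside (not part of the course material): because the Examples section above follows the interpreter-style `>>>` convention, it can double as a lightweight test using Python's built-in `doctest` module, run in the same session or script where `squares_a_list` is defined.

import doctest

# Checks every >>> example found in the docstrings of the current module
doctest.testmod(verbose=False)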
Using exponent_a_list(), a function from the previous section as an example, we include an optional note in the parameter definition and an explanation of the default value in the parameter description.
def exponent_a_list(numerical_list, exponent=2): """ Creates a new list containing specified exponential values of the input list. Parameters ---------- numerical_list : list The list from which to calculate exponential values from exponent: int or float, optional The exponent value (the default is 2, which implies the square). Returns ------- new_exponent_list : list A new list containing the exponential value specified of each of the elements from the input list Examples -------- >>> exponent_a_list([1, 2, 3, 4]) [1, 4, 9, 16] """ new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Ah, remember how we talked about side effects back at the beginning of this module?Although we recommend avoiding side effects in your functions, there may be occasions where they’re unavoidable or required.In these cases, we must make it clear in the documentation so that the user of the function knows that their objects are going to be modified. (As an analogy: If someone wants you to babysit their cat, you would probably tell them first if you were going to paint it red while you had it!)So how we include side effects in our docstrings?It’s best to include your function side effects in the first sentence of the docstring.
def function_name(param1, param2): """The first line is a short description of the function. If your function includes side effects, explain it clearly here. Parameters ---------- param1 : datatype A description of param1. . . . Etc. """
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Ok great! Now that we’ve written and explained our functions with a standardized format, we can read it in our file easily, but what if our function is located in a different file?How can we learn what it does when reading our code?We learned in the first assignment that we can read more about built-in functions using the question mark before the function name.This returns the docstring of the function.?function_name
# For example, if we want the docstring for the function len():
?len
Object `len # For example, if we want the docstring for the function len():` not found.
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
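The `?` syntax is specific to IPython and Jupyter. In a plain Python interpreter the same docstring is available through the built-in `help()` function or the `__doc__` attribute; a short aside, not part of the original course code:

help(len)            # pretty-prints the docstring for len()
print(len.__doc__)   # the raw docstring as a string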
We all know that mistakes are a regular part of life.In coding, every line of code is at risk for potential errors, so naturally, we want a way of defending our functions against potential issues.Defensive programming is code written in such a way that, if errors do occur, they are handled in a graceful, fast and informative manner.If something goes wrong, we don’t want the code to crash on its own terms - we want it to fail gracefully, in a way we pre-determined.To help soften the landing, we write code that throws our own Exceptions.Exceptions are used in Defensive programming to disrupt the normal flow of instructions. When Python encounters code that it cannot execute, it will throw an exception.Before we dive into exceptions, let’s revisit our function exponent_a_list().It works somewhat well, but what happens if we try to use it with an input string instead of a list.We get an error that explains a little bit of what’s causing the issue but not directly.This error, called a TypeError here, is itself a Python exception. But the error message, which is a default Python message, is not super clear.This is where raising our own Exception steps in to help.
def exponent_a_list(numerical_list, exponent=2): new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list numerical_string = "123" exponent_a_list(numerical_string) def exponent_a_list(numerical_list, exponent=2): if type(numerical_list) is not list: raise Exception("You are not using a list for the numerical_list input.") new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Exceptions disrupt the regular execution of our code. When we raise an Exception, we are forcing our own error with our own message.If we wanted to raise an exception to solve the problem on the last slide, we could do the following.
numerical_string = "123" exponent_a_list(numerical_string)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Let’s take a closer look.The first line of code is an if statement - what needs to occur to trigger this new code we’ve written.This code translates to “If numerical_list is not of the type list…”.The second line does the complaining.We tell it to raise an Exception (throw an error) with this message.Now we get an error message that is straightforward on why our code is failing.Exception: You are not using a list for the numerical_list input.I hope we can agree that this message is easier to decipher than the original.The new message made the cause of the error much clearer to the user, making our function more usable. Let’s now learn more about the possible different types of Exceptions.The exception type called Exception is a generic, catch-all exception type.There are also many other exception types; for example, you may have encountered ValueError or a TypeError at some point.Exception, which is used in our previous examples, may not be the best option for the raises we made.Let’s take a look now at the exception we wrote that checks if the input value for numerical_list was the correct type.Since this is a type error, a better-raised exception over Exception would be TypeError.Let’s make our correction here and change Exception in our function to TypeError.
if type(numerical_list) is not list:
    raise Exception("You are not using a list for the numerical_list input.")

def exponent_a_list(numerical_list, exponent=2):
    if type(numerical_list) is not list:
        raise TypeError("You are not using a list for the numerical_list input.")
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

numerical_string = "123"
exponent_a_list(numerical_string)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
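A common refinement of this check (an aside, not the course's own solution) is to use `isinstance()` instead of comparing `type()` directly; `isinstance` also accepts subclasses of `list` and reads a little more naturally:

def exponent_a_list(numerical_list, exponent=2):
    # isinstance covers list and any subclass of list
    if not isinstance(numerical_list, list):
        raise TypeError("You are not using a list for the numerical_list input.")
    new_exponent_list = list()
    for number in numerical_list:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list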
Now that we can write exceptions, it’s important to document them.It’s a good idea to include details of any included exceptions in our function’s docstring.Under the NumPy docstring format, we explain our raised exception after the “Returns” section.We first specify the exception type and then an explanation of what causes the exception to be raised.For example, we’ve added a “Raises” section in our exponent_a_list docstring here.
def exponent_a_list(numerical_list, exponent=2): """ Creates a new list containing specified exponential values of the input list. Parameters ---------- numerical_list : list The list from which to calculate exponential values from exponent : int or float, optional The exponent value (the default is 2, which implies the square). Returns ------- new_exponent_list : list A new list containing the exponential value specified of each of the elements from the input list Raises ------ TypeError If the input argument numerical_list is not of type list Examples -------- >>> exponent_a_list([1, 2, 3, 4]) [1, 4, 9, 16] """
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
In the last section, we learned about raising exceptions, which, in a lot of cases, helps the function user identify if they are using it correctly. But there are still some questions remaining: How can we be so sure that the code we wrote is doing what we want it to? Does our code work 100% of the time? These questions can be answered by using something called unit tests. We'll be implementing unit tests in Python using assert statements; assert statements are just one way of implementing unit tests. Let's first discuss the syntax of an assert statement and then how they can be applied to the bigger concept, which is unit tests. assert statements can be used as sanity checks for our program. We implement them as a "debugging" tactic to make sure our code runs as we expect it to. When Python reaches an assert statement, it evaluates the condition to a Boolean value. If the statement is True, Python will continue to run. However, if the Boolean is False, the code stops running, and an error message is printed. Let's take a look at one. Here we have the keyword assert that checks if 1 == 2. Since this is False, an error is thrown, and the message beside the condition, "1 is not equal to 2.", is outputted.
assert 1 == 2 , "1 is not equal to 2."
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
(Figure: https://prog-learn.mds.ubc.ca/module6/assert2.png) Let's take a look at an example where the Boolean is True. Here, since the assert statement evaluates to True, Python continues to run, and the next line of code is executed. When an AssertionError is thrown because a Boolean evaluates to False, the next line of code does not get the opportunity to be executed.
assert 1 == 1, "1 is not equal to 1."
print('Will this line execute?')

assert 1 == 2, "1 is not equal to 2."
print('Will this line execute?')

# Not all assert statements need to have a message. We can re-write the statement
# from before without one. This time the error doesn't contain a particular message
# beside AssertionError like we had before.
assert 1 == 2
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Where do assert statements come in handy? Up to this point, we have been creating functions and only testing whether they work after we have written them. Some programmers use a different approach: writing tests before the actual function. This is called test-driven development. This may seem a little counter-intuitive, but we're creating the expectations of our function before writing the actual function code. Often we have an idea of what our function should be able to do and what output is expected. If we write our tests before the function, it helps us understand exactly what code we need to write, and it avoids encountering large, time-consuming bugs down the line. Once we have a series of tests for the function, we can put them into assert statements as an easy way of checking that all the tests pass. (Figure: https://prog-learn.mds.ubc.ca/module6/why.png) So, what kind of tests do we want? We want to keep these tests simple - things that we know are true or could be easily calculated by hand. For example, let's look at our exponent_a_list() function. Easy cases for this function would be lists containing numbers that we can easily square or cube. For example, we expect the square output of [1, 2, 4, 7] to be [1, 4, 16, 49]. The test for this would look like the one shown here. It is recommended to write multiple tests. Let's write another test for a differently sized list as well as different values for both input arguments numerical_list and exponent. Let's make another test for exponent = 3. Again, we use numbers that we know the cube of. We can also test that the type of the returned object is correct.
def exponent_a_list(numerical_list, exponent=2): new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2" assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3" assert type(exponent_a_list([1,2,4], 2)) == list, "output type not a list"
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Just because all our tests pass, this does not mean our program is necessarily correct.It’s common that our tests can pass, but our code contains errors.Let’s take a look at the function bad_function(). It’s very similar to exponent_a_list except that it separately computes the first entry before doing the rest in the loop.This function looks like it would work perfectly fine, but what happens if we get an input argument for numerical_list that cannot be sliced?Let’s write some unit tests using assert statements and see what happens.Here, it looks like our tests pass at first.But what happens if we try our function with an empty list?We get an unexpected error! How do we avoid this?Write a lot of tests and don’t be overconfident, even after writing a lot of tests!Checking an empty list in our bad_function() function is an example of checking a corner case.A corner case is an input that is reasonable but a bit unusual and may trip up our code.
def bad_function(numerical_list, exponent=2):
    new_exponent_list = [numerical_list[0] ** exponent]   # seed list with first element
    for number in numerical_list[1:]:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

assert bad_function([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"
assert bad_function([2, 1, 3], 3) == [8, 1, 27], "incorrect output for exponent = 3"

bad_function([], 2)
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
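One way to patch the corner case (a sketch; the course text only asks you to notice the failure, and the `safer_function` name is invented here) is to guard against an empty input before touching `numerical_list[0]`:

def safer_function(numerical_list, exponent=2):
    # Guard the corner case instead of assuming numerical_list[0] exists
    if len(numerical_list) == 0:
        return []
    new_exponent_list = [numerical_list[0] ** exponent]
    for number in numerical_list[1:]:
        new_exponent_list.append(number ** exponent)
    return new_exponent_list

assert safer_function([], 2) == [], "an empty list should give an empty list"
assert safer_function([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"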
Often, we will be making functions that work on data.For example, perhaps we want to write a function called column_stats that returns some summary statistics in the form of a dictionary.The function here is something we might have envisioned. (Note that if we’re using test-driven development, this function will just be an idea, not completed code.)In these situations, we need to invent some sort of data so that we can easily calculate the max, min, range, and mean and write unit tests to check that our function does the correct operations.The data can be made from scratch using functions such as pd.DataFrame() or pd.DataFrame.from_dict() which we learned about in module 4.You can also upload a very small slice of an existing dataframe.
def column_stats(df, column): stats_dict = {'max': df[column].max(), 'min': df[column].min(), 'mean': round(df[column].mean()), 'range': df[column].max() - df[column].min()} return stats_dict
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
The values we chose in our columns should be simple enough to easily calculate the expected output of our function.Just like how we made unit tests using calculations we know to be true, we do the same using a simple dataset we call helper data.The dataframe must have a small dimension to keep the calculations simple.The tests we write for the function column_stats() are now easy to calculate since the values we are using are few and simple.We wrote tests that check different columns in our forest dataframe.
import pandas as pd data = {'name': ['Cherry', 'Oak', 'Willow', 'Fir', 'Oak'], 'height': [15, 20, 10, 5, 10], 'diameter': [2, 5, 3, 10, 5], 'age': [0, 0, 0, 0, 0], 'flowering': [True, False, True, False, False]} forest = pd.DataFrame.from_dict(data) forest assert column_stats(forest, 'height') == {'max': 20, 'min': 5, 'mean': 12.0, 'range': 15} assert column_stats(forest, 'diameter') == {'max': 10, 'min': 2, 'mean': 5.0, 'range': 8} assert column_stats(forest, 'age') == {'max': 0, 'min': 0, 'mean': 0, 'range': 0}
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
We use a systematic approach to design our function using a general set of steps to follow when writing programs.The approach we recommend includes 5 steps:1. Write the function stub: a function that does nothing but accepts all input parameters and returns the correct datatype.This means we are writing the skeleton of a function.We include the line that defines the function with the input arguments and the return statement returning the object with the desired data type.Using our exponent_a_list() function as an example, we include the function’s first line and the return statement.
def exponent_a_list(numerical_list, exponent=2): return list()
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
2. Write tests to satisfy the design specifications.This is where our assert statements come in.We write tests that we want our function to pass.In our exponent_a_list() example, we expect that our function will take in a list and an optional argument named exponent and then returns a list with the exponential value of each element of the input list.Here we can see our code fails since we have no function code yet!
def exponent_a_list(numerical_list, exponent=2): return list() assert type(exponent_a_list([1,2,4], 2)) == list, "output type not a list" assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2" assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3"
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
3. Outline the program with pseudo-code.Pseudo-code is an informal but high-level description of the code and operations that we wish to implement.In this step, we are essentially writing the steps that we anticipate needing to complete our function as comments within the function.So for our function pseudo-code includes:
def exponent_a_list(numerical_list, exponent=2):
    # create a new empty list
    # loop through all the elements in numerical_list
    # for each element calculate element ** exponent
    # append it to the new list
    return list()

assert type(exponent_a_list([1, 2, 4], 2)) == list, "output type not a list"
assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2"
assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3"
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
4. Write code and test frequently. Here is where we fill in our function. As you work on the code, more and more of the tests that you wrote will pass, until finally all your assert statements no longer produce any error messages.
def exponent_a_list(numerical_list, exponent=2): new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list assert type(exponent_a_list([1,2,4], 2)) == list, "output type not a list" assert exponent_a_list([1, 2, 4, 7], 2) == [1, 4, 16, 49], "incorrect output for exponent = 2" assert exponent_a_list([1, 2, 3], 3) == [1, 8, 27], "incorrect output for exponent = 3"
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
5. Write documentation.Finally, we finish writing our function with a docstring.
def exponent_a_list(numerical_list, exponent=2): """ Creates a new list containing specified exponential values of the input list. Parameters ---------- numerical_list : list The list from which to calculate exponential values from exponent : int or float, optional The exponent value (the default is 2, which implies the square). Returns ------- new_exponent_list : list A new list containing the exponential value specified of each of the elements from the input list Examples -------- >>> exponent_a_list([1, 2, 3, 4]) [1, 4, 9, 16] """ new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
This has been quite a full module!We’ve learned how to make functions, how to handle errors gracefully, how to test our functions, and write the necessary documentation to keep our code comprehensible.These skills will all contribute to writing effective code.One thing we have not discussed yet is the actual code within a function.What makes a function useful?Is a function more useful when it performs more operations?Does adding parameters make your functions more or less useful?These are all questions we need to think about when writing functions.We are going to list some habits to adopt when writing and designing your functions. Hard coding is the process of embedding values directly into your code without saving them in variablesWhen we hardcode values into our code, it decreases flexibility.Being inflexible can cause you to end up writing more functions and/or violating the DRY principle.This, in turn, can decrease the readability and makes code problematic to maintain. In short, hard coding is a breeding ground for bugs.Remember our function squares_a_list()?In this function, we “hard-coded” in 2 when we calculated number ** 2.There are a couple of approaches to improving the situation. One is to assign 2 to a variable in the function before doing this calculation. That way, if you need to reuse that number, later on, you can just refer to the variable; and if you need to change the 2 to a 3, you only need to change it in one place. Another benefit is that you’re giving it a variable name, which acts as a little bit of documentation.The other approach is to turn the value into an argument like we did when we made exponent_a_list().This new function now gives us more flexibility with our code.If we now encounter a situation where we need to calculate each element to a different exponent like 4 or 0, we can do so without writing new code and potentially making a new error in doing so.We reduce our long term workload.This version is more maintainable code, but it doesn’t give the function caller any flexibility. What you decide depends on how you expect your function to be used.
def squares_a_list(numerical_list): new_squared_list = list() for number in numerical_list: new_squared_list.append(number ** 2) return new_squared_list def exponent_a_list(numerical_list, exponent): new_exponent_list = list() for number in numerical_list: new_exponent_list.append(number ** exponent) return new_exponent_list
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
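The first approach mentioned above, assigning the hard-coded value to a named variable inside the function, is not shown in the course code; here is a small sketch of what it could look like:

def squares_a_list(numerical_list):
    exponent = 2                      # named once, easy to find and change later
    new_squared_list = list()
    for number in numerical_list:
        new_squared_list.append(number ** exponent)
    return new_squared_list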
Although it may seem useful when a function acts as a one-stop shop that does everything you want in a single call, this also limits your ability to reuse the code that lies within it. Ideally, functions should serve a single purpose. For example, let's say we have a function that reads in a csv, finds the mean of each group in a column, and plots a specified variable. Although this may seem nice, we may want to break it up into multiple smaller functions. For example, what if we don't want the plot? Perhaps the plot is just something we wanted a single time, and now we are committed to it every time we use the function. Another problem with this function is that the group means are not returned, so we have no way of accessing the statistics to use further in our code (we would have to repeat ourselves and rewrite the code that computes them).
import altair as alt def load_filter_and_average(file, grouping_column, ploting_column): df = pd.read_csv(file) source = df.groupby(grouping_column).mean().reset_index() chart = alt.Chart(source, width = 500, height = 300).mark_bar().encode( x=alt.X(grouping_column), y=alt.Y(ploting_column) ) return chart bad_idea = load_filter_and_average('cereal.csv', 'mfr', 'rating') bad_idea
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
In this case, you want to simplify the function.Having a function that only calculates the mean values of the groups in the specified column is much more usable.A preferred function would look something like this, where the input is a dataframe we have already read in, and the output is the dataframe of mean values for all the columns.
def grouped_means(df, grouping_column):
    grouped_mean = df.groupby(grouping_column).mean().reset_index()
    return grouped_mean

cereal_mfr = grouped_means(cereal, 'mfr')
cereal_mfr

# If we wanted, we could then make a second function that creates the desired
# plot from the previous function.
def plot_mean(df, grouping_column, ploting_column):
    chart = alt.Chart(df, width=500, height=300).mark_bar().encode(
        x=alt.X(grouping_column),
        y=alt.Y(ploting_column)
    )
    return chart

plot1 = plot_mean(cereal_mfr, 'mfr', 'rating')
plot1
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
3. Return a single object For the most part, we have only lightly touched on the fact that functions can return multiple objects, and it's with good reason. Although functions are capable of returning multiple objects, that doesn't mean that it's the best option. For instance, what if we converted our function load_filter_and_average() so that it returns a dataframe and a plot?
def load_filter_and_average(file, grouping_column, ploting_column): df = pd.read_csv(file) source = df.groupby(grouping_column).mean().reset_index() chart = alt.Chart(source, width = 500, height = 300).mark_bar().encode( x=alt.X(grouping_column), y=alt.Y(ploting_column) ) return chart, source another_bad_idea = load_filter_and_average('cereal.csv', 'mfr', 'rating') another_bad_idea
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Since our function returns a tuple, we can obtain the plot by selecting the first element of the output. This can be quite confusing. We would recommend separating the code into two functions, each of which returns a single object. It's best to think of programming functions in the same way as mathematical functions, which most of the time return a single value.
another_bad_idea[0]
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
It’s generally bad form to include objects in a function that were created outside of it.Take our grouped_means() function.What if instead of including df as an input argument, we just used cereal that we loaded earlier?The number one problem with doing this is now our function only works on the cereal data - it’s not usable on other data.
def grouped_means(df, grouping_column): grouped_mean = df.groupby(grouping_column).mean().reset_index() return grouped_mean cereal = pd.read_csv('cereal.csv') def bad_grouped_means(grouping_column): grouped_mean = cereal.groupby(grouping_column).mean().reset_index() return grouped_mean
_____no_output_____
MIT
summary worksheet/M6 Function Fundamentals and Best Practices.ipynb
Lavendulaa/programming-in-python-for-data-science
Introduction to the Interstellar Medium, Jonathan Williams. Figure 7.4: molecule-rich spectrum. This is a small portion (centered around CO 3-2) of the published spectrum in https://www.aanda.org/articles/aa/abs/2016/11/aa28648-16/aa28648-16.html; the ASCII file was provided by Jes Jorgensen.
import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import FormatStrFormatter %matplotlib inline nu, flux = np.loadtxt('PILS_spectrum.txt', unpack=True) fig = plt.figure(figsize=(6,4)) ax = fig.add_subplot(1,1,1) ax.set_xlabel(r"$\nu$ [GHz]", fontsize=16) ax.set_ylabel(r"Flux [Jy]", fontsize=16) ax.plot(nu, flux, color='k', lw=1) ax.xaxis.set_major_formatter(FormatStrFormatter('%5.1f')) ax.set_xlim(344.55, 347.44) ax.set_ylim(-0.07, 1.3) fig.tight_layout() plt.savefig('PILS_spectrum.pdf')
_____no_output_____
CC0-1.0
molecules/PILS_spectrum.ipynb
CambridgeUniversityPress/IntroductionInterstellarMedium
K-Nearest Neighbors Algorithm and Its Application Introduction As we have learnt, Naive Bayes and decision trees are eager learning algorithms, which construct a classification model before receiving new data to answer queries. In contrast, a lazy learning algorithm stores all training data until a query is made. The k-nearest neighbors (a.k.a. k-NN) algorithm is one of the most famous lazy learning algorithms, as well as among the simplest of all machine learning algorithms. The idea of the k-NN algorithm is quite simple. Whenever we have a new point to classify, we find its k nearest neighbors in the training data. Let's say we have two classes of data, labeled as green squares and red circles. The new data point, a blue triangle, is waiting for classification. Right now we don't know which class the triangle belongs to, and our goal is to find a proper family for it. ![](http://i.imgur.com/n1OK9h2.png) Usually, the easiest way for us to decide whether a person is good or bad is to look at his or her friends. Here, we should study all the friends of the blue triangle. However, how do we define friendship for data? We know that each data point is actually a spot in space and it has a distance to all other data spots. In the k-NN algorithm, k decides the friendly distance. Given the classification criteria above, if k = 3 (solid line circle), the blue triangle is assigned to the class of red circles because there are 2 red circles and only 1 green square inside the inner circle; if k = 5 (dashed line circle), the blue triangle is assigned to the class of green squares because there are 3 green squares but only 2 red circles inside the outer circle. This is a simple statistical way to help us find the right class for new data: **we assign a class label considering only the number of nearby friends and their classes.** The distance here can be calculated using one of the following measures:
- Euclidean Distance
- Minkowski Distance
- Mahalanobis Distance

Weighted k-NN Algorithm As we discussed in the last session, we begin with a naive k-NN algorithm which only takes the number of appearances of data in each class into consideration. We may get a wrong classification result in a situation like the one below: ![](http://i.imgur.com/exy2MUY.png) Within a proper distance, there are more green squares than red circles. However, the blue triangle is closely surrounded by red circles, and all green squares are located farther from it. The blue triangle is most likely to be in the class of red circles even though more green squares show up in the friendly circle.
> One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors.

There are several approaches to applying a weight to the class of each of the k nearest points.
Here I introduce one approach called **attribute/feature weighted k-NN**. There are two assumptions:
- All the attribute values are numerical or real
- Class attribute values are discrete integer values

Detailed steps:
- Read the training data
- Set k to some value
- Calculate Euclidean distances to all training data points
- Find the k nearest neighbors based on the distances
- Assign a weight to each feature
- Return the class that represents the maximum of the k instances
- Calculate the accuracy

Other Improvements Density-based k-NN: in density-based k-NN, the distance between test and training instances is increased in sparse areas and reduced in dense areas, because it considers not only the density of the test instance but also the densities of its k neighbors. Variable k-NN: it has been observed that k-nearest-neighbor classification results depend heavily on the number of neighbors, and each data point has a different k value that is suitable for it. This approach finds the optimum k value for each classification and generates an array which contains the best k value for each training set instance. Class-based k-NN: sometimes the size of each class can also cause problems. When one class has too many instances while others have too few, the class with the smallest size won't be selected by our algorithm. We should take this factor into consideration as well. Application In this tutorial, we are going to apply the k-NN algorithm to sentences from the abstracts and introductions of 30 scientific articles. Firstly, we call KNeighborsClassifier from the sklearn library, train it with the training data set, and calculate the prediction accuracy on testing data. Then we implement our own k-NN algorithm and compare its accuracy with that of the sklearn library. Finally, we implement an improved k-NN algorithm based on naive k-NN. Data Set: [Sentence Classification Data Set from UCI](http://archive.ics.uci.edu/ml/datasets/Sentence+Classification) Training Data Format A snippet of training data may look like:
> MISC although the internet as level topology has been extensively studied over the past few years little is known about the details of the as taxonomy
> MISC an as node can represent a wide variety of organizations e g large isp or small private business university with vastly different network characteristics externs
> AIMX in this paper we introduce a radically new approach based on machine learning techniques to map all the ases in the internet into a natural as taxonomy
> OWNX we successfully classify NUMBER NUMBER percent of ases with expected accuracy of NUMBER NUMBER percent

The first attribute is a label of:
1. AIM: "A specific research goal of the current paper"
2. OWNX: "(Neutral) description of own work presented in current paper"
3. CONT: "Statements of comparison with or contrast to other work; weaknesses of other work"
4. BASIS: "Statements of agreement with other work or continuation of other work"
5. MISC: "(Neutral) description of other researchers' work"

The second part is a sentence from the paper. Our goal here is to use k-NN to guess the category of a given sentence. Here are the libraries we need:
import numpy as np import pandas as pd from nltk import word_tokenize from sklearn.neighbors import KNeighborsClassifier from collections import Counter import matplotlib matplotlib.use('svg') %matplotlib inline import matplotlib.pyplot as plt plt.style.use('ggplot')
_____no_output_____
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Feature Scoring

We are provided with an extra keyword list for each category. We can use those words to score the likelihood of a sentence's category. For example, sentences containing words like "we", "introduce" and "design" are more likely to describe the research goal of the current paper, which should belong to category AIM. If a sentence contains `N` words from the AIM word list, we give its AIM feature a score of `N`.
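As a toy illustration of this scoring rule (the word list here is made up; the real lists are loaded from files in the next cell):

sentence = "in this paper we introduce a new approach"
aim_words = {"paper", "we", "introduce", "propose"}            # hypothetical mini AIM word list
aim_score = sum(1 for w in sentence.split() if w in aim_words) # counts 'paper', 'we', 'introduce'
print(aim_score)                                               # 3 -> this sentence's AIM feature score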
# Read in key words for each category def build_word_set(filename): with open(filename) as f: lines = f.readlines() return {word.strip() for word in lines if word} aim_set = build_word_set("./word_lists/aim.txt") base_set = build_word_set("./word_lists/base.txt") own_set = build_word_set("./word_lists/own.txt") constract_set = build_word_set("./word_lists/contrast.txt") stopwords_set = build_word_set("./word_lists/stopwords.txt")
_____no_output_____
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Load the data and score each sentence, returning the feature matrix and label vector. If `train` is `False`, the returned label vector will be empty.
def load_data(filename, train=True): """ Training data format like: 'AIMX In this paper we derive the equations for Loop Corrected Belief Propagation on a continuous variable Gaussian model' For each single line of raw data, caculate the feature score of it and put it into feature matrix (and label matrix if it's training data) """ feature_m = np.empty((0,4), dtype=int) label_m = np.empty((0,1)) def caculate_score(line): aim_score, base_score, own_score, cons_score = 0, 0, 0, 0 word_list = [w for w in word_tokenize(line.lower()) if w not in stopwords_set] for word in word_list: if word in aim_set: aim_score += 1 if word in base_set: base_score += 1 if word in own_set: own_score += 1 if word in constract_set: cons_score += 1 return aim_score, base_score, own_score, cons_score with open(filename) as f: for line in f: if line.startswith("###"): continue else: if train: label, sentence = line.split(' ', 1) a, b, o, c = caculate_score(sentence) feature_m = np.append(feature_m, np.array([[a, b, o, c]]), axis=0) label_m = np.append(label_m, np.array([[label]]), axis=0) else: a, b, o, c = caculate_score(line) feature_m = np.append(feature_m, np.array([[a, b, o, c]]), axis=0) return feature_m, label_m
_____no_output_____
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Let's try it out on one training data file and see the accuracy of our classifier!
f, l = load_data('./arxiv_annotate10_7_3.txt') sklearn_knn = KNeighborsClassifier(n_neighbors=3) sklearn_knn.fit(f, l.ravel()) correct = 0.0 for i in range(len(f)): if sklearn_knn.predict(f[i].reshape(1,-1)) == l[i]: correct += 1 print correct/len(f)
0.716417910448
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Performance Analysis

In this part, we are going to evaluate the performance of our algorithm against the sklearn library. There are several perspectives from which to analyze an algorithm: time complexity, space complexity, error rate, etc.

The basic k-NN algorithm stores all samples, so the space complexity depends on the number of samples. The total time complexity is `O(nk + nd)` for n examples each of dimension d, including the time to compute the distance to all samples and to find the k closest examples. Here, let's take a closer look at how k influences the error rate.

How to Choose k?

In theory, if an infinite number of samples is available, the larger k is, the better the classification. The lower bound of k is 1; in that case, the k-NN model may be too sensitive to "noise". The upper bound is n, the number of samples. If k is that big, then we take all samples into consideration, which means the class of the test data depends only on the majority class of all the samples and has nothing to do with distance.

The program implemented below generates error rates for different values of k. We choose k from 1 to 50 and then draw a figure to show the relationship between k and the accuracy rate. (A small sketch of choosing k by cross-validation follows after the data-loading cell below.)

Visualized Result

Load Data

First, we load the data. `data.txt` is a much bigger data set, which runs much more slowly.
f, l = load_data('./data.txt')
_____no_output_____
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
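As referenced above, here is a hedged sketch (not part of the original tutorial) of choosing k by cross-validation, which avoids measuring accuracy on the very data the classifier was fit on. It assumes `f` and `l` are the feature matrix and labels returned by `load_data` above.

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def best_k(features, labels, k_values, folds=5):
    # Mean cross-validated accuracy for each candidate k
    scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                 features, labels, cv=folds).mean()
              for k in k_values}
    return max(scores, key=scores.get), scores

# Example usage (hypothetical): best_k(f, l.ravel(), range(1, 50))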
Test Sklearn Library

On one hand, we test the sklearn library with different values of k.
x_points = [x for x in range(1,10)] + [x for x in range(10, 50, 3)] y_points = [] for k in x_points: correct = 0.0 sklearn_knn = KNeighborsClassifier(n_neighbors=k) sklearn_knn.fit(f, l.ravel()) for i in range(len(f)): if sklearn_knn.predict(f[i].reshape(1,-1)) == l[i]: correct += 1 y_points.append(correct/len(f)) plt.plot(x_points, y_points, 'b-',lw=1.5)
_____no_output_____
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Test Our K-NN

On the other hand, we first implement our own simplest k-NN algorithm and test its accuracy rate.
class KNN: def __init__(self, k): self.k = k def _euclidean_distance(self, data1, data2): diff = np.power(data1 - data2, 2) diff_sum = np.sum(diff, axis=0) return np.sqrt(diff_sum) def majority_vote(self, neighbors): clusters = [neighbour[1][0] for neighbour in neighbors] counter = Counter(clusters) return counter.most_common()[0][0] def predict(self, training_data, test_entry): def add_distance_attr(training_entry, test_entry): return (training_entry, self._euclidean_distance(test_entry, training_entry[0])) distances = [add_distance_attr(training_entry, test_entry) for training_entry in training_data] distances.sort(key=lambda x: x[1]) sorted_training = [entry[0] for entry in distances] # Replace last line with next line when call majority_vote2 method for weighted KNN # sorted_training = [entry for entry in distances] neighbors = sorted_training[:self.k] return self.majority_vote(neighbors) training = [(f[i], l[i]) for i in range(f.shape[0])] x_points = [x for x in range(1,10)] + [x for x in range(10, 40, 3)] y_points = [] for k in x_points: knn = KNN(k) correct = 0 for i in training: if knn.predict(training, i[0]) == i[1][0]: correct += 1 y_points.append(correct*1.0/len(training)) plt.plot(x_points, y_points, 'r-',lw=1.5)
_____no_output_____
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Conclusion

From the figures above, we find that both methods have a peak accuracy rate of about 77% near k = 6. Congratulations! Our simplest k-NN algorithm works very well on this data set compared with the sklearn library. When k is smaller than 6, the curve fluctuates due to sensitivity to noise. At the same time, we can infer that a larger k gives smoother boundaries, which is better for generalization, but performance can worsen when the boundaries become too blurry.

Improved K-NN

As we discussed before, assigning a weight to each class is one way to improve k-NN. In the original k-NN, we assign a class label based on the majority class among the k neighbors. Now we multiply each class's count by a weight. The weight comes from the inverse distance, where the distance is the mean distance between the test point and all neighbors of that class.

The following method is one way to implement the idea above. We can insert it into our k-NN implementation (using `majority_vote2` in place of `majority_vote`) to see the difference.
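To see how this weighting behaves, here is a tiny worked example with made-up distances, mirroring the `cnt**2 * 1.0 / total_dist` formula in the code below:

# Suppose k = 5 and the neighbours are 3 MISC points at distances 2, 3, 4
# and 2 AIM points at distances 0.5 and 1.0 (illustrative numbers only)
misc_weight = 3**2 * 1.0 / (2 + 3 + 4)   # = 1.0
aim_weight = 2**2 * 1.0 / (0.5 + 1.0)    # ~= 2.67
print(max([("MISC", misc_weight), ("AIM", aim_weight)], key=lambda x: x[1])[0])   # AIM
# AIM wins despite having fewer neighbours, because its points are much closer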
def majority_vote2(neighbors): """ neighbors[0] like: ([[1,2,3,4], ['MISC']], dist) list of (label, weight), weight = # of same label * 1.0 / (avg dist / # of same ) of them """ AIM_label_cnt = 0 AIM_label_total_dist = 0 OWNX_label_cnt = 0 OWNX_label_total_dist = 0 BASIS_label_cnt = 0 BASIS_label_total_dist = 0 MISC_label_cnt = 0 MISC_label_total_dist = 0 CONT_label_cnt = 0 CONT_label_total_dist = 0 for row in neighbors: label, dist = row[0][1][0], row[1] if label == "AIM": AIM_label_cnt += 1 AIM_label_total_dist += dist elif label == "OWNX": OWNX_label_cnt += 1 OWNX_label_total_dist += dist elif label == "BASIS": BASIS_label_cnt += 1 BASIS_label_total_dist += dist elif label == "MISC": MISC_label_cnt += 1 MISC_label_total_dist += dist elif label == "CONT": CONT_label_cnt += 1 CONT_label_total_dist += dist if AIM_label_total_dist == 0: AIM_label_total_dist = float('inf') if OWNX_label_total_dist == 0: OWNX_label_total_dist = float('inf') if BASIS_label_total_dist == 0: BASIS_label_total_dist = float('inf') if MISC_label_total_dist == 0: MISC_label_total_dist = float('inf') if CONT_label_total_dist == 0: CONT_label_total_dist = float('inf') label_list = [("AIM", AIM_label_cnt**2*1.0/AIM_label_total_dist), ("OWNX", OWNX_label_cnt**2*1.0/OWNX_label_total_dist), ("BASIS", BASIS_label_cnt**2*1.0/BASIS_label_total_dist), ("MISC", MISC_label_cnt**2*1.0/MISC_label_total_dist), ("CONT", CONT_label_cnt**2*1.0/CONT_label_total_dist)] label_list.sort(key = lambda x: x[1], reverse=True) return label_list[0][0]
_____no_output_____
MIT
2016/tutorial_final/175/tutorial.ipynb
zeromtmu/practicaldatascience.github.io
LIME Text Explainer via XAI

This tutorial demonstrates how to generate explanations using LIME's text explainer implemented by the Contextual AI library. Much of the tutorial overlaps with what is covered in the [LIME tabular tutorial](lime_tabular_explainer.ipynb). To recap, the main steps for generating explanations are:

1. Get an explainer via the `ExplainerFactory` class
2. Build the text explainer
3. Call `explain_instance`

Credits

1. Pramodh, Manduri

Step 1: Import libraries
# Some auxiliary imports for the tutorial import pprint import sys import random import numpy as np from pprint import pprint from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.naive_bayes import MultinomialNB from sklearn.feature_extraction.text import TfidfVectorizer # Set seed for reproducibility np.random.seed(123456) # Set the path so that we can import ExplainerFactory sys.path.append('../../../') # Main Contextual AI imports import xai from xai.explainer import ExplainerFactory
_____no_output_____
Apache-2.0
tutorials/compiler/20newsgroup/sample.ipynb
SebastianWolf-SAP/contextual-ai
Step 2: Load dataset and train a model

In this tutorial, we rely on the 20newsgroups text dataset, which can be loaded via sklearn's dataset utility. Documentation on the dataset itself can be found [here](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html). To keep things simple, we will extract data for 3 topics - baseball, Christianity, and medicine.

Our target model is a multinomial Naive Bayes classifier, which we train using TF-IDF vectors.
# Train on a subset of categories categories = [ 'rec.sport.baseball', 'soc.religion.christian', 'sci.med' ] raw_train = datasets.fetch_20newsgroups(subset='train', categories=categories) print(list(raw_train.keys())) print(raw_train.target_names) print(raw_train.target[:10]) raw_test = datasets.fetch_20newsgroups(subset='test', categories=categories) X_train = raw_train.data vectorizer = TfidfVectorizer() X_train_vec = vectorizer.fit_transform(X_train) y_train = raw_train.target X_test_vec = vectorizer.transform(raw_test.data) y_test = raw_test.target clf = MultinomialNB(alpha=0.1) clf.fit(X_train_vec, y_train) limit_size=200 pprint('Subsetting training sample to %s to speed up.' % limit_size) X_train = X_train[:limit_size] pprint('Classifier score: %s' % clf.score(X_test_vec, y_test)) pprint('Classifier predict func %s:' % clf.predict_proba)
['data', 'filenames', 'target_names', 'target', 'DESCR'] ['rec.sport.baseball', 'sci.med', 'soc.religion.christian'] [1 0 2 2 0 2 0 0 0 1] 'Subsetting training sample to 200 to speed up.' 'Classifier score: 0.9689336691855583' ('Classifier predict func <bound method _BaseNB.predict_proba of ' 'MultinomialNB(alpha=0.1, class_prior=None, fit_prior=True)>:')
Apache-2.0
tutorials/compiler/20newsgroup/sample.ipynb
SebastianWolf-SAP/contextual-ai
Step 3: Instantiate the explainer

Here, we will use the LIME Text Explainer.
explainer = ExplainerFactory.get_explainer(domain=xai.DOMAIN.TEXT) clf.predict_proba
_____no_output_____
Apache-2.0
tutorials/compiler/20newsgroup/sample.ipynb
SebastianWolf-SAP/contextual-ai
Step 4: Build the explainer

This initializes the underlying explainer object. We provide the `explain_instance` method below with the raw text - LIME's text explainer algorithm will conduct its own preprocessing in order to generate interpretable representations of the data. Hence we must define a custom `predict_fn` which takes a raw piece of text, vectorizes it via a pre-trained TF-IDF vectorizer, and passes the vector into the trained Naive Bayes model to generate class probabilities. LIME uses `predict_fn` to query our Naive Bayes model in order to learn its behavior around the provided data instance.
def predict_fn(instance): vec = vectorizer.transform(instance) return clf.predict_proba(vec) explainer.build_explainer(predict_fn) clf = clf feature_names = [] clf_fn = predict_fn target_names_list = [] import os import json import sys sys.path.append('../../../') from xai.compiler.base import Configuration, Controller json_config = 'lime-text-classification-model-interpreter.json' with open(json_config) as file: config = json.load(file) config controller = Controller(config=Configuration(config, locals())) controller.render()
Interpret 100/200 samples Interpret 200/200 samples Warning: figure will exceed the page bottom, adding a new page.
Apache-2.0
tutorials/compiler/20newsgroup/sample.ipynb
SebastianWolf-SAP/contextual-ai
Results
pprint("report generated : %s/20newsgroup-clsssification-model-interpreter-report.pdf" % os.getcwd()) ('report generated : ' '/Users/i062308/Development/Explainable_AI/tutorials/compiler/20newsgroup/20newsgroup-clsssification-model-interpreter-report.pdf')
('report generated : ' '/Users/i062308/Development/Explainable_AI/tutorials/compiler/20newsgroup/20newsgroup-clsssification-model-interpreter-report.pdf')
Apache-2.0
tutorials/compiler/20newsgroup/sample.ipynb
SebastianWolf-SAP/contextual-ai
Fit DCE data
import sys import matplotlib.pyplot as plt import numpy as np sys.path.append('..') import dce_fit, relaxivity, signal_models, water_ex_models, aifs, pk_models %load_ext autoreload %autoreload 2
_____no_output_____
Apache-2.0
src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb
JonathanArvidsson/DCE-DSC-MRI_CodeCollection
---

First get the signal data
# Input time and signal values (subject 4) t = np.array([19.810000,59.430000,99.050000,138.670000,178.290000,217.910000,257.530000,297.150000,336.770000,376.390000,416.010000,455.630000,495.250000,534.870000,574.490000,614.110000,653.730000,693.350000,732.970000,772.590000,812.210000,851.830000,891.450000,931.070000,970.690000,1010.310000,1049.930000,1089.550000,1129.170000,1168.790000,1208.410000,1248.030000]) s_vif = np.array([411.400000,420.200000,419.600000,399.000000,1650.400000,3229.200000,3716.200000,3375.600000,3022.000000,2801.200000,2669.800000,2413.800000,2321.400000,2231.400000,2152.800000,2138.200000,2059.200000,2037.600000,2008.200000,1998.800000,1936.800000,1939.400000,1887.000000,1872.800000,1840.200000,1820.400000,1796.200000,1773.000000,1775.600000,1762.000000,1693.400000,1675.800000]) s_tissue = np.array([378.774277,380.712810,378.789773,382.467975,407.950413,443.482955,446.239153,433.392045,425.428202,426.274793,420.676653,417.144112,410.072831,422.042355,414.013430,410.885847,405.251033,415.864669,418.615186,406.327479,408.692149,406.797004,418.646694,408.176136,404.993285,405.098140,417.022211,408.189050,409.819731,401.988636,405.866219,406.299587]) # Calculate the enhancement baseline_idx = [0, 1, 2] enh_vif = dce_fit.sig_to_enh(s_vif, baseline_idx) enh_tissue = dce_fit.sig_to_enh(s_tissue, baseline_idx) fig, ax = plt.subplots(2,2) ax[0,0].plot(t, s_tissue, '.', label='tissue signal') ax[1,0].plot(t, s_vif, '.', label='VIF signal') ax[1,0].set_xlabel('time (s)'); ax[0,1].plot(t, enh_tissue, '.', label='tissue enh (%)') ax[1,1].plot(t, enh_vif, '.', label='VIF enh (%)') ax[1,1].set_xlabel('time (s)'); [a.legend() for a in ax.flatten()];
_____no_output_____
Apache-2.0
src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb
JonathanArvidsson/DCE-DSC-MRI_CodeCollection
Convert enhancement to concentration
# First define some relevant parameters R10_tissue, R10_vif = 1/1.3651, 1/1.7206 k_vif, k_tissue = 0.9946, 1.2037 # flip angle correction factor hct = 0.46 # Specify relaxivity model, i.e. concentration --> relaxation rate relationship c_to_r_model = relaxivity.c_to_r_linear(r1=5.0, r2=7.1) # Specify signal model, i.e. relaxation rate --> signal relationship signal_model = signal_models.spgr(tr=3.4e-3, fa_rad=15.*(np.pi/180.), te=1.7e-3) # Calculate concentrations C_t = dce_fit.enh_to_conc(enh_tissue, k_tissue, R10_tissue, c_to_r_model, signal_model) c_p_vif = dce_fit.enh_to_conc(enh_vif, k_vif, R10_vif, c_to_r_model, signal_model) / (1-hct) fig, ax = plt.subplots(2,1) ax[0].plot(t, C_t, '.', label='tissue conc (mM)') ax[0].set_xlabel('time (s)'); ax[1].plot(t, c_p_vif, '.', label='VIF plasma conc (mM)') ax[1].set_xlabel('time (s)'); [a.legend() for a in ax.flatten()];
_____no_output_____
Apache-2.0
src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb
JonathanArvidsson/DCE-DSC-MRI_CodeCollection
Fit the concentration data using a pharmacokinetic model
# First create an AIF object aif = aifs.patient_specific(t, c_p_vif) # Now create a pharmacokinetic model object pk_model = pk_models.patlak(t, aif) # Set some initial parameters and fit the concentration data weights = np.concatenate([np.zeros(7), np.ones(25)]) # (exclude first few points from fit) pk_pars_0 = [{'vp': 0.2, 'ps': 1e-4}] # (just use 1 set of starting parameters here) %time pk_pars, C_t_fit = dce_fit.conc_to_pkp(C_t, pk_model, pk_pars_0, weights) plt.plot(t, C_t, '.', label='tissue conc (mM)') plt.plot(t, C_t_fit, '-', label='model fit (mM)') plt.legend(); print(f"Fitted parameters: {pk_pars}") print(f"Expected: vp = 0.0081, ps = 2.00e-4")
Wall time: 30.9 ms Fitted parameters: {'vp': 0.008097170500283217, 'ps': 0.00019992401629213917} Expected: vp = 0.0081, ps = 2.00e-4
Apache-2.0
src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb
JonathanArvidsson/DCE-DSC-MRI_CodeCollection
Alternative approach: fit the tissue signal directly

To do this, we also need to create a water_ex_model object, which determines the relationship between R1 in each tissue compartment and the exponential R1 components. We start by assuming the fast water exchange limit (as implicitly assumed above when estimating tissue concentration).

The result should be very similar to fitting the concentration curve.
# Create a water exchange model object. water_ex_model = water_ex_models.fxl() # Now fit the enhancement curve %time pk_pars_enh, enh_fit = dce_fit.enh_to_pkp(enh_tissue, hct, k_tissue, R10_tissue, R10_vif, pk_model, c_to_r_model, water_ex_model, signal_model, pk_pars_0, weights) plt.plot(t, enh_tissue, '.', label='tissue enh (%)') plt.plot(t, enh_fit, '-', label='model fit (%)') plt.legend(); print(f"Fitted parameters: {pk_pars_enh}") print(f"Expected: vp = 0.0081, ps = 2.00e-4")
Wall time: 159 ms Fitted parameters: {'vp': 0.008081262743467564, 'ps': 0.00019935657535213955} Expected: vp = 0.0081, ps = 2.00e-4
Apache-2.0
src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb
JonathanArvidsson/DCE-DSC-MRI_CodeCollection
Repeat the fit assuming slow water exchange...

This time, we assume slow water exchange across the vessel wall. The result will be very different compared with fitting the concentration curve.
# Create a water exchange model object. water_ex_model = water_ex_models.ntexl() # slow exchange across vessel wall, fast exchange across cell wall # Now fit the enhancement curve %time pk_pars_enh_ntexl, enh_fit_ntexl = dce_fit.enh_to_pkp(enh_tissue, hct, k_tissue, R10_tissue, R10_vif, pk_model, c_to_r_model, water_ex_model, signal_model, pk_pars_0, weights) plt.plot(t, enh_tissue, '.', label='tissue enh (%)') plt.plot(t, enh_fit_ntexl, '-', label='model fit (%)') plt.legend(); print(f"Fitted parameters: {pk_pars_enh_ntexl}") print(f"Expected: vp = 0.0113, ps = 1.12e-4")
Wall time: 166 ms Fitted parameters: {'vp': 0.011282424728448814, 'ps': 0.00011163566464040331} Expected: vp = 0.0113, ps = 1.12e-4
Apache-2.0
src/original/MJT_UoEdinburghUK/demo/demo_fit_dce.ipynb
JonathanArvidsson/DCE-DSC-MRI_CodeCollection
Implement Canny edge detection
# Try Canny using "wide" and "tight" thresholds wide = cv2.Canny(gray, 30, 100) tight = cv2.Canny(gray, 200, 240) # Display the images f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) ax1.set_title('wide') ax1.imshow(wide, cmap='gray') ax2.set_title('tight') ax2.imshow(tight, cmap='gray')
_____no_output_____
MIT
cvnd/CVND_Exercises/1_2_Convolutional_Filters_Edge_Detection/5. Canny Edge Detection.ipynb
sijoonlee/deep_learning
TODO: Try to find the edges of this flower

Set a small enough threshold to isolate the boundary of the flower.
# Read in the image image = cv2.imread('images/sunflower.jpg') # Change color to RGB (from BGR) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) plt.imshow(image) # Convert the image to grayscale gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) ## TODO: Define lower and upper thresholds for hysteresis # right now the threshold is so small and low that it will pick up a lot of noise lower = 150 upper = 200 edges = cv2.Canny(gray, lower, upper) plt.figure(figsize=(20,10)) plt.imshow(edges, cmap='gray')
_____no_output_____
MIT
cvnd/CVND_Exercises/1_2_Convolutional_Filters_Edge_Detection/5. Canny Edge Detection.ipynb
sijoonlee/deep_learning
LAB - Sarcasm Detector

LAB

* Analyze input data, determine the sequence length (max)
* Train BERT Sequence Classifier to detect sarcasm in the given dataset
* Save the best model in './bert_sarcasm_detection_state_dict.pth'
* Predict the sarcasm for some headlines

Download data and import packages
!wget https://github.com/ravi-ilango/acm-dec-2020-nlp/blob/main/lab4/sarcasm_data.zip?raw=true -O sarcasm_data.zip !unzip sarcasm_data.zip !pip install transformers # imports import torch from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler from sklearn.model_selection import train_test_split from transformers import BertForSequenceClassification, BertTokenizer, BertConfig, AdamW from tqdm import trange import json import numpy as np import pandas as pd import os import matplotlib.pyplot as plt %matplotlib inline model_path = './bert_sarcasm_detection_state_dict.pth'
_____no_output_____
MIT
part2/lab4/LAB_Sarcasm_Detector.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Load data and explore
def read_json(json_file): json_data = [] file = open(json_file) for line in file: json_line = json.loads(line) json_data.append(json_line) return json_data json_data = [] for json_file in ['./sarcasm_data/Sarcasm_Headlines_Dataset.json', './sarcasm_data/Sarcasm_Headlines_Dataset_v2.json']: json_data = json_data + read_json(json_file) df = pd.DataFrame(json_data) headline_data_train = df.headline.values is_sarcastic_label_train = df.is_sarcastic.values print(headline_data_train.shape) for _, row in df[df.is_sarcastic==1].head().iterrows(): print (f"\n{row.headline}") labels = is_sarcastic_label_train plt.hist(labels) plt.xlabel('is_sarcastic') plt.ylabel('nb samples') plt.title('is_sarcastic distribution') plt.xticks(np.arange(len(np.unique(labels))));
_____no_output_____
MIT
part2/lab4/LAB_Sarcasm_Detector.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Working with CMIP6 data in the JASMIN Object Store

This Notebook describes how to set up a virtual environment and then work with CMIP6 data in the JASMIN Object Store (stored in Zarr format).

Start by creating a virtual environment and getting the packages installed...
# Import the required packages import virtualenv import pip import os # Define and create the base directory install virtual environments venvs_dir = os.path.join(os.path.expanduser("~"), "nb-venvs") if not os.path.isdir(venvs_dir): os.makedirs(venvs_dir) # Define the venv directory venv_dir = os.path.join(venvs_dir, 'venv-cmip6-zarr') # Create the virtual environment virtualenv.create_environment(venv_dir) # Activate the venv activate_file = os.path.join(venv_dir, "bin", "activate_this.py") exec(open(activate_file).read(), dict(__file__=activate_file)) # Install a set of required packages via `pip` requirements = ['fsspec', 'intake', 'intake_esm', 'aiohttp'] for pkg in requirements: pip.main(["install", "--prefix", venv_dir, pkg])
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip. Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue. To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
Accessing CMIP6 Data from the JASMIN (Zarr) Object Store

**Pre-requisites**

1. Required packages: `['xarray', 'zarr', 'fsspec', 'intake', 'intake_esm', 'aiohttp']`
2. Data access: must be able to see JASMIN Object Store for CMIP6 (currently inside JASMIN firewall)

Step 1: Import required packages
import xarray as xr import intake import intake_esm import fsspec
_____no_output_____
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
Step 2: read the CMIP6 Intake (ESM) catalog from github

We define a collection ("col") that can be searched/filtered for required datasets.
col_url = "https://raw.githubusercontent.com/cedadev/" \ "cmip6-object-store/master/catalogs/ceda-zarr-cmip6.json" col = intake.open_esm_datastore(col_url)
_____no_output_____
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
How many datasets are currently stored?
f'There are {len(col.df)} datasets'
_____no_output_____
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
Step 3: Filter the catalog for historical and future data

In this example, we want to compare the surface temperature ("tas") from the UKESM1-0-LL model, for a historical and a future ("ssp585-bgc") scenario.
cat = col.search(source_id="UKESM1-0-LL", experiment_id=["historical", "ssp585-bgc"], member_id=["r4i1p1f2", "r12i1p1f2"], table_id="Amon", variable_id="tas") # Extract the single record subsets for historical and future experiments hist_cat = cat.search(experiment_id='historical') ssp_cat = cat.search(experiment_id='ssp585-bgc')
_____no_output_____
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
Step 4: Convert to xarray datasets

Define a quick function to convert a catalog to an xarray `Dataset`.
def cat_to_ds(cat): zarr_path = cat.df['zarr_path'][0] fsmap = fsspec.get_mapper(zarr_path) return xr.open_zarr(fsmap, consolidated=True, use_cftime=True)
_____no_output_____
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
Extract the `tas` (surface air temperature) variable for the historical and future experiments.
hist_tas = cat_to_ds(hist_cat)['tas'] ssp_tas = cat_to_ds(ssp_cat)['tas']
_____no_output_____
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
Step 5: Subtract the historical from the future average

Generate time-series means of historical and future data. Subtract the historical from the future scenario and plot the difference.
# Calculate time means diff = ssp_tas.mean(axis=0) - hist_tas.mean(axis=0) # Plot a map of the time-series means diff.plot()
_____no_output_____
BSD-2-Clause
notebooks/data-notebooks/cmip6/cmip6-zarr-jasmin.ipynb
RuthPetrie/ceda-notebooks
Building a Small Model from Scratch

But before we continue, let's start defining the model:

Step 1 will be to import tensorflow.
import tensorflow as tf
_____no_output_____
Apache-2.0
basic codes/training_deep_neuralnet.ipynb
MachineLearningWithHuman/ComputerVision
We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Finally we add the densely connected layers. Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0).
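For reference, the sigmoid squashes any real-valued output into the range (0, 1) via sigmoid(x) = 1 / (1 + exp(-x)), so the single output neuron can be read directly as the probability of class 1. A quick sanity check (illustrative only, not part of the training code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))    # 0.5   -> maximally uncertain
print(sigmoid(4.0))    # ~0.98 -> confidently class 1 ('humans' in this setup)
print(sigmoid(-4.0))   # ~0.02 -> confidently class 0 ('horses')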
model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=1e-4), metrics=['accuracy']) model.summary() from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') validation_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( '/tmp/horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 150x150 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') # Flow training images in batches of 128 using train_datagen generator validation_generator = validation_datagen.flow_from_directory( '/tmp/validation-horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 150x150 batch_size=32, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') history = model.fit( train_generator, steps_per_epoch=8, epochs=100, verbose=1, validation_data = validation_generator, validation_steps=8) import matplotlib.pyplot as plt acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.figure() plt.plot(epochs, loss, 'r', label='Training Loss') plt.plot(epochs, val_loss, 'b', label='Validation Loss') plt.title('Training and validation loss') plt.legend() plt.show()
_____no_output_____
Apache-2.0
basic codes/training_deep_neuralnet.ipynb
MachineLearningWithHuman/ComputerVision
Note: training the model above takes around 40+ minutes.
_____no_output_____
Apache-2.0
basic codes/training_deep_neuralnet.ipynb
MachineLearningWithHuman/ComputerVision
Data Science Unit 1 Sprint Challenge 2

Data Wrangling and Storytelling

Taming data from its raw form into informative insights and stories.

Data Wrangling

In this Sprint Challenge you will first "wrangle" some data from [Gapminder](https://www.gapminder.org/about-gapminder/), a Swedish non-profit co-founded by Hans Rosling. "Gapminder produces free teaching resources making the world understandable based on reliable statistics."

- [Cell phones (total), by country and year](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--cell_phones_total--by--geo--time.csv)
- [Population (total), by country and year](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)
- [Geo country codes](https://github.com/open-numbers/ddf--gapminder--systema_globalis/blob/master/ddf--entities--geo--country.csv)

These two links have everything you need to successfully complete the first part of this sprint challenge:

- [Pandas documentation: Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html) (one question)
- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) (everything else)

Part 0. Load data

You don't need to add or change anything here. Just run this cell and it loads the data for you, into three dataframes.
import pandas as pd cell_phones = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--cell_phones_total--by--geo--time.csv') population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv') geo_country_codes = (pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv') .rename(columns={'country': 'geo', 'name': 'country'}))
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 1. Join data

First, join the `cell_phones` and `population` dataframes (with an inner join on `geo` and `time`).

The resulting dataframe's shape should be: (8590, 4)
df = pd.merge(cell_phones, population, how='inner') df.shape
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Then, select the `geo` and `country` columns from the `geo_country_codes` dataframe, and join with your population and cell phone data.

The resulting dataframe's shape should be: (8590, 5)
df = pd.merge(geo_country_codes[['geo', 'country']], df) df.shape
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling