markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
As well as accessing positions in lists using indexing, you can use *slices* on lists. This uses the colon character, `:`, to stand in for 'from the beginning' or 'until the end' (when only appearing once). For instance, to print just the last two entries, we would use the index `-2:` to mean from the second-to-last onwards. Here are two distinct examples: getting the first three and last three entries to be successively printed: | print(list_example[:3])
print(list_example[-3:]) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Slicing can be even more elaborate than that because we can jump entries using a second colon. Here's a full example that begins at the second entry (remember the index starts at 0), runs up until the second-to-last entry (exclusive), and jumps every other entry in between (`range` just produces the integers from the first value up to one less than the second): | list_of_numbers = list(range(1, 11))
start = 1
stop = -1
step = 2
print(list_of_numbers[start:stop:step]) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
A handy trick is that you can print a reversed list entirely using double colons: | print(list_of_numbers[::-1]) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
What's amazing about lists is that they can hold any type, including other lists! Here's a valid example of a list that's got a lot going on: | wacky_list = [
3.1415,
16,
["five", 4, 3],
"Hello World!",
True,
None,
{"key": "value", "key2": "value2"},
]
wacky_list | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
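(As a quick aside, not in the original notebook: however mixed its contents, the outer object is still just an ordinary list, which you can confirm with `type`.)

```python
print(type(wacky_list))  # <class 'list'>
```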
Can you identify the types of each of the entries (and entries of entries)?

Operators

All of the basic operators you see in mathematics are available to use: `+` for addition, `-` for subtraction, `*` for multiplication, `**` for powers, `/` for division, and `%` for modulo. These work as you'd expect on numbers. But these operators are sometimes defined for other built-in data types too. For instance, we can 'sum' strings (which really concatenates them): | string_one = "This is an example "
string_two = "of string concatenation"
string_full = string_one + string_two
print(string_full) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
It works for lists too: | list_one = ["apples", "oranges"]
list_two = ["pears", "satsumas"]
list_full = list_one + list_two
print(list_full) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Perhaps more surprisingly, you can multiply strings! | string = "apples, "
print(string * 3) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
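On numbers, the remaining operators behave just as in mathematics. As a quick illustrative sketch (not from the original notebook), here are floor division, modulo, exponentiation, and one of the assignment operators covered in the table and text below:

```python
print(7 // 2)   # floor (integer) division -> 3
print(7 % 2)    # modulo -> 1
print(2 ** 3)   # exponentiation -> 8

x = 10
x += 3          # equivalent to x = x + 3
print(x)        # -> 13
```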
Below is a table of the basic arithmetic operations.

| Operator | Description |
| :------: | :--------------: |
| `+` | addition |
| `-` | subtraction |
| `*` | multiplication |
| `/` | division |
| `**` | exponentiation |
| `//` | integer division / floor division |
| `%` | modulo |
| `@` | matrix multiplication |

As well as the usual operators, Python supports *assignment operators*. An example of one is `x += 3`, which is equivalent to running `x = x + 3`. Pretty much all of the operators can be used in this way.

Strings

In some ways, strings are treated a bit like lists, meaning you can access the individual characters via slicing and indexing. For example: | string = "cheesecake"
print(string[-4:]) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Both lists and strings will also allow you to use the `len` command to get their length: | string = "cheesecake"
print("String has length:")
print(len(string))
list_of_numbers = range(1, 20)
print("List of numbers has length:")
print(len(list_of_numbers)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Strings have type `str` and can be defined by single or double quotes, e.g. `string = "cheesecake"` would have been equally valid above. There are various functions built into Python to help you work with strings that are particularly useful for cleaning messy data. For example, imagine you have a variable name like 'This Is /A Variable '. (You may think this is implausibly bad; I only wish that were true). Let's see if we can clean this up: | string = "This Is /A Variable "
string = string.replace("/", "").rstrip().lower()
print(string) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
The steps above replace the character '/', strip out whitespace on the right-hand side of the string, and put everything in lower case. The brackets after the words signify that a function has been applied; we'll see more of functions later. You'll often want to output one type of data as another, and Python generally knows what you're trying to achieve if you, for example, `print` a boolean value. For numbers, there are more options, and you can see a big list of advice on string formatting of all kinds of things [here](https://pyformat.info/). For now, let's just see a simple example of something called an f-string, a string that combines a number and a string (these begin with an `f` for formatting): | value = 20
sqrt_val = 20 ** 0.5
print(f"The square root of {value:d} is {sqrt_val:.2f}") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
The formatting command `:d` is an instruction to treat `value` like an integer, while `:.2f` is an instruction to print `sqrt_val` as a float with 2 decimal places.

```{note}
f-strings are only available in Python 3.6+
```

Booleans and conditions

Some of the most important operations you will perform are with `True` and `False` values, also known as boolean data types. There are two types of operation that are associated with booleans: boolean operations, in which existing booleans are combined, and condition operations, which create a boolean when executed.

Boolean operators that return booleans are as follows:

| Operator | Description |
| :---: | :--- |
| `x and y` | are `x` and `y` both True? |
| `x or y` | is at least one of `x` and `y` True? |
| `not x` | is `x` False? |

These behave as you'd expect: `True and False` evaluates to `False`, while `True or False` evaluates to `True`. There's also the `not` keyword. For example | not True | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
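Here are a couple more combinations of these operators, as a quick check:

```python
print(True and False)   # False
print(True or False)    # True
print(not (10 > 20))    # True
```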
as you would expect.

Conditions are expressions that evaluate as booleans. A simple example is `10 == 20`. The `==` is an operator that compares the objects on either side and returns `True` if they have the same *values*--though be careful using it with different data types.

Here's a table of conditions that return booleans:

| Operator | Description |
| :-------- | :----------------------------------- |
| `x == y` | is `x` equal to `y`? |
| `x != y` | is `x` not equal to `y`? |
| `x > y` | is `x` greater than `y`? |
| `x >= y` | is `x` greater than or equal to `y`? |
| `x < y` | is `x` less than `y`? |
| `x <= y` | is `x` less than or equal to `y`? |
| `x is y` | is `x` the same object as `y`? |

As you can see from the table, the opposite of `==` is `!=`, which you can read as 'not equal to the value of'. Here's an example of `==`: | boolean_condition = 10 == 20
print(boolean_condition) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
The real power of conditions comes when we start to use them in more complex examples. Some of the keywords that evaluate conditions are `if`, `else`, `and`, `or`, `in`, `not`, and `is`. Here's an example showing how some of these conditional keywords work: | name = "Ada"
score = 99
if name == "Ada" and score > 90:
print("Ada, you achieved a high score.")
if name == "Smith" or score > 90:
print("You could be called Smith or have a high score")
if name != "Smith" and score > 90:
print("You are not called Smith and you have a high score") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
All three of these conditions evaluate as True, and so all three messages get printed. Given that `==` and `!=` test for equality and inequality of values, respectively, you may be wondering what the keywords `is` and `is not` are for. Remember that everything in Python is an object, and that values can be assigned to objects. `==` and `!=` compare *values*, while `is` and `is not` compare *objects*. For example, | name_list = ["Ada", "Adam"]
name_list_two = ["Ada", "Adam"]
# Compare values
print(name_list == name_list_two)
# Compare objects
print(name_list is name_list_two) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
One of the most useful conditional keywords is `in`. I must use this one ten times a day to pick out a variable or make sure something is where it's supposed to be. | name_list = ["Lovelace", "Smith", "Hopper", "Babbage"]
print("Lovelace" in name_list)
print("Bob" in name_list) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Finally, one conditional construct you're bound to use at *some* point is the `if`...`else` structure: | score = 98
if score == 100:
print("Top marks!")
elif score > 90 and score < 100:
print("High score!")
elif score > 10 and score <= 90:
pass
else:
print("Better luck next time.") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Note that this does nothing if the score is between 11 and 90, and prints a message otherwise. One nice feature of Python is that you can make multiple boolean comparisons in a single line. | a, b = 3, 6
1 < a < b < 20 | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Casting variables

Sometimes we need to explicitly cast a value from one type to another. We can do this using functions like `str()`, `int()`, and `float()`. If you try these, Python will do its best to interpret the input and convert it to the output type you'd like and, if it can't, the code will throw a great big error. Here's an example of casting a `float` as an `int`: | orig_number = 4.39898498
type(orig_number) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Now we cast it to an int: | mod_number = int(orig_number)
mod_number | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
which looks like it became an integer, but we can double check that: | type(mod_number) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
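The other casting functions mentioned above work similarly; a quick sketch (not in the original notebook):

```python
print(float("3.142"))   # string -> float
print(str(3.142))       # float -> string
print(int("10"))        # string -> int
```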
Tuples and (im)mutability

A tuple is an object that is defined by parentheses and entries that are separated by commas, for example `(15, 20, 32)`. (They are of type `tuple`.) As such, they have a lot in common with lists, but there's a big and important difference. Tuples are immutable, while lists are mutable. This means that, once defined, we can always modify a list using slicing and indexing, e.g. to change the first entry of a list called `listy` we would use `listy[0] = 5`. But trying to do this with a tuple will result in an error. (A short sketch of this follows the indentation examples below.) *Immutable* objects, such as tuples, can't have their elements changed, appended, extended, or removed. Lists can do all of these things. Tuples aren't the only immutable objects in Python; strings are immutable too. You may wonder why both are needed given lists seem to provide a superset of functionality: sometimes in coding, lack of flexibility is a *good* thing because it restricts the number of ways a process can go awry. I dare say there are other reasons too, but you don't need to worry about them, and using lists is a good default most of the time.

Indentation

You'll have seen that certain parts of the code examples are indented. Code that is part of a function, a conditional clause, or loop is indented. This isn't a code style choice; it's actually what tells the language that some code is to be executed as part of, say, a loop and not to be executed after the loop is finished. Here's a basic example of indentation as part of an `if` statement. The `print` statement that is indented only executes if the condition evaluates to true. | x = 10
if x > 2:
print("x is greater than 2") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
When functions, conditional clauses, or loops are combined together, they each cause an *increase* in the level of indentation. Here's a double indent. | if x > 2:
print("outer conditional cause")
for i in range(4):
print("inner loop") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
The standard practice for indentation is that each sub-statement should be indented by 4 spaces. It can be hard to keep track of these but, as usual, Visual Studio Code has you covered. Go to Settings (the cog in the bottom left-hand corner, then click Settings) and type 'Whitespace' into the search bar. Under 'Editor: Render Whitespace', select 'boundary'. This will show any whitespace that is more than one character long using faint grey dots. Each level of indentation in your Python code should now begin with four grey dots showing that it consists of four spaces. To make it even easier, you can install the 'indent-rainbow' extension in VS Code; this shows different levels of indentation in different colours.

Loops and list comprehensions

A loop is a way of executing a similar piece of code over and over in a similar way. The most useful loops are `for` loops and list comprehensions. A `for` loop executes the code inside it once for each item in a sequence. For example, | name_list = ["Lovelace", "Smith", "Pigou", "Babbage"]
for name in name_list:
print(name) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
prints out a name until all names have been printed out. A useful trick with for loops is the `enumerate` function, which runs through an index that keeps track of the items in a list: | name_list = ["Lovelace", "Smith", "Hopper", "Babbage"]
for i, name in enumerate(name_list):
print(f"The name in position {i} is {name}") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Remember, Python indexes from 0 so the first entry of `i` will be zero. But, if you'd like to index from a different number, you can: | for i, name in enumerate(name_list, start=1):
print(f"The name in position {i} is {name}") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
High-level languages like Python and R do not get *compiled* into highly performant machine code ahead of being run, unlike C++ and FORTRAN. What this means is that although they are much less unwieldy to use, some types of operation can be very slow--and `for` loops are particularly cumbersome. (Although you may not notice this unless you're working on a bigger computation.) But there is a way around this, and it's with something called a *list comprehension*. These can combine what a `for` loop and a `condition` do in a single line of efficiently executable code. Say we had a list of numbers and wanted to filter it according to whether the numbers are divisible by 3 or not: | number_list = range(1, 40)
divide_list = [x for x in number_list if x % 3 == 0]
print(divide_list) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Or if we only wanted to pick out names that end in 'Smith': | names_list = ["Joe Bloggs", "Adam Smith", "Sandra Noone", "leonara smith"]
smith_list = [x for x in names_list if "smith" in x.lower()]
print(smith_list) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Note how we used 'smith' rather than 'Smith' and then used `lower()` to ensure we matched names regardless of the case they are written in. We can even do a whole `if` ... `else` construct *inside* a list comprehension: | names_list = ["Joe Bloggs", "Adam Smith", "Sandra Noone", "leonara smith"]
smith_list = [x if "smith" in x.lower() else "Not Smith!" for x in names_list]
print(smith_list) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Many of the constructs we've seen can be combined. For instance, there is no reason why we can't have a nested or repeated list comprehension, and, perhaps more surprisingly, sometimes these are useful! | first_names = ["Ada", "Adam", "Grace", "Charles"]
last_names = ["Lovelace", "Smith", "Hopper", "Babbage"]
names_list = [x + " " + y for x, y in zip(first_names, last_names)]
print(names_list) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
The `zip` function is doing this magic; think of it like a zipper, bringing an element of each list together in turn. It can also be used directly in a for loop: | for first_nm, last_nm in zip(first_names, last_names):
print(first_nm, last_nm) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Finally, an even more extreme use of list comprehensions can deliver nested structures: | first_names = ["Ada", "Adam"]
last_names = ["Lovelace", "Smith"]
names_list = [[x + " " + y for x in first_names] for y in last_names]
print(names_list) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
This gives a nested structure that (in this case) iterates over `first_names` first, and then `last_names`. If you'd like to learn more about list comprehensions, check out these [short video tutorials](https://calmcode.io/comprehensions/introduction.html).

While loops

`while` loops continue to execute code until their conditional expression evaluates to `False`. (Of course, if it evaluates to `True` forever, your code will just continue to execute...) | n = 10
while n > 0:
print(n)
n -= 1
print("execution complete") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
NB: in case you're wondering what `-=` does, it's a compound assignment that sets the left-hand side equal to the left-hand side minus the right-hand side. You can use the keyword `break` to break out of a while loop, for example if it's reached a certain number of iterations without converging.

Dictionaries

Another built-in Python type that is enormously useful is the *dictionary*. This provides a mapping from one set of values to another (either one-to-one or many-to-one). Let's see an example of defining a dictionary and using it: | fruit_dict = {
"Jazz": "Apple",
"Owari": "Satsuma",
"Seto": "Satsuma",
"Pink Lady": "Apple",
}
# Add an entry
fruit_dict.update({"Cox": "Apple"})
variety_list = ["Jazz", "Jazz", "Seto", "Cox"]
fruit_list = [fruit_dict[x] for x in variety_list]
print(fruit_list) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
From an input list of varieties, we get an output list of their associated fruits. Another good trick to know with dictionaries is that you can iterate through their keys and values: | for key, value in fruit_dict.items():
print(key + " maps into " + value) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Running on empty

Being able to create empty containers is sometimes useful. The commands to create empty lists, tuples, dictionaries, and sets are `lst = []`, `tup = ()`, `dic = {}`, and `st = set()` respectively.

Functions

If you're an economist, I hardly need to tell you what a function is. In coding, it's much the same as in mathematics: a function has inputs, it performs its function, and it returns any outputs. Functions begin with the `def` keyword, for 'define a function'. A function then has a name, followed by brackets, `()`, which may contain *function arguments* and *function keyword arguments*. The body of the function is then indented relative to the left-most text. Function arguments are defined in brackets following the name, with different inputs separated by commas. Any outputs are given with the `return` keyword, again with different variables separated by commas. Let's see a very simple example: | def welcome_message(name):
return f"Hello {name}, and welcome!"
# Without indentation, this code is not part of function
name = "Ada"
output_string = welcome_message(name)
print(output_string) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
One powerful feature of functions is that we can define defaults for the input arguments. Let's see that in action by defining a default value for `name`, along with multiple outputs--a hello message and a score. | def score_message(score, name="student"):
"""This is a doc-string, a string describing a function.
Args:
score (float): Raw score
name (str): Name of student
Returns:
str: A hello message.
float: A normalised score.
"""
norm_score = (score - 50) / 10
return f"Hello {name}", norm_score
# Without indentation, this code is not part of function
name = "Ada"
score = 98
# No name entered
print(score_message(score))
# Name entered
print(score_message(score, name=name)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
In that last example, you'll notice that I added some text to the function. This is a doc-string. It's there to help users (and, most likely, future you) to understand what the function does. Let's see how this works in action by calling `help` on the `score_message` function: | help(score_message) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Scope

Scope refers to what parts of your code can see what other parts. If you define a variable inside a function, the rest of your code won't be able to 'see' it or use it. For example, here's a function that creates a variable and then an example of calling that variable:

```python
def var_func():
    str_variable = 'Hello World!'

var_func()
print(str_variable)
```

This would raise an error, because as far as your general code is concerned `str_variable` doesn't exist outside of the function. If you want to create variables inside a function and have them persist, you need to explicitly pass them out using, for example, `return str_variable` like this: | def var_func():
str_variable = "Hello World!"
return str_variable
returned_var = var_func()
print(returned_var) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Or, if you only want to modify a variable, you can declare the variable before the function is run, pass it into the function, and then return it.

Using packages and modules

We already saw how to install packages in the previous chapter: using `pip install packagename` or `conda install packagename` on the command line. What about using a package that you've installed? That's what we'll look at now.

```{note}
If you want to install packages into a dedicated conda environment, remember to `conda activate environmentname` before using the package install command(s).
```

Let's see an example of using the powerful numerical library **numpy**. There are different ways to import packages: you can import the entire package in one go or just import the functions you need. When an entire package is imported, you can give it any name you like, and the convention for **numpy** is to import it as the shortened 'np'. All of the functions and methods of the package can then be accessed by typing `np` followed by `.` and then the function name. As well as demonstrating importing the whole package, the example below shows importing just one specific function. | import numpy as np
from numpy.linalg import inv
matrix = np.array([[4.0, 2.0, 4.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
print("Matrix:")
print(matrix)
inv_mat = inv(matrix)
print("Inverse:")
print(inv_mat) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
We could have imported all of **numpy** using `from numpy import *`, but this is considered bad practice as it fills our 'namespace' with function names that might clash with other packages, and it's less easy to parse which package's function is doing what. R also relies heavily on imported libraries but uses a different convention for namespaces: in that case, everything is imported, but the *order* of package imports matters for which functions are the default.

```{note}
If you want to check what packages you have installed in your Python environment, run `conda list` on the command line.
```

Modules

Sometimes, you will want to call in some code from a different script. Imagine you have several code scripts (a, b, and c), all of which use the same underlying function that *you* have written. A central tenet of good coding is that you *do not repeat yourself*. Therefore, a bad solution to this problem would be to copy and paste the same code into all three of the scripts. A *good* solution is to write the code that's needed just once in a separate 'utility' script and have the other scripts import that one function: | import networkx as nx
import matplotlib.pyplot as plt
graph = nx.DiGraph()
graph.add_edges_from(
[
("Utility script", "code file a"),
("Utility script", "code file b"),
("code file a", "code file c"),
("code file b", "code file c"),
("Utility script", "code file c"),
]
)
colour_node = "#AFCBFF"
fixed_pos = nx.spring_layout(graph, seed=100)
nx.draw(graph, pos=fixed_pos, with_labels=True, node_size=5000, node_color=colour_node)
plt.xlim(-1.3, 1.3)
plt.ylim(-1.3, 1.3)
plt.show(); | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
How would this work in practice? We would define a file 'utilities.py' that had the following:

```python
# Contents of utilities.py file
def really_useful_func(number):
    return number * 10
```

Then, in 'code_script_a.py': | import utilities as utils
print(utils.really_useful_func(20)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
An alternative is to *just* import the function we want, with the name we want: | from utilities import really_useful_func as ru_fn
print(ru_fn(30)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Another important example is the case where you want to run 'utilities.py' as a script, but still want to borrow functions from it to run in other scripts. There's a way to do this. Let's change utilities.py to

```python
# Contents of utilities.py file
def really_useful_func(number):
    return number * 10


def default_func():
    print('Script has run')


if __name__ == '__main__':
    default_func()
```

What this says is that if we call 'utilities.py' from the command line, e.g.

```bash
python utilities.py
```

it will return `Script has run` because, by executing the script alone, we are asking for anything in the `main` block defined at the end of the file to be run. But we can still import anything from utilities into other scripts as before--and in that case it is not the main script, but an import, and so the `main` block will *not* be executed.

You can import several functions at once from a module (aka another script file) like this:

```python
from utilities import really_useful_func, default_func
```

Lambda functions

Lambda functions are a very old idea in programming, and part of the functional programming paradigm. Coding languages tend to be more object-oriented or functional, though high-level languages often mix both. For example, R leans slightly more toward being a functional language, while Python is slightly more object oriented. However, Python does have lambda functions, for example: | plus_one = lambda x: x + 1
plus_one(3) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
For a one-liner function that has a name, it's actually better practice here to use `def plus_one(x): return x + 1`, so you shouldn't see this form of lambda function too much in the wild. However, you are likely to see lambda functions being used with dataframes and other objects. For example, if you had a dataframe with a column of strings called 'strings' that you want to change to "Title Case" and replace one phrase with another, you could use lambda functions to do that (there are better ways of doing this but this is useful as a simple example): | import pandas as pd
df = pd.DataFrame(
data=[["hello my blah is Ada"], ["hElLo mY blah IS Adam"]],
columns=["strings"],
dtype="string",
)
df["strings"].apply(lambda x: x.title().replace("Blah", "Name")) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
More complex lambda functions can be constructed, eg `lambda x, y, z: x + y + z`. One of the best use cases of lambdas is when you *don't* want to go to the trouble of declaring a function. For example, let's say you want to compose a series of functions and you want to specify those functions in a list, one after the other. Using functions alone, you'd have to define a new function for each operation. With lambdas, it would look like this (again, there are easier ways to do this operation, but we'll use simple functions to demonstrate the principle): | number = 1
for func in [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2]:
number = func(number)
print(number) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Note that people often use `x` by convention, but there's nothing to stop you writing `lambda horses: horses**2` (except the looks your co-authors will give you). If you want to learn more about lambda functions, check out these [short video tutorials](https://calmcode.io/lambda/introduction.html).

Splat and splatty-splat

You read those right, yes. These are also known as unpacking operators for, respectively, arguments and dictionaries. For instance, if I have a function that takes two arguments I can send variables to it in different ways: | def add(a, b):
return a + b
print(add(5, 10))
func_args = (6, 11)
print(add(*func_args)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
The splat operator, `*`, unpacks the variable `func_args` into two different function arguments. Splatty-splat unpacks dictionaries into keyword arguments (aka kwargs): | def function_with_kwargs(a, x=0, y=0, z=0):
return a + x + y + z
print(function_with_kwargs(5))
kwargs = {"x": 3, "y": 4, "z": 5}
print(function_with_kwargs(5, **kwargs)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Perhaps most surprisingly of all, we can use the splat operator *in the definition of a function*. For example: | def sum_elements(*elements):
return sum(*elements)
nums = (1, 2, 3)
print(sum_elements(nums))
more_nums = (1, 2, 3, 4, 5)
print(sum_elements(more_nums)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
To learn more about args and kwargs, check out these [short video tutorials](https://calmcode.io/args-kwargs/introduction.html).

Time

Let's do a quick dive into how to deal with dates and times. This is only going to scratch the surface, but should give a sense of what's possible. The built-in library that deals with datetimes is called `datetime`. Let's import it and ask it to give us a very precise account of the datetime (when the code is executed): | from datetime import datetime
now = datetime.now()
print(now) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
You can pick out bits of the datetime that you need: | day = now.day
month = now.month
year = now.year
hour = now.hour
minute = now.minute
print(f"{year}/{month}/{day}, {hour}:{minute}") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
To add or subtract time to a datetime, use `timedelta`: | from datetime import timedelta
new_time = now + timedelta(days=365, hours=5)
print(new_time) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
To take the difference of two dates: | from datetime import date
new_year = date(year=2022, month=1, day=1)
time_till_ny = new_year - date.today()
print(f"{time_till_ny.days} days until New Year") | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Note that date and datetime are two different types of objects: a datetime includes information on the date and time, whereas a date does not.

Reading and writing files

Although most applications in economics will probably use the **pandas** package to read and write tabular data, it's sometimes useful to know how to read and write arbitrary files using the built-in Python libraries too. To open a file

```python
open('filename', mode)
```

where `mode` could be `r` for read, `a` for append, `w` for write, and `x` to create a file. Create a file called `text_example.txt` and write a single line in it, 'hello world'. To open the file and print the text, use:

```python
with open('text_example.txt') as f:
    text_in = f.read()
print(text_in)
```

```python
'hello world!\n'
```

`\n` is the new line character. Now let's try adding a line to the file:

```python
with open('text_example.txt', 'a') as f:
    f.write('this is another line\n')
```

Writing and reading files using the `with` command is a quick and convenient shorthand for the less concise open, action, close pattern. For example, the above example can also be written as:

```python
f = open('text_example.txt', 'a')
f.write('this is another line\n')
f.close()
```

Although this short example shows opening and writing a text file, this approach can be used to edit a wide range of file extensions including .json, .xml, .csv, .tsv, and many more, including binary files in addition to plain text files.

Miscellaneous fun

Here are some other bits of basic coding that might be useful. They really show why Python is such a delightful language. You can use unicode characters for variables: | α = 15
β = 30
print(α / β) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
You can swap variables in a single assignment: | a = 10
b = "This is a string"
a, b = b, a
print(a) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
**itertools** offers counting, repeating, cycling, chaining, and slicing. Here's a cycling example that uses the `next` function to get the next iteration: | from itertools import cycle
lorrys = ["red lorry", "yellow lorry"]
lorry_iter = cycle(lorrys)
print(next(lorry_iter))
print(next(lorry_iter))
print(next(lorry_iter)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
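`chain`, also mentioned above, glues iterables together into one long iterable; a quick sketch (not from the original notebook):

```python
from itertools import chain

print(list(chain(lorrys, ["blue lorry"])))
```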
**itertools** also offers products, combinations, combinations with replacement, and permutations. Here are the combinations of 'abc' of length 2: | from itertools import combinations
print(list(combinations("abc", 2))) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Find out what the date is! (Can pass a timezone as an argument.) | from datetime import date
print(date.today()) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Because functions are just objects, you can iterate over them just like any other object: | functions = [str.isdigit, str.islower, str.isupper]
raw_str = "asdfaa3fa"
for str_func in functions:
print(f"Function name: {str_func.__name__}, value is:")
print(str_func(raw_str)) | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Functions can be defined recursively. For instance, the Fibonacci sequence is defined such that $ a_n = a_{n-1} + a_{n-2} $ for $ n>1 $. | def fibonacci(n):
if n < 0:
print("Please enter n>0")
return 0
elif n <= 1:
return n
else:
return fibonacci(n - 1) + fibonacci(n - 2)
[fibonacci(i) for i in range(10)] | _____no_output_____ | MIT | code-basics.ipynb | lukestein/coding-for-economists |
Animate samples from a Gaussian distribution

This notebook demonstrates how to use the functionality in `ProbNum-Evaluation` to animate samples from a Gaussian distribution. | from probnumeval import visual
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42) | _____no_output_____ | MIT | docs/source/tutorials/animate_gaussian_distributions.ipynb | anukaal/probnum-evaluation |
As a toy example, let us animate a sample from a Gaussian process on 15 grid points with `N = 5` frames. The number of grid points determines the dimension of the underlying Normal distribution; therefore, this variable is called `dim` in `ProbNum-Evaluation`. | dim = 15
num_frames = 5 | _____no_output_____ | MIT | docs/source/tutorials/animate_gaussian_distributions.ipynb | anukaal/probnum-evaluation |
For didactic reasons, let us set `endpoint` to `True`, which means that the final sample is the first sample. | states_gp = visual.animate_with_periodic_gp(dim, num_frames, endpoint=True)
states_sphere = visual.animate_with_great_circle_of_unitsphere(dim, num_frames, endpoint=True) | _____no_output_____ | MIT | docs/source/tutorials/animate_gaussian_distributions.ipynb | anukaal/probnum-evaluation |
The output of the `animate_with*` function is a sequence of (pseudo-) samples from a standard Normal distribution. | fig, axes = plt.subplots(dpi=150, ncols=num_frames, nrows=2, sharey=True, sharex=True, figsize=(num_frames*2, 2), constrained_layout=True)
for ax in axes.flatten():
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for state_sphere, state_gp, ax in zip(states_sphere, states_gp, axes.T):
ax[0].vlines(np.arange(dim), ymin=np.minimum(0., state_gp), ymax=np.maximum(0., state_gp), linewidth=4, color="darksalmon")
ax[1].vlines(np.arange(dim), ymin=np.minimum(0., state_sphere), ymax=np.maximum(0., state_sphere), linewidth=4, color="darksalmon")
axes[0][0].set_ylabel('GP')
axes[1][0].set_ylabel('Sphere')
for idx, ax in enumerate(axes.T):
ax[0].set_title(f"Frame {idx+1}", fontsize="medium")
plt.show() | _____no_output_____ | MIT | docs/source/tutorials/animate_gaussian_distributions.ipynb | anukaal/probnum-evaluation |
These can be turned into samples from a multivariate Normal distribution $N(m, K)$ via the formula $u \mapsto m + \sqrt{K} u$ | def k(s, t):
return np.exp(-(s - t)**2/0.1)
locations = np.linspace(0, 1, dim)
cov = k(locations[:, None], locations[None, :])
cov_cholesky = np.linalg.cholesky(cov + 1e-12 * np.eye(dim))
# From the "right", because the states have shape (N, d).
samples_sphere = states_sphere @ cov_cholesky.T
samples_gp = states_gp @ cov_cholesky.T
| _____no_output_____ | MIT | docs/source/tutorials/animate_gaussian_distributions.ipynb | anukaal/probnum-evaluation |
The resulting (pseudo-)samples "move through the sample space" in a continuous way for both the periodic GP and the sphere. | fig, axes = plt.subplots(dpi=150, ncols=num_frames, nrows=2, sharey=True, sharex=True, figsize=(num_frames*2, 2), constrained_layout=True)
for ax in axes.flatten():
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for sample_sphere, sample_gp, ax in zip(samples_sphere, samples_gp, axes.T):
ax[0].plot(locations, sample_gp, color="darksalmon")
ax[1].plot(locations, sample_sphere, color="darksalmon")
axes[0][0].set_ylabel('GP')
axes[1][0].set_ylabel('Sphere')
for idx, ax in enumerate(axes.T):
ax[0].set_title(f"Frame {idx+1}", fontsize="medium")
plt.show() | _____no_output_____ | MIT | docs/source/tutorials/animate_gaussian_distributions.ipynb | anukaal/probnum-evaluation |
URL text box

https://vuetifyjs.com/en/components/text-fields | v.Col(cols=6, children=[
v.TextField(outlined=True, class_='ma-2', label='URL', placeholder='https://')
])
# home listing
# https://www.rightmove.co.uk/property-for-sale/property-72921928.html
# all listing data: property of Rightmove.co.uk
from download_listing import get_listing
from pprint import pprint
class URLTextField(v.VuetifyTemplate):
url = Unicode('').tag(sync=True, allow_null=True)
loading = Bool(False).tag(sync=True)
template = Unicode('''
<v-text-field
class="ma-2"
v-model="url"
:disabled="loading"
:loading="loading"
label="URL"
placeholder="https:// ..."
outlined
clearable
@change=get_properties(url)
></v-text-field>
''').tag(sync=True)
def vue_get_properties(self, data):
self.loading = True
pprint(get_listing(data))
self.loading = False
URLTextField() | _____no_output_____ | BSD-3-Clause | 3 Custom components.ipynb | MichaMucha/odsc2019-voila-jupyter-web-app |
Card

https://vuetifyjs.com/en/components/cards | from download_listing import get_listing
class URLTextField(v.VuetifyTemplate):
url = Unicode('').tag(sync=True, allow_null=True)
loading = Bool(False).tag(sync=True)
props = List(get_listing('')).tag(sync=True)
show = Bool(False).tag(sync=True)
price_string = Unicode('£1,000,000').tag(sync=True)
template = Unicode('''
<template>
<v-col cols="8">
<v-text-field
class="ma-2"
v-model="url"
:disabled="loading"
:loading="loading"
label="URL"
placeholder="https:// ..."
outlined
clearable
@change=get_properties(url)
></v-text-field>
<v-card
class="mx-auto"
max-width="444"
>
<v-img
v-bind:src="props[3]""
height="200px"
></v-img>
<v-card-title>
{{ props[1] }}
</v-card-title>
<v-card-subtitle class="ma-4">
{{ price_string }} - {{ props[2] }}
</v-card-subtitle>
<v-card-actions>
<v-btn text v-bind:href="url">View on Rightmove</v-btn>
<v-spacer></v-spacer>
<v-btn
icon
@click="show = !show"
>
<v-icon>{{ show ? 'mdi-chevron-up' : 'mdi-chevron-down' }}</v-icon>
</v-btn>
</v-card-actions>
<v-expand-transition>
<div v-show="show">
<v-divider></v-divider>
<v-card-text>
More information ...
</v-card-text>
</div>
</v-expand-transition>
</v-card>
</v-col>
</template>
''').tag(sync=True)
def vue_get_properties(self, data):
self.disabled = True
self.loading = True
self.props = list(get_listing(data))
self.price_string = f'£{self.props[0]:,.0f}'
self.disabled = False
self.loading = False
u = URLTextField()
u | _____no_output_____ | BSD-3-Clause | 3 Custom components.ipynb | MichaMucha/odsc2019-voila-jupyter-web-app |
Sparkline

https://vuetifyjs.com/en/components/sparklines | %matplotlib inline
import numpy as np
import pandas as pd
strange_walk = np.random.beta(2, 3, 10) * 100 * np.random.normal(size=10)
strange_walk = pd.Series(strange_walk, name='Strange Walk').cumsum().round(2)
strange_walk.plot()
class SeriesSparkline(v.VuetifyTemplate):
name = Unicode('').tag(sync=True)
value = List([1,2,3]).tag(sync=True)
template = Unicode("""
<template>
<v-card
class="mx-auto text-center"
color="green"
dark
max-width="600"
>
<v-card-text>
<v-sheet color="rgba(0, 0, 0, .12)">
<v-sparkline
:value="value"
color="rgba(255, 255, 255, .7)"
height="100"
padding="24"
stroke-linecap="round"
smooth
>
<template v-slot:label="item">
{{ item.value }}
</template>
</v-sparkline>
</v-sheet>
</v-card-text>
<v-card-text>
<div class="display-1 font-weight-thin">{{ name }}</div>
</v-card-text>
</v-card>
</template>
""").tag(sync=True)
def __init__(self, *args,
data=pd.Series(),
**kwargs):
super().__init__(*args, **kwargs)
self.name = data.name
self.value = data.tolist()
s = SeriesSparkline(data=strange_walk)
s | _____no_output_____ | BSD-3-Clause | 3 Custom components.ipynb | MichaMucha/odsc2019-voila-jupyter-web-app |
Fix the Seed for Reproducible Results | import random
seed = 23
#torch.manual_seed(seed)
#torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
random.seed(seed)
import re
def my_preprocess_text(in_text):
username_re = re.compile(r'(^|[^@\w])@(\w{1,15})\b') # @username
url_re = re.compile(r'http\S+') # urls
in_text = re.sub('RT', '', in_text.rstrip()) # remove "RT"
in_text = re.sub('\:', '', in_text.rstrip()) # remove ":"
in_text = re.sub('\'s', '', in_text.rstrip()) # remove "'s" -> try without this
in_text = re.sub('#', '', in_text.rstrip()) # remove "#"
in_text = re.sub(url_re, 'xx_url', in_text.rstrip()) # replace urls with xx_url
return re.sub(username_re, ' xx_username', in_text.rstrip()).lstrip() # replace @username with xx_username
# TODO: replace multiple usernames in a row ?!
df_tweets_tsh[0] = df_tweets_tsh[0].apply(my_preprocess_text)
df_tweets_tsh.head()
test['text'] = test['text'].apply(my_preprocess_text)
test.head()
# Transform from the format text, binary, fine to
# text, other, offense for binary classification
# 1) Copy
df_tweets_tsh_new = df_tweets_tsh.copy()
# 2) Remove 3rd column
del(df_tweets_tsh_new[2])
# 3) Add 'other', 'offense' and use column names
df_tweets_tsh_new.columns = ['text', 'labels']
df_tweets_tsh_new['other'] = 0
df_tweets_tsh_new['offense'] = 0
# 4) Fill 'other' and 'offense'
mask_other = df_tweets_tsh_new.labels == 'OTHER'
mask_offense = df_tweets_tsh_new.labels == 'OFFENSE'
df_tweets_tsh_new.loc[mask_other, 'other'] = 1
df_tweets_tsh_new.loc[mask_offense, 'offense'] = 1
df_tweets_tsh_new.head()
# Use all tweets for training
train_df_tweets_tsh = df_tweets_tsh_new.copy()
train_df_tweets_tsh.head()
train_df_tweets_tsh.text[0]
lens = train_df_tweets_tsh.text.str.len()
lens.mean(), lens.std(), lens.max()
lens.hist();
# long_tweets = train_df_tweets_tsh.text.str.len() > 500 # ok, da diverse tokens verwendet werden
# train_df_tweets_tsh[long_tweets]
label_cols = ['other', 'offense']
train_df_tweets_tsh['none'] = 1-train_df_tweets_tsh[label_cols].max(axis=1)
train_df_tweets_tsh.describe()
# None-Tokenizer works surprisingly well :o
def tokenize_dummy(s):
return s.split(' ')
#from fastai import *
#from fastai.text import *
#tokenizer_fastai = Tokenizer(lang='de', n_cpus=8)
LEMMATIZE = False
import spacy
nlp = spacy.load('de')
def tokenize_spacy(corpus, lemma=LEMMATIZE):
doc = nlp(corpus)
if lemma:
return list(str(x.lemma_) for x in doc) # lemma_ to get string instead of hash
else:
return list(str(x) for x in doc)
COMMENT = 'text'
#n = train_df_tweets_tsh.shape[0]
vec = TfidfVectorizer(analyzer='char', ngram_range=(3,6), tokenizer=tokenize_spacy,
min_df=4, max_df=0.4, strip_accents='unicode', use_idf=True,
smooth_idf=False, sublinear_tf=True, lowercase=True, binary=False)
trn_term_doc = vec.fit_transform(train_df_tweets_tsh[COMMENT])
test_term_doc = vec.transform(test[COMMENT])
trn_term_doc, test_term_doc
#vec.vocabulary_
#vec
def pr(y_i, y):
    # Smoothed (add-one) mean feature count for documents of class y_i,
    # used below to build the naive Bayes log-count ratios r
    p = x[y==y_i].sum(0)
    return (p+1) / ((y==y_i).sum()+1)
x = trn_term_doc
test_x = test_term_doc
# other, offense
class_weight = {0: 2, 1: 1}
def get_mdl(y):
y = y.values
r = np.log(pr(1,y) / pr(0,y))
m = LogisticRegression(C=14.0, dual=False, solver='saga', multi_class='auto', penalty='l2',
class_weight='balanced', max_iter=500) # class_weight='balanced'
x_nb = x.multiply(r)
return m.fit(x_nb, y), r
preds = np.zeros((len(test), len(label_cols)))
for i, j in enumerate(label_cols):
print('fit', j)
m,r = get_mdl(train_df_tweets_tsh[j])
preds[:,i] = m.predict_proba(test_x.multiply(r))[:,1]
# All predictions for the testset file
predictions = pd.DataFrame(preds, columns = label_cols)
#predictions
submission = test.copy()
del(submission['fine'])
submission['text'][21]
submission = test.copy()
submission['fine'] = 'OTHER' # dummy label for binary classification
for index,row in submission.iterrows():
if( preds[index][1] >=0.485):
#print('OFFENSE')
submission['label'][index] = 'OFFENSE'
else:
#print('OTHER')
submission['label'][index] = 'OTHER'
submission.head()
# 1,1 ngrams -> 62.95 (C=40.0); lowercase=False
# 1,2 ngrams -> 62.88 (C=40.0); lowercase=False
# 1,1 ngrams -> 64.47 (C=1.0); lowercase=False
# 1,1 ngrams -> 66.41 (C=1.0); lowercase=True
# 1,2 ngrams -> 65.88 (C=1.0); lowercase=True
# 1,3 ngrams -> 65.80 (C=1.0); lowercase=True
# 1,4 ngrams -> 66.02 (C=1.0); lowercase=True
# 1,5 ngrams -> 65.87 (C=1.0); lowercase=True
# 1,6 ngrams -> 65.87 (C=1.0); lowercase=True
# 1,4 ngrams -> 65.79 (C=1.0); lowercase=True; class_weight='balanced'
# 3,3 ngrams -> 58.07
# 1,4 ngrams -> 66.02 (C=1.0); lowercase=True; 4/1.0
# 1,4 ngrams -> 64.07(C=1.0); lowercase=True; 2/0.8
# 1,1 ngrams -> 66.41 (C=1.0); lowercase=True; 4/0.8
# 1,1 ngrams -> 66.99 (C=4.0); lowercase=True; 4/0.8
# 1,1 ngrams -> 67.69 (C=4.0); lowercase=True; 3/0.8
# 1,1 ngrams -> 67.97 (C=4.0); lowercase=True; 2/0.8
# 1,1 ngrams -> 61.61 (C=4.0); lowercase=True; 1/0.8
# 1,1 ngrams -> 63.06 (C=8.0); lowercase=True; 2/0.8
# 1,1 ngrams -> 66.26 (C=1.0); lowercase=True; 2/0.8
# 1,1 ngrams -> 67.47 (C=2.0); lowercase=True; 2/0.8
# 1,1 ngrams -> 68.05 (C=3.0); lowercase=True; 2/0.8*
# 1,1 ngrams -> 61.88 (C=3.0); lowercase=True; 3/0.8
# 1,1 ngrams -> 68.00 (C=3.5); lowercase=True; 2/0.8
# 1,2 ngrams -> 66.16 (C=3.0); lowercase=True; 2/0.8
# 1,3 ngrams -> 65.93 (C=3.0); lowercase=True; 2/0.8
# 1,4 ngrams -> 65.95 (C=3.0); lowercase=True; 2/0.8
# 2,3 ngrams -> 59.88 (C=3.0); lowercase=True; 2/0.8
# character level
# 3,6 ngrams -> 70.71 (c=4.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 69.03 (c=4.0); lowercase=False; 4/1.0
# 3,6 ngrams -> 71.03 (c=8.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 69.17 (c=2.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 70.79 (c=6.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.56 (c=16.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.11 (c=32.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.04 (c=24.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.29 (c=20.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.39 (c=18.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.38 (c=12.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.71* (c=14.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.57 (c=15.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.45 (c=17.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.14 (c=14.0); lowercase=True; 5/1.0
# 3,6 ngrams -> 71.23 (c=14.0); lowercase=True; 3/1.0
# 3,6 ngrams -> 71.21 (c=14.0); lowercase=True; 3/0.5
# 3,6 ngrams -> 71.57 (c=14.0); lowercase=True; 4/0.5
# 3,6 ngrams -> 69.32 (c=14.0); lowercase=True; 4/1.0; l1 regularization
# 3,10 ngrams -> 70.89 (c=14.0); lowercase=True; 4/1.0
# 3,8 ngrams -> 70.92 (c=14.0); lowercase=True; 4/1.0
# 2,6 ngrams -> 71.07 (c=14.0); lowercase=True; 4/1.0
# 2,5 ngrams -> 71.15 (c=14.0); lowercase=True; 4/1.0
# 4,7 ngrams -> 71.34 (c=14.0); lowercase=True; 4/1.0
# 5,8 ngrams -> 69.84 (c=14.0); lowercase=True; 4/1.0
# 3,6 ngrams -> 71.61 (c=14.0); lowercase=True; 4/0.4
# 1,10 ngrams -> 70.70
# 1,6 ngrams -> 70.99
# 1,40 ngrams -> 69.71
# Class labels (strangely balanced works best)
# 2/1: 70.3
# 1/2: 71.95
# balanced: 72.28*
# Cutoff (note that this should be 0.5)
# 0.5 -> 72.28
# 0.51 -> 72.14
# 0.55 -> 71.76
# 0.6 -> 71.59
# 0.4 -> 71.20
# 0.495 -> 72.47
# 0.49 -> 72.47
# 0.485 -> 72.62* (lemmatized: 72.58)
# 0.48 -> 72.44
# 0.47 -> 72.24
submission.to_csv('naive.csv', sep='\t', line_terminator='\n', header=None, index=False, encoding='utf-8-sig') | _____no_output_____ | MIT | Germeval2019-Task2.ipynb | rother/germeval2019 |
Fine grained task | # Transform from the format text, binary, fine to
# text, 'other', 'offense', 'abuse', 'insult', 'profanity' for finegrained classification
# 1) Copy
#df_tweets_tsh_fine = pd.read_csv(file_head, delimiter='\t', header=None)
df_tweets_tsh_fine = pd.concat([pd.read_csv(f, delimiter='\t', header=None) for f in file_list ])
# 2) Remove 2rd column
del(df_tweets_tsh_fine[1])
# 3) Add 'other', 'offense', 'abuse', 'insult', 'profanity' and use column names
df_tweets_tsh_fine.columns = ['text', 'labels']
df_tweets_tsh_fine['other'] = 0
df_tweets_tsh_fine['offense'] = 0
df_tweets_tsh_fine['abuse'] = 0
df_tweets_tsh_fine['insult'] = 0
df_tweets_tsh_fine['profanity'] = 0
# 4) Fill 'other' and 'offense'
mask_other = df_tweets_tsh_fine.labels == 'OTHER'
mask_offense = df_tweets_tsh_fine.labels == 'OFFENSE'
mask_abuse = df_tweets_tsh_fine.labels == 'ABUSE'
mask_insult = df_tweets_tsh_fine.labels == 'INSULT'
mask_profanity = df_tweets_tsh_fine.labels == 'PROFANITY'
df_tweets_tsh_fine.loc[mask_other, 'other'] = 1
df_tweets_tsh_fine.loc[mask_offense, 'offense'] = 1
df_tweets_tsh_fine.loc[mask_abuse, 'abuse'] = 1
df_tweets_tsh_fine.loc[mask_insult, 'insult'] = 1
df_tweets_tsh_fine.loc[mask_profanity, 'profanity'] = 1
df_tweets_tsh_fine.head()
train_df_tweets_tsh_fine = df_tweets_tsh_fine.copy()
train_df_tweets_tsh_fine.head()
label_cols_fine = ['other', 'abuse', 'insult', 'profanity']
train_df_tweets_tsh_fine['none'] = 1-train_df_tweets_tsh_fine[label_cols_fine].max(axis=1)
train_df_tweets_tsh_fine.describe()
COMMENT = 'text'
#n = train_df_tweets_tsh_fine.shape[0]
vec = TfidfVectorizer(analyzer='char', ngram_range=(3,6), tokenizer=tokenize_spacy,
min_df=4, max_df=1.0, strip_accents='unicode', use_idf=True,
smooth_idf=False, sublinear_tf=True, lowercase=True, binary=False)
trn_term_doc_fine = vec.fit_transform(train_df_tweets_tsh_fine[COMMENT])
test_term_doc_fine = vec.transform(test[COMMENT])
trn_term_doc_fine, test_term_doc_fine
x = trn_term_doc_fine
test_x = test_term_doc_fine
# other, abuse, insult, profanity
# class_weight = {}
def get_mdl(y):
y = y.values
r = np.log(pr(1,y) / pr(0,y))
m = LogisticRegression(C=14.0, dual=False, solver='saga', multi_class='auto', penalty='l2', max_iter=500, class_weight='balanced') # class_weight='balanced'
x_nb = x.multiply(r)
return m.fit(x_nb, y), r
preds = np.zeros((len(test), len(label_cols_fine)))
for i, j in enumerate(label_cols_fine):
print('fit', j)
m,r = get_mdl(train_df_tweets_tsh_fine[j])
preds[:,i] = m.predict_proba(test_x.multiply(r))[:,1]
# All predictions for the testset file
predictions = pd.DataFrame(preds, columns = label_cols_fine)
#predictions
submission = test.copy()
#del(submission['fine'])
submission['text'][21]
def max_from_labels(pred, labels):
return labels[np.argmax(pred)].upper()
#submission['label'] = 'OTHER' # dummy label for fine classification
for index,row in submission.iterrows():
submission['label'][index] = max_from_labels(preds[index], label_cols_fine)
submission['fine'][index] = max_from_labels(preds[index], label_cols_fine)
# 3,6c -> 45.67
# 3,7c -> 43.8
# 3,6c -> 44.41 (1/2 classes)
# 3,6c -> 45.71* (2/1 classes)
# 3,6c -> 45.43 (3/1 classes)
# Balanced
# 4/1.0; 3,6c -> 45.72* (balanced) -> same lemmatized
submission.to_csv('naive2.csv', sep='\t', line_terminator='\n', header=None, index=False, encoding='utf-8-sig')
#for index,row in submission.iterrows():
# print(preds[index]) | _____no_output_____ | MIT | Germeval2019-Task2.ipynb | rother/germeval2019 |
Concepts in Spatial Linear Modelling

Data Borrowing in Supervised Learning | import numpy as np
import libpysal.weights as lp
import geopandas as gpd
import pandas as pd
import shapely.geometry as shp
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
listings = pd.read_csv('./data/berlin-listings.csv.gz')
listings['geometry'] = listings[['longitude','latitude']].apply(shp.Point, axis=1)
listings = gpd.GeoDataFrame(listings)
listings.crs = {'init':'epsg:4269'}
listings = listings.to_crs(epsg=3857)
listings[['scrape_id', 'name',
'summary', 'space', 'description', 'experiences_offered',
'neighborhood_overview', 'notes', 'transit', 'access', 'interaction',
'house_rules', 'thumbnail_url', 'medium_url', 'picture_url',
'xl_picture_url', 'host_id', 'host_url', 'host_name', 'host_since',
'host_location', 'host_about']].head()
listings.sort_values('price').plot('price', cmap='plasma') | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
Kernel Regressions

Kernel regressions are one exceptionally common way to allow observations to "borrow strength" from nearby observations. However, when working with spatial data, there are *two simultaneous senses of what is near:*

- things that are similar in attribute (classical kernel regression)
- things that are similar in spatial position (spatial kernel regression)

Below, we'll walk through how to use scikit to fit these two types of kernel regressions, show how it's not super simple to mix the two approaches together, and refer to an approach that does this correctly in another package. First, though, let's try to predict the log of an Airbnb's nightly price based on a few factors:

- accommodates: the number of people the airbnb can accommodate
- review_scores_rating: the aggregate rating of the listing
- bedrooms: the number of bedrooms the airbnb has
- bathrooms: the number of bathrooms the airbnb has
- beds: the number of beds the airbnb offers | model_data = listings[['accommodates', 'review_scores_rating',
'bedrooms', 'bathrooms', 'beds',
'price', 'geometry']].dropna()
Xnames = ['accommodates', 'review_scores_rating',
'bedrooms', 'bathrooms', 'beds' ]
X = model_data[Xnames].values
X = X.astype(float)
y = np.log(model_data[['price']].values) | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
Further, since each listing has a location, I'll extract the set of spatial coordinates coordinates for each listing. | coordinates = np.vstack(model_data.geometry.apply(lambda p: np.hstack(p.xy)).values) | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
scikit neighbor regressions are contained in the `sklearn.neighbors` module, and there are two main types:

- `KNeighborsRegressor`, which uses a k-nearest neighborhood of observations around each focal site
- `RadiusNeighborsRegressor`, which considers all observations within a fixed radius around each focal site.

Further, these methods can use inverse distance weighting to rank the relative importance of sites around each focal site; in this way, near things are given more weight than far things, even when there's a lot of near things. | import sklearn.neighbors as skn
import sklearn.metrics as skm
shuffle = np.random.permutation(len(y))
train,test = shuffle[:14000],shuffle[14000:] | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
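As an aside, the radius-based variant mentioned above is set up the same way. This is only a sketch, not part of the original analysis, and the 500 map-unit radius is an arbitrary assumption (the listings were projected to EPSG:3857, so the units are roughly metres):

```python
# Sketch only: fixed-radius analogue of the KNN models fit below.
RNR = skn.RadiusNeighborsRegressor(weights='distance', radius=500)
radius_model = RNR.fit(coordinates[train, :], y[train, :])
# Note: test points with no neighbours inside the radius get NaN predictions.
radius_ypred = radius_model.predict(coordinates[test, :])
```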
So, let's fit three models:- `spatial`: using inverse distance weighting on the nearest 500 neighbors in geographical space- `attribute`: using inverse distance weighting on the nearest 500 neighbors in attribute space- `both`: using inverse distance weighting in both geographical and attribute space. | KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=500)
spatial = KNNR.fit(coordinates[train,:],
y[train,:])
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=500)
attribute = KNNR.fit(X[train,:],
y[train,])
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=500)
both = KNNR.fit(np.hstack((coordinates,X))[train,:],
y[train,:]) | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
To score them, I'm going to grab their out of sample prediction accuracy and get their % explained variance: | sp_ypred = spatial.predict(coordinates[test,:])
att_ypred = attribute.predict(X[test,:])
both_ypred = both.predict(np.hstack((coordinates,X))[test,:])  # column order must match the order used in .fit()
(skm.explained_variance_score(y[test,], sp_ypred),
skm.explained_variance_score(y[test,], att_ypred),
skm.explained_variance_score(y[test,], both_ypred)) | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
If you don't know $X$, using $Wy$ would be better than nothing, but it works nowhere near as well... less than half of the variance that is explained by nearness in feature/attribute space is explained by nearness in geographical space. Making things even worse, simply glomming on the geographical information to the feature set makes the model perform horribly. *There must be another way!*One method that can exploit the fact that local data may be more informative in predicting $y$ at site $i$ than distant data is Geographically Weighted Regression, a type of Generalized Additive Spatial Model. Kind of like a Kernel Regression, GWR conducts a bunch of regressions at each training site only considering data near that site. This means it works like the kernel regressions above, but uses *both* the coordinates *and* the data in $X$ to predict $y$ at each site. It optimizes its sense of "local" depending on some information criterion or fit score.You can find this in the `gwr` package, and significant development is ongoing on this at `https://github.com/pysal/gwr` (an illustrative GWR sketch appears right after the weighting code below). Data Borrowing Another common case where these weights are used is in "feature engineering." Using the weights matrix, you can construct neighbourhood averages of the data matrix and use these as synthetic features in your model. These often have a strong relationship to the outcome as well, since spatial data is often smooth and attributes of nearby sites often have a spillover impact on each other. First, we'll construct a Kernel weight from the data that we have, make it an adaptive Kernel bandwidth, and make sure that our kernel weights don't have any self-neighbors. Since we've got the data at each site anyway, we probably shouldn't use that data *again* when we construct our neighborhood-smoothed synthetic features. | from libpysal.weights.util import fill_diagonal
kW = lp.Kernel.from_dataframe(model_data, fixed=False, function='gaussian', k=100)
kW = fill_diagonal(kW, 0)
WX = lp.lag_spatial(kW, X)
WX
kW.to_adjlist()[kW.to_adjlist()["focal"]== 1] | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
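For completeness, here is what the GWR approach mentioned above looks like in code. This is an illustrative sketch rather than part of the original notebook: it assumes the `mgwr` package (the current home of the `pysal/gwr` development) is installed, and it reuses the `coordinates`, `X`, and `y` arrays built earlier.

```python
# Illustrative sketch only; assumes `pip install mgwr` and can be slow on ~18k listings.
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

coords = list(zip(coordinates[:, 0], coordinates[:, 1]))
bw = Sel_BW(coords, y, X).search()         # choose a bandwidth by a fit criterion
gwr_results = GWR(coords, y, X, bw).fit()  # one local regression per site
gwr_results.summary()
```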
I like `statsmodels` regression summary tables, so I'll pop it up here. Below are the results for the model with only the covariates used above:- accommodates: the number of people the airbnb can accommodate- review_scores_rating: the aggregate rating of the listing- bedrooms: the number of bedrooms the airbnb has- bathrooms: the number of bathrooms the airbnb has- beds: the number of beds the airbnb offersWe've not used any of our synthetic features in `WX`. | import statsmodels.api as sm
Xtable = pd.DataFrame(X, columns=Xnames)
onlyX = sm.OLS(y,sm.add_constant(Xtable)).fit()
onlyX.summary() | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
Then, we could fit a model using the neighbourhood average synthetic features as well: | WXtable = pd.DataFrame(WX, columns=['lag_{}'.format(name) for name in Xnames])
WXtable.head()
XWXtable = pd.concat((Xtable,WXtable),axis=1)
XWXtable.head()
withWX = sm.OLS(y,sm.add_constant(XWXtable)).fit()
withWX.summary() | _____no_output_____ | MIT | AdvancedDataAnalysis/Session12 Geospatial/Notebook-4-Data-Borrowing.ipynb | robretoarenal/BTS2 |
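To see whether the lagged features buy us anything, we can put the two fits side by side. A quick sketch using standard `statsmodels` result attributes (not in the original notebook):

```python
# Compare the plain covariates against covariates plus their spatial lags.
pd.DataFrame({'adj. R-squared': [onlyX.rsquared_adj, withWX.rsquared_adj],
              'AIC': [onlyX.aic, withWX.aic]},
             index=['X only', 'X + WX'])
```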
AlignmentThis notebook covers [*alignment*](https://pandas.pydata.org/docs/user_guide/dsintro.html#dsintro-alignment), a feature of pandas that's crucial to using it well. It relies on pandas' handling of *labels*. | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
Goal: Compute Real GDPLet's learn through an example: Gross Domestic Product (the total output of a country) is measured in dollars. This means we can't just compare the GDP from 1950 to the GDP from 2000, since the value of a dollar changed over that time (inflation).In the US, the Bureau of Economic Analysis already provides an estimate of real GDP, but we'll calculate something similar using the formula:$$real\_GDP = \frac{nominal\_GDP}{price\_index}$$I've downloaded a couple time series from [FRED](https://fred.stlouisfed.org), one for GDP and one for the Consumer Price Index.* U.S. Bureau of Labor Statistics, Consumer Price Index for All Urban Consumers: All Items in U.S. City Average [CPIAUCSL], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/CPIAUCSL, October 31, 2020.* U.S. Bureau of Economic Analysis, Gross Domestic Product [GDP], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/GDP, October 31, 2020.We're going to do things the wrong way first. | gdp_bad = pd.read_csv("data/GDP.csv.gz", parse_dates=["DATE"])
cpi_bad = pd.read_csv("data/CPIAUCSL.csv.gz", parse_dates=["DATE"]) | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
Our formula says `real_gdp = gdp / cpi`, so, let's try it! | %xmode plain
gdp_bad / cpi_bad | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
Whoops, what happened? We should probably look at our data: | gdp_bad
gdp_bad.dtypes
gdp_bad['DATE'][0] | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
So, we've tried to divide a datetime by a datetime, and pandas has correctly raised a type error. That raises another issue though. These two timeseries have different frequencies. | cpi_bad.head() | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
CPI is measured monthly, while GDP is quarterly. What we'd really need to do is *join* the two timeseries on the `DATE` variable, and then do the operation. We could do that, but let's do things the pandorable way first.A DataFrame is a 2-D data structure composed of three components:1. The *values*, the actual data2. The *row labels*, stored in a `pandas.Index` class, accessible with `.index`3. The *column labels*, stored in a `pandas.Index` class, accessible with `.columns`We'll use the *index* to store our *labels* (the dates). Then the only thing in the values is our observations (the GDP or CPI). | # Notice that we select the GDP column to convert the
# 1-column DataFrame to a 1D Series
gdp = pd.read_csv('data/GDP.csv.gz', index_col='DATE',
parse_dates=['DATE'])["GDP"]
gdp.head() | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
Notice that we selected the single column `"GDP"` using `[]`. This returns a `pandas.Series` object, a 1-D array *with row labels*. | type(gdp)
gdp.index | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
The actual values are a NumPy array of floats. | gdp.to_numpy()[:10] | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
Let's read in CPI as well. | cpi = pd.read_csv('data/CPIAUCSL.csv.gz', index_col='DATE',
parse_dates=['DATE'])["CPIAUCSL"]
cpi.head() | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
And let's try the formula again. | rgdp = gdp / cpi
rgdp | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
**What happened?**We've gotten our answer, but is there anything in the output that's surprising? What are these `NaN`s?In pandas, any time you do an operation involving multiple pandas objects (dataframes, series), pandas will *align* the inputs. Alignment is a two-step process:1. Take the union of the labels2. Reindex all the inputs to the union of the labels, introducing NAs where there are new values.Only after that does the operation (division in this case) happen.Looking at the raw data, we see that CPI is measured monthly, while GDP is just measured quarterly. So pandas has aligned the two (to monthly frequency, since that's the union), inserting missing values where there weren't any previously. | # manual alignment, just for demonstration:
all_dates = gdp.index.union(cpi.index)
all_dates
gdp2 = gdp.reindex(all_dates)
gdp2
cpi2 = cpi.reindex(all_dates)
cpi2
rgdp2 = gdp2 / cpi2
rgdp2 | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
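As a quick check (not in the original notebook), the manually aligned result should match what pandas produced automatically:

```python
# True if the automatic alignment gave exactly the same labelled result.
rgdp.equals(rgdp2)
```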
So when we wrote```pythonrgdp = gdp / cpi```pandas performs```pythonall_dates = gdp.index.union(cpi.index)rgdp = gdp.reindex(all_dates) / cpi.reindex(all_dates)```This behavior is somewhat peculiar to pandas. But once you're used to it, it's hard to go back. pandas handling the labels / alignment eliminates a class of errors that come from datasets not being aligned. Missing DataJust a quick aside on handling missing data: pandas provides tools for detecting and dealing with missing data. We'll use these throughout the tutorial. | rgdp.isna()
rgdp.dropna()
rgdp.fillna(method='ffill') # or fill with a scalar. | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
Exercise:Normalize real GDP to year **2000** dollars.Right now, the unit on the `CPI` variable is "Index 1982-1984=100". This means that the "index value" reported for the Consumer Price *Index* is scaled so that its average over 1982 - 1984 equals 100. | # use `.loc[start:end]` or `.loc["<year>"]` to slice a subset of *rows*
cpi.loc['1982':'1984'].mean() # close enough to 100 | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
To *renormalize* an index like CPI, divide the series by the average of a different timespan (say the year 2000) and multiply by 100. | # Get the mean CPI for the year 2000
cpi_2000_average = cpi.loc[...]...
# *renormalize* the entire `cpi` series to "Index 2000" units.
cpi_2000 = 100 * (... / ...)
# Compute real GDP again, this time in "year 2000 dollars".
rgdp_2000 = ...
rgdp_2000
%load solutions/alignment-cpi2000.py | _____no_output_____ | CC-BY-4.0 | Alignment.ipynb | TomAugspurger/pandorable-pandas |
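If you want to sanity-check your answer without loading the solutions file, here is one possible completion of the exercise. This is my own sketch of a solution, not the contents of `solutions/alignment-cpi2000.py`:

```python
# One possible solution sketch (uses the cpi and gdp Series defined above).
cpi_2000_average = cpi.loc['2000'].mean()      # mean CPI over the year 2000
cpi_2000 = 100 * (cpi / cpi_2000_average)      # CPI re-based so that 2000 = 100
rgdp_2000 = 100 * gdp / cpi_2000               # real GDP in year-2000 dollars
# As before, quarterly GDP aligns against monthly CPI, so non-quarter months are NaN.
rgdp_2000.dropna()
```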
Predictions callbacks> Various callbacks to customize get_preds behaviors MCDropoutCallback> Turns on dropout during inference, allowing you to call Learner.get_preds multiple times to approximate your model uncertainty using [Monte Carlo Dropout](https://arxiv.org/pdf/1506.02142.pdf). | #|export
class MCDropoutCallback(Callback):
def before_validate(self):
for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:
m.train()
def after_validate(self):
for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:
m.eval()
learn = synth_learner()
# Call get_preds 10 times, then stack the predictions, yielding a tensor with shape [# of samples, batch_size, ...]
dist_preds = []
for i in range(10):
preds, targs = learn.get_preds(cbs=[MCDropoutCallback()])
dist_preds += [preds]
torch.stack(dist_preds).shape | _____no_output_____ | Apache-2.0 | nbs/18b_callback.preds.ipynb | EmbraceLife/fastai |
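Once the stacked predictions are available, a simple way to turn them into an uncertainty estimate is to look at the spread across the stochastic forward passes. This is a usage sketch, not part of the library:

```python
# Each entry in dist_preds is one dropout-enabled forward pass over the same data.
stacked = torch.stack(dist_preds)   # shape: [n_passes, n_items, ...]
mc_mean = stacked.mean(dim=0)       # Monte Carlo point estimate
mc_std = stacked.std(dim=0)         # per-item spread, a rough uncertainty proxy
mc_mean.shape, mc_std.shape
```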
Export - | #|hide
from nbdev.export import notebook2script
notebook2script() | Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
| Apache-2.0 | nbs/18b_callback.preds.ipynb | EmbraceLife/fastai |
Exercises 5. Fiddle with the activation functions. Try applying a ReLu to the first hidden layer and tanh to the second one. The tanh activation is given by the method: tf.nn.tanh()**Solution** Analogously to the previous lecture, we can change the activation functions. This time though, we will use different activators for the different layers.The result should not be significantly different. However, with different width and depth, that may change.*Additional exercise: Try to find a better combination of activation functions* Deep Neural Network for MNIST ClassificationWe'll apply all the knowledge from the lectures in this section to write a deep neural network. The problem we've chosen is referred to as the "Hello World" for machine learning because for most students it is their first example. The dataset is called MNIST and refers to handwritten digit recognition. You can find more about it on Yann LeCun's website (Director of AI Research, Facebook). He is one of the pioneers of what we've been talking about and of more complex approaches that are widely used today, such as convolutional networks. The dataset provides 28x28 images of handwritten digits (1 per image) and the goal is to write an algorithm that detects which digit is written. Since there are only 10 digits, this is a classification problem with 10 classes. In order to exemplify what we've talked about in this section, we will build a network with 2 hidden layers between inputs and outputs. Import the relevant packages | import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# TensorFlow includes a data provider for MNIST that we'll use.
# This function automatically downloads the MNIST dataset to the chosen directory.
# The dataset is already split into training, validation, and test subsets.
# Furthermore, it preprocesses it into a particularly simple and useful format.
# Every 28x28 image is flattened into a vector of length 28x28=784, where every value
# corresponds to the intensity of the color of the corresponding pixel.
# The samples are grayscale (but standardized from 0 to 1), so a value close to 0 is almost white and a value close to
# 1 is almost purely black. This representation (flattening the image row by row into
# a vector) is slightly naive but as you'll see it works surprisingly well.
# Since this is a classification problem, our targets are categorical.
# Recall from the lecture on that topic that one way to deal with that is to use one-hot encoding.
# With it, the target for each individual sample is a vector of length 10
# which has nine 0s and a single 1 at the position which corresponds to the correct answer.
# For instance, if the true answer is "3", the target will be [0,0,0,1,0,0,0,0,0,0] (counting from 0).
# Have in mind that the very first time you execute this command it might take a little while to run
# because it has to download the whole dataset. Following commands only extract it so they're faster.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S54_L390/5. TensorFlow_MNIST_Activation_functions_Part_2_Solution.ipynb | Alexander-Meldrum/learning-data-science |
Outline the modelThe whole code is in one cell, so you can simply rerun this cell (instead of the whole notebook) and train a new model.The tf.reset_default_graph() function takes care of clearing the old parameters. From there on, a completely new training starts. | input_size = 784
output_size = 10
# Use same hidden layer size for both hidden layers. Not a necessity.
hidden_layer_size = 50
# Reset any variables left in memory from previous runs.
tf.reset_default_graph()
# As in the previous example - declare placeholders where the data will be fed into.
inputs = tf.placeholder(tf.float32, [None, input_size])
targets = tf.placeholder(tf.float32, [None, output_size])
# Weights and biases for the first linear combination between the inputs and the first hidden layer.
# Use get_variable in order to make use of the default TensorFlow initializer which is Xavier.
weights_1 = tf.get_variable("weights_1", [input_size, hidden_layer_size])
biases_1 = tf.get_variable("biases_1", [hidden_layer_size])
# Operation between the inputs and the first hidden layer.
# We've chosen ReLu as our activation function. You can try playing with different non-linearities.
outputs_1 = tf.nn.relu(tf.matmul(inputs, weights_1) + biases_1)
# Weights and biases for the second linear combination.
# This is between the first and second hidden layers.
weights_2 = tf.get_variable("weights_2", [hidden_layer_size, hidden_layer_size])
biases_2 = tf.get_variable("biases_2", [hidden_layer_size])
# Operation between the first and the second hidden layers. This time we use tanh, as the exercise asks.
outputs_2 = tf.nn.tanh(tf.matmul(outputs_1, weights_2) + biases_2)
# Weights and biases for the final linear combination.
# That's between the second hidden layer and the output layer.
weights_3 = tf.get_variable("weights_3", [hidden_layer_size, output_size])
biases_3 = tf.get_variable("biases_3", [output_size])
# Operation between the second hidden layer and the final output.
# Notice we have not used an activation function because we'll use the trick to include it directly in
# the loss function. This works for softmax and sigmoid with cross entropy.
outputs = tf.matmul(outputs_2, weights_3) + biases_3
# Calculate the loss function for every output/target pair.
# The function used is the same as applying softmax to the last layer and then calculating cross entropy
# with the function we've seen in the lectures. This function, however, combines them in a clever way,
# which makes it both faster and more numerically stable (when dealing with very small numbers).
# Logits here means: unscaled probabilities (so, the outputs, before they are scaled by the softmax)
# Naturally, the labels are the targets.
loss = tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets)
# Get the average loss
mean_loss = tf.reduce_mean(loss)
# Define the optimization step. Using adaptive optimizers such as Adam in TensorFlow
# is as simple as that.
optimize = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mean_loss)
# Get a 0 or 1 for every input in the batch indicating whether it output the correct answer out of the 10.
out_equals_target = tf.equal(tf.argmax(outputs, 1), tf.argmax(targets, 1))
# Get the average accuracy of the outputs.
accuracy = tf.reduce_mean(tf.cast(out_equals_target, tf.float32))
# Declare the session variable.
sess = tf.InteractiveSession()
# Initialize the variables. Default initializer is Xavier.
initializer = tf.global_variables_initializer()
sess.run(initializer)
# Batching
batch_size = 100
# Calculate the number of batches per epoch for the training set.
batches_number = mnist.train._num_examples // batch_size
# Basic early stopping. Set a maximum number of epochs.
max_epochs = 15
# Keep track of the validation loss of the previous epoch.
# If the validation loss starts increasing, we want to trigger early stopping.
# We initially set it at some arbitrarily high number to make sure we don't trigger it
# at the first epoch
prev_validation_loss = 9999999.
import time
start_time = time.time()
# Create a loop for the epochs. Epoch_counter is a variable which automatically starts from 0.
for epoch_counter in range(max_epochs):
# Keep track of the sum of batch losses in the epoch.
curr_epoch_loss = 0.
# Iterate over the batches in this epoch.
for batch_counter in range(batches_number):
# Input batch and target batch are assigned values from the train dataset, given a batch size
input_batch, target_batch = mnist.train.next_batch(batch_size)
# Run the optimization step and get the mean loss for this batch.
# Feed it with the inputs and the targets we just got from the train dataset
_, batch_loss = sess.run([optimize, mean_loss],
feed_dict={inputs: input_batch, targets: target_batch})
# Increment the sum of batch losses.
curr_epoch_loss += batch_loss
# So far curr_epoch_loss contained the sum of all batches inside the epoch
# We want to find the average batch losses over the whole epoch
# The average batch loss is a good proxy for the current epoch loss
curr_epoch_loss /= batches_number
# At the end of each epoch, get the validation loss and accuracy
# Get the input batch and the target batch from the validation dataset
input_batch, target_batch = mnist.validation.next_batch(mnist.validation._num_examples)
# Run without the optimization step (simply forward propagate)
validation_loss, validation_accuracy = sess.run([mean_loss, accuracy],
feed_dict={inputs: input_batch, targets: target_batch})
# Print statistics for the current epoch
# Epoch counter + 1, because epoch_counter automatically starts from 0, instead of 1
# We format the losses with 3 digits after the dot
# We format the accuracy in percentages for easier interpretation
print('Epoch '+str(epoch_counter+1)+
'. Mean loss: '+'{0:.3f}'.format(curr_epoch_loss)+
'. Validation loss: '+'{0:.3f}'.format(validation_loss)+
'. Validation accuracy: '+'{0:.2f}'.format(validation_accuracy * 100.)+'%')
# Trigger early stopping if validation loss begins increasing.
if validation_loss > prev_validation_loss:
break
# Store this epoch's validation loss to be used as previous validation loss in the next iteration.
prev_validation_loss = validation_loss
# Not essential, but it is nice to know when the algorithm stopped working in the output section, rather than check the kernel
print('End of training.')
#Add the time it took the algorithm to train
print("Training time: %s seconds" % (time.time() - start_time)) | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S54_L390/5. TensorFlow_MNIST_Activation_functions_Part_2_Solution.ipynb | Alexander-Meldrum/learning-data-science |
Test the modelAs we discussed in the lectures, after training on the training and validation sets, we test the final prediction power of our model by running it on the test dataset that the algorithm has not seen before.It is very important to realize that fiddling with the hyperparameters overfits the validation dataset. The test is the absolute final instance. You should not test before you are completely done with adjusting your model. | input_batch, target_batch = mnist.test.next_batch(mnist.test._num_examples)
test_accuracy = sess.run([accuracy],
feed_dict={inputs: input_batch, targets: target_batch})
# Test accuracy is a list with 1 value, so we want to extract the value from it, using x[0]
# Uncomment the print to see how it looks before the manipulation
# print (test_accuracy)
test_accuracy_percent = test_accuracy[0] * 100.
# Print the test accuracy formatted in percentages
print('Test accuracy: '+'{0:.2f}'.format(test_accuracy_percent)+'%') | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S54_L390/5. TensorFlow_MNIST_Activation_functions_Part_2_Solution.ipynb | Alexander-Meldrum/learning-data-science |
Modeling Theta: Multi-population recurrent network (with BMTK BioNet) Here we will create a heterogeneous yet relatively small network consisting of hundreds of cells recurrently connected. All cells will belong to one of four "cell-types". Two of these cell types will be biophysically detailed cells, i.e. containing a full morphology and somatic and dendritic channels and receptors. The other two will be point-neuron models, which lack a full morphology or channels but still act to provide inhibitory and excitatory dynamics.As input to drive the simulation, we will also create an external network of "virtual cells" that synapse directly onto our internal cells and provide spike train stimulus**Note** - scripts and files for running this tutorial can be found in the directory [theta](https://github.com/cyneuro/theta)requirements:* bmtk* NEURON 7.4+ 1. Building the network cellsThis network will loosely resemble the rodent hippocampal CA3 region. The first population we build is a group of 71 biophysically detailed neurons: 63 excitatory CA3 pyramidal (CA3e) cells and 8 inhibitory OLM (CA3o) interneurons. | import numpy as np
from bmtk.builder.networks import NetworkBuilder
from bmtk.builder.auxi.node_params import positions_list
def pos_CA3e():
# Create the possible x,y,z coordinates for CA3e cells
x_start, x_end, x_stride = 0.5,15,2.3
y_start, y_end, y_stride = 0.5,3,1
z_start, z_end, z_stride = 1,4,1
x_grid = np.arange(x_start,x_end,x_stride)
y_grid = np.arange(y_start,y_end,y_stride)
z_grid = np.arange(z_start,z_end,z_stride)
xx, yy, zz = np.meshgrid(x_grid, y_grid, z_grid)
pos_list = np.vstack([xx.ravel(), yy.ravel(), zz.ravel()]).T
return positions_list(pos_list)
def pos_CA3o():
# Create the possible x,y,z coordinates for CA3 OLM cells
x_start, x_end, x_stride = 1,15,4
y_start, y_end, y_stride = 0.75,3,1.5
z_start, z_end, z_stride = 2,2.5,2
x_grid = np.arange(x_start,x_end,x_stride)
y_grid = np.arange(y_start,y_end,y_stride)
z_grid = np.arange(z_start,z_end,z_stride)
xx, yy, zz = np.meshgrid(x_grid, y_grid, z_grid)
pos_list = np.vstack([xx.ravel(), yy.ravel(), zz.ravel()]).T
return positions_list(pos_list)
CA3eTotal = 63 # number of CA3 principal cells
CA3oTotal = 8 # number of OLM inhibitory cells
net = NetworkBuilder('hippocampus')
net.add_nodes(N=CA3eTotal, pop_name='CA3e', pop_group='CA3',
positions=pos_CA3e(),
model_type='biophysical',
model_template='hoc:CA3PyramidalCell',
morphology='blank.swc')
net.add_nodes(N=CA3oTotal, pop_name='CA3o', pop_group='CA3',
positions=pos_CA3o(),
model_type='biophysical',
model_template='hoc:IzhiCell_OLM',
morphology='blank.swc')
| _____no_output_____ | MIT | theta.ipynb | cyneuro/theta |