If we invoke `sum3()` with strings instead, we get different invariants. Notably, we obtain the postcondition that the return value starts with the value of `a` – a universal postcondition if strings are used. | with InvariantTracker() as tracker:
y = sum3('a', 'b', 'c')
y = sum3('f', 'e', 'd')
pretty_invariants(tracker.invariants('sum3')) | _____no_output_____ | MIT | docs/notebooks/DynamicInvariants.ipynb | abhilashgupta/fuzzingbook |
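The string postcondition can be checked in isolation. A minimal sketch, assuming `sum3()` simply returns `a + b + c` as defined earlier in this chapter: string concatenation always yields a result that starts with its first operand.

```python
def sum3(a, b, c):  # assumption: as defined earlier in the chapter
    return a + b + c

# For strings, the return value always starts with the value of `a`:
print(sum3('a', 'b', 'c').startswith('a'))  # → True
print(sum3('f', 'e', 'd').startswith('f'))  # → True
```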
If we invoke `sum3()` with both strings and numbers (and zeros, too), there are no properties left that would hold across all calls. That's the price of flexibility. | with InvariantTracker() as tracker:
y = sum3('a', 'b', 'c')
y = sum3('c', 'b', 'a')
y = sum3(-4, -5, -6)
y = sum3(0, 0, 0)
pretty_invariants(tracker.invariants('sum3'))
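Why does nothing survive? A small standalone sketch (with hypothetical argument tuples mirroring the calls above) shows that each candidate property fails on at least one call:

```python
# Hypothetical argument tuples mirroring the four calls above:
calls = [('a', 'b', 'c'), ('c', 'b', 'a'), (-4, -5, -6), (0, 0, 0)]

# Type properties fail across the mix of strings and numbers:
print(all(isinstance(a, str) for a, b, c in calls))  # → False
print(all(isinstance(a, int) for a, b, c in calls))  # → False
# Value properties fail too, e.g. "a != 0" breaks on the all-zero call:
print(all(a != 0 for a, b, c in calls))              # → False
```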
## Converting Mined Invariants to Annotations

As with types, above, we would like to have some functionality where we can add the mined invariants as annotations to existing functions. To this end, we introduce the `InvariantAnnotator` class, extending `InvariantTracker`. We start with a helper method: `params()` returns a comma-separated list of parameter names as observed during calls. | class InvariantAnnotator(InvariantTracker):
def params(self, function_name):
arguments, return_value = self.calls(function_name)[0]
return ", ".join(arg_name for (arg_name, arg_value) in arguments)
with InvariantAnnotator() as annotator:
y = my_sqrt(25.0)
y = sum3(1, 2, 3)
annotator.params('my_sqrt')
annotator.params('sum3')
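What `params()` computes can be sketched in isolation, using a hypothetical call record in the shape `calls()` returns – a list of `(arguments, return_value)` pairs, with arguments as `(name, value)` tuples:

```python
# Hypothetical call record, mirroring the structure returned by calls():
calls = {'sum3': [((('a', 1), ('b', 2), ('c', 3)), 6)]}

arguments, return_value = calls['sum3'][0]
params = ", ".join(arg_name for arg_name, arg_value in arguments)
print(params)  # → a, b, c
```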
Now for the actual annotation. `preconditions()` returns the preconditions from the mined invariants (i.e., those properties that do not depend on the return value) as a list of `@precondition` annotation strings: | class InvariantAnnotator(InvariantAnnotator):
def preconditions(self, function_name):
conditions = []
for inv in pretty_invariants(self.invariants(function_name)):
if inv.find(RETURN_VALUE) >= 0:
continue # Postcondition
cond = "@precondition(lambda " + self.params(function_name) + ": " + inv + ")"
conditions.append(cond)
return conditions
with InvariantAnnotator() as annotator:
y = my_sqrt(25.0)
y = my_sqrt(0.01)
y = sum3(1, 2, 3)
annotator.preconditions('my_sqrt')
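The core filtering step can be sketched independently of the annotator: an invariant string that mentions the return value is a postcondition, anything else a precondition (the invariant strings below are hypothetical, for illustration only):

```python
RETURN_VALUE = 'return_value'  # same marker as used above

invariants = ['x > 0', 'isinstance(x, float)', 'return_value >= 0']
preconditions = [inv for inv in invariants if inv.find(RETURN_VALUE) < 0]
postconditions = [inv for inv in invariants if inv.find(RETURN_VALUE) >= 0]
print(preconditions)   # → ['x > 0', 'isinstance(x, float)']
print(postconditions)  # → ['return_value >= 0']
```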
`postconditions()` does the same for postconditions: | class InvariantAnnotator(InvariantAnnotator):
def postconditions(self, function_name):
conditions = []
for inv in pretty_invariants(self.invariants(function_name)):
if inv.find(RETURN_VALUE) < 0:
continue # Precondition
cond = ("@postcondition(lambda " +
RETURN_VALUE + ", " + self.params(function_name) + ": " + inv + ")")
conditions.append(cond)
return conditions
with InvariantAnnotator() as annotator:
y = my_sqrt(25.0)
y = my_sqrt(0.01)
y = sum3(1, 2, 3)
annotator.postconditions('my_sqrt')
With these, we can take a function and add both pre- and postconditions as annotations: | class InvariantAnnotator(InvariantAnnotator):
def functions_with_invariants(self):
functions = ""
for function_name in self.invariants():
try:
function = self.function_with_invariants(function_name)
except KeyError:
continue
functions += function
return functions
def function_with_invariants(self, function_name):
function = globals()[function_name] # Can throw KeyError
source = inspect.getsource(function)
return "\n".join(self.preconditions(function_name) +
self.postconditions(function_name)) + '\n' + source
Here comes `function_with_invariants()` in all its glory: | with InvariantAnnotator() as annotator:
y = my_sqrt(25.0)
y = my_sqrt(0.01)
y = sum3(1, 2, 3)
print_content(annotator.function_with_invariants('my_sqrt'), '.py') | @precondition(lambda x: isinstance(x, float))
@precondition(lambda x: x != 0)
@precondition(lambda x: x > 0)
@precondition(lambda x: x >= 0)
@postcondition(lambda return_value, x: isinstance(return_value, float))
@postcondition(lambda return_value, x: return_value != 0)
@postcondition(lambda return_value, x: return_value > 0)
@postcondition(lambda return_value, x: return_value >= 0)
def my_sqrt(x):
    """Computes the square root of x, using the Newton-Raphson method"""
    approx = None
    guess = x / 2
    while approx != guess:
        approx = guess
        guess = (approx + x / approx) / 2
    return approx
Quite a lot of invariants, isn't it? Further below (and in the exercises), we will discuss how to focus on the most relevant properties.

### Some Examples

Here's another example. `list_length()` recursively computes the length of a Python list. Let us see whether we can mine its invariants: | def list_length(L):
if L == []:
length = 0
else:
length = 1 + list_length(L[1:])
return length
with InvariantAnnotator() as annotator:
length = list_length([1, 2, 3])
print_content(annotator.functions_with_invariants(), '.py') | @precondition(lambda L: L != 0)
@precondition(lambda L: isinstance(L, list))
@postcondition(lambda return_value, L: isinstance(return_value, int))
@postcondition(lambda return_value, L: return_value == len(L))
@postcondition(lambda return_value, L: return_value >= 0)
def list_length(L):
    if L == []:
        length = 0
    else:
        length = 1 + list_length(L[1:])
    return length
Almost all these properties (except for the very first) are relevant. Of course, the reason the invariants are so neat – notably, that the return value is equal to `len(L)` – is that `X == len(Y)` is part of the list of properties to be checked. The next example is a very simple function: | def sum2(a, b):
return a + b
with InvariantAnnotator() as annotator:
sum2(31, 45)
sum2(0, 0)
sum2(-1, -5)
The invariants all capture the relationship between `a`, `b`, and the return value as `return_value == a + b` in all its variations. | print_content(annotator.functions_with_invariants(), '.py') | @precondition(lambda a, b: isinstance(a, int))
@precondition(lambda a, b: isinstance(b, int))
@postcondition(lambda return_value, a, b: a == return_value - b)
@postcondition(lambda return_value, a, b: b == return_value - a)
@postcondition(lambda return_value, a, b: isinstance(return_value, int))
@postcondition(lambda return_value, a, b: return_value == a + b)
@postcondition(lambda return_value, a, b: return_value == b + a)
def sum2(a, b):
    return a + b
If we have a function without return value, the return value is `None` and we can only mine preconditions. (Well, we get a "postcondition" that the return value is non-zero, which holds for `None`). | def print_sum(a, b):
print(a + b)
with InvariantAnnotator() as annotator:
print_sum(31, 45)
print_sum(0, 0)
print_sum(-1, -5)
print_content(annotator.functions_with_invariants(), '.py') | @precondition(lambda a, b: isinstance(a, int))
@precondition(lambda a, b: isinstance(b, int))
@postcondition(lambda return_value, a, b: return_value != 0)
def print_sum(a, b):
    print(a + b)
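That a `None` result still satisfies the mined `return_value != 0` "postcondition" can be verified directly – a standalone two-line check:

```python
# print() returns None, and None compares unequal to 0:
return_value = print(0 + 0)     # prints 0, returns None
print(return_value is None)     # → True
print(return_value != 0)        # → True
```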
## Checking Specifications

A function with invariants, as above, can be fed into the Python interpreter, such that all pre- and postconditions are checked. We create a function `my_sqrt_annotated()` which includes all the invariants mined above. | with InvariantAnnotator() as annotator:
y = my_sqrt(25.0)
y = my_sqrt(0.01)
my_sqrt_def = annotator.functions_with_invariants()
my_sqrt_def = my_sqrt_def.replace('my_sqrt', 'my_sqrt_annotated')
print_content(my_sqrt_def, '.py')
exec(my_sqrt_def)
The "annotated" version checks against invalid arguments – or more precisely, against arguments with properties that have not been observed yet: | with ExpectError():
my_sqrt_annotated(-1.0) | Traceback (most recent call last):
File "<ipython-input-170-c3c5c372ccd1>", line 2, in <module>
my_sqrt_annotated(-1.0)
File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper
retval = func(*args, **kwargs) # call original function or method
File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper
retval = func(*args, **kwargs) # call original function or method
File "<ipython-input-100-39ada1fd0b7e>", line 6, in wrapper
assert precondition(*args, **kwargs), "Precondition violated"
AssertionError: Precondition violated (expected)
This is in contrast to the original version, which just hangs on negative values: | with ExpectTimeout(1):
my_sqrt(-1.0) | Traceback (most recent call last):
File "<ipython-input-171-afc7add26ad6>", line 2, in <module>
my_sqrt(-1.0)
File "<ipython-input-5-47185ad159a1>", line 7, in my_sqrt
guess = (approx + x / approx) / 2
File "<ipython-input-5-47185ad159a1>", line 7, in my_sqrt
guess = (approx + x / approx) / 2
File "ExpectError.ipynb", line 59, in check_time
TimeoutError (expected)
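The difference can be sketched directly: a hand-written guard mirroring the mined precondition `x > 0` (an assumption, not part of the chapter's decorator machinery) turns the infinite loop into an immediate, diagnosable failure:

```python
def my_sqrt_guarded(x):
    """Newton-Raphson square root with a manual precondition check (sketch)."""
    assert x > 0, "Precondition violated: x > 0"
    approx, guess = None, x / 2
    while approx != guess:
        approx = guess
        guess = (approx + x / approx) / 2
    return approx

print(my_sqrt_guarded(4.0))  # → 2.0

try:
    my_sqrt_guarded(-1.0)    # fails fast instead of hanging
except AssertionError as e:
    print(e)                 # → Precondition violated: x > 0
```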
If we make changes to the function definition such that the properties of the return value change, such _regressions_ are caught as violations of the postconditions. Let us illustrate this by simply inverting the result, returning $-2$ as the square root of 4. | my_sqrt_def = my_sqrt_def.replace('my_sqrt_annotated', 'my_sqrt_negative')
my_sqrt_def = my_sqrt_def.replace('return approx', 'return -approx')
print_content(my_sqrt_def, '.py')
exec(my_sqrt_def)
Technically speaking, $-2$ _is_ a square root of 4, since $(-2)^2 = 4$ holds. Yet, such a change may be unexpected by callers of `my_sqrt()`, and hence, this would be caught with the first call: | with ExpectError():
my_sqrt_negative(2.0) | Traceback (most recent call last):
File "<ipython-input-175-c80e4295dbf8>", line 2, in <module>
my_sqrt_negative(2.0)
File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper
retval = func(*args, **kwargs) # call original function or method
File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper
retval = func(*args, **kwargs) # call original function or method
File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper
retval = func(*args, **kwargs) # call original function or method
[Previous line repeated 4 more times]
File "<ipython-input-100-39ada1fd0b7e>", line 10, in wrapper
assert postcondition(retval, *args, **kwargs), "Postcondition violated"
AssertionError: Postcondition violated (expected)
We see how pre- and postconditions, as well as types, can serve as *oracles* during testing. In particular, once we have mined them for a set of functions, we can check them again and again with test generators – especially after code changes. The more checks we have, and the more specific they are, the more likely it is we can detect unwanted effects of changes.

## Mining Specifications from Generated Tests

Mined specifications can only be as good as the executions they were mined from. If we only see a single call to, say, `sum2()`, we will be faced with several mined pre- and postconditions that _overspecialize_ towards the values seen: | def sum2(a, b):
return a + b
with InvariantAnnotator() as annotator:
y = sum2(2, 2)
print_content(annotator.functions_with_invariants(), '.py') | @precondition(lambda a, b: a != 0)
@precondition(lambda a, b: a <= b)
@precondition(lambda a, b: a == b)
@precondition(lambda a, b: a > 0)
@precondition(lambda a, b: a >= 0)
@precondition(lambda a, b: a >= b)
@precondition(lambda a, b: b != 0)
@precondition(lambda a, b: b <= a)
@precondition(lambda a, b: b == a)
@precondition(lambda a, b: b > 0)
@precondition(lambda a, b: b >= 0)
@precondition(lambda a, b: b >= a)
@precondition(lambda a, b: isinstance(a, int))
@precondition(lambda a, b: isinstance(b, int))
@postcondition(lambda return_value, a, b: a < return_value)
@postcondition(lambda return_value, a, b: a <= b <= return_value)
@postcondition(lambda return_value, a, b: a <= return_value)
@postcondition(lambda return_value, a, b: a == return_value - b)
@postcondition(lambda return_value, a, b: a == return_value / b)
@postcondition(lambda return_value, a, b: b < return_value)
@postcondition(lambda return_value, a, b: b <= a <= return_value)
@postcondition(lambda return_value, a, b: b <= return_value)
@postcondition(lambda return_value, a, b: b == return_value - a)
@postcondition(lambda return_value, a, b: b == return_value / a)
@postcondition(lambda return_value, a, b: isinstance(return_value, int))
@postcondition(lambda return_value, a, b: return_value != 0)
@postcondition(lambda return_value, a, b: return_value == a * b)
@postcondition(lambda return_value, a, b: return_value == a + b)
@postcondition(lambda return_value, a, b: return_value == b * a)
@postcondition(lambda return_value, a, b: return_value == b + a)
@postcondition(lambda return_value, a, b: return_value > 0)
@postcondition(lambda return_value, a, b: return_value > a)
@postcondition(lambda return_value, a, b: return_value > b)
@postcondition(lambda return_value, a, b: return_value >= 0)
@postcondition(lambda return_value, a, b: return_value >= a)
@postcondition(lambda return_value, a, b: return_value >= a >= b)
@postcondition(lambda return_value, a, b: return_value >= b)
@postcondition(lambda return_value, a, b: return_value >= b >= a)
def sum2(a, b):
    return a + b
The mined precondition `a == b`, for instance, only holds for the single call observed; the same holds for the mined postcondition `return_value == a * b`. Yet, `sum2()` can obviously be successfully called with other values that do not satisfy these conditions. To get out of this trap, we have to _learn from more and more diverse runs_. If we have a few more calls of `sum2()`, we see how the set of invariants quickly gets smaller: | with InvariantAnnotator() as annotator:
length = sum2(1, 2)
length = sum2(-1, -2)
length = sum2(0, 0)
print_content(annotator.functions_with_invariants(), '.py') | @precondition(lambda a, b: isinstance(a, int))
@precondition(lambda a, b: isinstance(b, int))
@postcondition(lambda return_value, a, b: a == return_value - b)
@postcondition(lambda return_value, a, b: b == return_value - a)
@postcondition(lambda return_value, a, b: isinstance(return_value, int))
@postcondition(lambda return_value, a, b: return_value == a + b)
@postcondition(lambda return_value, a, b: return_value == b + a)
def sum2(a, b):
    return a + b
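The shrinking mechanism can be sketched independently of the tracker: a candidate invariant survives only if it holds in every observed call. Here is a minimal sketch with hypothetical candidates and observations:

```python
# Hypothetical observed calls as (a, b, return_value) triples:
observations = [(1, 2, 3), (-1, -2, -3), (0, 0, 0)]

candidates = {
    'a == b':                lambda a, b, r: a == b,
    'return_value == a * b': lambda a, b, r: r == a * b,
    'return_value == a + b': lambda a, b, r: r == a + b,
}

# Only invariants holding across ALL observations survive:
surviving = [name for name, check in candidates.items()
             if all(check(a, b, r) for a, b, r in observations)]
print(surviving)  # → ['return_value == a + b']
```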
But where do we get such diverse runs from? This is the job of generating software tests. A simple grammar for calls of `sum2()` will easily resolve the problem. | from GrammarFuzzer import GrammarFuzzer # minor dependency
from Grammars import is_valid_grammar, crange, convert_ebnf_grammar # minor dependency
SUM2_EBNF_GRAMMAR = {
"<start>": ["<sum2>"],
"<sum2>": ["sum2(<int>, <int>)"],
"<int>": ["<_int>"],
"<_int>": ["(-)?<leaddigit><digit>*", "0"],
"<leaddigit>": crange('1', '9'),
"<digit>": crange('0', '9')
}
assert is_valid_grammar(SUM2_EBNF_GRAMMAR)
sum2_grammar = convert_ebnf_grammar(SUM2_EBNF_GRAMMAR)
sum2_fuzzer = GrammarFuzzer(sum2_grammar)
[sum2_fuzzer.fuzz() for i in range(10)]
with InvariantAnnotator() as annotator:
for i in range(10):
eval(sum2_fuzzer.fuzz())
print_content(annotator.function_with_invariants('sum2'), '.py') | @precondition(lambda a, b: a != 0)
@precondition(lambda a, b: isinstance(a, int))
@precondition(lambda a, b: isinstance(b, int))
@postcondition(lambda return_value, a, b: a == return_value - b)
@postcondition(lambda return_value, a, b: b == return_value - a)
@postcondition(lambda return_value, a, b: isinstance(return_value, int))
@postcondition(lambda return_value, a, b: return_value != 0)
@postcondition(lambda return_value, a, b: return_value == a + b)
@postcondition(lambda return_value, a, b: return_value == b + a)
def sum2(a, b):
    return a + b
But then, writing tests (or a test driver) just to derive a set of pre- and postconditions may possibly be too much effort – in particular, since tests can easily be derived from given pre- and postconditions in the first place. Hence, it would be wiser to first specify invariants and then let test generators or program provers do the job. Also, an API grammar, such as above, will have to be set up such that it actually respects preconditions – in our case, we invoke `sqrt()` with positive numbers only, already assuming its precondition. In some way, one thus needs a specification (a model, a grammar) to mine another specification – a chicken-and-egg problem.

However, there is one way out of this problem: If one can automatically generate tests at the system level, then one has an _infinite source of executions_ to learn invariants from. In each of these executions, all functions would be called with values that satisfy the (implicit) precondition, allowing us to mine invariants for these functions. This holds because at the system level, invalid inputs must be rejected by the system in the first place. The meaningful precondition at the system level, ensuring that only valid inputs get through, thus gets broken down into a multitude of meaningful preconditions (and subsequent postconditions) at the function level. The big requirement for this, though, is that one needs good test generators at the system level. In [the next part](05_Domain-Specific_Fuzzing.ipynb), we will discuss how to automatically generate tests for a variety of domains, from configuration to graphical user interfaces.

## Synopsis

This chapter provides two classes that automatically extract specifications from a function and a set of inputs:

* `TypeAnnotator` for _types_, and
* `InvariantAnnotator` for _pre-_ and _postconditions_.

Both work by _observing_ a function and its invocations within a `with` clause. Here is an example for the type annotator: | def sum2(a, b):
return a + b
with TypeAnnotator() as type_annotator:
sum2(1, 2)
sum2(-4, -5)
sum2(0, 0)
The `typed_functions()` method will return a representation of `sum2()` annotated with types observed during execution. | print(type_annotator.typed_functions()) | def sum2(a: int, b: int) ->int:
return a + b
The invariant annotator works in a similar fashion: | with InvariantAnnotator() as inv_annotator:
sum2(1, 2)
sum2(-4, -5)
sum2(0, 0)
The `functions_with_invariants()` method will return a representation of `sum2()` annotated with inferred pre- and postconditions that all hold for the observed values. | print(inv_annotator.functions_with_invariants()) | @precondition(lambda a, b: isinstance(a, int))
@precondition(lambda a, b: isinstance(b, int))
@postcondition(lambda return_value, a, b: a == return_value - b)
@postcondition(lambda return_value, a, b: b == return_value - a)
@postcondition(lambda return_value, a, b: isinstance(return_value, int))
@postcondition(lambda return_value, a, b: return_value == a + b)
@postcondition(lambda return_value, a, b: return_value == b + a)
def sum2(a, b):
return a + b
Such type specifications and invariants can be helpful as _oracles_ (to detect deviations from a given set of runs) as well as for all kinds of _symbolic code analyses_. The chapter gives details on how to customize the properties checked for.

## Lessons Learned

* Type annotations and explicit invariants allow for _checking_ arguments and results for expected data types and other properties.
* One can automatically _mine_ data types and invariants by observing arguments and results at runtime.
* The quality of mined invariants depends on the diversity of values observed during executions; this variety can be increased by generating tests.

## Next Steps

This chapter concludes the [part on semantical fuzzing techniques](04_Semantical_Fuzzing.ipynb). In the next part, we will explore [domain-specific fuzzing techniques](05_Domain-Specific_Fuzzing.ipynb), from configurations and APIs to graphical user interfaces.

## Background

The [DAIKON dynamic invariant detector](https://plse.cs.washington.edu/daikon/) can be considered the mother of function specification miners. Continuously maintained and extended for more than 20 years, it mines likely invariants in the style of this chapter for a variety of languages, including C, C++, C#, Eiffel, F#, Java, Perl, and Visual Basic. On top of the functionality discussed above, it holds a rich catalog of patterns for likely invariants, supports data invariants, can eliminate invariants that are implied by others, and determines statistical confidence to disregard unlikely invariants. The corresponding paper \cite{Ernst2001} is one of the seminal and most-cited papers of Software Engineering. A multitude of works have been published based on DAIKON and detecting invariants; see this [curated list](http://plse.cs.washington.edu/daikon/pubs/) for details. The interaction between test generators and invariant detection is already discussed in \cite{Ernst2001} (incidentally also using grammars).
The Eclat tool \cite{Pacheco2005} is a model example of tight interaction between a unit-level test generator and DAIKON-style invariant mining, where the mined invariants are used to produce oracles and to systematically guide the test generator towards fault-revealing inputs. Mining specifications is not restricted to pre- and postconditions. The paper "Mining Specifications" \cite{Ammons2002} is another classic in the field, learning state protocols from executions. Grammar mining, as described in [our chapter with the same name](GrammarMiner.ipynb), can also be seen as a specification mining approach, this time learning specifications for input formats. When it comes to adding type annotations to existing code, the blog post ["The state of type hints in Python"](https://www.bernat.tech/the-state-of-type-hints-in-python/) gives a great overview of how Python type hints can be used and checked. To add type annotations, there are two important tools available that also implement our above approach:

* [MonkeyType](https://instagram-engineering.com/let-your-code-type-hint-itself-introducing-open-source-monkeytype-a855c7284881) implements the above approach of tracing executions and annotating Python 3 arguments, returns, and variables with type hints.
* [PyAnnotate](https://github.com/dropbox/pyannotate) does a similar job, focusing on code in Python 2. It does not produce Python 3-style annotations, but instead produces annotations as comments that can be processed by static type checkers.

These tools have been created by engineers at Facebook and Dropbox, respectively, assisting them in checking millions of lines of code for type issues.

## Exercises

Our code for mining types and invariants is in no way complete. There are dozens of ways to extend our implementations, some of which we discuss in the exercises.

### Exercise 1: Union Types

The Python `typing` module allows expressing that an argument can have multiple types.
For `my_sqrt(x)`, this allows expressing that `x` can be an `int` or a `float`: | from typing import Union, Optional
def my_sqrt_with_union_type(x: Union[int, float]) -> float:
...
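As an illustration of what such union annotations could look like when derived from observations, here is a hedged sketch – `union_type_string()` is a hypothetical helper, not part of the chapter's implementation:

```python
def union_type_string(observed_types):
    """Hypothetical helper: render a set of observed types as an annotation."""
    names = sorted(t.__name__ for t in observed_types)
    if len(names) == 1:
        return names[0]
    return "Union[" + ", ".join(names) + "]"

print(union_type_string({int}))         # → int
print(union_type_string({int, float}))  # → Union[float, int]
```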
Extend the `TypeAnnotator` such that it supports union types for arguments and return values. Use `Optional[X]` as a shorthand for `Union[X, None]`. **Solution.** Left to the reader. Hint: extend `type_string()`.

### Exercise 2: Types for Local Variables

In Python, one can not only annotate arguments with types, but also local and global variables – for instance, `approx` and `guess` in our `my_sqrt()` implementation: | def my_sqrt_with_local_types(x: Union[int, float]) -> float:
"""Computes the square root of x, using the Newton-Raphson method"""
approx: Optional[float] = None
guess: float = x / 2
while approx != guess:
approx: float = guess
guess: float = (approx + x / approx) / 2
return approx
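As a starting point for annotating such variables, a small sketch (assuming only the standard `ast` module, no chapter-specific helpers) shows how assignments can be located in a function's AST – the first step before turning each into an annotated assignment:

```python
import ast

source = "def f(x):\n    guess = x / 2\n    return guess"
tree = ast.parse(source)

# Collect the names of all plain assignment targets in the function body;
# these are the spots where a type annotation could be attached:
targets = [node.targets[0].id
           for node in ast.walk(tree)
           if isinstance(node, ast.Assign)]
print(targets)  # → ['guess']
```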
Extend the `TypeAnnotator` such that it also annotates local variables with types. Search the function AST for assignments, determine the type of the assigned value, and make it an annotation on the left-hand side. **Solution.** Left to the reader.

### Exercise 3: Verbose Invariant Checkers

Our implementation of invariant checkers does not make it clear to the user which pre-/postcondition failed. | @precondition(lambda s: len(s) > 0)
def remove_first_char(s):
return s[1:]
with ExpectError():
remove_first_char('') | Traceback (most recent call last):
File "<ipython-input-193-dda18930f6db>", line 2, in <module>
remove_first_char('')
File "<ipython-input-100-39ada1fd0b7e>", line 6, in wrapper
assert precondition(*args, **kwargs), "Precondition violated"
AssertionError: Precondition violated (expected)
The following implementation adds an optional `doc` keyword argument which is printed if the invariant is violated: | def condition(precondition=None, postcondition=None, doc='Unknown'):
def decorator(func):
@functools.wraps(func) # preserves name, docstring, etc
def wrapper(*args, **kwargs):
if precondition is not None:
assert precondition(*args, **kwargs), "Precondition violated: " + doc
retval = func(*args, **kwargs) # call original function or method
if postcondition is not None:
assert postcondition(retval, *args, **kwargs), "Postcondition violated: " + doc
return retval
return wrapper
return decorator
def precondition(check, **kwargs):
return condition(precondition=check, doc=kwargs.get('doc', 'Unknown'))
def postcondition(check, **kwargs):
return condition(postcondition=check, doc=kwargs.get('doc', 'Unknown'))
@precondition(lambda s: len(s) > 0, doc="len(s) > 0")
def remove_first_char(s):
return s[1:]
remove_first_char('abc')
with ExpectError():
remove_first_char('') | Traceback (most recent call last):
File "<ipython-input-196-dda18930f6db>", line 2, in <module>
remove_first_char('')
File "<ipython-input-194-683ee268305f>", line 6, in wrapper
assert precondition(*args, **kwargs), "Precondition violated: " + doc
AssertionError: Precondition violated: len(s) > 0 (expected)
Extend `InvariantAnnotator` such that it includes the conditions in the generated pre- and postconditions. **Solution.** Here's a simple solution: | class InvariantAnnotator(InvariantAnnotator):
def preconditions(self, function_name):
conditions = []
for inv in pretty_invariants(self.invariants(function_name)):
if inv.find(RETURN_VALUE) >= 0:
continue # Postcondition
cond = "@precondition(lambda " + self.params(function_name) + ": " + inv + ', doc=' + repr(inv) + ")"
conditions.append(cond)
return conditions
class InvariantAnnotator(InvariantAnnotator):
    def postconditions(self, function_name):
        conditions = []
        for inv in pretty_invariants(self.invariants(function_name)):
            if inv.find(RETURN_VALUE) < 0:
                continue  # Precondition
            cond = ("@postcondition(lambda " +
                    RETURN_VALUE + ", " + self.params(function_name) + ": " + inv + ', doc=' + repr(inv) + ")")
            conditions.append(cond)
        return conditions
The resulting annotations are harder to read, but easier to diagnose:

with InvariantAnnotator() as annotator:
    y = sum2(2, 2)
print_content(annotator.functions_with_invariants(), '.py')

@precondition(lambda a, b: a != 0, doc='a != 0')
@precondition(lambda a, b: a <= b, doc='a <= b')
@precondition(lambda a, b: a == b, doc='a == b')
@precondition(lambda a, b: a > 0, doc='a > 0')
@precondition(lambda a, b: a >= 0, doc='a >= 0')
@precondition(lambda a, b: a >= b, doc='a >= b')
@precondition(lambda a, b: b != 0, doc='b != 0')
@precondition(lambda a, b: b <= a, doc='b <= a')
@precondition(lambda a, b: b == a, doc='b == a')
@precondition(lambda a, b: b > 0, doc='b > 0')
@precondition(lambda a, b: b >= 0, doc='b >= 0')
@precondition(lambda a, b: b >= a, doc='b >= a')
@precondition(lambda a, b: isinstance(a, int), doc='isinstance(a, int)')
@precondition(lambda a, b: isinstance(b, int), doc='isinstance(b, int)')
@postcondition(lambda return_value, a, b: a < return_value, doc='a < return_value')
@postcondition(lambda return_value, a, b: a <= b <= return_value, doc='a <= b <= return_value')
@postcondition(lambda return_value, a, b: a <= return_value, doc='a <= return_value')
@postcondition(lambda return_value, a, b: a == return_value - b, doc='a == return_value - b')
@postcondition(lambda return_value, a, b: a == return_value / b, doc='a == return_value / b')
@postcondition(lambda return_value, a, b: b < return_value, doc='b < return_value')
@postcondition(lambda return_value, a, b: b <= a <= return_value, doc='b <= a <= return_value')
@postcondition(lambda return_value, a, b: b <= return_value, doc='b <= return_value')
@postcondition(lambda return_value, a, b: b == return_value - a, doc='b == return_value - a')
@postcondition(lambda return_value, a, b: b == return_value / a, doc='b == return_value / a')
@postcondition(lambda return_value, a, b: isinstance(return_value, int), doc='isinstance(return_value, int)')
@postcondition(lambda return_value, a, b: return_value != 0, doc='return_value != 0')
@postcondition(lambda return_value, a, b: return_value == a * b, doc='return_value == a * b')
@postcondition(lambda return_value, a, b: return_value == a + b, doc='return_value == a + b')
@postcondition(lambda return_value, a, b: return_value == b * a, doc='return_value == b * a')
@postcondition(lambda return_value, a, b: return_value == b + a, doc='return_value == b + a')
@postcondition(lambda return_value, a, b: return_value > 0, doc='return_value > 0')
@postcondition(lambda return_value, a, b: return_value > a, doc='return_value > a')
@postcondition(lambda return_value, a, b: return_value > b, doc='return_value > b')
@postcondition(lambda return_value, a, b: return_value >= 0, doc='return_value >= 0')
@postcondition(lambda return_value, a, b: return_value >= a, doc='return_value >= a')
@postcondition(lambda return_value, a, b: return_value >= a >= b, doc='return_value >= a >= b')
@postcondition(lambda return_value, a, b: return_value >= b, doc='return_value >= b')
@postcondition(lambda return_value, a, b: return_value >= b >= a, doc='return_value >= b >= a')
def sum2(a, b):
    return a + b
As an alternative, one may be able to use `inspect.getsource()` on the lambda expression or unparse it. This is left to the reader.

### Exercise 4: Save Initial Values

If the value of an argument changes during function execution, this can easily confuse our implementation: the values are tracked at the beginning of the function, but checked only when it returns. Extend the `InvariantAnnotator` and the infrastructure it uses such that

* it saves argument values both at the beginning and at the end of a function invocation;
* postconditions can be expressed over both _initial_ values of arguments as well as the _final_ values of arguments;
* the mined postconditions refer to both these values as well.

**Solution.** To be added.

### Exercise 5: Implications

Several mined invariants are actually _implied_ by others: If `x > 0` holds, then this implies `x >= 0` and `x != 0`. Extend the `InvariantAnnotator` such that implications between properties are explicitly encoded, and such that implied properties are no longer listed as invariants. See \cite{Ernst2001} for ideas.

**Solution.** Left to the reader.

### Exercise 6: Local Variables

Postconditions may also refer to the values of local variables. Consider extending `InvariantAnnotator` and its infrastructure such that the values of local variables at the end of the execution are also recorded and made part of the invariant inference mechanism.

**Solution.** Left to the reader.

### Exercise 7: Exploring Invariant Alternatives

After mining a first set of invariants, have a [concolic fuzzer](ConcolicFuzzer.ipynb) generate tests that systematically attempt to invalidate pre- and postconditions. How far can you generalize?

**Solution.** To be added.

### Exercise 8: Grammar-Generated Properties

The larger the set of properties to be checked, the more potential invariants can be discovered. Create a _grammar_ that systematically produces a large set of properties. See \cite{Ernst2001} for possible patterns.

**Solution.** Left to the reader.
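For Exercise 5, here is a minimal sketch of what implication pruning could look like, assuming invariants are given as plain strings and hard-coding only the two implications mentioned above. The function `prune_implied` and its implication table are invented for illustration; they are not part of the chapter's infrastructure:

```python
def prune_implied(invariants):
    # Hypothetical implication table: a stronger invariant (key)
    # makes the weaker ones (values) redundant.
    implications = {
        'x > 0': {'x >= 0', 'x != 0'},
    }
    implied = set()
    for inv in invariants:
        implied |= implications.get(inv, set())
    # Keep only invariants that are not implied by another one
    return [inv for inv in invariants if inv not in implied]

print(prune_implied(['x > 0', 'x >= 0', 'x != 0']))  # ['x > 0']
```

A real solution would derive the implication relation from the properties themselves (e.g., by checking entailment between comparison operators) rather than from a fixed table.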
### Exercise 9: Embedding Invariants as Assertions

Rather than producing invariants as annotations for pre- and postconditions, insert them as `assert` statements into the function code, as in:

```python
def my_sqrt(x):
    'Computes the square root of x, using the Newton-Raphson method'
    assert isinstance(x, int), 'violated precondition'
    assert (x > 0), 'violated precondition'
    approx = None
    guess = (x / 2)
    while (approx != guess):
        approx = guess
        guess = ((approx + (x / approx)) / 2)
    return_value = approx
    assert (return_value < x), 'violated postcondition'
    assert isinstance(return_value, float), 'violated postcondition'
    return approx
```

Such a formulation may make it easier for test generators and symbolic analysis to access and interpret pre- and postconditions.

**Solution.** Here is a tentative implementation that inserts invariants into function ASTs.

#### Part 1: Embedding Invariants into Functions

class EmbeddedInvariantAnnotator(InvariantTracker):
    def functions_with_invariants_ast(self, function_name=None):
        if function_name is None:
            return annotate_functions_with_invariants(self.invariants())
        return annotate_function_with_invariants(function_name, self.invariants(function_name))

    def functions_with_invariants(self, function_name=None):
        if function_name is None:
            functions = ''
            for f_name in self.invariants():
                try:
                    f_text = astor.to_source(self.functions_with_invariants_ast(f_name))
                except KeyError:
                    f_text = ''
                functions += f_text
            return functions
        return astor.to_source(self.functions_with_invariants_ast(function_name))

    def function_with_invariants(self, function_name):
        return self.functions_with_invariants(function_name)

    def function_with_invariants_ast(self, function_name):
        return self.functions_with_invariants_ast(function_name)
def annotate_functions_with_invariants(invariants):
    annotated_functions = {}
    for function_name in invariants:
        try:
            annotated_functions[function_name] = annotate_function_with_invariants(function_name, invariants[function_name])
        except KeyError:
            continue
    return annotated_functions

def annotate_function_with_invariants(function_name, function_invariants):
    function = globals()[function_name]
    function_code = inspect.getsource(function)
    function_ast = ast.parse(function_code)
    return annotate_function_ast_with_invariants(function_ast, function_invariants)

def annotate_function_ast_with_invariants(function_ast, function_invariants):
    annotated_function_ast = EmbeddedInvariantTransformer(function_invariants).visit(function_ast)
    return annotated_function_ast
#### Part 2: Preconditions

class PreconditionTransformer(ast.NodeTransformer):
    def __init__(self, invariants):
        self.invariants = invariants
        super().__init__()

    def preconditions(self):
        preconditions = []
        for (prop, var_names) in self.invariants:
            assertion = "assert " + instantiate_prop(prop, var_names) + ', "violated precondition"'
            assertion_ast = ast.parse(assertion)
            if assertion.find(RETURN_VALUE) < 0:
                preconditions += assertion_ast.body
        return preconditions

    def insert_assertions(self, body):
        preconditions = self.preconditions()
        try:
            docstring = body[0].value.s
        except:
            docstring = None
        if docstring:
            return [body[0]] + preconditions + body[1:]
        else:
            return preconditions + body

    def visit_FunctionDef(self, node):
        """Add invariants to function"""
        # print(ast.dump(node))
        node.body = self.insert_assertions(node.body)
        return node

class EmbeddedInvariantTransformer(PreconditionTransformer):
    pass

with EmbeddedInvariantAnnotator() as annotator:
    my_sqrt(5)
print_content(annotator.functions_with_invariants(), '.py')
with EmbeddedInvariantAnnotator() as annotator:
    y = sum3(3, 4, 5)
    y = sum3(-3, -4, -5)
    y = sum3(0, 0, 0)
print_content(annotator.functions_with_invariants(), '.py')

def sum3(a, b, c):
    assert isinstance(c, int), 'violated precondition'
    assert isinstance(b, int), 'violated precondition'
    assert isinstance(a, int), 'violated precondition'
    return a + b + c
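The transformation above can be seen in miniature without the chapter's infrastructure: parse a function, prepend an `assert` node to its body, and turn the tree back into source. This standalone sketch uses `ast.unparse` (Python 3.9+) instead of `astor`; the function `double` and the inserted check are invented for illustration:

```python
import ast

tree = ast.parse("def double(x):\n    return 2 * x\n")

# Parse the assertion once and splice its statements into the function body,
# just as PreconditionTransformer does with its generated assertions.
check = ast.parse("assert x != 0, 'violated precondition'").body
tree.body[0].body = check + tree.body[0].body

src = ast.unparse(ast.fix_missing_locations(tree))
print(src)

exec(src, globals())
print(double(3))  # 6
```

Calling `double(0)` afterwards raises `AssertionError: violated precondition`, exactly like the embedded invariants generated above.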
#### Part 3: Postconditions

We make a few simplifying assumptions:

* Variables do not change during execution.
* There is a single `return` statement at the end of the function.

class EmbeddedInvariantTransformer(PreconditionTransformer):
    def postconditions(self):
        postconditions = []
        for (prop, var_names) in self.invariants:
            assertion = "assert " + instantiate_prop(prop, var_names) + ', "violated postcondition"'
            assertion_ast = ast.parse(assertion)
            if assertion.find(RETURN_VALUE) >= 0:
                postconditions += assertion_ast.body
        return postconditions

    def insert_assertions(self, body):
        new_body = super().insert_assertions(body)
        postconditions = self.postconditions()
        body_ends_with_return = isinstance(new_body[-1], ast.Return)
        if body_ends_with_return:
            saver = RETURN_VALUE + " = " + astor.to_source(new_body[-1].value)
        else:
            saver = RETURN_VALUE + " = None"
        saver_ast = ast.parse(saver)
        postconditions = [saver_ast] + postconditions
        if body_ends_with_return:
            return new_body[:-1] + postconditions + [new_body[-1]]
        else:
            return new_body + postconditions

with EmbeddedInvariantAnnotator() as annotator:
    my_sqrt(5)
my_sqrt_def = annotator.functions_with_invariants()
Here's the full definition with included assertions:

print_content(my_sqrt_def, '.py')
exec(my_sqrt_def.replace('my_sqrt', 'my_sqrt_annotated'))
with ExpectError():
    my_sqrt_annotated(-1)

Traceback (most recent call last):
  File "<ipython-input-214-bf1ed929743a>", line 2, in <module>
    my_sqrt_annotated(-1)
  File "<string>", line 3, in my_sqrt_annotated
AssertionError: violated precondition (expected)
Here come some more examples:

with EmbeddedInvariantAnnotator() as annotator:
    y = sum3(3, 4, 5)
    y = sum3(-3, -4, -5)
    y = sum3(0, 0, 0)
print_content(annotator.functions_with_invariants(), '.py')

with EmbeddedInvariantAnnotator() as annotator:
    length = list_length([1, 2, 3])
print_content(annotator.functions_with_invariants(), '.py')

with EmbeddedInvariantAnnotator() as annotator:
    print_sum(31, 45)
print_content(annotator.functions_with_invariants(), '.py')

def print_sum(a, b):
    assert a <= b, 'violated precondition'
    assert b > 0, 'violated precondition'
    assert b != 0, 'violated precondition'
    assert b >= a, 'violated precondition'
    assert a >= 0, 'violated precondition'
    assert b >= 0, 'violated precondition'
    assert a < b, 'violated precondition'
    assert a > 0, 'violated precondition'
    assert a != 0, 'violated precondition'
    assert b > a, 'violated precondition'
    assert isinstance(b, int), 'violated precondition'
    assert isinstance(a, int), 'violated precondition'
    print(a + b)
    return_value = None
    assert return_value != 0, 'violated postcondition'
Maclaurin series for $\sin(x)$ is:

\begin{align}
\sin(x)
&= \sum_{k=0}^{\infty} \frac{ (-1)^k }{ (2k+1)! } x^{2k+1} \\
&= x - \frac{1}{3!} x^3 + \frac{1}{5!} x^5 - \frac{1}{7!} x^7 + \frac{1}{9!} x^9 - \frac{1}{11!} x^{11} + \ldots \\
%%% &= x \left( 1 - \frac{1}{2.3} x^2 \left( 1 - \frac{1}{4.5} x^2 \left( 1 - \frac{1}{6.7} x^2 \left( 1 - \frac{1}{8.9} x^2 \left( 1 - \frac{1}{10.11} x^{2} \left( \ldots \right) \right) \right) \right) \right) \right) \\
&= x \left( 1 - \frac{1}{2.3} x^2 \right) + \frac{1}{5!} x^5 \left( 1 - \frac{1}{6.7} x^2 \right) + \frac{1}{9!} x^9 \left( 1 - \frac{1}{10.11} x^2 \right) + \ldots \\
&= \sum_{k=0}^{\infty} \frac{x^{4k+1}}{(4k+1)!} \left( 1 - \frac{x^2}{(4k+2)(4k+3)} \right) \\
&= x \sum_{k=0}^{\infty} \frac{x^{4k}}{(4k+1)!} \left( 1 - \frac{x^2}{(4k+2)(4k+3)} \right)
\end{align}

The roundoff error is associated with the addition/subtraction involving the largest term, which (for $|x|<6$) will be the first term, so it is of order $|x|\epsilon$.

# Significance of each term to leading term
k, eps = numpy.arange(1,30,2), numpy.finfo(float).eps
n = (k+1)/2
print('epsilon = %.2e'%eps, "= 2**%i"%int(math.log(eps)/math.log(2)))
plt.semilogy(n, eps * (1+0*n), 'k--', label=r'$\epsilon$' )
plt.semilogy(n, (numpy.pi-eps)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\pi-\epsilon$' );
plt.semilogy(n, (numpy.pi/6*5)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=5\pi/6$ (150$^\circ$)' );
plt.semilogy(n, (numpy.pi/3*2)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=2\pi/3$ (120$^\circ$)' );
plt.semilogy(n, (numpy.pi/2)**(k-1) / scipy.special.factorial(k), 'o-', label=r'$x=\pi/2$ (90$^\circ$)' );
plt.semilogy(n, (numpy.pi/3)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\pi/3$ (60$^\circ$)' );
plt.semilogy(n, (numpy.pi/4)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\pi/4$ (45$^\circ$)' );
plt.semilogy(n, (numpy.pi/6)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\pi/6$ (30$^\circ$)' );
plt.semilogy(n, (numpy.pi/18)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\pi/18$ (10$^\circ$)' );
plt.semilogy(n, (numpy.pi/180)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\pi/180$ (1$^\circ$)' );
plt.gca().set_xticks(numpy.arange(1,16)); plt.legend(); plt.xlabel('Terms, n = (k+1)/2'); plt.ylim(1e-17,3);
plt.title(r'$\frac{1}{k!}x^{k-1}$');

epsilon = 2.22e-16 = 2**-52
| MIT | Calculation of sin(x).ipynb | adcroft/intrinsics |
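The claim that the roundoff error is of order $|x|\epsilon$ can be spot-checked directly: `numpy.spacing(x)` gives the gap between `x` and the next representable double, which for any $x \in [1, 2)$ — including $x=\pi/2$ — is exactly $2^{-52}$. This is a standalone check, not part of the original notebook:

```python
import numpy as np

x = np.pi / 2
gap = np.spacing(x)  # distance from x to the next representable double
print(gap)           # 2**-52 for any x in [1, 2)
print(x * np.finfo(float).eps)  # the |x|*eps estimate, same order of magnitude
```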
\begin{align}
\sin(x)
&\approx x - \frac{1}{3!} x^3 + \frac{1}{5!} x^5 - \frac{1}{7!} x^7 + \frac{1}{9!} x^9 - \frac{1}{11!} x^{11} + \ldots \\
&= x \left( 1 - \frac{1}{2.3} x^2 \left( 1 - \frac{1}{4.5} x^2 \left( 1 - \frac{1}{6.7} x^2 \left( 1 - \frac{1}{8.9} x^2 \left( 1 - \frac{1}{10.11} x^{2} \left( \ldots \right) \right) \right) \right) \right) \right) \\
&= x \left( 1 - c_1 x^2 \left( 1 - c_2 x^2 \left( 1 - c_3 x^2 \left( 1 - c_4 x^2 \left( 1 - c_5 x^{2} \left( \ldots \right) \right) \right) \right) \right) \right)
\;\;\mbox{where}\;\; c_j = \frac{1}{2j(2j+1)}
\end{align}

# Coefficients in series
print(' t',' k','%26s'%'(2k+1)!','%22s'%'1/(2k+1)!','1/c[t]','%21s'%'c[t]')
for t in range(1,17):
    k = 2*t - 1
    print('%2i'%t, '%2i'%k, '%26i'%math.factorial(k), '%.16e'%(1./math.factorial(k)),'%5i'%(2*t*(2*t+1)),'%.16e'%(1./(2*t*(2*t+1))))

 t  k                    (2k+1)!              1/(2k+1)! 1/c[t]                 c[t]
1 1 1 1.0000000000000000e+00 6 1.6666666666666666e-01
2 3 6 1.6666666666666666e-01 20 5.0000000000000003e-02
3 5 120 8.3333333333333332e-03 42 2.3809523809523808e-02
4 7 5040 1.9841269841269841e-04 72 1.3888888888888888e-02
5 9 362880 2.7557319223985893e-06 110 9.0909090909090905e-03
6 11 39916800 2.5052108385441720e-08 156 6.4102564102564100e-03
7 13 6227020800 1.6059043836821613e-10 210 4.7619047619047623e-03
8 15 1307674368000 7.6471637318198164e-13 272 3.6764705882352941e-03
9 17 355687428096000 2.8114572543455206e-15 342 2.9239766081871343e-03
10 19 121645100408832000 8.2206352466243295e-18 420 2.3809523809523812e-03
11 21 51090942171709440000 1.9572941063391263e-20 506 1.9762845849802370e-03
12 23 25852016738884976640000 3.8681701706306835e-23 600 1.6666666666666668e-03
13 25 15511210043330985984000000 6.4469502843844736e-26 702 1.4245014245014246e-03
14 27 10888869450418352160768000000 9.1836898637955460e-29 812 1.2315270935960591e-03
15 29 8841761993739701954543616000000 1.1309962886447718e-31 930 1.0752688172043011e-03
16 31 8222838654177922817725562880000000 1.2161250415535181e-34 1056 9.4696969696969700e-04
\begin{align}
\sin(x)
&\approx x - \frac{1}{3!} x^3 + \frac{1}{5!} x^5 - \frac{1}{7!} x^7 + \frac{1}{9!} x^9 - \frac{1}{11!} x^{11} + \ldots \\
&= x \left( 1 - \frac{1}{2.3} x^2 \right) + \frac{1}{5!} x^5 \left( 1 - \frac{1}{6.7} x^2 \right) + \frac{1}{9!} x^9 \left( 1 - \frac{1}{10.11} x^2 \right) + \ldots \\
&= \sum_{l=0}^{\infty} \frac{x^{4l+1}}{(4l+1)!} \left( 1 - \frac{x^2}{(4l+2)(4l+3)} \right) \\
&= \sum_{l=0}^{\infty} \frac{x^{4l+1}}{a_l} \left( 1 - \frac{x^2}{b_l} \right)
\;\;\mbox{where}\;\; a_l=(4l+1)! \;\;\mbox{and}\;\; b_l=(4l+2)(4l+3) \\
&= x \sum_{l=0}^{\infty} \frac{x^{4l}}{(4l+1)!} \left( 1 - \frac{x^2}{(4l+2)(4l+3)} \right) \\
&= x \sum_{l=0}^{\infty} \frac{x^{4l}}{a_l} \left( 1 - \frac{x^2}{b_l} \right) \\
&= x \sum_{l=0}^{\infty} f_l \left( 1 - g_l \right)
\;\;\mbox{where}\;\; f_l=\frac{x^{4l}}{a_l} \;\;\mbox{and}\;\; g_l=\frac{x^2}{b_l}
\end{align}

Note that

\begin{align}
a_l &= a_{l-1} \, (4l+1) \, 4l \, (4l-1) \, (4l-2) \;\; \forall \; l = 1,2,\ldots \\
f_l &= \frac{x^{4l}}{a_l} \\
&= \frac{x^{4l-4} x^4}{a_{l-1} (4l+1) 4l (4l-1) (4l-2)} \\
&= \frac{x^4}{(4l+1) 4l (4l-1) (4l-2)} f_{l-1}
\end{align}

# Coefficients in paired series
print(' l','4l+1','%26s'%'a[l]=(4l+1)!','%22s'%'1/a[l]',' b[l]','%22s'%'1/b[l]')
for l in range(0,7,1):
print('%2i'%l, '%4i'%(4*l+1), '%26i'%math.factorial(4*l+1), '%.16e'%(1./math.factorial(4*l+1)),
'%5i'%((4*l+2)*(4*l+3)),'%.16e'%(1./((4*l+2)*(4*l+3))))
def sin_map_x( x ):
    ninety = numpy.pi/2
    one_eighty = numpy.pi
    three_sixty = 2.*numpy.pi
    fs = 1.
    if x < -ninety:
        x = -one_eighty - x
    if x > three_sixty:
        n = int(x / three_sixty)
        x = x - n*three_sixty
    if x >= one_eighty:
        x = x - one_eighty
        fs = -1.
    if x > ninety:
        x = one_eighty - x
    return x,fs
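`sin_map_x` folds an arbitrary angle into $[-\pi/2, \pi/2]$ by relying on the identities $\sin(-\pi - x) = \sin(x)$, $\sin(x - 2\pi) = \sin(x)$, and $\sin(\pi - x) = \sin(x)$. These can be verified numerically against the library sine (a standalone check, not part of the original notebook):

```python
import math

# Range-reduction identities used by sin_map_x, checked at a few angles:
for x in [-2.5, 0.3, 2.0, 4.0, 7.0]:
    assert math.isclose(math.sin(-math.pi - x), math.sin(x), abs_tol=1e-12)  # fold below -90 deg
    assert math.isclose(math.sin(x - 2*math.pi), math.sin(x), abs_tol=1e-12)  # periodicity
    assert math.isclose(math.sin(math.pi - x), math.sin(x), abs_tol=1e-12)    # fold 90..180 deg
print("range-reduction identities hold")
```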
def sin_forward_series( x ):
    # Adds terms from largest to smallest until answer is not changing
    x,fs = sin_map_x( x )
    # https://en.wikipedia.org/wiki/Sine#Series_definition
    ro,d,s = 1.,1,-1.
    for k in range(3,200,2):
        d = d * (k-1) * k
        f = 1. / d
        r = ro + x**(k-1) * f * s
        if r==ro: break
        ro,s = r, -s
    return ( r * x ) * fs
def sin_reverse_series( x ):
    # Adds terms from smallest to largest after finding smallest term to add
    x,fs = sin_map_x( x )
    ro,s,d = 1.,-1.,1
    for k in range(3,200,2):
        d = d * (k-1) * k
        f = 1. / d
        r = ro + x**(k-1) * f * s
        if r==ro: break
        ro,s = r, -s
    ro = 0.
    for j in range(k,0,-2):
        f = 1./ math.factorial(j)
        r = ro + x**(j-1) * f * s
        if r==ro: break
        ro,s = r, -s
    return ( r * x ) * fs
def sin_reverse_series_fixed( x ):
    # Adds terms from smallest to largest for fixed number of terms
    x,fs = sin_map_x( x )
    ro,s,d,x2,N = 1.,-1.,1,1.,16
    term = [1.] * (N)
    for n in range(1,N):
        x2 = x2 * ( x * x )
        k = 2*n+1
        d = d * (k-1) * k
        f = 1. / d
        #term[n] = x**(k-1) * f * s
        term[n] = x2 * f * s
        r = ro + term[n]
        if r==ro: break
        ro,s = r, -s
    r = 0.
    for j in range(n,-1,-1):
        r = r + term[j]
    return ( r * x ) * fs
def sin_reverse_precomputed( x ):
    # Adds fixed number of terms from smallest to largest with precomputed coefficients
    x,fs = sin_map_x( x )
    C=[0.16666666666666667,
       0.05,
       0.023809523809523808,
       0.013888888888888889,
       0.009090909090909091,
       0.00641025641025641,
       0.004761904761904762,
       0.003676470588235294,
       0.0029239766081871343,
       0.002380952380952381,
       0.001976284584980237,
       0.0016666666666666667,
       0.0014245014245014246,
       0.0012315270935960591,
       0.001075268817204301,
       0.000946969696969697,
       0.0008403361344537816,
       0.0007507507507507507,
       0.0006747638326585695]
    n = len(C)
    f,r,s = [1.]*(n),0.,1.
    if n%2==0: s=-1.
    for i in range(1,n):
        f[i] = f[i-1] * C[i-1]
    for i in range(n-1,0,-1):
        k = 2*i + 1
        r = r + x**k * f[i] * s
        s = -s
    r = r + x
    return r * fs
def sin_by_series(x, n=20, verbose=False, method='accurate-explicit'):
    """Returns sin(x)"""
    if method=='forward-explicit': return sin_forward_series( x )
    elif method=='reverse-explicit': return sin_reverse_series( x )
    elif method=='reverse-fixed': return sin_reverse_series_fixed( x )
    elif method=='reverse-precomputed': return sin_reverse_precomputed( x )
    x,fs = sin_map_x( x )
    # https://en.wikipedia.org/wiki/Sine#Series_definition
    C=[0.16666666666666667,
       0.05,
       0.023809523809523808,
       0.013888888888888889,
       0.009090909090909091,
       0.00641025641025641,
       0.004761904761904762,
       0.003676470588235294,
       0.0029239766081871343,
       0.002380952380952381,
       0.001976284584980237,
       0.0016666666666666667,
       0.0014245014245014246,
       0.0012315270935960591,
       0.001075268817204301,
       0.000946969696969697,
       0.0008403361344537816,
       0.0007507507507507507,
       0.0006747638326585695]
    if method=='forward-explicit':
        # Adds terms from largest to smallest until answer is not changing
        ro,f,s = 1.,1.,-1.
        for k in range(3,200,2):
            f = 1./ math.factorial(k)
            r = ro + x**(k-1) * f * s
            if verbose: print('sine:',r*x,'(%i)'%k)
            if r==ro: break
            ro,s = r, -s
        r = r * x
    elif method=='reverse-explicit':
        # Adds terms from smallest to largest after finding smallest term to add
        ro,s = 1.,-1.
        for k in range(3,200,2):
            f = 1./ math.factorial(k)
            r = ro + x**(k-1) * f * s
            if r==ro: break
            ro,s = r, -s
        ro = 0.
        for j in range(k,0,-2):
            f = 1./ math.factorial(j)
            r = ro + x**(j-1) * f * s
            if verbose: print('sine:',r*x,'(%i)'%j)
            if r==ro: break
            ro,s = r, -s
        r = r * x
    elif method=='forward-precomputed':
        # Adds terms from largest to smallest until answer is not changing
        ro,f,s = x,1.,-1.
        for i in range(1,n):
            k = 2*i + 1
            #f = f * pypi.reciprocal( (k-1)*k ) # These should be pre-computed
            f = f * C[i-1]
            r = ro + x**k * f * s
            if verbose: print('sine:',r,'(%i)'%i)
            if r==ro: break
            ro,s = r, -s
    elif method=='reverse-precomputed':
        # Adds fixed number of terms from smallest to largest with precomputed coefficients
        f,r,s = [1.]*(n),0.,1.
        if n%2==0: s=-1.
        for i in range(1,n):
            f[i] = f[i-1] * C[i-1]
        for i in range(n-1,0,-1):
            k = 2*i + 1
            r = r + x**k * f[i] * s
            if verbose: print('sine:',r,'(%i)'%i)
            s = -s
        r = r + x
        if verbose: print('sine:',r,'(%i)'%i)
    elif method=='paired' or method=='paired-test':
        # Adds fixed number of terms from smallest to largest
        x4l,a,b,f,g = [0.]*(n),[0.]*(n),[0.]*(n),[0.]*(n),[0.]*(n)
        x2 = x*x
        x4 = x2*x2
        x4l[0], a[0], b[0] = 1., 1., 1./6.
        f[0], g[0] = x4l[0]*a[0], x2*b[0]
        for l in range(1,n):
            x4l[l] = x4l[l-1] * x4
            l4 = 4*l
            #a[l] = a[l-1] / float( (l4+1)*l4*(l4-1)*(l4-2) )
            #b[l] = 1. / float( (l4+2)*(l4+3) )
            f[l] = f[l-1] * (x4 / float( (l4+1)*l4*(l4-1)*(l4-2) ) )
            g[l] = x2 / float( (l4+2)*(l4+3) )
        r = 0.
        if method=='paired-test':
            for i in range(n-1,-1,-1):
                r = r - f[i] * g[i]
                r = r + f[i]
                if verbose: print('sine:',r*x,'(%i)'%i)
        elif method=='paired':
            for i in range(n-1,-1,-1):
                #r = r + f[i] * ( 1. - g[i] )
                r = r + ( f[i] - f[i] * g[i] )
                if verbose: print('sine:',r*x,'(%i)'%i)
        r = r * x
    else:
        raise Exception('Method "'+method+'" not implemented')
    return r * fs
angle = numpy.pi/2
print( sin_by_series( angle, method='forward-explicit' ) )
print( sin_by_series( angle, method='forward-precomputed' ) )
print( sin_by_series( angle, method='reverse-precomputed' ) )
print( sin_by_series( angle, method='paired-test' ) )
print( sin_by_series( angle, method='paired' ) )
print( sin_by_series( angle, method='reverse-fixed' ) )
print( sin_by_series( angle, method='reverse-explicit' ) )
print( numpy.sin( angle ) )
sinfs = numpy.frompyfunc( sin_forward_series, 1, 1)
sinrs = numpy.frompyfunc( sin_reverse_series, 1, 1)
sinrf = numpy.frompyfunc( sin_reverse_series_fixed, 1, 1)
sinrp = numpy.frompyfunc( sin_reverse_precomputed, 1, 1)
x = numpy.linspace(-numpy.pi/2,numpy.pi/2,1024*128)
d = sinrf( x ) - sinrs( x )
plt.plot(x/numpy.pi*180, d+0/numpy.sin(x),'.');
numpy.count_nonzero( d ), numpy.abs( d/numpy.sin(x) ).max()
y = ( sinrf( x )**2 + sinrf( x + numpy.pi/2 )**2 ) - 1.
plt.plot( x*180/numpy.pi, y )
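The last cell plots $\sin^2 x + \sin^2(x + \pi/2) - 1$, i.e. the Pythagorean identity with $\cos x = \sin(x + \pi/2)$. On plain library floats the residual stays within a few ulps of zero, which is what the series implementations are being compared against (a standalone check, not part of the original notebook):

```python
import math

# Worst residual of sin^2(x) + cos^2(x) - 1 over a sweep of [-pi/2, pi/2]
worst = 0.0
for i in range(101):
    x = -math.pi/2 + i * math.pi/100
    err = math.sin(x)**2 + math.sin(x + math.pi/2)**2 - 1.0
    worst = max(worst, abs(err))
print(worst)  # stays near machine epsilon
```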
# Machine Learning Engineer Nanodegree
## Model Evaluation & Validation
## Project 1: Predicting Boston Housing Prices

Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.

## Getting Started

In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home, in particular its monetary value.
This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.

The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978, and each of the 506 entries represents aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:

- 16 data points have an `'MDEV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.
- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.
- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MDEV'` are essential. The remaining **non-relevant features** have been excluded.
- The feature `'MDEV'` has been **multiplicatively scaled** to account for 35 years of market inflation.

Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.

# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.cross_validation import ShuffleSplit
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MDEV']
features = data.drop('MDEV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape) | Boston housing dataset has 489 data points with 4 variables each.
| Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
Data ExplorationIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MDEV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively. Implementation: Calculate StatisticsFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.In the code cell below, you will need to implement the following:- Calculate the minimum, maximum, mean, median, and standard deviation of `'MDEV'`, which is stored in `prices`. - Store each calculation in their respective variable. | # TODO: Minimum price of the data
minimum_price = np.min(prices)
# TODO: Maximum price of the data
maximum_price = np.max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price) | Statistics for Boston housing dataset:
Minimum price: $105,000.00
Maximum price: $1,024,800.00
Mean price: $454,342.94
Median price $438,900.00
Standard deviation of prices: $165,171.13
| Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
Question 1 - Feature ObservationAs a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):- `'RM'` is the average number of rooms among homes in the neighborhood.- `'LSTAT'` is the percentage of all Boston homeowners who have a greater net worth than homeowners in the neighborhood.- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood._Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an **increase** in the value of `'MDEV'` or a **decrease** in the value of `'MDEV'`? Justify your answer for each._ **Hint:** Would you expect a home that has an `'RM'` value of 6 be worth more or less than a home that has an `'RM'` value of 7? **Answer: **First, it is hard to say in advance whether RM increases MDEV. You might imagine that a wealthy buyer would want a large house with many square metres, while other people prefer to live close to their neighbors; after inspecting the CSV, however, I find that a higher RM usually corresponds to a higher MDEV. Second, the CSV shows that as LSTAT increases, MDEV decreases. Third, it also shows that as PTRATIO increases, MDEV decreases. ---- Developing a ModelIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance MetricIt is difficult to measure the quality of a given model without quantifying its performance over training and testing.
This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R2, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R2 range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an R2 of 0 always fails to predict the target variable, whereas a model with an R2 of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. *A model can be given a negative R2 as well, which indicates that the model is no better than one that naively predicts the mean of the target variable.*For the `performance_metric` function in the code cell below, you will need to implement the following:- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.- Assign the performance score to the `score` variable. | # TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true,y_predict)
# Return the score
return score | _____no_output_____ | Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
Question 2 - Goodness of FitAssume that a dataset contains five data points and a model made the following predictions for the target variable:| True Value | Prediction || :-------------: | :--------: || 3.0 | 2.5 || -0.5 | 0.0 || 2.0 | 2.1 || 7.0 | 7.8 || 4.2 | 5.3 |*Would you consider this model to have successfully captured the variation of the target variable? Why or why not?* Run the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination. | # Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score) | Model has a coefficient of determination, R^2, of 0.923.
| Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
**Answer:** The R^2 is 0.923, which is close to 1, so I think the model captures the variation of the target variable well. Implementation: Shuffle and Split DataYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.For the code cell below, you will need to implement the following:- Use `train_test_split` from `sklearn.cross_validation` to shuffle and split the `features` and `prices` data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`. | # TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=20)  # 20% testing, as specified above
# Success
print "Training and testing split was successful." | Training and testing split was successful.
| Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
Question 3 - Training and Testing*What is the benefit of splitting a dataset into some ratio of training and testing subsets for a learning algorithm?* **Hint:** What could go wrong with not having a way to test your model? **Answer: **Splitting the dataset lets you check whether your model generalizes to data it has not seen. If you train on the entire dataset and overfit, the model will not fit new data well. In short, we need a testing set to evaluate whether the model predicts accurately. ---- Analyzing Model PerformanceIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning CurvesThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R2, the coefficient of determination. Run the code cell below and use these graphs to answer the following question. | # Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices) | _____no_output_____ | Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
Question 4 - Learning the Data*Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?* **Hint:** Are the learning curves converging to particular scores? **Answer: **I choose max_depth = 3. As the plot shows, when more training points are added the testing score increases and the training score decreases, and the two curves converge toward a score of about 0.75. Since the curves have already converged, adding more training points would bring little further benefit. Complexity CurvesThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function. Run the code cell below and use this graph to answer the following two questions. | vs.ModelComplexity(X_train, y_train)
Question 5 - Bias-Variance Tradeoff*When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?* **Hint:** How do you know when a model is suffering from high bias or high variance? **Answer: **With a maximum depth of 1, the model suffers from high bias: it uses too little of the available structure in the features and underfits. With a maximum depth of 10, it suffers from high variance and overfits. The complexity curve shows this: at depth 1 both the training and validation scores are low, while at depth 10 the training and validation scores differ widely, which is the visual signature of high variance. Question 6 - Best-Guess Optimal Model*Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition led you to this answer?* **Answer: **I think a maximum depth of 5 may give the best result. I base this judgement on the Decision Tree Regressor learning performance: at max_depth = 5 both curves converge to similar scores. ----- Evaluating Model PerformanceIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`. Question 7 - Grid Search*What is the grid search technique and how can it be applied to optimize a learning algorithm?* **Answer: **Grid search exhaustively searches over a grid of candidate estimator parameters. For example, for an SVM we can specify candidate values for the kernel, C and gamma, train a model for each combination, evaluate each with cross-validation, and choose the best one. Question 8 - Cross-Validation*What is the k-fold cross-validation training technique?
What benefit does this technique provide for grid search when optimizing a model?* **Hint:** Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set? **Answer: **In k-fold cross-validation (for example k = 5), the data is split into k folds; we train and validate k times, each time holding out a different fold for validation. In this way all of the data is used to both fit and evaluate the model. For grid search, this avoids selecting parameters that only look good on a single lucky split, which is the same problem we would face when tuning without a held-out validation set. Implementation: Fitting a ModelYour final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.For the `fit_model` function in the code cell below, you will need to implement the following:- Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object. - Assign this object to the `'regressor'` variable.- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.- Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object. - Pass the `performance_metric` function as a parameter to the object. - Assign this scoring function to the `'scoring_fnc'` variable.- Use [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html) from `sklearn.grid_search` to create a grid search object.
- Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object. - Assign the `GridSearchCV` object to the `'grid'` variable. | # TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth':[1,2,3,4,5,6,7,8,9,10]}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor,param_grid=params,scoring=scoring_fnc,cv=cv_sets)
#We must take care that if we don't use cv=cv_sets,it will give wrong parameters!!!
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_ | _____no_output_____ | Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
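To make the k-fold idea from Question 8 concrete, here is a minimal, self-contained sketch of k-fold index splitting using numpy only. (The cross-validation object used above, `ShuffleSplit`, instead draws independent random train/test splits.) The function and variable names here are illustrative, not part of the project template:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    rng = np.random.RandomState(seed)
    indices = rng.permutation(n_samples)   # shuffle once up front
    folds = np.array_split(indices, k)     # k nearly equal folds
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

# every sample is used for testing exactly once across the k folds
all_test = np.concatenate([test for _, test in kfold_indices(20, 5)])
print(sorted(all_test) == list(range(20)))  # True
```

Each of the k models is validated on data it never saw during fitting, and every point gets a turn in the validation role.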
Making PredictionsOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model_What maximum depth does the optimal model have? How does this result compare to your guess in **Question 6**?_ Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. | # Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']) | Parameter 'max_depth' is 3 for the optimal model.
| Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
**Answer: **After many attempts I finally got the correct answer! The optimal model has a max_depth of 3. In Question 6 I guessed 5, which shows that intuition is not always reliable. Question 10 - Predicting Selling PricesImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:| Feature | Client 1 | Client 2 | Client 3 || :---: | :---: | :---: | :---: || Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms || Household net worth (income) | Top 34th percent | Bottom 45th percent | Top 7th percent || Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |*What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?* **Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response. Run the code block below to have your optimized model make predictions for each client's home. | # Produce a matrix for client data
client_data = [[5, 34, 15], # Client 1
[4, 55, 22], # Client 2
[8, 7, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price) | Predicted selling price for Client 1's home: $252,787.50
Predicted selling price for Client 2's home: $252,787.50
Predicted selling price for Client 3's home: $971,600.00
| Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
**Answer: **I would recommend that the clients sell their homes at 252,787.50 USD, 252,787.50 USD and 971,600.00 USD, respectively. As we concluded in Data Exploration, the more rooms (RM) a home has, and the lower its LSTAT and PTRATIO, the more valuable it will be. SensitivityAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on. | vs.PredictTrials(features, prices, fit_model, client_data)
Trial 2: $324,450.00
Trial 3: $346,500.00
Trial 4: $420,622.22
Trial 5: $302,400.00
Trial 6: $411,931.58
Trial 7: $344,750.00
Trial 8: $407,232.00
Trial 9: $352,315.38
Trial 10: $316,890.00
Range in prices: $118,222.22
| Apache-2.0 | Udacity/boston/boston_housing.ipynb | Vayne-Lover/Machine-Learning |
**Define the Random Index (RI) values for judgment matrices of each order** | RI_dict = {1: 0, 2: 0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45} | _____no_output_____ | MIT | python业务代码/AHP层次分析法/AHP.ipynb | RobinYaoWenbin/Python-CommonCode |
**Define a function that computes the consistency index of a judgment matrix and the normalised eigenvector of its largest eigenvalue.** Input: a two-dimensional matrix in np.array format, interpreted as the judgment matrix. Output: if the matrix fails the consistency check, print a message saying so; if it passes, print a message and return the normalised eigenvector. | # Input: the judgment matrix, which must be in numpy format. If it passes the
# consistency check, return the eigenvector of the largest eigenvalue; otherwise print a warning.
def get_w(array):
    row = array.shape[0]  # order of the matrix
    a_axis_0_sum = array.sum(axis=0)
    # print(a_axis_0_sum)
    b = array / a_axis_0_sum  # column-normalised matrix b
    # print(b)
    b_axis_0_sum = b.sum(axis=0)
    b_axis_1_sum = b.sum(axis=1)  # row sums of b
    # print(b_axis_1_sum)
    w = b_axis_1_sum / row  # normalise to get the (approximate) weight eigenvector
    nw = w * row
    AW = (w * array).sum(axis=1)
    # print(AW)
    max_max = sum(AW / (row * w))  # estimate of the largest eigenvalue
    # print(max_max)
    CI = (max_max - row) / (row - 1)
    CR = CI / RI_dict[row]
    if CR < 0.1:
        print(round(CR, 3))
        print('Consistency check passed')
        print("Weight eigenvector:", w)
        # print(np.max(w))
        # print(sorted(w,reverse=True))
        # print(max_max)
        # print('eigenvector: %s' % w)
        return w
    else:
        print(round(CR, 3))
        print('Consistency check failed; please revise the matrix') | _____no_output_____ | MIT | python业务代码/AHP层次分析法/AHP.ipynb | RobinYaoWenbin/Python-CommonCode |
**Check the format of the input data; if it is correct, call get_w(array) to do the computation, otherwise print a warning.** | def main(array):
    # check the data type of the judgment matrix and warn if it is wrong;
    # if the format is correct, proceed to the consistency check and eigenvector computation
    if type(array) is np.ndarray:
        return get_w(array)
    else:
        print('Please provide a numpy ndarray') | _____no_output_____ | MIT | python业务代码/AHP层次分析法/AHP.ipynb | RobinYaoWenbin/Python-CommonCode |
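Before trusting the numbers below, it can help to see the method on a case with a known answer. For a perfectly consistent judgment matrix built from chosen weights, $a_{ij} = w_i / w_j$, the consistency ratio is 0 and the column-normalisation trick used in `get_w` recovers the weights exactly. This is an illustrative sketch; the weights are made up:

```python
import numpy as np

# hypothetical weights, chosen only for illustration
w_true = np.array([0.5, 0.3, 0.2])

# perfectly consistent judgment matrix: A[i, j] = w[i] / w[j]
A = w_true[:, None] / w_true[None, :]

# same column-normalisation approximation as get_w above
w_est = (A / A.sum(axis=0)).sum(axis=1) / A.shape[0]
print(w_est)  # approximately [0.5 0.3 0.2]
```

For inconsistent matrices the column-normalisation result is only an approximation to the principal eigenvector, which is why the CR check matters.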
**The cadre-selection example from the blog post below is worked through here; the details are explained in the inline comments. Blog link: https://www.cnblogs.com/yhll/p/9967726.html** Thanks to the original author! | if __name__ == '__main__':
    # define the judgment matrices
    e = np.array([[1, 2, 7, 5, 5], [1 / 2, 1, 4, 3, 3], [1 / 7, 1 / 4, 1, 1 / 2, 1 / 3], \
                  [1 / 5, 1 / 3, 2, 1, 1], [1 / 5, 1 / 3, 3, 1, 1]])  # criteria layer vs. goal layer
    a = np.array([[1, 1 / 3, 1 / 8], [3, 1, 1 / 3], [8, 3, 1]])  # judgment matrix for B1
    b = np.array([[1, 2, 5], [1 / 2, 1, 2], [1 / 5, 1 / 2, 1]])  # judgment matrix for B2
    c = np.array([[1, 1, 3], [1, 1, 3], [1 / 3, 1 / 3, 1]])  # judgment matrix for B3
    d = np.array([[1, 3, 4], [1 / 3, 1, 1], [1 / 4, 1, 1]])  # judgment matrix for B4
    f = np.array([[1, 4, 1 / 2], [1 / 4, 1, 1 / 4], [2, 4, 1]])  # judgment matrix for B5
    # run the consistency checks and compute the eigenvectors
    e = main(e)  # consistency check; returns the weight eigenvector
    a = main(a)  # consistency check; returns the weight eigenvector
    b = main(b)  # consistency check; returns the weight eigenvector
    c = main(c)  # consistency check; returns the weight eigenvector
    d = main(d)  # consistency check; returns the weight eigenvector
    f = main(f)  # consistency check; returns the weight eigenvector
    try:
        res = np.array([a, b, c, d, f])  # stack the normalised eigenvectors of the alternatives w.r.t. each criterion
        # ret = (np.transpose(res) * e).sum(axis=1)
        ret = np.dot(np.transpose(res), e)  # overall ranking weights of the bottom layer w.r.t. the goal
        print("Overall ranking:", ret)
    except TypeError:
        print('Data error: a matrix may have failed the consistency check; please revise') | 0.016
Consistency check passed
Weight eigenvector: [0.47439499 0.26228108 0.0544921  0.09853357 0.11029827]
0.001
Consistency check passed
Weight eigenvector: [0.08199023 0.23644689 0.68156288]
0.005
Consistency check passed
Weight eigenvector: [0.59488796 0.27661064 0.1285014 ]
0.0
Consistency check passed
Weight eigenvector: [0.42857143 0.42857143 0.14285714]
0.008
Consistency check passed
Weight eigenvector: [0.63274854 0.19239766 0.1748538 ]
0.046
Consistency check passed
Weight eigenvector: [0.34595035 0.11029711 0.54375254]
Overall ranking: [0.31878206 0.23919592 0.44202202]
| MIT | python业务代码/AHP层次分析法/AHP.ipynb | RobinYaoWenbin/Python-CommonCode |
From Binomial Distribution to Poisson DistributionThe binomial distribution is given by,$$Bin(k|n,\theta) \triangleq \frac{n!}{k!(n-k)!}\theta^k(1-\theta)^{n-k} $$The Poisson distribution is given by,$$Poi(k|\lambda) \triangleq e^{-\lambda}\frac{\lambda^k}{k!} $$Proof:Consider $\theta=\frac{\lambda}{n}$,\begin{align*} &\lim_{n\rightarrow \infty}\binom{n}{k} (\frac{\lambda}{n})^k(1-\frac{\lambda}{n})^{n-k}=\frac{\lambda^k}{k!}\lim_{n\rightarrow \infty}\frac{n!}{(n-k)!}\frac{1}{n^k}(1-\frac{\lambda}{n})^{n}(1-\frac{\lambda}{n})^{-k} \\&= \frac{\lambda^k}{k!}\lim_{n\rightarrow \infty}\frac{n(n-1)\dots(n-k+1)}{n^k}(1-\frac{\lambda}{n})^{n}(1-\frac{\lambda}{n})^{-k} \\&= \frac{\lambda^k}{k!}\lim_{n\rightarrow \infty}(1-\frac{\lambda}{n})^{n}(1-\frac{\lambda}{n})^{-k} =e^{-\lambda}\frac{\lambda^k}{k!}\end{align*}where the last line uses $\frac{n(n-1)\dots(n-k+1)}{n^k}\rightarrow 1$, $(1-\frac{\lambda}{n})^{n}\rightarrow e^{-\lambda}$ and $(1-\frac{\lambda}{n})^{-k}\rightarrow 1$ as $n\rightarrow \infty$. For $\lambda \in\{1,10\}$, | s = np.random.poisson(1, 1000000)
count, bins, ignored = plt.hist(s, density=True, rwidth=0.8)
s = np.random.poisson(10, 1000000)
count, bins, ignored = plt.hist(s,50, density=True) | _____no_output_____ | MIT | Machine Learning A Probabilistic Perspective/2Probability/F2.6/2.6poissonPlotDemo.ipynb | zcemycl/ProbabilisticPerspectiveMachineLearning |
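A quick property check on samples like these: for a Poisson distribution the mean and the variance are both equal to $\lambda$, which is easy to confirm empirically:

```python
import numpy as np

# sanity check: a Poisson(lam) distribution has mean == variance == lam
s = np.random.poisson(10, 1000000)
print(s.mean(), s.var())  # both close to 10
```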
From Figure 2.4, we have sampled from the binomial distribution. If $n=100$, given $\lambda=1$, then $\theta = 1/100$. | s = np.random.randint(1,101,[100,1000000])
s = np.where(s>1,0,s)
countbinary = np.count_nonzero(s,axis=0)
plt.hist(countbinary) | _____no_output_____ | MIT | Machine Learning A Probabilistic Perspective/2Probability/F2.6/2.6poissonPlotDemo.ipynb | zcemycl/ProbabilisticPerspectiveMachineLearning |
GIven $\lambda=10$, then $\theta=1/10$ | s = np.random.randint(1,11,[100,1000000])
s = np.where(s>1,0,s)
countbinary = np.count_nonzero(s,axis=0)
plt.hist(countbinary) | _____no_output_____ | MIT | Machine Learning A Probabilistic Perspective/2Probability/F2.6/2.6poissonPlotDemo.ipynb | zcemycl/ProbabilisticPerspectiveMachineLearning |
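The limit above can also be checked numerically: with $n$ large and $\theta = \lambda/n$, the binomial pmf is already very close to the Poisson pmf. A pure-Python sketch (requires Python 3.8+ for `math.comb`):

```python
import math

lam, n = 10.0, 100000  # theta = lam / n

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# the two pmfs agree to several decimal places for moderate k
for k in (0, 5, 10, 20):
    print(k, binom_pmf(k, n, lam / n), poisson_pmf(k, lam))
```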
terrainbento model BasicVs steady-state solution This model shows example usage of the BasicVs model from the TerrainBento package.The BasicVs model modifies Basic to use variable source area runoff using the "effective area" approach:$\frac{\partial \eta}{\partial t} = - KA_{eff}^{1/2}S + D\nabla^2 \eta$where$A_{eff} = R_m A e^{-\alpha S / A}$and $\alpha = \frac{K_{sat} H_{init} dx}{R_m}$where $K$ and $D$ are constants, $S$ is local slope, and $\eta$ is the topography. $A$ is the local upstream drainage area, $R_m$ is the average recharge (or precipitation) rate, $A_{eff}$ is the effective drainage area, $K_{sat}$ is the hydraulic conductivity, $H_{init}$ is the initial soil thickness, and $dx$ is the grid cell width. $\alpha$ is a courtesy parameter called the "saturation area scale" that lumps together many constants.Refer to [Barnhart et al. (2019)](https://www.geosci-model-dev.net/12/1267/2019/) for further explanation. For detailed information about creating a BasicVs model, see [the detailed documentation](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.derived_models.model_basicVs.html).This notebook (a) shows the initialization and running of this model, (b) saves a NetCDF file of the topography, which we will use to make an oblique Paraview image of the landscape, and (c) creates a slope-area plot at steady state. | from terrainbento import BasicVs
# import required modules
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from landlab import imshow_grid
from landlab.io.netcdf import write_netcdf
np.random.seed(4897)
#Ignore warnings
import warnings
warnings.filterwarnings('ignore')
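To get a feel for the effective-area equation above before running the model, here is a back-of-the-envelope illustration. The parameter values mirror those in the parameter dictionary below (recharge from `rainfall_flux`, conductivity, soil depth, grid spacing), but the slope and drainage areas are made up for illustration:

```python
import numpy as np

# parameter values as in the model dictionary below; slope/areas are illustrative
R_m = 0.01    # recharge rate (rainfall_flux)
K_sat = 10.0  # hydraulic_conductivity
H_init = 1.0  # initial soil depth
dx = 40.0     # grid spacing
alpha = K_sat * H_init * dx / R_m  # saturation area scale

S = 0.1                              # a fixed local slope
A = np.array([1.0e3, 1.0e4, 1.0e5])  # drainage areas in m^2
A_eff = R_m * A * np.exp(-alpha * S / A)
print(A_eff / (R_m * A))  # ratio A_eff / (R_m * A): runoff suppression factor
```

Small drainage areas generate almost no effective runoff, which is exactly the mechanism that suppresses channelisation near ridgetops in BasicVs.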
# create the parameter dictionary needed to instantiate the model
params = {
# create the Clock.
"clock": {
"start": 0,
"step": 10,
"stop": 1e7
},
# Create the Grid.
"grid": {
"RasterModelGrid": [(25, 40), {
"xy_spacing": 40
}, {
"fields": {
"node": {
"topographic__elevation": {
"random": [{
"where": "CORE_NODE"
}]
},
"soil__depth": {
"constant": [{
"value": 1.0
}]
}
}
}
}]
},
# Set up Boundary Handlers
"boundary_handlers": {
"NotCoreNodeBaselevelHandler": {
"modify_core_nodes": True,
"lowering_rate": -0.001
}
},
# Set up Precipitator
"precipitator": {
"UniformPrecipitator": {
"rainfall_flux": 0.01
}
},
# Parameters that control output.
"output_interval": 1e4,
"save_first_timestep": True,
"output_prefix": "output/basicVs",
"fields": ["topographic__elevation"],
# Parameters that control process and rates.
"water_erodibility": 0.001,
"m_sp": 0.5,
"n_sp": 1.0,
"regolith_transport_parameter": 0.1,
"hydraulic_conductivity": 10.
}
# the tolerance here is high, so that this can run on binder and for tests. (recommended value = 0.001 or lower).
tolerance = 20.0
# we can use an output writer to run until the model reaches steady state.
class run_to_steady(object):
def __init__(self, model):
self.model = model
self.last_z = self.model.z.copy()
self.tolerance = tolerance
def run_one_step(self):
if model.model_time > 0:
diff = (self.model.z[model.grid.core_nodes] -
self.last_z[model.grid.core_nodes])
if max(abs(diff)) <= self.tolerance:
self.model.clock.stop = model._model_time
print("Model reached steady state in " +
str(model._model_time) + " time units\n")
else:
self.last_z = self.model.z.copy()
if model._model_time <= self.model.clock.stop - self.model.output_interval:
self.model.clock.stop += self.model.output_interval
# initialize the model using the Model.from_dict() constructor.
# We also pass the output writer here.
model = BasicVs.from_dict(params, output_writers={"class": [run_to_steady]})
# to run the model as specified, we execute the following line:
model.run()
# MAKE SLOPE-AREA PLOT
# plot nodes that are not on the boundary or adjacent to it
core_not_boundary = np.array(
model.grid.node_has_boundary_neighbor(model.grid.core_nodes)) == False
plotting_nodes = model.grid.core_nodes[core_not_boundary]
# assign area_array and slope_array
area_array = model.grid.at_node["drainage_area"][plotting_nodes]
slope_array = model.grid.at_node["topographic__steepest_slope"][plotting_nodes]
# instantiate figure and plot
fig = plt.figure(figsize=(6, 3.75))
slope_area = plt.subplot()
# plot the data
slope_area.scatter(area_array,
slope_array,
marker="o",
c="k",
label="Model BasicVs")
# make axes log and set limits
slope_area.set_xscale("log")
slope_area.set_yscale("log")
slope_area.set_xlim(9 * 10**1, 1 * 10**6)
slope_area.set_ylim(1e-4, 1e4)
# set x and y labels
slope_area.set_xlabel(r"Drainage area [m$^2$]")
slope_area.set_ylabel("Channel slope [-]")
slope_area.legend(scatterpoints=1, prop={"size": 12})
slope_area.tick_params(axis="x", which="major", pad=7)
plt.show()
# Save stack of all netcdfs for Paraview to use.
# model.save_to_xarray_dataset(filename="basicVs.nc",
# time_unit="years",
# reference_time="model start",
# space_unit="meters")
# remove temporary netcdfs
model.remove_output_netcdfs()
# make a plot of the final steady state topography
plt.figure()
imshow_grid(model.grid, "topographic__elevation",cmap='terrain')
plt.draw() | _____no_output_____ | CC-BY-4.0 | lessons/landlab/landlab-terrainbento/coupled_process_elements/model_basicVs_steady_solution.ipynb | josh-wolpert/espin |
Machine Translation: Word-based SMT using IBM1 In this notebook we'll be looking at the IBM Model 1 word alignment method. This will be used to demonstrate the process of training, using expectation maximisation, over a toy dataset. Note that this dataset and presentation closely follow JM2 Chapter 25. The optimised version of the code is based on Koehn09 Chapter 4. | from collections import defaultdict
import itertools | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
Our dataset will consist of two very short sentence pairs. | bitext = []
bitext.append(("green house".split(), "casa verde".split()))
bitext.append(("the house".split(), "la casa".split())) | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
Based on the vocabulary items in Spanish and English, we initialise our translation table, *t*, to a uniform distribution. That is, for each word type in English, we set all translations in Spanish to have 1/3. | t0 = defaultdict(dict)
for en_type in "the green house".split():
for es_type in "la casa verde".split():
t0[en_type][es_type] = 1.0 / 3
t0 | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
Now for the algorithm itself. Although we tend to merge the expectation and maximisation steps (to save storing big data structures for the expected counts), here we'll do the two separately for clarity. Also, following JM: - we won't apply the optimisation for IBM1 which allows us to deal with each position *j* independently. Instead we enumerate the space of all alignments using a cartesian product, see *itertools.product*. - we don't consider alignments to the null word | def expectation_step(bitext, translation_probs):
expectations = []
for E, F in bitext:
I = len(E)
J = len(F)
# store the unnormalised alignment probabilities
align = []
# track the sum of unnormalised alignment probabilities
Z = 0
        for A in itertools.product(range(I), repeat=J):
pr = 1.0
for j, aj in enumerate(A):
pr *= translation_probs[E[aj]][F[j]]
align.append([A, E, F, pr])
Z += pr
# normalise align to produce the alignment probabilities
for atuple in align:
atuple[-1] /= Z
# save the expectations for the M step
expectations.extend(align)
return expectations | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
Let's try running this and see what the expected alignments are | e0 = expectation_step(bitext, t0)
e0 | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
We can also view this graphically. You need to have Graphviz (Graph Visualization Software) installed, with the path to its bin folder (e.g. C:\Program Files (x86)\Graphviz2.38\bin) added to PATH. | from IPython.display import SVG, display
from nltk.translate import AlignedSent, Alignment
def display_expect(expectations):
stuff = []
for A, E, F, prob in expectations:
if prob > 0.01:
stuff.append('Prob = %.4f' % prob)
asent = AlignedSent(F, E, Alignment(list(enumerate(A))))
stuff.append(SVG(asent._repr_svg_()))
return display(*stuff)
display_expect(e0) | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
Note the uniform probabilities for each option (is this a surprise, given our initialisation?) Next up we need to learn the model parameters *t* from these expectations. This is simply a matter of counting occurrences of translation pairs, weighted by their probability. | def maximization_step(expectations):
counts = defaultdict(dict)
for A, E, F, prob in expectations:
for j, aj in enumerate(A):
counts[E[aj]].setdefault(F[j], 0.0)
counts[E[aj]][F[j]] += prob
translations = defaultdict(dict)
for e, fcounts in counts.items():
tdict = translations[e]
total = float(sum(fcounts.values()))
for f, count in fcounts.items():
tdict[f] = count / total
return translations | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
Now we can test this over our expectations. Do you expect this to be uniform like *t0*? | t1 = maximization_step(e0)
t1 | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
With working E and M steps, we can now iterate! | t = t0
for step in range(10):
e = expectation_step(bitext, t)
t = maximization_step(e)
t
display_expect(e) | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
Great, we've learned sensible translations as we hoped. Try viewing the expectations using *display_expect*, and vary the number of iterations. What happens to the learned parameters? Speeding things up Recall that the E-step above uses a naive enumeration over all possible alignments, which is going to be woefully slow for anything other than toy data. (What's its computational complexity?) Thankfully a bit of algebraic manipulation of the Model 1 formulation of $P(A|E,F)$ gives rise to a much simpler formulation. Let's give this a try. | def fast_em(bitext, translation_probs):
# E-step, computing counts as we go
counts = defaultdict(dict)
for E, F in bitext:
I = len(E)
J = len(F)
# each j can be considered independently of the others
for j in range(J):
# get the translation probabilities (unnormalised)
prob_ajs = []
for aj in range(I):
prob_ajs.append(translation_probs[E[aj]][F[j]])
# compute denominator for normalisation
z = sum(prob_ajs)
# maintain running counts (this is really part of the M-step)
for aj in range(I):
counts[E[aj]].setdefault(F[j], 0.0)
counts[E[aj]][F[j]] += prob_ajs[aj] / z
# Rest of the M-step to normalise counts
translations = defaultdict(dict)
for e, fcounts in counts.items():
tdict = translations[e]
total = float(sum(fcounts.values()))
for f, count in fcounts.items():
tdict[f] = count / total
return translations | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
We can test that the parameters learned in each step match what we computed before. What's the time complexity of this algorithm? | t1p = fast_em(bitext, t0)
t1p
t2p = fast_em(bitext, t1)
t2p | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
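One quick sanity check is an entry-wise comparison of the translation tables produced by the two implementations. The helper below is a small self-contained sketch (the `translations_close` name and the toy tables are illustrative assumptions); in the notebook one would call it on `t1`/`t1p` and `t2p`, as hinted in the comment.

```python
def translations_close(t_a, t_b, tol=1e-9):
    """Compare two nested {english: {foreign: prob}} tables entry-wise."""
    return all(abs(t_a[e][f] - t_b[e][f]) < tol
               for e in t_a for f in t_a[e])

# toy demonstration; in the notebook: translations_close(t1, t1p)
a = {"house": {"casa": 0.5, "verde": 0.5}}
b = {"house": {"casa": 0.5, "verde": 0.5}}
print(translations_close(a, b))  # True
```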
Alignment models in NLTK NLTK has a range of translation tools, including the IBM models 1 - 5. These are implemented in their full glory, including the null alignment, and complex optimisation algorithms for models 3 and up. Note that model 4 requires a clustering of the vocabulary, see the [documentation](http://www.nltk.org/api/nltk.translate.html) for details. | from nltk.translate import IBMModel3
bt = [AlignedSent(E,F) for E,F in bitext]
m = IBMModel3(bt, 5)
m.translation_table | _____no_output_____ | BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
NLTK also includes a small section of the Europarl corpus (about 20K sentence pairs). You might want to apply the alignment models to this larger dataset, although be aware that you will first need to do sentence alignment to discard any sentences that aren't aligned 1:1, e.g., using [nltk.translate.gale_church](http://www.nltk.org/api/nltk.translate.htmlmodule-nltk.translate.gale_church) to infer the best alignment. You might also want to lower-case the dataset, which keeps the vocabulary small enough for reasonable runtime and robust estimation. | import nltk
nltk.download('europarl_raw')
from nltk.corpus.europarl_raw import english, spanish
print(english.sents()[0])
print(spanish.sents()[0]) | [nltk_data] Downloading package europarl_raw to
[nltk_data] /Users/tcohn/nltk_data...
[nltk_data] Package europarl_raw is already up-to-date!
['Resumption', 'of', 'the', 'session', 'I', 'declare', 'resumed', 'the', 'session', 'of', 'the', 'European', 'Parliament', 'adjourned', 'on', 'Friday', '17', 'December', '1999', ',', 'and', 'I', 'would', 'like', 'once', 'again', 'to', 'wish', 'you', 'a', 'happy', 'new', 'year', 'in', 'the', 'hope', 'that', 'you', 'enjoyed', 'a', 'pleasant', 'festive', 'period', '.']
['Reanudación', 'del', 'período', 'de', 'sesiones', 'Declaro', 'reanudado', 'el', 'período', 'de', 'sesiones', 'del', 'Parlamento', 'Europeo', ',', 'interrumpido', 'el', 'viernes', '17', 'de', 'diciembre', 'pasado', ',', 'y', 'reitero', 'a', 'Sus', 'Señorías', 'mi', 'deseo', 'de', 'que', 'hayan', 'tenido', 'unas', 'buenas', 'vacaciones', '.']
| BSD-4-Clause-UC | notebooks/WSTA_N21_machine_translation.ipynb | trevorcohn/comp90042 |
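Following the lower-casing advice above, here is a minimal preprocessing sketch (the `lowercase_bitext` helper is an illustration written for this note, not part of NLTK; sentence alignment with `gale_church` is left to the reader):

```python
def lowercase_bitext(bitext):
    # lower-case every token on both sides of each sentence pair
    return [([w.lower() for w in E], [w.lower() for w in F])
            for E, F in bitext]

demo = [("Green House".split(), "Casa Verde".split())]
print(lowercase_bitext(demo))  # [(['green', 'house'], ['casa', 'verde'])]
```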
Bernstein-Vazirani Algorithm In this section, we first introduce the Bernstein-Vazirani problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run on a simulator and device. Contents1. [Introduction](introduction) - [Bernstein-Vazirani Problem](bvproblem) - [Bernstein-Vazirani Algorithm](bvalgorithm)2. [Example](example)3. [Qiskit Implementation](implementation) - [Simulation](simulation) - [Device](device)4. [Problems](problems)5. [References](references) 1. Introduction The Bernstein-Vazirani algorithm, first introduced in Reference [1], can be seen as an extension of the Deutsch-Jozsa algorithm covered in the last section. It showed that there can be advantages in using a quantum computer as a computational tool for more complex problems compared to the Deutsch-Jozsa problem. 1a. Bernstein-Vazirani Problem We are again given a hidden Boolean function $f$, which takes as input a string of bits, and returns either $0$ or $1$, that is:$f(\{x_0,x_1,x_2,...\}) \rightarrow 0 \textrm{ or } 1 \textrm{ where } x_n \textrm{ is }0 \textrm{ or } 1 $. Instead of the function being balanced or constant as in the Deutsch-Jozsa problem, now the function is guaranteed to return the bitwise product of the input with some string, $s$. In other words, given an input $x$, $f(x) = s \cdot x \, \text{(mod 2)}$. We are expected to find $s$. 1b. Bernstein-Vazirani Algorithm Classical SolutionClassically, the oracle returns $f_s(x) = s \cdot x \mod 2$ given an input $x$. Thus, the hidden bit string $s$ can be revealed by querying the oracle with $x = 1, 2, \ldots, 2^i, \ldots, 2^{n-1}$, where each query reveals the $i$-th bit of $s$ (or, $s_i$). For example, with $x=1$ one can obtain the least significant bit of $s$, and so on. This means we would need to call the function $f_s(x)$ $n$ times. Quantum SolutionUsing a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$.
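The classical probing strategy described above can be sketched in a few lines of plain Python (the `classical_bv` helper and the `oracle` lambda are illustrative stand-ins for $f_s$, not part of any library):

```python
def classical_bv(oracle, n):
    # query with x = 1, 2, 4, ..., 2^(n-1); each call reveals one bit of s
    s = 0
    for i in range(n):
        if oracle(1 << i):
            s |= 1 << i
    return s

s_hidden = 0b1011
oracle = lambda x: bin(s_hidden & x).count("1") % 2  # s . x (mod 2)
print(classical_bv(oracle, 4))  # 11, i.e. 0b1011, after exactly n = 4 queries
```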
The quantum Bernstein-Vazirani algorithm to find the hidden integer is very simple: (1) start from a $|0\rangle^{\otimes n}$ state, (2) apply Hadamard gates, (3) query the oracle, (4) apply Hadamard gates, and (5) measure, generically illustrated below:The correctness of the algorithm is best explained by looking at the transformation of a quantum register $|a \rangle$ by $n$ Hadamard gates, each applied to a qubit of the register. It can be shown that:$$|a\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} (-1)^{a\cdot x}|x\rangle.$$In particular, when we start with a quantum register $|0\rangle$ and apply $n$ Hadamard gates to it, we have the familiar quantum superposition:$$|0\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} |x\rangle,$$which differs from the Hadamard transform of the register $|a \rangle$ only by the phase $(-1)^{a\cdot x}$. Now, the quantum oracle $f_a$ returns $1$ on input $x$ such that $a \cdot x \equiv 1 \mod 2$, and returns $0$ otherwise. This means we have the following transformation:$$|x \rangle \xrightarrow{f_a} (-1)^{a\cdot x} |x \rangle. $$The algorithm to reveal the hidden integer follows naturally by querying the quantum oracle $f_a$ with the quantum superposition obtained from the Hadamard transformation of $|0\rangle$. Namely,$$|0\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} |x\rangle \xrightarrow{f_a} \frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} (-1)^{a\cdot x}|x\rangle.$$Because the inverse of the $n$ Hadamard gates is again the $n$ Hadamard gates, we can obtain $a$ by$$\frac{1}{\sqrt{2^n}} \sum_{x\in \{0,1\}^n} (-1)^{a\cdot x}|x\rangle \xrightarrow{H^{\otimes n}} |a\rangle.$$ 2. Example Let's go through a specific example for $n=2$ qubits and a secret string $s=11$. Note that we are following the formulation in Reference [2] that generates a circuit for the Bernstein-Vazirani quantum oracle using only one register.
The register of two qubits is initialized to zero: $$\lvert \psi_0 \rangle = \lvert 0 0 \rangle$$ Apply a Hadamard gate to both qubits: $$\lvert \psi_1 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle + \lvert 0 1 \rangle + \lvert 1 0 \rangle + \lvert 1 1 \rangle \right) $$ For the string $s=11$, the quantum oracle can be implemented as $\text{Q}_f = Z_{1}Z_{2}$: $$\lvert \psi_2 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle - \lvert 0 1 \rangle - \lvert 1 0 \rangle + \lvert 1 1 \rangle \right)$$ Apply a Hadamard gate to both qubits: $$\lvert \psi_3 \rangle = \lvert 1 1 \rangle$$ Measure to find the secret string $s=11$ 3. Qiskit Implementation We now implement the Bernstein-Vazirani algorithm with Qiskit for a two bit function with $s=11$. | # initialization
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing Qiskit
from qiskit import IBMQ, BasicAer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.tools.visualization import plot_histogram | _____no_output_____ | Apache-2.0 | content/ch-algorithms/bernstein-vazirani.ipynb | ibmamnt/qiskit-textbook |
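Before building the circuit, the two-qubit worked example in Section 2 can be checked numerically with plain NumPy (a sketch; for $s=11$ the $Z \otimes Z$ oracle is symmetric, so qubit ordering does not matter here):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1, -1])

psi0 = np.array([1.0, 0.0, 0.0, 0.0])   # |00>
psi1 = np.kron(H, H) @ psi0             # equal superposition
psi2 = np.kron(Z, Z) @ psi1             # oracle for s = 11
psi3 = np.kron(H, H) @ psi2             # final Hadamards
print(np.allclose(psi3, [0, 0, 0, 1]))  # True: the final state is |11>
```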
We first set the number of qubits used in the experiment, and the hidden integer $s$ to be found by the algorithm. The hidden integer $s$ determines the circuit for the quantum oracle. | nQubits = 2 # number of physical qubits used to represent s
s = 3 # the hidden integer
# make sure that s can be represented with nQubits
s = s % 2**(nQubits) | _____no_output_____ | Apache-2.0 | content/ch-algorithms/bernstein-vazirani.ipynb | ibmamnt/qiskit-textbook |
We then use Qiskit to program the Bernstein-Vazirani algorithm. | # Creating registers
# qubits for querying the oracle and finding the hidden integer
qr = QuantumRegister(nQubits)
# bits for recording the measurement on qr
cr = ClassicalRegister(nQubits)
bvCircuit = QuantumCircuit(qr, cr)
barriers = True
# Apply Hadamard gates before querying the oracle
for i in range(nQubits):
bvCircuit.h(qr[i])
# Apply barrier
if barriers:
bvCircuit.barrier()
# Apply the inner-product oracle
for i in range(nQubits):
if (s & (1 << i)):
bvCircuit.z(qr[i])
else:
bvCircuit.iden(qr[i])
# Apply barrier
if barriers:
bvCircuit.barrier()
#Apply Hadamard gates after querying the oracle
for i in range(nQubits):
bvCircuit.h(qr[i])
# Apply barrier
if barriers:
bvCircuit.barrier()
# Measurement
bvCircuit.measure(qr, cr)
bvCircuit.draw(output='mpl') | _____no_output_____ | Apache-2.0 | content/ch-algorithms/bernstein-vazirani.ipynb | ibmamnt/qiskit-textbook |
3a. Experiment with Simulators We can run the above circuit on the simulator. | # use local simulator
backend = BasicAer.get_backend('qasm_simulator')
shots = 1024
results = execute(bvCircuit, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer) | _____no_output_____ | Apache-2.0 | content/ch-algorithms/bernstein-vazirani.ipynb | ibmamnt/qiskit-textbook |
We can see that the result of the measurement is the binary representation of the hidden integer $3$ $(11)$. 3b. Experiment with Real Devices We can run the circuit on the real device as below. | # Load our saved IBMQ accounts and get the least busy backend device with less than or equal to 5 qubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
provider.backends()
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits <= 5 and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
shots = 1024
job = execute(bvCircuit, backend=backend, shots=shots)
job_monitor(job, interval = 2)
# Get the results from the computation
results = job.result()
answer = results.get_counts()
plot_histogram(answer) | _____no_output_____ | Apache-2.0 | content/ch-algorithms/bernstein-vazirani.ipynb | ibmamnt/qiskit-textbook |
As we can see, most of the results are $11$. The other results are due to errors in the quantum computation. 4. Problems 1. The above [implementation](implementation) of Bernstein-Vazirani is for a secret bit string of $s = 11$. Modify the implementation for a secret string of $s = 1011$. Are the results what you expect? Explain.2. The above [implementation](implementation) of Bernstein-Vazirani is for a secret bit string of $s = 11$. Modify the implementation for a secret string of $s = 1110110101$. Are the results what you expect? Explain. 5. References 1. Ethan Bernstein and Umesh Vazirani (1997) "Quantum Complexity Theory" SIAM Journal on Computing, Vol. 26, No. 5: 1411-1473, [doi:10.1137/S0097539796300921](https://doi.org/10.1137/S0097539796300921).2. Jiangfeng Du, Mingjun Shi, Jihui Wu, Xianyi Zhou, Yangmei Fan, BangJiao Ye, Rongdian Han (2001) "Implementation of a quantum algorithm to solve the Bernstein-Vazirani parity problem without entanglement on an ensemble quantum computer", Phys. Rev. A 64, 042306, [10.1103/PhysRevA.64.042306](https://doi.org/10.1103/PhysRevA.64.042306), [arXiv:quant-ph/0012114](https://arxiv.org/abs/quant-ph/0012114). | import qiskit
qiskit.__qiskit_version__ | _____no_output_____ | Apache-2.0 | content/ch-algorithms/bernstein-vazirani.ipynb | ibmamnt/qiskit-textbook |
Human Activity Recognition (97.98 %) The accuracy achieved here is better than that of the original research paper, which was based on an LSTM, whereas this work uses an ANN on the same dataset. **Original approach using LSTM - Testing Accuracy: 91.652%, Precision: 91.762%, Recall: 91.652%, f1-score: 91.643%** **My approach using ANN - Testing accuracy (validation): 97.98%, Precision: 95%, Recall: 94%, f1-score: 94%.** | import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
print(os.listdir("../input"))
df = pd.read_csv("../input/train.csv")
test = pd.read_csv("../input/test.csv")
df.T
print(df.Activity.unique())
print("----------------------------------------")
print(df.Activity.value_counts())
sns.set(rc={'figure.figsize':(13,6)})
fig = sns.countplot(x = "Activity" , data = df)
plt.xlabel("Activity")
plt.ylabel("Count")
plt.title("Activity Count")
plt.grid(True)
plt.show(fig)
pd.crosstab(df.subject, df.Activity, margins=True).style.background_gradient(cmap='autumn_r')
print(df.shape , test.shape)
df.columns | _____no_output_____ | Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
Now some visualizations for feature distribution in space. | sns.set(rc={'figure.figsize':(15,7)})
colours = ["maroon","coral","darkorchid","goldenrod","purple","darkgreen","darkviolet","saddlebrown","aqua","olive"]
index = -1
for i in df.columns[0:10]:
index = index + 1
fig = sns.kdeplot(df[i] , shade=True, color=colours[index])
plt.xlabel("Features")
plt.ylabel("Value")
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig)
sns.set(rc={'figure.figsize':(15,7)})
colours = ["maroon","coral","darkorchid","goldenrod","purple","darkgreen","darkviolet","saddlebrown","aqua","olive"]
index = -1
for i in df.columns[10:20]:
index = index + 1
ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])
plt.xlabel("Features")
plt.ylabel("Value")
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig)
sns.set(rc={'figure.figsize':(15,7)})
colours = ["maroon","coral","darkorchid","goldenrod","purple","darkgreen","darkviolet","saddlebrown","aqua","olive"]
index = -1
for i in df.columns[20:30]:
index = index + 1
ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])
plt.xlabel("Features")
plt.ylabel("Value")
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig)
sns.set(rc={'figure.figsize':(15,7)})
colours = ["maroon","coral","darkorchid","goldenrod","purple","darkgreen","darkviolet","saddlebrown","aqua","olive"]
index = -1
for i in df.columns[30:40]:
index = index + 1
ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])
plt.xlabel("Features")
plt.ylabel("Value")
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig)
sns.set(rc={'figure.figsize':(15,7)})
colours = ["maroon","coral","darkorchid","goldenrod","purple","darkgreen","darkviolet","saddlebrown","aqua","olive"]
index = -1
for i in df.columns[40:50]:
index = index + 1
ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])
plt.xlabel("Features")
plt.ylabel("Value")
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig)
sns.set(rc={'figure.figsize':(15,10)})
plt.subplot(221)
fig1 = sns.stripplot(x='Activity', y= df.loc[df['Activity']=="STANDING"].iloc[:,10], data= df.loc[df['Activity']=="STANDING"], jitter=True)
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig1)
plt.subplot(224)
fig2 = sns.stripplot(x='Activity', y= df.loc[df['Activity']=="STANDING"].iloc[:,11], data= df.loc[df['Activity']=="STANDING"], jitter=True)
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig2)
plt.subplot(223)
fig2 = sns.stripplot(x='Activity', y= df.loc[df['Activity']=="STANDING"].iloc[:,12], data= df.loc[df['Activity']=="STANDING"], jitter=True)
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig2)
plt.subplot(222)
fig2 = sns.stripplot(x='Activity', y= df.loc[df['Activity']=="STANDING"].iloc[:,13], data= df.loc[df['Activity']=="STANDING"], jitter=True)
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig2)
sns.set(rc={'figure.figsize':(15,5)})
fig1 = sns.stripplot(x='Activity', y= df.loc[df['subject']==15].iloc[:,7], data= df.loc[df['subject']==15], jitter=True)
plt.title("Feature Distribution")
plt.grid(True)
plt.show(fig1) | _____no_output_____ | Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
**Feature Scaling** **Pre-processing and data preparation to feed data into the Artificial Neural Network.** | from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(df.iloc[:,0:562])
mat_train = scaler.transform(df.iloc[:,0:562])
print(mat_train)
scaler = MinMaxScaler()
scaler.fit(test.iloc[:,0:562])
mat_test = scaler.transform(test.iloc[:,0:562])
print(mat_test)
temp = []
for i in df.Activity:
if i == "WALKING": temp.append(0)
if i == "WALKING_UPSTAIRS": temp.append(1)
if i == "WALKING_DOWNSTAIRS": temp.append(2)
if i == "SITTING": temp.append(3)
if i == "STANDING": temp.append(4)
if i == "LAYING": temp.append(5)
df["n_Activity"] = temp
temp = []
for i in test.Activity:
if i == "WALKING": temp.append(0)
if i == "WALKING_UPSTAIRS": temp.append(1)
if i == "WALKING_DOWNSTAIRS": temp.append(2)
if i == "SITTING": temp.append(3)
if i == "STANDING": temp.append(4)
if i == "LAYING": temp.append(5)
test["n_Activity"] = temp
df.drop(["Activity"] , axis = 1 , inplace = True)
test.drop(["Activity"] , axis = 1 , inplace = True)
from keras.utils import to_categorical
y_train = to_categorical(df.n_Activity , num_classes=6)
y_test = to_categorical(test.n_Activity , num_classes=6)
X_train = mat_train
X_test = mat_test | _____no_output_____ | Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
**The 562-dimensional feature vector is large and might cause overfitting during training.** **Feature selection with an extra-trees classifier and L1-based selection was also tried, but using all features gave slightly better results once the model's hyperparameters were tuned almost to their limit, which took some time.** **Fewer features would also mean less training time, but manual, context-based feature selection is not feasible here, and the automatic approach has already been discussed above.** | print(X_train.shape , y_train.shape)
print(X_test.shape , y_test.shape) | (7352, 562) (7352, 6)
(2947, 562) (2947, 6)
| Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
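The tree-based feature selection mentioned above can be sketched as follows. This is a minimal illustration on synthetic data (the `X`/`y` arrays here are random stand-ins, not the HAR features), not the tuned pipeline behind the reported results:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.RandomState(0)
X = rng.rand(100, 20)                  # synthetic stand-in for the 562 features
y = rng.randint(0, 6, 100)             # six activity classes

clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)
selector = SelectFromModel(clf, prefit=True)   # keep features above mean importance
X_reduced = selector.transform(X)
print(X.shape, "->", X_reduced.shape)  # (100, 20) -> (100, n_selected)
```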
**Setting up the necessary callbacks: model checkpointing and a learning-rate reducer.** | filepath="HAR_weights.hdf5"
from keras.callbacks import ReduceLROnPlateau , ModelCheckpoint
lr_reduce = ReduceLROnPlateau(monitor='val_acc', factor=0.1, epsilon=0.0001, patience=1, verbose=1)
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
from keras.models import Sequential
from keras.layers import Dense, Dropout , BatchNormalization
from sklearn.model_selection import train_test_split
from keras.utils import np_utils
from keras.optimizers import RMSprop, Adam | _____no_output_____ | Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
**The model architecture below is the best I could come up with after repeated tuning and changes to the network.** **In the end, the BatchNormalization layer helped to slightly boost the accuracy.** **Special care was taken with the learning rate and batch_size, to which the model is very sensitive; they had to be adjusted repeatedly to obtain one of the best results presented here.** | model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1] , activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(128, activation='relu'))
model.add(Dense(196, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(6, activation='sigmoid'))
model.compile(optimizer = Adam(lr = 0.0005),loss='categorical_crossentropy', metrics=['accuracy'])
print(model.summary()) | _________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_109 (Dense) (None, 64) 36032
_________________________________________________________________
dense_110 (Dense) (None, 64) 4160
_________________________________________________________________
batch_normalization_9 (Batch (None, 64) 256
_________________________________________________________________
dense_111 (Dense) (None, 128) 8320
_________________________________________________________________
dense_112 (Dense) (None, 196) 25284
_________________________________________________________________
dense_113 (Dense) (None, 32) 6304
_________________________________________________________________
dense_114 (Dense) (None, 6) 198
=================================================================
Total params: 80,554
Trainable params: 80,426
Non-trainable params: 128
_________________________________________________________________
None
| Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
Finally, the best model was checkpointed and got a validation loss of 0.0562 and a validation accuracy of 97.98% or ~98%. | history = model.fit(X_train, y_train , epochs=22 , batch_size = 256 , validation_data=(X_test, y_test) , callbacks=[checkpoint,lr_reduce])
from pylab import rcParams
rcParams['figure.figsize'] = 10, 4
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
from sklearn.metrics import confusion_matrix
model.load_weights("HAR_weights.hdf5")
pred = model.predict(X_test)
pred = np.argmax(pred,axis = 1)
y_true = np.argmax(y_test,axis = 1) | _____no_output_____ | Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
**The confusion matrix is plotted to get better insight into model performance, using mlxtend to avoid the extra plotting code that scikit-learn would require. The model's performance is evident from the concentration of values along the diagonal.** | CM = confusion_matrix(y_true, pred)
from mlxtend.plotting import plot_confusion_matrix
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(10, 5))
plt.show() | _____no_output_____ | Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
Precision of 95%, Recall of 94%, and f1-score of 94%. | from sklearn.metrics import classification_report , accuracy_score
print(classification_report(y_true, pred)) | precision recall f1-score support
0 0.99 0.91 0.95 496
1 0.98 0.89 0.93 471
2 0.82 0.99 0.90 420
3 0.94 0.90 0.92 491
4 0.94 0.95 0.94 532
5 0.98 1.00 0.99 537
avg / total 0.95 0.94 0.94 2947
| Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
**Exporting predictions.** | d = { "Index":np.arange(2947) , "Activity":pred }
final = pd.DataFrame(d)
final.to_csv( 'human_activity_predictions.csv' , index = False) | _____no_output_____ | Apache-2.0 | Human Activity Recognition (97.98 %).ipynb | parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables |
Supplemental InformationThis notebook is intended to serve as a supplement to the manuscript "High-throughput workflows for determining adsorption energies on solid surfaces." It outlines basic use of the code and workflow software that has been developed for processing surface slabs and placing adsorbates according to symmetrically distinct sites on surface facets. InstallationTo use this notebook, we recommend installing python via [Anaconda](https://www.continuum.io/downloads), which includes jupyter and the associated iPython notebook software.The code used in this project primarily makes use of two packages, pymatgen and atomate, which are installable via pip or the matsci channel on conda (e.g. `conda install -c matsci pymatgen atomate`). Development versions with editable code may be installed by cloning the repositories and using `python setup.py develop`. Example 1: AdsorbateSiteFinder (pymatgen)An example using the AdsorbateSiteFinder class in pymatgen is shown below. We begin with an import statement for the necessary modules. To use the MP RESTful interface, you must provide your own API key either in the MPRester call i.e. ```mpr=MPRester("YOUR_API_KEY")``` or provide it in your .pmgrc.yaml configuration file. API keys can be accessed at materialsproject.org under your "Dashboard." | # Import statements
from pymatgen import Structure, Lattice, MPRester, Molecule
from pymatgen.analysis.adsorption import *
from pymatgen.core.surface import generate_all_slabs
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from matplotlib import pyplot as plt
%matplotlib inline
# Note that you must provide your own API Key, which can
# be accessed via the Dashboard at materialsproject.org
mpr = MPRester() | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
We create a simple fcc structure, generate its distinct slabs, and select the slab with a Miller index of (1, 1, 1). | fcc_ni = Structure.from_spacegroup("Fm-3m", Lattice.cubic(3.5), ["Ni"], [[0, 0, 0]])
slabs = generate_all_slabs(fcc_ni, max_index=1, min_slab_size=8.0,
min_vacuum_size=10.0)
ni_111 = [slab for slab in slabs if slab.miller_index==(1,1,1)][0] | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
We make an instance of the AdsorbateSiteFinder and use it to find the relevant adsorption sites. | asf_ni_111 = AdsorbateSiteFinder(ni_111)
ads_sites = asf_ni_111.find_adsorption_sites()
print(ads_sites)
assert len(ads_sites) == 4 | {'ontop': [array([1.23743687, 0.71443451, 9.0725408 ])], 'bridge': [array([-0.61871843, 1.78608627, 9.0725408 ])], 'hollow': [array([4.27067681e-16, 7.39702921e-16, 9.07254080e+00]), array([8.80455477e-16, 1.42886902e+00, 9.07254080e+00])], 'all': [array([1.23743687, 0.71443451, 9.0725408 ]), array([-0.61871843, 1.78608627, 9.0725408 ]), array([4.27067681e-16, 7.39702921e-16, 9.07254080e+00]), array([1.63125081e-15, 1.42886902e+00, 9.07254080e+00])]}
| BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
We visualize the sites using a tool from pymatgen. | fig = plt.figure()
ax = fig.add_subplot(111)
plot_slab(ni_111, ax, adsorption_sites=True) | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
Use the `AdsorbateSiteFinder.generate_adsorption_structures` method to generate structures of adsorbates. | fig = plt.figure()
ax = fig.add_subplot(111)
adsorbate = Molecule("H", [[0, 0, 0]])
ads_structs = asf_ni_111.generate_adsorption_structures(adsorbate,
repeat=[1, 1, 1])
plot_slab(ads_structs[0], ax, adsorption_sites=False, decay=0.09) | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
Example 2: AdsorbateSiteFinder for various surfacesIn this example, the AdsorbateSiteFinder is used to find adsorption sites on different structures and miller indices. | fig = plt.figure()
axes = [fig.add_subplot(2, 3, i) for i in range(1, 7)]
mats = {"mp-23":(1, 0, 0), # FCC Ni
"mp-2":(1, 1, 0), # FCC Au
"mp-13":(1, 1, 0), # BCC Fe
"mp-33":(0, 0, 1), # HCP Ru
"mp-30": (2, 1, 1),
"mp-5229":(1, 0, 0),
} # Cubic SrTiO3
#"mp-2133":(0, 1, 1)} # Wurtzite ZnO
for n, (mp_id, m_index) in enumerate(mats.items()):
struct = mpr.get_structure_by_material_id(mp_id)
struct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()
slabs = generate_all_slabs(struct, 1, 5.0, 2.0, center_slab=True)
slab_dict = {slab.miller_index:slab for slab in slabs}
asf = AdsorbateSiteFinder.from_bulk_and_miller(struct, m_index, undercoord_threshold=0.10)
plot_slab(asf.slab, axes[n])
ads_sites = asf.find_adsorption_sites()
sop = get_rot(asf.slab)
ads_sites = [sop.operate(ads_site)[:2].tolist()
for ads_site in ads_sites["all"]]
axes[n].plot(*zip(*ads_sites), color='k', marker='x',
markersize=10, mew=1, linestyle='', zorder=10000)
mi_string = "".join([str(i) for i in m_index])
axes[n].set_title("{}({})".format(struct.composition.reduced_formula, mi_string))
axes[n].set_xticks([])
axes[n].set_yticks([])
axes[4].set_xlim(-2, 5)
axes[4].set_ylim(-2, 5)
fig.savefig('slabs.png', dpi=200)
!open slabs.png | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
Example 3: Generating a workflow from atomateIn this example, we demonstrate how atomate (formerly MatMethods) may be used to generate a full workflow for the determination of the DFT energies from which adsorption energies may be calculated. Note that this requires a working instance of [FireWorks](https://pythonhosted.org/FireWorks/index.html) and its dependency, [MongoDB](https://www.mongodb.com/), which can be installed via [Anaconda](https://anaconda.org/anaconda/mongodb). | from fireworks import LaunchPad
lpad = LaunchPad()
lpad.reset('', require_password=False) | 2018-07-24 09:56:31,982 INFO Performing db tune-up
2018-07-24 09:56:31,995 INFO LaunchPad was RESET.
| BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
Import the necessary workflow-generating function from atomate: | from atomate.vasp.workflows.base.adsorption import get_wf_surface, get_wf_surface_all_slabs | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
Adsorption configurations take the form of a dictionary with the miller index as a string key and a list of pymatgen Molecule instances as the values. | co = Molecule("CO", [[0, 0, 0], [0, 0, 1.23]])
h = Molecule("H", [[0, 0, 0]]) | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
Workflows are generated using a slab and a list of molecules. | struct = mpr.get_structure_by_material_id("mp-23") # fcc Ni
struct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()
slabs = generate_all_slabs(struct, 1, 5.0, 2.0, center_slab=True)
slab_dict = {slab.miller_index:slab for slab in slabs}
ni_slab_111 = slab_dict[(1, 1, 1)]
wf = get_wf_surface([ni_slab_111], molecules=[co, h])
lpad.add_wf(wf) | 2018-07-24 09:56:33,057 INFO Added a workflow. id_map: {-9: 1, -8: 2, -7: 3, -6: 4, -5: 5, -4: 6, -3: 7, -2: 8, -1: 9}
| BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
The workflow may be inspected as below. Note that there are 9 optimization tasks, corresponding to the slab plus 4 distinct adsorption configurations for each of the 2 adsorbates. Details on running FireWorks, including [singleshot launching](https://pythonhosted.org/FireWorks/worker_tutorial.html#launch-a-rocket-on-a-worker-machine-fireworker), [queue submission](https://pythonhosted.org/FireWorks/queue_tutorial.html), [workflow management](https://pythonhosted.org/FireWorks/defuse_tutorial.html), and more can be found in the [FireWorks documentation](https://pythonhosted.org/FireWorks/index.html). | lpad.get_wf_summary_dict(1)
Note also that running FireWorks via atomate may require system specific tuning (e. g. for VASP parameters). More information is available in the [atomate documentation](http://pythonhosted.org/atomate/). Example 4 - Screening of oxygen evolution electrocatalysts on binary oxides This final example is intended to demonstrate how to use the MP API and the adsorption workflow to do an initial high-throughput study of oxygen evolution electrocatalysis on binary oxides of transition metals. | from pymatgen.core.periodic_table import *
from pymatgen.core.surface import get_symmetrically_distinct_miller_indices
import tqdm
lpad.reset('', require_password=False) | 2018-07-24 09:56:33,079 INFO Performing db tune-up
2018-07-24 09:56:33,088 INFO LaunchPad was RESET.
| BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
For oxygen evolution, a common metric for the catalytic activity of a given catalyst is the theoretical overpotential corresponding to the mechanism that proceeds through OH\*, O\*, and OOH\*. So we can define our adsorbates: | OH = Molecule("OH", [[0, 0, 0], [-0.793, 0.384, 0.422]])
O = Molecule("O", [[0, 0, 0]])
OOH = Molecule("OOH", [[0, 0, 0], [-1.067, -0.403, 0.796],
[-0.696, -0.272, 1.706]])
adsorbates = [OH, O, OOH] | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
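For context on the screening target: the theoretical OER overpotential is commonly obtained from the free-energy changes of the four proton-coupled electron-transfer steps (H2O → OH\* → O\* → OOH\* → O2), which are computed from adsorption energies like the ones this workflow produces. The sketch below uses hypothetical ΔG values (in eV) purely for illustration; it is not part of the workflow code.

```python
# Hypothetical free-energy changes (eV) for the four OER steps on some surface.
# At 0 V vs RHE they must sum to 4.92 eV (the total free energy of water oxidation).
dG_steps = [1.60, 1.40, 1.70, 0.22]
assert abs(sum(dG_steps) - 4.92) < 1e-9

# The theoretical overpotential is set by the largest single step,
# measured relative to the ideal 1.23 eV per electron transfer.
eta = max(dG_steps) - 1.23
print(f"theoretical overpotential: {eta:.2f} V")
```

A surface where all four steps were exactly 1.23 eV would have zero theoretical overpotential; real catalysts are screened by how close they come to that limit.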
Then we can retrieve the structures using the MP REST interface and write a simple for loop that creates all of the workflows corresponding to every slab and every adsorption site for each material. The code below takes ~15 minutes; it could be parallelized, but is kept serial here for simplicity. | elements = [Element.from_Z(i) for i in range(1, 103)]
trans_metals = [el for el in elements if el.is_transition_metal]
# tqdm adds a progress bar so we can see the progress of the for loop
for metal in tqdm.tqdm_notebook(trans_metals):
# Get relatively stable structures with small unit cells
data = mpr.get_data("{}-O".format(metal.symbol))
data = [datum for datum in data if datum["e_above_hull"] < 0.05]
data = sorted(data, key = lambda x: x["nsites"])
struct = Structure.from_str(data[0]["cif"], fmt='cif')
# Put in conventional cell settings
struct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()
    # Generate the workflow over all symmetrically distinct low-index slabs
wf = get_wf_surface_all_slabs(struct, adsorbates)
lpad.add_wf(wf)
print("Processed: {}".format(struct.formula)) | _____no_output_____ | BSD-3-Clause | notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb | utf/matgenb |
PinSage MovieRecommendationThis notebook has code that can be used to use PinSage for an "implicit recommender task." In this case, the data are movie ratings, so your data are users and their ratings for some set of movies. The data are split into a training and a test set, and the goal is to learn representations of users/movies that enable you to recommend movies a person would actually watch. You check the quality of your recommendations by using the "test data" to see if they actually already watched any of the movies you recommended. Specifically, you measure what percentage of your "top-10 recommendations" are "hits" (movies they actually watched, i.e., movies in the test set that they have rated). To get started.0. Go to "Runtime"=>"Change runtime type" and make sure you are using the GPU, and that you uncheck "Omit code cell output when saving this notebook" so that cell outputs will be saved in the file. (When I save notebooks to github, I exclude the cell outputs because the file is then smaller).1. Run the "PinSage Prep" section (~5-10min). 2. Run the "PinSage Code" section3. Open up the "Check the model with data" section to see what the PinSage model looks like4. Go to the section "PinSage Train on Implicit Task" and run the "baseline model, movie id only" section. This is the minimal model, and only tries to learn an embedding for movies without any extra information about the movies. This will be your "baseline model" and the critical question is whether adding more information (e.g., plot embeddings, poster embeddings), or changing hyperparameters improves the model's performance. (~90 min)5. Choose 1 more of the suggested "possible variations" to run and see what factors influence the model's performance.6. Write up a brief Summary & Conclusions of your work. PinSage Prep This chunk downloads and pre-processes movie data, preparing the graphs for training. | !pip install dgl-cu101 --upgrade
!python -m pip install dask[dataframe] --upgrade
!pip install madgrad
!wget -c http://files.grouplens.org/datasets/movielens/ml-1m.zip
!unzip ml-1m.zip
!rm ml-1m.zip
!wget -c https://www.dropbox.com/s/4blru88qafx1i4l/ml_25m_tmdb_plot_paraphrase-distilroberta-base-v1.pth.tar
!wget --quiet -c https://www.dropbox.com/s/8ty0mis0u3eza45/tmdb_backdrops_w780_SwinTransformer_avgpool.pth.tar
!wget --quiet -c https://www.dropbox.com/s/rrovh5ludxonzxs/tmdb_backdrops_w780_VGG_classifier.4.pth.tar
!wget --quiet -c https://www.dropbox.com/s/ixfo4yxq58utj9c/tmdb_posters_w500_VGG_classifier.4.pth.tar
!wget --quiet -c https://www.dropbox.com/s/u5akhzatpmrck3a/tmdb_posters_w500_SwinTransformer_avgpool.pth.tar
!wget --quiet -c https://www.dropbox.com/s/qeur875d23zivko/ml_25m_links_imdb_synopsis_paraphrase-distilroberta-base-v1.pth.tar
!wget --quiet -c https://www.dropbox.com/s/vpi2uno5plp2kvd/ml_25m_links_imdb_plot_paraphrase-distilroberta-base-v1.pth.tar
!wget --quiet -c https://www.dropbox.com/s/wrwcprh2wih7rz5/ml_25m_links_imdb_longest_paraphrase-distilroberta-base-v1.pth.tar
!wget --quiet -c https://www.dropbox.com/s/dgsom5hcdxjn8rs/ml_25m_links_imdb_full_plot_paraphrase-distilroberta-base-v1.pth.tar
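The hit-rate evaluation described in the introduction above (what fraction of users have a held-out movie in their top-10 list) can be sketched with plain NumPy. The `recs` and `held_out` arrays below are hypothetical examples, not outputs of the PinSage code.

```python
import numpy as np

def hit_at_k(recs, held_out, k=10):
    """Fraction of users whose held-out test item appears in their top-k list."""
    hits = sum(1 for user_recs, item in zip(recs, held_out)
               if item in user_recs[:k])
    return hits / len(held_out)

# Hypothetical top-10 recommendation lists for 3 users, plus their held-out items
recs = np.array([[5, 2, 9, 1, 7, 3, 4, 8, 6, 0],
                 [1, 0, 3, 2, 4, 5, 6, 7, 8, 9],
                 [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]])
held_out = np.array([7, 11, 0])  # user 1's item (11) was never recommended
print(hit_at_k(recs, held_out))  # 2 of 3 users are hits
```

This is the number to compare across model variants: the baseline (movie-id-only) model gives a reference hit rate, and any added feature should move it up.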
"""Graph builder from pandas dataframes"""
from collections import namedtuple
import torch
from pandas.api.types import is_numeric_dtype, is_categorical_dtype
import dgl
__all__ = ['PandasGraphBuilder']
def _series_to_tensor(series):
    # is_categorical() is deprecated in recent pandas; is_categorical_dtype is the supported check
    if is_categorical_dtype(series):
        return torch.LongTensor(series.cat.codes.values.astype('int64'))
    else:  # numeric
        return torch.FloatTensor(series.values)
class PandasGraphBuilder(object):
"""Creates a heterogeneous graph from multiple pandas dataframes.
Examples
--------
Let's say we have the following three pandas dataframes:
User table ``users``:
=========== =========== =======
``user_id`` ``country`` ``age``
=========== =========== =======
XYZZY U.S. 25
FOO China 24
BAR China 23
=========== =========== =======
Game table ``games``:
=========== ========= ============== ==================
``game_id`` ``title`` ``is_sandbox`` ``is_multiplayer``
=========== ========= ============== ==================
1 Minecraft True True
2 Tetris 99 False True
=========== ========= ============== ==================
Play relationship table ``plays``:
=========== =========== =========
``user_id`` ``game_id`` ``hours``
=========== =========== =========
XYZZY 1 24
FOO 1 20
FOO 2 16
BAR 2 28
=========== =========== =========
One could then create a bidirectional bipartite graph as follows:
>>> builder = PandasGraphBuilder()
>>> builder.add_entities(users, 'user_id', 'user')
>>> builder.add_entities(games, 'game_id', 'game')
>>> builder.add_binary_relations(plays, 'user_id', 'game_id', 'plays')
>>> builder.add_binary_relations(plays, 'game_id', 'user_id', 'played-by')
>>> g = builder.build()
>>> g.number_of_nodes('user')
3
>>> g.number_of_edges('plays')
4
"""
def __init__(self):
self.entity_tables = {}
self.relation_tables = {}
self.entity_pk_to_name = {} # mapping from primary key name to entity name
self.entity_pk = {} # mapping from entity name to primary key
self.entity_key_map = {} # mapping from entity names to primary key values
self.num_nodes_per_type = {}
self.edges_per_relation = {}
self.relation_name_to_etype = {}
self.relation_src_key = {} # mapping from relation name to source key
self.relation_dst_key = {} # mapping from relation name to destination key
def add_entities(self, entity_table, primary_key, name):
entities = entity_table[primary_key].astype('category')
#if not entity_table[primary_key].is_unique:
if not (entities.value_counts() == 1).all():
raise ValueError('Different entity with the same primary key detected.')
# preserve the category order in the original entity table
entities = entities.cat.reorder_categories(entity_table[primary_key].values)
self.entity_pk_to_name[primary_key] = name
self.entity_pk[name] = primary_key
self.num_nodes_per_type[name] = entity_table.shape[0]
#self.num_nodes_per_type[name] = len(entities.cat.categories)
self.entity_key_map[name] = entities
self.entity_tables[name] = entity_table
def add_binary_relations(self, relation_table, source_key, destination_key, name):
src = relation_table[source_key].astype('category')
src = src.cat.set_categories(
self.entity_key_map[self.entity_pk_to_name[source_key]].cat.categories)
dst = relation_table[destination_key].astype('category')
dst = dst.cat.set_categories(
self.entity_key_map[self.entity_pk_to_name[destination_key]].cat.categories)
if src.isnull().any():
raise ValueError(
'Some source entities in relation %s do not exist in entity %s.' %
(name, source_key))
if dst.isnull().any():
raise ValueError(
'Some destination entities in relation %s do not exist in entity %s.' %
(name, destination_key))
srctype = self.entity_pk_to_name[source_key]
dsttype = self.entity_pk_to_name[destination_key]
etype = (srctype, name, dsttype)
self.relation_name_to_etype[name] = etype
self.edges_per_relation[etype] = (src.cat.codes.values.astype('int64'), dst.cat.codes.values.astype('int64'))
self.relation_tables[name] = relation_table
self.relation_src_key[name] = source_key
self.relation_dst_key[name] = destination_key
def build(self):
# Create heterograph
graph = dgl.heterograph(self.edges_per_relation, self.num_nodes_per_type)
return graph
"""
Script that reads from raw MovieLens-1M data and dumps into a pickle
file the following:
* A heterogeneous graph with categorical features.
* A list with all the movie titles. The movie titles correspond to
the movie nodes in the heterogeneous graph.
This script exemplifies how to prepare tabular data with textual
features. Since DGL graphs do not store variable-length features, we
instead put variable-length features into a more suitable container
(e.g. torchtext to handle list of texts)
"""
import os
import re
import argparse
import pickle
import pandas as pd
import numpy as np
import scipy.sparse as ssp
import dgl
import torch
import torchtext
#from builder import PandasGraphBuilder
import torch
import dgl
import numpy as np
import scipy.sparse as ssp
import tqdm
import dask.dataframe as dd
# This is the train-test split method that most recommender-system papers running on MovieLens
# take. It follows the intuition of "training on the past and predicting the future".
# One can also change the threshold to make validation and test set take larger proportions.
def train_test_split_by_time(df, timestamp, user):
    df['train_mask'] = np.ones((len(df),), dtype=bool)  # np.bool was removed from NumPy; use the builtin bool
    df['val_mask'] = np.zeros((len(df),), dtype=bool)
    df['test_mask'] = np.zeros((len(df),), dtype=bool)
df = dd.from_pandas(df, npartitions=10)
def train_test_split(df):
df = df.sort_values([timestamp])
if df.shape[0] > 1:
df.iloc[-1, -3] = False
df.iloc[-1, -1] = True
if df.shape[0] > 2:
df.iloc[-2, -3] = False
df.iloc[-2, -2] = True
return df
df = df.groupby(user, group_keys=False).apply(train_test_split).compute(scheduler='processes').sort_index()
print(df[df[user] == df[user].unique()[0]].sort_values(timestamp))
return df['train_mask'].to_numpy().nonzero()[0], \
df['val_mask'].to_numpy().nonzero()[0], \
df['test_mask'].to_numpy().nonzero()[0]
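The dask-based function above can be hard to follow; the pandas-only sketch below reproduces the same leave-one-out idea (each user's last interaction goes to test, the second-to-last to validation, and only when the user has enough interactions). The column names mirror the real data, but the toy values are made up.

```python
import pandas as pd

def leave_one_out_masks(df, timestamp='timestamp', user='user_id'):
    # Rank each user's interactions from newest (1) to oldest
    rank = df.groupby(user)[timestamp].rank(method='first', ascending=False)
    counts = df.groupby(user)[timestamp].transform('size')
    test_mask = (rank == 1) & (counts > 1)   # last interaction -> test
    val_mask = (rank == 2) & (counts > 2)    # second-to-last -> validation
    train_mask = ~(test_mask | val_mask)
    return train_mask, val_mask, test_mask

# Toy ratings frame: user 1 has 3 interactions, user 2 has only 2
toy = pd.DataFrame({'user_id':  [1, 1, 1, 2, 2],
                    'movie_id': [10, 20, 30, 10, 40],
                    'timestamp': [100, 200, 300, 150, 50]})
train, val, test = leave_one_out_masks(toy)
print(toy.loc[test, 'movie_id'].tolist())  # [30, 10]
```

Note that user 2, with only two interactions, contributes nothing to validation, matching the `df.shape[0] > 2` guard in the function above.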
def build_train_graph(g, train_indices, utype, itype, etype, etype_rev):
train_g = g.edge_subgraph(
{etype: train_indices, etype_rev: train_indices},
preserve_nodes=True)
# remove the induced node IDs - should be assigned by model instead
del train_g.nodes[utype].data[dgl.NID]
del train_g.nodes[itype].data[dgl.NID]
# copy features
for ntype in g.ntypes:
for col, data in g.nodes[ntype].data.items():
train_g.nodes[ntype].data[col] = data
for etype in g.etypes:
for col, data in g.edges[etype].data.items():
train_g.edges[etype].data[col] = data[train_g.edges[etype].data[dgl.EID]]
return train_g
def build_val_test_matrix(g, val_indices, test_indices, utype, itype, etype):
n_users = g.number_of_nodes(utype)
n_items = g.number_of_nodes(itype)
val_src, val_dst = g.find_edges(val_indices, etype=etype)
test_src, test_dst = g.find_edges(test_indices, etype=etype)
val_src = val_src.numpy()
val_dst = val_dst.numpy()
test_src = test_src.numpy()
test_dst = test_dst.numpy()
val_matrix = ssp.coo_matrix((np.ones_like(val_src), (val_src, val_dst)), (n_users, n_items))
test_matrix = ssp.coo_matrix((np.ones_like(test_src), (test_src, test_dst)), (n_users, n_items))
return val_matrix, test_matrix
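`build_val_test_matrix` stores the held-out interactions as sparse user-by-item matrices; the minimal scipy sketch below shows the same `coo_matrix` construction with made-up edge lists.

```python
import numpy as np
import scipy.sparse as ssp

# Made-up held-out edges: three users, four items;
# user 0 held out item 2 and user 2 held out item 0
src = np.array([0, 2])
dst = np.array([2, 0])
mat = ssp.coo_matrix((np.ones_like(src), (src, dst)), shape=(3, 4))
dense = mat.toarray()
print(dense)
```

Each nonzero entry marks one held-out (user, item) interaction that the evaluation checks recommendations against.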
def linear_normalize(values):
return (values - values.min(0, keepdims=True)) / \
(values.max(0, keepdims=True) - values.min(0, keepdims=True))
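`linear_normalize` rescales each feature column to [0, 1]; a quick check on a hypothetical 2-column array (restating the same definition so the snippet is self-contained):

```python
import numpy as np

def linear_normalize(values):
    # Same definition as above: per-column min-max scaling to [0, 1]
    return (values - values.min(0, keepdims=True)) / \
        (values.max(0, keepdims=True) - values.min(0, keepdims=True))

x = np.array([[0.0, 10.0],
              [5.0, 20.0],
              [10.0, 30.0]])
out = linear_normalize(x)
print(out)
```

Note that a constant column would divide by zero here; guarding the denominator (e.g. with a small epsilon) would make the helper more robust.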
def process_movielens1m(directory, output_path):
## Build heterogeneous graph
# Load data
users = []
with open(os.path.join(directory, 'users.dat'), encoding='latin1') as f:
for l in f:
id_, gender, age, occupation, zip_ = l.strip().split('::')
users.append({
'user_id': int(id_),
'gender': gender,
'age': age,
'occupation': occupation,
'zip': zip_,
})
users = pd.DataFrame(users).astype('category')
movies = []
with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:
for l in f:
id_, title, genres = l.strip().split('::')
genres_set = set(genres.split('|'))
# extract year
assert re.match(r'.*\([0-9]{4}\)$', title)
year = title[-5:-1]
title = title[:-6].strip()
data = {'movie_id': int(id_), 'title': title, 'year': year}
for g in genres_set:
data[g] = True
movies.append(data)
movies = pd.DataFrame(movies).astype({'year': 'category'})
ratings = []
with open(os.path.join(directory, 'ratings.dat'), encoding='latin1') as f:
for l in f:
user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]
ratings.append({
'user_id': user_id,
'movie_id': movie_id,
'rating': rating,
'timestamp': timestamp,
})
ratings = pd.DataFrame(ratings)
# Filter the users and items that never appear in the rating table.
distinct_users_in_ratings = ratings['user_id'].unique()
distinct_movies_in_ratings = ratings['movie_id'].unique()
users = users[users['user_id'].isin(distinct_users_in_ratings)]
movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]
# Group the movie features into genres (a vector), year (a category), title (a string)
genre_columns = movies.columns.drop(['movie_id', 'title', 'year'])
movies[genre_columns] = movies[genre_columns].fillna(False).astype('bool')
movies_categorical = movies.drop('title', axis=1)
# Build graph
graph_builder = PandasGraphBuilder()
graph_builder.add_entities(users, 'user_id', 'user')
graph_builder.add_entities(movies_categorical, 'movie_id', 'movie')
graph_builder.add_binary_relations(ratings, 'user_id', 'movie_id', 'watched')
graph_builder.add_binary_relations(ratings, 'movie_id', 'user_id', 'watched-by')
g = graph_builder.build()
# Assign features.
# Note that variable-sized features such as texts or images are handled elsewhere.
g.nodes['user'].data['gender'] = torch.LongTensor(users['gender'].cat.codes.values)
g.nodes['user'].data['age'] = torch.LongTensor(users['age'].cat.codes.values)
g.nodes['user'].data['occupation'] = torch.LongTensor(users['occupation'].cat.codes.values)
g.nodes['user'].data['zip'] = torch.LongTensor(users['zip'].cat.codes.values)
g.nodes['movie'].data['year'] = torch.LongTensor(movies['year'].cat.codes.values)
g.nodes['movie'].data['genre'] = torch.FloatTensor(movies[genre_columns].values)
g.edges['watched'].data['rating'] = torch.LongTensor(ratings['rating'].values)
g.edges['watched'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)
g.edges['watched-by'].data['rating'] = torch.LongTensor(ratings['rating'].values)
g.edges['watched-by'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)
# Train-validation-test split
# This is a little bit tricky as we want to select the last interaction for test, and the
# second-to-last interaction for validation.
train_indices, val_indices, test_indices = train_test_split_by_time(ratings, 'timestamp', 'user_id')
# Build the graph with training interactions only.
train_g = build_train_graph(g, train_indices, 'user', 'movie', 'watched', 'watched-by')
assert train_g.out_degrees(etype='watched').min() > 0
# Build the user-item sparse matrix for validation and test set.
val_matrix, test_matrix = build_val_test_matrix(g, val_indices, test_indices, 'user', 'movie', 'watched')
## Build title set
movie_textual_dataset = {'title': movies['title'].values}
# The model should build their own vocabulary and process the texts. Here is one example
# of using torchtext to pad and numericalize a batch of strings.
# field = torchtext.data.Field(include_lengths=True, lower=True, batch_first=True)
# examples = [torchtext.data.Example.fromlist([t], [('title', title_field)]) for t in texts]
# titleset = torchtext.data.Dataset(examples, [('title', title_field)])
# field.build_vocab(titleset.title, vectors='fasttext.simple.300d')
# token_ids, lengths = field.process([examples[0].title, examples[1].title])
## Dump the graph and the datasets
dataset = {
'train-graph': train_g,
'val-matrix': val_matrix,
'test-matrix': test_matrix,
'item-texts': movie_textual_dataset,
'item-images': None,
'user-type': 'user',
'item-type': 'movie',
'user-to-item-type': 'watched',
'item-to-user-type': 'watched-by',
'timestamp-edge-column': 'timestamp'}
with open(output_path, 'wb') as f:
pickle.dump(dataset, f)
from IPython.core.debugger import set_trace
from fastprogress.fastprogress import progress_bar
def process_movielens1m_text(directory, output_path, text_embeddings,
only_id=False):
## Build heterogeneous graph
# Load plot embeddings
embeddings = torch.load(text_embeddings, map_location='cpu')
# Load data
users = []
with open(os.path.join(directory, 'users.dat'), encoding='latin1') as f:
for l in f:
id_, gender, age, occupation, zip_ = l.strip().split('::')
users.append({
'user_id': int(id_),
'gender': gender,
'age': age,
'occupation': occupation,
'zip': zip_,
})
users = pd.DataFrame(users).astype('category')
movies = []
with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:
for l in f:
id_, title, genres = l.strip().split('::')
genres_set = set(genres.split('|'))
# extract year
assert re.match(r'.*\([0-9]{4}\)$', title)
year = title[-5:-1]
title = title[:-6].strip()
data = {'movie_id': int(id_), 'title': title, 'year': year}
for g in genres_set:
data[g] = True
movies.append(data)
movies = pd.DataFrame(movies).astype({'year': 'category'})
ratings = []
with open(os.path.join(directory, 'ratings.dat'), encoding='latin1') as f:
for l in f:
user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]
ratings.append({
'user_id': user_id,
'movie_id': movie_id,
'rating': rating,
'timestamp': timestamp,
})
ratings = pd.DataFrame(ratings)
# Filter the users and items that never appear in the rating table.
distinct_users_in_ratings = ratings['user_id'].unique()
distinct_movies_in_ratings = ratings['movie_id'].unique()
users = users[users['user_id'].isin(distinct_users_in_ratings)]
movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]
# Filter users and items for movies that don't have embeddings
distinct_movies = movies['movie_id'].unique()
# drop embeddings for movies not in set
distinct_movies_with_embeddings = np.array(embeddings['ml_ids'])
embedding_has_rating = np.in1d(distinct_movies_with_embeddings, distinct_movies)
distinct_movies_with_embeddings = distinct_movies_with_embeddings[embedding_has_rating]
# drop movies without embedding
movie_has_embedding = np.in1d(distinct_movies, distinct_movies_with_embeddings)
rated_movies_with_embeddings = distinct_movies[movie_has_embedding]
# Filter ratings, users, movies
ratings = ratings[ratings['movie_id'].isin(rated_movies_with_embeddings)]
distinct_users_in_ratings = ratings['user_id'].unique()
distinct_movies_in_ratings = ratings['movie_id'].unique()
    # filtering users breaks everything, so don't
#users = users[users['user_id'].isin(distinct_users_in_ratings)]
movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]
# align the plot data with the movies dataframe
# use_embeddings = np.in1d(np.array(embeddings['ml_ids']), movies['movie_id'].unique())
plot_data = []
for r,movie in progress_bar(movies.iterrows(), total=len(movies)):
idx = embeddings['ml_ids'].index(movie.movie_id)
plot_data.append(embeddings['embedding'][idx])
# Group the movie features into genres (a vector), year (a category), title (a string)
genre_columns = movies.columns.drop(['movie_id', 'title', 'year'])
movies[genre_columns] = movies[genre_columns].fillna(False).astype('bool')
movies_categorical = movies.drop('title', axis=1)
# Build graph
graph_builder = PandasGraphBuilder()
graph_builder.add_entities(users, 'user_id', 'user')
graph_builder.add_entities(movies_categorical, 'movie_id', 'movie')
graph_builder.add_binary_relations(ratings, 'user_id', 'movie_id', 'watched')
graph_builder.add_binary_relations(ratings, 'movie_id', 'user_id', 'watched-by')
g = graph_builder.build()
# Assign features.
# Note that variable-sized features such as texts or images are handled elsewhere.
g.nodes['user'].data['gender'] = torch.LongTensor(users['gender'].cat.codes.values)
g.nodes['user'].data['age'] = torch.LongTensor(users['age'].cat.codes.values)
g.nodes['user'].data['occupation'] = torch.LongTensor(users['occupation'].cat.codes.values)
g.nodes['user'].data['zip'] = torch.LongTensor(users['zip'].cat.codes.values)
    if not only_id:
g.nodes['movie'].data['year'] = torch.LongTensor(movies['year'].cat.codes.values)
g.nodes['movie'].data['genre'] = torch.FloatTensor(movies[genre_columns].values)
g.nodes['movie'].data['plot'] = torch.stack(plot_data)
g.edges['watched'].data['rating'] = torch.LongTensor(ratings['rating'].values)
g.edges['watched'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)
g.edges['watched-by'].data['rating'] = torch.LongTensor(ratings['rating'].values)
g.edges['watched-by'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)
# Train-validation-test split
# This is a little bit tricky as we want to select the last interaction for test, and the
# second-to-last interaction for validation.
train_indices, val_indices, test_indices = train_test_split_by_time(ratings, 'timestamp', 'user_id')
# Build the graph with training interactions only.
train_g = build_train_graph(g, train_indices, 'user', 'movie', 'watched', 'watched-by')
assert train_g.out_degrees(etype='watched').min() > 0
# Build the user-item sparse matrix for validation and test set.
val_matrix, test_matrix = build_val_test_matrix(g, val_indices, test_indices, 'user', 'movie', 'watched')
## Build title set
movie_textual_dataset = {'title': movies['title'].values}
# The model should build their own vocabulary and process the texts. Here is one example
# of using torchtext to pad and numericalize a batch of strings.
# field = torchtext.data.Field(include_lengths=True, lower=True, batch_first=True)
# examples = [torchtext.data.Example.fromlist([t], [('title', title_field)]) for t in texts]
# titleset = torchtext.data.Dataset(examples, [('title', title_field)])
# field.build_vocab(titleset.title, vectors='fasttext.simple.300d')
# token_ids, lengths = field.process([examples[0].title, examples[1].title])
## Dump the graph and the datasets
dataset = {
'train-graph': train_g,
'val-matrix': val_matrix,
'test-matrix': test_matrix,
'item-texts': movie_textual_dataset,
'item-images': None,
'user-type': 'user',
'item-type': 'movie',
'user-to-item-type': 'watched',
'item-to-user-type': 'watched-by',
'timestamp-edge-column': 'timestamp'}
with open(output_path, 'wb') as f:
pickle.dump(dataset, f)
def process_movielens1m_posters(directory, output_path, image_embeddings):
## Build heterogeneous graph
# Load plot embeddings
embeddings = torch.load(image_embeddings, map_location='cpu')
# Load data
users = []
with open(os.path.join(directory, 'users.dat'), encoding='latin1') as f:
for l in f:
id_, gender, age, occupation, zip_ = l.strip().split('::')
users.append({
'user_id': int(id_),
'gender': gender,
'age': age,
'occupation': occupation,
'zip': zip_,
})
users = pd.DataFrame(users).astype('category')
movies = []
with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:
for l in f:
id_, title, genres = l.strip().split('::')
genres_set = set(genres.split('|'))
# extract year
assert re.match(r'.*\([0-9]{4}\)$', title)
year = title[-5:-1]
title = title[:-6].strip()
data = {'movie_id': int(id_), 'title': title, 'year': year}
for g in genres_set:
data[g] = True
movies.append(data)
movies = pd.DataFrame(movies).astype({'year': 'category'})
ratings = []
with open(os.path.join(directory, 'ratings.dat'), encoding='latin1') as f:
for l in f:
user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]
ratings.append({
'user_id': user_id,
'movie_id': movie_id,
'rating': rating,
'timestamp': timestamp,
})
ratings = pd.DataFrame(ratings)
# Filter the users and items that never appear in the rating table.
distinct_users_in_ratings = ratings['user_id'].unique()
distinct_movies_in_ratings = ratings['movie_id'].unique()
users = users[users['user_id'].isin(distinct_users_in_ratings)]
movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]
# Filter users and items for movies that don't have embeddings
distinct_movies = movies['movie_id'].unique()
# drop embeddings for movies not in set
distinct_movies_with_embeddings = np.array(embeddings['ml_ids'])
embedding_has_rating = np.in1d(distinct_movies_with_embeddings, distinct_movies)
distinct_movies_with_embeddings = distinct_movies_with_embeddings[embedding_has_rating]
# drop movies without embedding
movie_has_embedding = np.in1d(distinct_movies, distinct_movies_with_embeddings)
rated_movies_with_embeddings = distinct_movies[movie_has_embedding]
# Filter ratings, users, movies
ratings = ratings[ratings['movie_id'].isin(rated_movies_with_embeddings)]
distinct_users_in_ratings = ratings['user_id'].unique()
distinct_movies_in_ratings = ratings['movie_id'].unique()
    # filtering users breaks everything, so don't
#users = users[users['user_id'].isin(distinct_users_in_ratings)]
movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]
print(f"movies included: {len(movies)}")
# align the plot data with the movies dataframe
# use_embeddings = np.in1d(np.array(embeddings['ml_ids']), movies['movie_id'].unique())
image_data = []
for r,movie in progress_bar(movies.iterrows(), total=len(movies)):
idx = embeddings['ml_ids'].index(movie.movie_id)
image_data.append(embeddings['embedding'][idx])
# Group the movie features into genres (a vector), year (a category), title (a string)
genre_columns = movies.columns.drop(['movie_id', 'title', 'year'])
movies[genre_columns] = movies[genre_columns].fillna(False).astype('bool')
movies_categorical = movies.drop('title', axis=1)
# Build graph
graph_builder = PandasGraphBuilder()
graph_builder.add_entities(users, 'user_id', 'user')
graph_builder.add_entities(movies_categorical, 'movie_id', 'movie')
graph_builder.add_binary_relations(ratings, 'user_id', 'movie_id', 'watched')
graph_builder.add_binary_relations(ratings, 'movie_id', 'user_id', 'watched-by')
g = graph_builder.build()
# Assign features.
# Note that variable-sized features such as texts or images are handled elsewhere.
g.nodes['user'].data['gender'] = torch.LongTensor(users['gender'].cat.codes.values)
g.nodes['user'].data['age'] = torch.LongTensor(users['age'].cat.codes.values)
g.nodes['user'].data['occupation'] = torch.LongTensor(users['occupation'].cat.codes.values)
g.nodes['user'].data['zip'] = torch.LongTensor(users['zip'].cat.codes.values)
g.nodes['movie'].data['year'] = torch.LongTensor(movies['year'].cat.codes.values)
g.nodes['movie'].data['genre'] = torch.FloatTensor(movies[genre_columns].values)
g.nodes['movie'].data['poster'] = torch.stack(image_data)
g.edges['watched'].data['rating'] = torch.LongTensor(ratings['rating'].values)
g.edges['watched'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)
g.edges['watched-by'].data['rating'] = torch.LongTensor(ratings['rating'].values)
g.edges['watched-by'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)
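The `.cat.codes` accessor used for the node features above maps a categorical column to dense integer codes, which is exactly what the `LongTensor` features store. A standalone illustration with made-up data:

```python
import pandas as pd

# Hypothetical example: string categories become sorted unique categories
# plus an integer code per row.
gender = pd.Series(['F', 'M', 'M', 'F']).astype('category')
print(gender.cat.categories.tolist())  # ['F', 'M']
print(gender.cat.codes.tolist())       # [0, 1, 1, 0]
```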
# Train-validation-test split
# This is a little bit tricky as we want to select the last interaction for test, and the
# second-to-last interaction for validation.
train_indices, val_indices, test_indices = train_test_split_by_time(ratings, 'timestamp', 'user_id')
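A minimal sketch of the per-user leave-one-out split that `train_test_split_by_time()` performs, using hypothetical toy data: the most recent interaction of each user goes to test, the second most recent to validation, and the rest to train.

```python
import pandas as pd

# Hypothetical toy interactions; the real helper works on the full ratings table.
df = pd.DataFrame({
    'user_id':   [1, 1, 1, 2, 2, 2],
    'movie_id':  [10, 11, 12, 10, 13, 14],
    'timestamp': [100, 200, 300, 50, 60, 70],
})
df = df.sort_values('timestamp')
# rank 0 = most recent interaction within each user
rank = df.groupby('user_id').cumcount(ascending=False)
test_idx = df[rank == 0].index
val_idx = df[rank == 1].index
train_idx = df[rank >= 2].index
```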
# Build the graph with training interactions only.
train_g = build_train_graph(g, train_indices, 'user', 'movie', 'watched', 'watched-by')
assert train_g.out_degrees(etype='watched').min() > 0
# Build the user-item sparse matrix for validation and test set.
val_matrix, test_matrix = build_val_test_matrix(g, val_indices, test_indices, 'user', 'movie', 'watched')
## Build title set
movie_textual_dataset = {'title': movies['title'].values}
# The model should build their own vocabulary and process the texts. Here is one example
# of using torchtext to pad and numericalize a batch of strings.
# field = torchtext.data.Field(include_lengths=True, lower=True, batch_first=True)
# examples = [torchtext.data.Example.fromlist([t], [('title', field)]) for t in texts]
# titleset = torchtext.data.Dataset(examples, [('title', field)])
# field.build_vocab(titleset.title, vectors='fasttext.simple.300d')
# token_ids, lengths = field.process([examples[0].title, examples[1].title])
## Dump the graph and the datasets
dataset = {
'train-graph': train_g,
'val-matrix': val_matrix,
'test-matrix': test_matrix,
'item-texts': movie_textual_dataset,
'item-images': None,
'user-type': 'user',
'item-type': 'movie',
'user-to-item-type': 'watched',
'item-to-user-type': 'watched-by',
'timestamp-edge-column': 'timestamp'}
with open(output_path, 'wb') as f:
pickle.dump(dataset, f)
process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_synopsis.pkl',
'ml_25m_links_imdb_synopsis_paraphrase-distilroberta-base-v1.pth.tar')
process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_plot.pkl',
'ml_25m_links_imdb_plot_paraphrase-distilroberta-base-v1.pth.tar')
process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_longest.pkl',
'ml_25m_links_imdb_longest_paraphrase-distilroberta-base-v1.pth.tar')
process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_full_plot.pkl',
'ml_25m_links_imdb_full_plot_paraphrase-distilroberta-base-v1.pth.tar')
process_movielens1m_posters('/content/ml-1m', '/content/ml_1m_backdrop_vgg16.pkl',
'tmdb_backdrops_w780_VGG_classifier.4.pth.tar')
process_movielens1m_posters('/content/ml-1m', '/content/ml_1m_backdrop_swin.pkl',
'tmdb_backdrops_w780_SwinTransformer_avgpool.pth.tar')
process_movielens1m_text('/content/ml-1m', '/content/ml_1m_plot_data.pkl',
'ml_25m_tmdb_plot_paraphrase-distilroberta-base-v1.pth.tar')
process_movielens1m_text('/content/ml-1m', '/content/ml_1m_only_id.pkl',
'ml_25m_links_imdb_longest_paraphrase-distilroberta-base-v1.pth.tar',
only_id=True) | _____no_output_____ | MIT | 2021/pinsage_movielens_robert_output_disabled.ipynb | harvard-visionlab/psy1406 |
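Each call above pickles a dataset dictionary to its output path; the file can later be restored with `pickle.load`. A minimal round-trip sketch, using a hypothetical sample and an in-memory buffer instead of a real output path:

```python
import io
import pickle

# Hypothetical stand-in for the dataset dict written by process_movielens1m_text
sample = {'user-type': 'user', 'item-type': 'movie'}
buf = io.BytesIO()
pickle.dump(sample, buf)   # same call as the real dump, just to a buffer
buf.seek(0)
restored = pickle.load(buf)
print(restored == sample)  # True
```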