markdown
stringlengths
0
1.02M
code
stringlengths
0
832k
output
stringlengths
0
1.02M
license
stringlengths
3
36
path
stringlengths
6
265
repo_name
stringlengths
6
127
Wednesday* Calculate three relevant evaluation metrics for each ML solution and baseline* Refine machine learning approaches and test additional hyperparameter settings
# Wednesday's code goes here
_____no_output_____
MIT
notebooks/holodec.ipynb
carlosenciso/ai4ess-hackathon-2020
Thursday * Evaluate two interpretation methods for your machine learning solution* Compare interpretation of baseline with your approach* Submit best results on project to leaderboard* Prepare 2 Google Slides on team's approach and submit them
# Thursday's code goes here
_____no_output_____
MIT
notebooks/holodec.ipynb
carlosenciso/ai4ess-hackathon-2020
Hubspot - Update followers from linkedin **Tags:** hubspot crm sales contact naas_drivers linkedin network scheduler naas Input Import library
from naas_drivers import hubspot, linkedin import naas import pandas as pd
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Enter Hubspot api key
auth_token = "YOUR_HUBSPOT_API_KEY"
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Get your cookiesHow to get your cookies ?
LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2 JSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Connect to Hubspot
hs = hubspot.connect(auth_token)
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Schedule your notebook everyday
naas.scheduler.add(cron="15 6 * * *")
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Get all contacts in Hubspot
properties_list = [ "hs_object_id", "firstname", "lastname", "linkedinbio", "linkedinconnections", ] hubspot_contacts = hs.contacts.get_all(properties_list).fillna("Not Defined") hubspot_contacts
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Model Filter to get linkedinconnections = "Not Defined" and "linkedinbio" = defined
df_to_update = hubspot_contacts.copy() # Filter on "Not defined" df_to_update = df_to_update[(df_to_update.linkedinbio != "Not Defined") & (df_to_update.linkedinconnections == "Not Defined")] # Limit to last 50 contacts df_to_update = df_to_update.sort_values(by="createdate", ascending=False)[:50].reset_index(drop=True) df_to_update
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Get followers from Linkedin
for _, row in df_to_update.iterrows(): linkedinbio = row.linkedinbio # Get followers df = linkedin.connect(LI_AT, JSESSIONID).profile.get_network(linkedinbio) linkedinconnections = df.loc[0, "FOLLOWERS_COUNT"] # Get linkedinbio df_to_update.loc[_, "linkedinconnections"] = linkedinconnections df_to_update
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Output Update followers in Hubspot
for _, row in df_to_update.iterrows(): # Init data data = {} # Get data hs_object_id = row.hs_object_id linkedinconnections = row.linkedinconnections # Update LK Bio if linkedinconnections != None: data = {"properties": {"linkedinconnections": linkedinconnections}} hs.contacts.patch(hs_object_id, data)
_____no_output_____
BSD-3-Clause
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
Reproducibility_Challenge_NeurIPS_2019This is a blog explains method proposed in the paper Competitive gradient descent [(Schäfer et al., 2019)](https://arxiv.org/abs/1905.12103). This has been written as a supplimentary to the reproducibility report for reproducibility challenge of NeurlIPS’19. The pdf format of the report is present [here](https://gopikishan14.github.io/Reproducibility_Challenge_NeurIPS_2019/) with this github [repository](https://github.com/GopiKishan14/Reproducibility_Challenge_NeurIPS_2019) as its source. Paper OverviewThe paper introduces a new algorithm for the numerical computation of Nash equilibria of competitive two-player games. The method is a natural generalization of gradient descent to the two-player setting where the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. It avoids oscillatory and divergent behaviors seen in alternating gradient descent. Convergence and stability properties of the method are robust to strong interactions between the players, without adapting the stepsize, which is not the case with previous methods. The ability to choose larger stepsizes furthermore allows the algorithm to achieve faster convergence, as measured by the number of model evaluations (See the [report](https://gopikishan14.github.io/Reproducibility_Challenge_NeurIPS_2019/) experiments section). BackgroundThe traditional optimization is concerned with a single agent trying to optimize a cost function. Itcan be seen as $\min_{x \in R^m} f(x)$ . The agent has a clear objective to find (“Good local”) minimum off. Gradeint Descent (and its varients) are reliable Algorithmic Baseline for this purpose.The paper talks about Competitive optimization. Competitive optimization extends this problemto the setting of multiple agents each trying to minimize their own cost function, which in generaldepends on the actions of all agents. The paper deals with the case of two such agents: \begin{align} &\min_{x \in R^m} f(x,y),\ \ \ \min_{y \in R^n} g(x,y) \end{align} for two functions $f,g: R^m \times R^n \longrightarrow R$.In single agent optimization, the solution of the problem consists of the minimizer of the cost function.In competitive optimization, the right definition of solution is less obvious, but often one isinterested in computing Nash– or strategic equilibria: Pairs of strategies, such that no player candecrease their costs by unilaterally changing their strategies. If f and g are not convex, finding aglobal Nash equilibrium is typically impossible and instead we hope to find a "good" local Nashequilibrium About the problem Gradient descent/ascent and the cycling problem:For differentiable objective functions, the most naive approach to solving\begin{align} \label{eqn:game} &\min_{x \in R^m} f(x,y),\ \ \ \min_{y \in R^n} g(x,y) \end{align}is gradient descent ascent (GDA), whereby both players independently change their strategy in the direction of steepest descent of their cost function.Unfortunately, this procedure features oscillatory or divergent behavior even in the simple case of a bilinear game ($f(x,y) = x^{\top} y = -g(x,y)$) Solution approachTo motivate this algorithm, authors remind us that gradient descent with stepsize $\eta$ applied to the function $f:R^m \longrightarrow R$ can be written as\begin{equation} x_{k+1} = argmin_{x \in R^m} (x^T - x_{k}^T) \nabla_x f(x_k) + \frac{1}{2\eta} \|x - x_{k}\|^2. 
\end{equation}This models a (single) player solving a local linear approximation of the (minimization) game, subject to a quadratic penalty that expresses her limited confidence in the global accuracy of the model. ```The natural generalization of this idea to the competitive case should then be given by the two players solving a local approximation of the true game, both subject to a quadratic penalty that expresses their limited confidence in the accuracy of the local approximation.```In order to implement this idea, we need to find the appropriate way to generalize the linear approximation in the single agent setting to the competitive setting. Authors suggest to use a **bilinear** approximation in the two-player setting.Since the bilinear approximation is the lowest order approximation that can capture some interaction between the two players, they argue that the natural generalization of gradient descent to competitive optimization is not GDA, but rather the update rule $(x_{k+1},y_{k+1}) = (x_k,y_k) + (x,y)$, where $(x,y)$ is a Nash equilibrium of **the game**.\begin{align} \begin{split} \label{eqn:localgame} \min_{x \in R^m} x^{\top} \nabla_x f &+ x^{\top} D_{xy}^2 f y + y^{\top} \nabla_y f + \frac{1}{2\eta} x^{\top} x \\ \min_{y \in R^n} y^{\top} \nabla_y g &+ y^{\top} D_{yx}^2 g x + x^{\top} \nabla_x g + \frac{1}{2\eta} y^{\top} y. \end{split}\end{align}Indeed, the (unique) Nash equilibrium of the above Game can be computed in closed form. Proposed method**Among all (possibly randomized) strategies with finite first moment, the only Nash equilibrium of `the Game` is given by\begin{align}\label{eqn:nash}&x = -\eta \left( Id - \eta^2 D_{xy}^2f D_{yx}^2 g \right)^{-1} \left( \nabla_{x} f - \eta D_{xy}^2f \nabla_{y} g \right) \\&y = -\eta \left( Id - \eta^2 D_{yx}^2g D_{xy}^2 f \right)^{-1} \left( \nabla_{y} g - \eta D_{yx}^2g \nabla_{x} f \right),\end{align}given that the matrix inverses in the above expression exist.** Note that the matrix inverses exist for all but one value of $\eta$, and for all $\eta$ in the case of a zero sum game.According to the above Theorem, the Game has exactly one optimal pair of strategies, which is deter-ministic. Thus, we can use these strategies as an update rule, generalizing the idea of local optimalityfrom the single– to the multi agent setting and obtaining the following Algorithm.`Competitive Gradient Descent (CGD)`\begin{align}for\ (0 <= k <= N-1)\\&x_{k+1} = x_{k} - \eta \left( Id - \eta^2 D_{xy}^2f D_{yx}^2 g \right)^{-1}\left( \nabla_{x} f - \eta D_{xy}^2f \nabla_{y} g \right)\\&y_{k+1} = y_{k} - \eta \left( Id - \eta^2 D_{yx}^2g D_{xy}^2 f \right)^{-1} \left( \nabla_{y} g - \eta D_{yx}^2g \nabla_{x} f \right)\\ return\ (x_{N},y_{N})\;\end{align}**What I think that they think that I think ... that they do**: Another game-theoretic interpretation of CGD follows from the observation that its update rule can be written as \begin{equation}\begin{pmatrix} \Delta x\\ \Delta y\end{pmatrix} = -\begin{pmatrix} Id & \eta D_{xy}^2 f \\ \eta D_{yx}^2 g & Id \end{pmatrix}^{-1}\begin{pmatrix} \nabla_{x} f\\ \nabla_{y} g\end{pmatrix}.\end{equation}Applying the expansion $ \lambda_{\max} (A) < 1 \Rightarrow \left( Id - A \right)^{-1} = \lim_{N \rightarrow \infty} \sum_{k=0}^{N} A^k$ to the above equation, we observe that: \\1. The first partial sum ($N = 0$) corresponds to the optimal strategy if the other player's strategy stays constant (GDA).2. 
The second partial sum ($N = 1$) corresponds to the optimal strategy if the other player thinks that the other player's strategy stays constant (LCGD).3. The third partial sum ($N = 2$) corresponds to the optimal strategy if the other player thinks that the other player thinks that the other player's strategy stays constant, and so forth, until the Nash equilibrium is recovered in the limit. ComparisonThese six algorithms amount to different subsets of the following four terms.\begin{align*} & \text{GDA: } &\Delta x = &&&- \nabla_x f&\\ & \text{LCGD: } &\Delta x = &&&- \nabla_x f& &-\eta D_{xy}^2 f \nabla_y f&\\ & \text{SGA: } &\Delta x = &&&- \nabla_x f& &- \gamma D_{xy}^2 f \nabla_y f& & & \\ & \text{ConOpt: } &\Delta x = &&&- \nabla_x f& &- \gamma D_{xy}^2 f \nabla_y f& &- \gamma D_{xx}^2 f \nabla_x f& \\ & \text{OGDA: } &\Delta x \approx &&&- \nabla_x f& &-\eta D_{xy}^2 f \nabla_y f& &+\eta D_{xx}^2 f \nabla_x f& \\ & \text{CGD: } &\Delta x = &\left(Id + \eta^2 D_{xy}^2 f D_{yx}^2 f\right)^{-1}&\bigl( &- \nabla_x f& &-\eta D_{xy}^2 f \nabla_y f& & & \bigr) \end{align*}1. The **gradient term** $-\nabla_{x}f$, $\nabla_{y}f$ which corresponds to the most immediate way in which the players can improve their cost.2. The **competitive term** $-D_{xy}f \nabla_yf$, $D_{yx}f \nabla_x f$ which can be interpreted either as anticipating the other player to use the naive (GDA) strategy, or as decreasing the other players influence (by decreasing their gradient).3. The **consensus term** $ \pm D_{xx}^2 \nabla_x f$, $\mp D_{yy}^2 \nabla_y f$ that determines whether the players prefer to decrease their gradient ($\pm = +$) or to increase it ($\pm = -$). The former corresponds the players seeking consensus, whereas the latter can be seen as the opposite of consensus. (It also corresponds to an approximate Newton's method. \footnote{Applying a damped and regularized Newton's method to the optimization problem of Player 1 would amount to choosing $x_{k+1} = x_{k} - \eta(Id + \eta D_{xx}^2)^{-1} f \nabla_x f \approx x_{k} - \eta( \nabla_xf - \eta D_{xx}^{2}f \nabla_x f)$, for $\|\eta D_{xx}^2f\| \ll 1$.)4. The **equilibrium term** $(Id + \eta^2 D_{xy}^2 D_{yx}^2 f)^{-1}$, $(Id + \eta^2 D_{yx}^2 D_{xy}^2 f)^{-1}$, which arises from the players solving for the Nash equilibrium. This term lets each player prefer strategies that are less vulnerable to the actions of the other player. Code ImplementationThe competitive gradeint descent algorithm contains gradient, competitive and equilibrium term. So, we need to efficiently calculat them. The equibrium term is a matrix inverse Computing Hessian vector productsThe algorithm requires products of the mixed Hessian $v \mapsto D_{xy}f v$ and $v \mapsto D_{yx}g v$, which we want to compute using automatic differentiation.Many AD frameworks, like Autograd (https://github.com/HIPS/autograd) and ForwardDiff(https://github.com/JuliaDiff/ForwardDiff.jl) together with ReverseDiff(https://github.com/JuliaDiff/ReverseDiff.jl) support this procedure. While the authors used the AD frameworks from Julia, I will be using Autograd from PyTorch (https://pytorch.org/docs/stable/autograd.html) Matrix inversion for the equilibrium termAuthors propose to use iterative methods to approximate the inverse-matrix vector products arising in the *equilibrium term*.Authors focus on zero-sum games, where the matrix is always symmetric positive definite, making the [conjugate gradient (CG)](https://en.wikipedia.org/wiki/Conjugate_gradient_method) algorithm the method of choice. 
They also suggest terminating the iterative solver after a given relative decrease of the residual is achieved ($\| M x - y \| \leq \epsilon \|x\|$ for a small parameter $\epsilon$, when solving the system $Mx = y$).Briefly, conjugate gradient (CG) iteratively solves the system $Mx = y$ for $x$ without calculating $M^{-1}$.
import numpy as np import matplotlib.pyplot as plt """ Simple python implemetation of CG tested on an example """ # Problem setup A = np.matrix([[3.0, 2.0], [2.0, 6.0]]) # the matrix A in : Ax = b b = np.matrix([[2.0], [-8.0]]) # we will use the convention that a vector is a column vector # solution approach x = np.matrix([[-2.0], [-2.0]]) steps = [(-2.0, -2.0)] # modify according to x i = 0 imax = 10 eps = 0.01 r = b - A * x d = r deltanew = r.T * r delta0 = deltanew while i < imax and deltanew > eps**2 * delta0: alpha = float(deltanew / float(d.T * (A * d))) x = x + alpha * d steps.append((x[0, 0], x[1, 0])) r = b - A * x deltaold = deltanew deltanew = r.T * r beta = float(deltanew / float(deltaold)) d = r + beta * d i += 1 print("Solution vector x* for Ax = b :") print(x) print("And the steps taken by algorithm : ", steps) plt.plot(steps)
Solution vector x* for Ax = b : [[ 2.] [-2.]] And the steps taken by algorithm : [(-2.0, -2.0), (0.08000000000000007, -0.6133333333333333), (2.0, -2.0)]
MIT
.ipynb_checkpoints/README-checkpoint.ipynb
GopiKishan14/Reproducibility_Challenge_NeurIPS_2019
Mining Function SpecificationsWhen testing a program, one not only needs to cover its several behaviors; one also needs to _check_ whether the result is as expected. In this chapter, we introduce a technique that allows us to _mine_ function specifications from a set of given executions, resulting in abstract and formal _descriptions_ of what the function expects and what it delivers. These so-called _dynamic invariants_ produce pre- and post-conditions over function arguments and variables from a set of executions. They are useful in a variety of contexts:* Dynamic invariants provide important information for [symbolic fuzzing](SymbolicFuzzer.ipynb), such as types and ranges of function arguments.* Dynamic invariants provide pre- and postconditions for formal program proofs and verification.* Dynamic invariants provide a large number of assertions that can check whether function behavior has changed* Checks provided by dynamic invariants can be very useful as _oracles_ for checking the effects of generated testsTraditionally, dynamic invariants are dependent on the executions they are derived from. However, when paired with comprehensive test generators, they quickly become very precise, as we show in this chapter. **Prerequisites*** You should be familiar with tracing program executions, as in the [chapter on coverage](Coverage.ipynb).* Later in this section, we access the internal _abstract syntax tree_ representations of Python programs and transform them, as in the [chapter on information flow](InformationFlow.ipynb).
import fuzzingbook_utils import Coverage import Intro_Testing
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.DynamicInvariants import ```and then make use of the following features.This chapter provides two classes that automatically extract specifications from a function and a set of inputs:* `TypeAnnotator` for _types_, and* `InvariantAnnotator` for _pre-_ and _postconditions_.Both work by _observing_ a function and its invocations within a `with` clause. Here is an example for the type annotator:```python>>> def sum2(a, b):>>> return a + b>>> with TypeAnnotator() as type_annotator:>>> sum2(1, 2)>>> sum2(-4, -5)>>> sum2(0, 0)```The `typed_functions()` method will return a representation of `sum2()` annotated with types observed during execution.```python>>> print(type_annotator.typed_functions())def sum2(a: int, b: int) ->int: return a + b```The invariant annotator works in a similar fashion:```python>>> with InvariantAnnotator() as inv_annotator:>>> sum2(1, 2)>>> sum2(-4, -5)>>> sum2(0, 0)```The `functions_with_invariants()` method will return a representation of `sum2()` annotated with inferred pre- and postconditions that all hold for the observed values.```python>>> print(inv_annotator.functions_with_invariants())@precondition(lambda a, b: isinstance(a, int))@precondition(lambda a, b: isinstance(b, int))@postcondition(lambda return_value, a, b: a == return_value - b)@postcondition(lambda return_value, a, b: b == return_value - a)@postcondition(lambda return_value, a, b: isinstance(return_value, int))@postcondition(lambda return_value, a, b: return_value == a + b)@postcondition(lambda return_value, a, b: return_value == b + a)def sum2(a, b): return a + b```Such type specifications and invariants can be helpful as _oracles_ (to detect deviations from a given set of runs) as well as for all kinds of _symbolic code analyses_. The chapter gives details on how to customize the properties checked for. Specifications and AssertionsWhen implementing a function or program, one usually works against a _specification_ – a set of documented requirements to be satisfied by the code. Such specifications can come in natural language. A formal specification, however, allows the computer to check whether the specification is satisfied.In the [introduction to testing](Intro_Testing.ipynb), we have seen how _preconditions_ and _postconditions_ can describe what a function does. Consider the following (simple) square root function:
def my_sqrt(x): assert x >= 0 # Precondition ... assert result * result == x # Postcondition return result
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The assertion `assert p` checks the condition `p`; if it does not hold, execution is aborted. Here, the actual body is not yet written; we use the assertions as a specification of what `my_sqrt()` _expects_, and what it _delivers_.The topmost assertion is the _precondition_, stating the requirements on the function arguments. The assertion at the end is the _postcondition_, stating the properties of the function result (including its relationship with the original arguments). Using these pre- and postconditions as a specification, we can now go and implement a square root function that satisfies them. Once implemented, we can have the assertions check at runtime whether `my_sqrt()` works as expected; a [symbolic](SymbolicFuzzer.ipynb) or [concolic](ConcolicFuzzer.ipynb) test generator will even specifically try to find inputs where the assertions do _not_ hold. (An assertion can be seen as a conditional branch towards aborting the execution, and any technique that tries to cover all code branches will also try to invalidate as many assertions as possible.) However, not every piece of code is developed with explicit specifications in the first place; let alone does most code comes with formal pre- and post-conditions. (Just take a look at the chapters in this book.) This is a pity: As Ken Thompson famously said, "Without specifications, there are no bugs – only surprises". It is also a problem for testing, since, of course, testing needs some specification to test against. This raises the interesting question: Can we somehow _retrofit_ existing code with "specifications" that properly describe their behavior, allowing developers to simply _check_ them rather than having to write them from scratch? This is what we do in this chapter. Why Generic Error Checking is Not EnoughBefore we go into _mining_ specifications, let us first discuss why it could be useful to _have_ them. As a motivating example, consider the full implementation of `my_sqrt()` from the [introduction to testing](Intro_Testing.ipynb):
import fuzzingbook_utils def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
`my_sqrt()` does not come with any functionality that would check types or values. Hence, it is easy for callers to make mistakes when calling `my_sqrt()`:
from ExpectError import ExpectError, ExpectTimeout with ExpectError(): my_sqrt("foo") with ExpectError(): x = my_sqrt(0.0)
Traceback (most recent call last): File "<ipython-input-8-262c66114b1c>", line 2, in <module> x = my_sqrt(0.0) File "<ipython-input-5-47185ad159a1>", line 7, in my_sqrt guess = (approx + x / approx) / 2 ZeroDivisionError: float division by zero (expected)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
At least, the Python system catches these errors at runtime. The following call, however, simply lets the function enter an infinite loop:
with ExpectTimeout(1): x = my_sqrt(-1.0)
Traceback (most recent call last): File "<ipython-input-9-b72078127dc0>", line 2, in <module> x = my_sqrt(-1.0) File "<ipython-input-5-47185ad159a1>", line 6, in my_sqrt approx = guess File "<ipython-input-5-47185ad159a1>", line 6, in my_sqrt approx = guess File "ExpectError.ipynb", line 59, in check_time TimeoutError (expected)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Our goal is to avoid such errors by _annotating_ functions with information that prevents errors like the above ones. The idea is to provide a _specification_ of expected properties – a specification that can then be checked at runtime or statically. \todo{Introduce the concept of *contract*.} Specifying and Checking Data TypesFor our Python code, one of the most important "specifications" we need is *types*. Python being a "dynamically" typed language means that all data types are determined at run time; the code itself does not explicitly state whether a variable is an integer, a string, an array, a dictionary – or whatever. As _writer_ of Python code, omitting explicit type declarations may save time (and allows for some fun hacks). It is not sure whether a lack of types helps in _reading_ and _understanding_ code for humans. For a _computer_ trying to analyze code, the lack of explicit types is detrimental. If, say, a constraint solver, sees `if x:` and cannot know whether `x` is supposed to be a number or a string, this introduces an _ambiguity_. Such ambiguities may multiply over the entire analysis in a combinatorial explosion – or in the analysis yielding an overly inaccurate result. Python 3.6 and later allows data types as _annotations_ to function arguments (actually, to all variables) and return values. We can, for instance, state that `my_sqrt()` is a function that accepts a floating-point value and returns one:
def my_sqrt_with_type_annotations(x: float) -> float: """Computes the square root of x, using the Newton-Raphson method""" return my_sqrt(x)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
By default, such annotations are ignored by the Python interpreter. Therefore, one can still call `my_sqrt_typed()` with a string as an argument and get the exact same result as above. However, one can make use of special _typechecking_ modules that would check types – _dynamically_ at runtime or _statically_ by analyzing the code without having to execute it. Runtime Type CheckingThe Python `enforce` package provides a function decorator that automatically inserts type-checking code that is executed at runtime. Here is how to use it:
import enforce @enforce.runtime_validation def my_sqrt_with_checked_type_annotations(x: float) -> float: """Computes the square root of x, using the Newton-Raphson method""" return my_sqrt(x)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Now, invoking `my_sqrt_with_checked_type_annotations()` raises an exception when invoked with a type dfferent from the one declared:
with ExpectError(): my_sqrt_with_checked_type_annotations(True)
Traceback (most recent call last): File "<ipython-input-13-68b73bd3f6ef>", line 2, in <module> my_sqrt_with_checked_type_annotations(True) File "/Users/zeller/Library/Python/3.6/site-packages/enforce/decorators.py", line 104, in universal _args, _kwargs, _ = enforcer.validate_inputs(parameters) File "/Users/zeller/Library/Python/3.6/site-packages/enforce/enforcers.py", line 86, in validate_inputs raise RuntimeTypeError(exception_text) enforce.exceptions.RuntimeTypeError: The following runtime type errors were encountered: Argument 'x' was not of type <class 'float'>. Actual type was bool. (expected)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Note that this error is not caught by the "untyped" variant, where passing a boolean value happily returns $\sqrt{1}$ as result.
my_sqrt(True)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
In Python (and other languages), the boolean values `True` and `False` can be implicitly converted to the integers 1 and 0; however, it is hard to think of a call to `sqrt()` where this would not be an error. Static Type CheckingType annotations can also be checked _statically_ – that is, without even running the code. Let us create a simple Python file consisting of the above `my_sqrt_typed()` definition and a bad invocation.
import inspect import tempfile f = tempfile.NamedTemporaryFile(mode='w', suffix='.py') f.name f.write(inspect.getsource(my_sqrt)) f.write('\n') f.write(inspect.getsource(my_sqrt_with_type_annotations)) f.write('\n') f.write("print(my_sqrt_with_type_annotations('123'))\n") f.flush()
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
These are the contents of our newly created Python file:
from fuzzingbook_utils import print_file print_file(f.name)
def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx def my_sqrt_with_type_annotations(x: float) -> float: """Computes the square root of x, using the Newton-Raphson method""" return my_sqrt(x) print(my_sqrt_with_type_annotations('123'))
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
[Mypy](http://mypy-lang.org) is a type checker for Python programs. As it checks types statically, types induce no overhead at runtime; plus, a static check can be faster than a lengthy series of tests with runtime type checking enabled. Let us see what `mypy` produces on the above file:
import subprocess result = subprocess.run(["mypy", "--strict", f.name], universal_newlines=True, stdout=subprocess.PIPE) del f # Delete temporary file print(result.stdout)
/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:1: error: Function is missing a type annotation /var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:12: warning: Returning Any from function declared to return "float" /var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:12: error: Call to untyped function "my_sqrt" in typed context /var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:14: error: Argument 1 to "my_sqrt_with_type_annotations" has incompatible type "str"; expected "float"
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
We see that `mypy` complains about untyped function definitions such as `my_sqrt()`; most important, however, it finds that the call to `my_sqrt_with_type_annotations()` in the last line has the wrong type. With `mypy`, we can achieve the same type safety with Python as in statically typed languages – provided that we as programmers also produce the necessary type annotations. Is there a simple way to obtain these? Mining Type SpecificationsOur first task will be to mine type annotations (as part of the code) from _values_ we observe at run time. These type annotations would be _mined_ from actual function executions, _learning_ from (normal) runs what the expected argument and return types should be. By observing a series of calls such as these, we could infer that both `x` and the return value are of type `float`:
y = my_sqrt(25.0) y y = my_sqrt(2.0) y
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
How can we mine types from executions? The answer is simple: 1. We _observe_ a function during execution2. We track the _types_ of its arguments3. We include these types as _annotations_ into the code.To do so, we can make use of Python's tracing facility we already observed in the [chapter on coverage](Coverage.ipynb). With every call to a function, we retrieve the arguments, their values, and their types. Tracking CallsTo observe argument types at runtime, we define a _tracer function_ that tracks the execution of `my_sqrt()`, checking its arguments and return values. The `Tracker` class is set to trace functions in a `with` block as follows:```pythonwith Tracker() as tracker: function_to_be_tracked(...)info = tracker.collected_information()```As in the [chapter on coverage](Coverage.ipynb), we use the `sys.settrace()` function to trace individual functions during execution. We turn on tracking when the `with` block starts; at this point, the `__enter__()` method is called. When execution of the `with` block ends, `__exit()__` is called.
import sys class Tracker(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} self._stack = [] def traceit(self): """Placeholder to be overloaded in subclasses""" pass # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The `traceit()` method does nothing yet; this is done in specialized subclasses. The `CallTracker` class implements a `traceit()` function that checks for function calls and returns:
class CallTracker(Tracker): def traceit(self, frame, event, arg): """Tracking function: Record all calls and all args""" if event == "call": self.trace_call(frame, event, arg) elif event == "return": self.trace_return(frame, event, arg) return self.traceit
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
`trace_call()` is called when a function is called; it retrieves the function name and current arguments, and saves them on a stack.
class CallTracker(CallTracker): def trace_call(self, frame, event, arg): """Save current function name and args on the stack""" code = frame.f_code function_name = code.co_name arguments = get_arguments(frame) self._stack.append((function_name, arguments)) if self._log: print(simple_call_string(function_name, arguments)) def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
When the function returns, `trace_return()` is called. We now also have the return value. We log the whole call with arguments and return value (if desired) and save it in our list of calls.
class CallTracker(CallTracker): def trace_return(self, frame, event, arg): """Get return value and store complete call with arguments and return value""" code = frame.f_code function_name = code.co_name return_value = arg # TODO: Could call get_arguments() here to also retrieve _final_ values of argument variables called_function_name, called_arguments = self._stack.pop() assert function_name == called_function_name if self._log: print(simple_call_string(function_name, called_arguments), "returns", return_value) self.add_call(function_name, called_arguments, return_value)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
`simple_call_string()` is a helper for logging that prints out calls in a user-friendly manner.
def simple_call_string(function_name, argument_list, return_value=None): """Return function_name(arg[0], arg[1], ...) as a string""" call = function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" if return_value is not None: call += " = " + repr(return_value) return call
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
`add_call()` saves the calls in a list; each function name has its own list.
class CallTracker(CallTracker): def add_call(self, function_name, arguments, return_value=None): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append((arguments, return_value))
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Using `calls()`, we can retrieve the list of calls, either for a given function, or for all functions.
class CallTracker(CallTracker): def calls(self, function_name=None): """Return list of calls for function_name, or a mapping function_name -> calls for all functions tracked""" if function_name is None: return self._calls return self._calls[function_name]
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Let us now put this to use. We turn on logging to track the individual calls and their return values:
with CallTracker(log=True) as tracker: y = my_sqrt(25) y = my_sqrt(2.0)
my_sqrt(x=25) my_sqrt(x=25) returns 5.0 my_sqrt(x=2.0) my_sqrt(x=2.0) returns 1.414213562373095 __exit__(self=<__main__.CallTracker object at 0x10fc937b8>, exc_type=None, exc_value=None, tb=None)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
After execution, we can retrieve the individual calls:
calls = tracker.calls('my_sqrt') calls
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Each call is pair (`argument_list`, `return_value`), where `argument_list` is a list of pairs (`parameter_name`, `value`).
my_sqrt_argument_list, my_sqrt_return_value = calls[0] simple_call_string('my_sqrt', my_sqrt_argument_list, my_sqrt_return_value)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
If the function does not return a value, `return_value` is `None`.
def hello(name): print("Hello,", name) with CallTracker() as tracker: hello("world") hello_calls = tracker.calls('hello') hello_calls hello_argument_list, hello_return_value = hello_calls[0] simple_call_string('hello', hello_argument_list, hello_return_value)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Getting TypesDespite what you may have read or heard, Python actually _is_ a typed language. It is just that it is _dynamically typed_ – types are used and checked only at runtime (rather than declared in the code, where they can be _statically checked_ at compile time). We can thus retrieve types of all values within Python:
type(4) type(2.0) type([4])
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
We can retrieve the type of the first argument to `my_sqrt()`:
parameter, value = my_sqrt_argument_list[0] parameter, type(value)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
as well as the type of the return value:
type(my_sqrt_return_value)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Hence, we see that (so far), `my_sqrt()` is a function taking (among others) integers and returning floats. We could declare `my_sqrt()` as:
def my_sqrt_annotated(x: int) -> float: return my_sqrt(x)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
This is a representation we could place in a static type checker, allowing to check whether calls to `my_sqrt()` actually pass a number. A dynamic type checker could run such checks at runtime. And of course, any [symbolic interpretation](SymbolicFuzzer.ipynb) will greatly profit from the additional annotations. By default, Python does not do anything with such annotations. However, tools can access annotations from functions and other objects:
my_sqrt_annotated.__annotations__
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
This is how run-time checkers access the annotations to check against. Accessing Function StructureOur plan is to annotate functions automatically, based on the types we have seen. To do so, we need a few modules that allow us to convert a function into a tree representation (called _abstract syntax trees_, or ASTs) and back; we already have seen these in the chapters on [concolic](ConcolicFuzzer.ipynb) and [symbolic](SymbolicFuzzer.ipynb) testing.
import ast import inspect import astor
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
We can get the source of a Python function using `inspect.getsource()`. (Note that this does not work for functions defined in other notebooks.)
my_sqrt_source = inspect.getsource(my_sqrt) my_sqrt_source
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
To view these in a visually pleasing form, our function `print_content(s, suffix)` formats and highlights the string `s` as if it were a file with ending `suffix`. We can thus view (and highlight) the source as if it were a Python file:
from fuzzingbook_utils import print_content print_content(my_sqrt_source, '.py')
def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Parsing this gives us an abstract syntax tree (AST) – a representation of the program in tree form.
my_sqrt_ast = ast.parse(my_sqrt_source)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
What does this AST look like? The helper functions `astor.dump_tree()` (textual output) and `showast.show_ast()` (graphical output with [showast](https://github.com/hchasestevens/show_ast)) allow us to inspect the structure of the tree. We see that the function starts as a `FunctionDef` with name and arguments, followed by a body, which is a list of statements of type `Expr` (the docstring), type `Assign` (assignments), `While` (while loop with its own body), and finally `Return`.
print(astor.dump_tree(my_sqrt_ast))
Module( body=[ FunctionDef(name='my_sqrt', args=arguments(args=[arg(arg='x', annotation=None)], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[ Expr(value=Str(s='Computes the square root of x, using the Newton-Raphson method')), Assign(targets=[Name(id='approx')], value=NameConstant(value=None)), Assign(targets=[Name(id='guess')], value=BinOp(left=Name(id='x'), op=Div, right=Num(n=2))), While( test=Compare(left=Name(id='approx'), ops=[NotEq], comparators=[Name(id='guess')]), body=[Assign(targets=[Name(id='approx')], value=Name(id='guess')), Assign(targets=[Name(id='guess')], value=BinOp( left=BinOp(left=Name(id='approx'), op=Add, right=BinOp(left=Name(id='x'), op=Div, right=Name(id='approx'))), op=Div, right=Num(n=2)))], orelse=[]), Return(value=Name(id='approx'))], decorator_list=[], returns=None)])
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Too much text for you? This graphical representation may make things simpler.
from fuzzingbook_utils import rich_output if rich_output(): import showast showast.show_ast(my_sqrt_ast)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The function `astor.to_source()` converts such a tree back into the more familiar textual Python code representation. Comments are gone, and there may be more parentheses than before, but the result has the same semantics:
print_content(astor.to_source(my_sqrt_ast), '.py')
def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Annotating Functions with Given TypesLet us now go and transform these trees ti add type annotations. We start with a helper function `parse_type(name)` which parses a type name into an AST.
def parse_type(name): class ValueVisitor(ast.NodeVisitor): def visit_Expr(self, node): self.value_node = node.value tree = ast.parse(name) name_visitor = ValueVisitor() name_visitor.visit(tree) return name_visitor.value_node print(astor.dump_tree(parse_type('int'))) print(astor.dump_tree(parse_type('[object]')))
List(elts=[Name(id='object')])
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
We now define a helper function that actually adds type annotations to a function AST. The `TypeTransformer` class builds on the Python standard library `ast.NodeTransformer` infrastructure. It would be called as```python TypeTransformer({'x': 'int'}, 'float').visit(ast)```to annotate the arguments of `my_sqrt()`: `x` with `int`, and the return type with `float`. The returned AST can then be unparsed, compiled or analyzed.
class TypeTransformer(ast.NodeTransformer): def __init__(self, argument_types, return_type=None): self.argument_types = argument_types self.return_type = return_type super().__init__()
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The core of `TypeTransformer` is the method `visit_FunctionDef()`, which is called for every function definition in the AST. Its argument `node` is the subtree of the function definition to be transformed. Our implementation accesses the individual arguments and invokes `annotate_args()` on them; it also sets the return type in the `returns` attribute of the node.
class TypeTransformer(TypeTransformer): def visit_FunctionDef(self, node): """Add annotation to function""" # Set argument types new_args = [] for arg in node.args.args: new_args.append(self.annotate_arg(arg)) new_arguments = ast.arguments( new_args, node.args.vararg, node.args.kwonlyargs, node.args.kw_defaults, node.args.kwarg, node.args.defaults ) # Set return type if self.return_type is not None: node.returns = parse_type(self.return_type) return ast.copy_location(ast.FunctionDef(node.name, new_arguments, node.body, node.decorator_list, node.returns), node)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Each argument gets its own annotation, taken from the types originally passed to the class:
class TypeTransformer(TypeTransformer): def annotate_arg(self, arg): """Add annotation to single function argument""" arg_name = arg.arg if arg_name in self.argument_types: arg.annotation = parse_type(self.argument_types[arg_name]) return arg
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Does this work? Let us annotate the AST from `my_sqrt()` with types for the arguments and return types:
new_ast = TypeTransformer({'x': 'int'}, 'float').visit(my_sqrt_ast)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
When we unparse the new AST, we see that the annotations actually are present:
print_content(astor.to_source(new_ast), '.py')
def my_sqrt(x: int) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Similarly, we can annotate the `hello()` function from above:
hello_source = inspect.getsource(hello) hello_ast = ast.parse(hello_source) new_ast = TypeTransformer({'name': 'str'}, 'None').visit(hello_ast) print_content(astor.to_source(new_ast), '.py')
def hello(name: str) ->None: print('Hello,', name)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Annotating Functions with Mined TypesLet us now annotate functions with types mined at runtime. We start with a simple function `type_string()` that determines the appropriate type of a given value (as a string):
def type_string(value): return type(value).__name__ type_string(4) type_string([])
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
For composite structures, `type_string()` does not examine element types; hence, the type of `[3]` is simply `list` instead of, say, `list[int]`. For now, `list` will do fine.
type_string([3])
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
`type_string()` will be used to infer the types of argument values found at runtime, as returned by `CallTracker.calls()`:
with CallTracker() as tracker: y = my_sqrt(25.0) y = my_sqrt(2.0) tracker.calls()
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The function `annotate_types()` takes such a list of calls and annotates each function listed:
def annotate_types(calls): annotated_functions = {} for function_name in calls: try: annotated_functions[function_name] = annotate_function_with_types(function_name, calls[function_name]) except KeyError: continue return annotated_functions
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
For each function, we get the source and its AST and then get to the actual annotation in `annotate_function_ast_with_types()`:
def annotate_function_with_types(function_name, function_calls): function = globals()[function_name] # May raise KeyError for internal functions function_code = inspect.getsource(function) function_ast = ast.parse(function_code) return annotate_function_ast_with_types(function_ast, function_calls)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The function `annotate_function_ast_with_types()` invokes the `TypeTransformer` with the calls seen, and for each call, iterate over the arguments, determine their types, and annotate the AST with these. The universal type `Any` is used when we encounter type conflicts, which we will discuss below.
from typing import Any def annotate_function_ast_with_types(function_ast, function_calls): parameter_types = {} return_type = None for calls_seen in function_calls: args, return_value = calls_seen if return_value is not None: if return_type is not None and return_type != type_string(return_value): return_type = 'Any' else: return_type = type_string(return_value) for parameter, value in args: try: different_type = parameter_types[parameter] != type_string(value) except KeyError: different_type = False if different_type: parameter_types[parameter] = 'Any' else: parameter_types[parameter] = type_string(value) annotated_function_ast = TypeTransformer(parameter_types, return_type).visit(function_ast) return annotated_function_ast
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Here is `my_sqrt()` annotated with the types recorded usign the tracker, above.
print_content(astor.to_source(annotate_types(tracker.calls())['my_sqrt']), '.py')
def my_sqrt(x: float) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
All-in-one AnnotationLet us bring all of this together in a single class `TypeAnnotator` that first tracks calls of functions and then allows to access the AST (and the source code form) of the tracked functions annotated with types. The method `typed_functions()` returns the annotated functions as a string; `typed_functions_ast()` returns their AST.
class TypeTracker(CallTracker): pass class TypeAnnotator(TypeTracker): def typed_functions_ast(self, function_name=None): if function_name is None: return annotate_types(self.calls()) return annotate_function_with_types(function_name, self.calls(function_name)) def typed_functions(self, function_name=None): if function_name is None: functions = '' for f_name in self.calls(): try: f_text = astor.to_source(self.typed_functions_ast(f_name)) except KeyError: f_text = '' functions += f_text return functions return astor.to_source(self.typed_functions_ast(function_name))
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Here is how to use `TypeAnnotator`. We first track a series of calls:
with TypeAnnotator() as annotator: y = my_sqrt(25.0) y = my_sqrt(2.0)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
After tracking, we can immediately retrieve an annotated version of the functions tracked:
print_content(annotator.typed_functions(), '.py')
def my_sqrt(x: float) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
This also works for multiple and diverse functions. One could go and implement an automatic type annotator for Python files based on the types seen during execution.
with TypeAnnotator() as annotator: hello('type annotations') y = my_sqrt(1.0) print_content(annotator.typed_functions(), '.py')
def hello(name: str): print('Hello,', name) def my_sqrt(x: float) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
A content as above could now be sent to a type checker, which would detect any type inconsistency between callers and callees. Likewise, type annotations such as the ones above greatly benefit symbolic code analysis (as in the chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb)), as they effectively constrain the set of values that arguments and variables can take. Multiple TypesLet us now resolve the role of the magic `Any` type in `annotate_function_ast_with_types()`. If we see multiple types for the same argument, we set its type to `Any`. For `my_sqrt()`, this makes sense, as its arguments can be integers as well as floats:
with CallTracker() as tracker: y = my_sqrt(25.0) y = my_sqrt(4) print_content(astor.to_source(annotate_types(tracker.calls())['my_sqrt']), '.py')
def my_sqrt(x: Any) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The following function `sum3()` can be called with floating-point numbers as arguments, resulting in the parameters getting a `float` type:
def sum3(a, b, c): return a + b + c with TypeAnnotator() as annotator: y = sum3(1.0, 2.0, 3.0) y print_content(annotator.typed_functions(), '.py')
def sum3(a: float, b: float, c: float) ->float: return a + b + c
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
If we call `sum3()` with integers, though, the arguments get an `int` type:
with TypeAnnotator() as annotator: y = sum3(1, 2, 3) y print_content(annotator.typed_functions(), '.py')
def sum3(a: int, b: int, c: int) ->int: return a + b + c
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
And we can also call `sum3()` with strings, giving the arguments a `str` type:
with TypeAnnotator() as annotator: y = sum3("one", "two", "three") y print_content(annotator.typed_functions(), '.py')
def sum3(a: str, b: str, c: str) ->str: return a + b + c
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
If we have multiple calls, but with different types, `TypeAnnotator()` will assign an `Any` type to both arguments and return values:
with TypeAnnotator() as annotator: y = sum3(1, 2, 3) y = sum3("one", "two", "three") typed_sum3_def = annotator.typed_functions('sum3') print_content(typed_sum3_def, '.py')
def sum3(a: Any, b: Any, c: Any) ->Any: return a + b + c
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
A type `Any` makes it explicit that an object can, indeed, have any type; it will not be typechecked at runtime or statically. To some extent, this defeats the power of type checking; but it also preserves some of the type flexibility that many Python programmers enjoy. Besides `Any`, the `typing` module supports several additional ways to define ambiguous types; we will keep this in mind for a later exercise. Specifying and Checking InvariantsBesides basic data types. we can check several further properties from arguments. We can, for instance, whether an argument can be negative, zero, or positive; or that one argument should be smaller than the second; or that the result should be the sum of two arguments – properties that cannot be expressed in a (Python) type.Such properties are called *invariants*, as they hold across all invocations of a function. Specifically, invariants come as _pre_- and _postconditions_ – conditions that always hold at the beginning and at the end of a function. (There are also _data_ and _object_ invariants that express always-holding properties over the state of data or objects, but we do not consider these in this book.) Annotating Functions with Pre- and PostconditionsThe classical means to specify pre- and postconditions is via _assertions_, which we have introduced in the [chapter on testing](Intro_Testing.ipynb). A precondition checks whether the arguments to a function satisfy the expected properties; a postcondition does the same for the result. We can express and check both using assertions as follows:
def my_sqrt_with_invariants(x): assert x >= 0 # Precondition ... assert result * result == x # Postcondition return result
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
A nicer way, however, is to syntactically separate invariants from the function at hand. Using appropriate decorators, we could specify pre- and postconditions as follows:```python@precondition lambda x: x >= 0@postcondition lambda return_value, x: return_value * return_value == xdef my_sqrt_with_invariants(x): normal code without assertions ...```The decorators `@precondition` and `@postcondition` would run the given functions (specified as anonymous `lambda` functions) before and after the decorated function, respectively. If the functions return `False`, the condition is violated. `@precondition` gets the function arguments as arguments; `@postcondition` additionally gets the return value as first argument. It turns out that implementing such decorators is not hard at all. Our implementation builds on a [code snippet from StackOverflow](https://stackoverflow.com/questions/12151182/python-precondition-postcondition-for-member-function-how):
import functools

def condition(precondition=None, postcondition=None):
    def decorator(func):
        @functools.wraps(func)  # preserves name, docstring, etc.
        def wrapper(*args, **kwargs):
            if precondition is not None:
                assert precondition(*args, **kwargs), "Precondition violated"

            retval = func(*args, **kwargs)  # call original function or method
            if postcondition is not None:
                assert postcondition(retval, *args, **kwargs), "Postcondition violated"

            return retval
        return wrapper
    return decorator

def precondition(check):
    return condition(precondition=check)

def postcondition(check):
    return condition(postcondition=check)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
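One implementation detail worth highlighting: `functools.wraps()` keeps the decorated function's metadata intact, which matters for debugging and introspection. A quick illustration (the function `positive_only` is a hypothetical example of ours):

```python
@precondition(lambda x: x > 0)
def positive_only(x):
    """Return x unchanged; x must be positive."""
    return x

print(positive_only.__name__)  # 'positive_only' -- preserved by functools.wraps
print(positive_only.__doc__)   # the original docstring, likewise preserved
```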
With these, we can now start decorating `my_sqrt()`:
@precondition(lambda x: x > 0)
def my_sqrt_with_precondition(x):
    return my_sqrt(x)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
This catches arguments violating the precondition:
with ExpectError():
    my_sqrt_with_precondition(-1.0)
Traceback (most recent call last):
  File "<ipython-input-102-c02dc99b6c54>", line 2, in <module>
    my_sqrt_with_precondition(-1.0)
  File "<ipython-input-100-39ada1fd0b7e>", line 6, in wrapper
    assert precondition(*args, **kwargs), "Precondition violated"
AssertionError: Precondition violated (expected)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Likewise, we can provide a postcondition:
EPSILON = 1e-5

@postcondition(lambda ret, x: ret * ret - x < EPSILON)
def my_sqrt_with_postcondition(x):
    return my_sqrt(x)

y = my_sqrt_with_postcondition(2.0)
y
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
If we have a buggy implementation of $\sqrt{x}$, this gets caught quickly:
@postcondition(lambda ret, x: ret * ret - x < EPSILON)
def buggy_my_sqrt_with_postcondition(x):
    return my_sqrt(x) + 0.1

with ExpectError():
    y = buggy_my_sqrt_with_postcondition(2.0)
Traceback (most recent call last):
  File "<ipython-input-107-38a36260c5b6>", line 2, in <module>
    y = buggy_my_sqrt_with_postcondition(2.0)
  File "<ipython-input-100-39ada1fd0b7e>", line 10, in wrapper
    assert postcondition(retval, *args, **kwargs), "Postcondition violated"
AssertionError: Postcondition violated (expected)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
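Both decorators can also be stacked, realizing the earlier sketch. Here is a hedged example (assuming `my_sqrt()` and `EPSILON` from above; the name `my_sqrt_checked` is ours, and we use `abs()` so the tolerance is checked in both directions):

```python
@precondition(lambda x: x > 0)
@postcondition(lambda ret, x: abs(ret * ret - x) < EPSILON)
def my_sqrt_checked(x):
    return my_sqrt(x)

y = my_sqrt_checked(2.0)  # passes both the pre- and the postcondition
```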
While checking pre- and postconditions is a great way to catch errors, specifying them can be cumbersome. Let us try to see whether we can (again) _mine_ some of them.

## Mining Invariants

To _mine_ invariants, we can use the same tracking functionality as before; instead of saving values for individual variables, though, we now check whether the values satisfy specific _properties_ or not. For instance, if all values of `x` seen satisfy the condition `x > 0`, then we make `x > 0` an invariant of the function. If we see positive, zero, and negative values of `x`, though, then there is no property of `x` left to talk about.

The general idea is thus:

1. Check all variable values observed against a set of predefined properties; and
2. Keep only those properties that hold for all runs observed.

### Defining Properties

What precisely do we mean by properties? Here is a small collection of value properties that would frequently be used in invariants. All these properties would be evaluated with the _metavariables_ `X`, `Y`, and `Z` (actually, any upper-case identifier) being replaced with the names of function parameters:
INVARIANT_PROPERTIES = [ "X < 0", "X <= 0", "X > 0", "X >= 0", "X == 0", "X != 0", ]
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
When `my_sqrt(x)` is called as, say `my_sqrt(5.0)`, we see that `x = 5.0` holds. The above properties would then all be checked for `x`. Only the properties `X > 0`, `X >= 0`, and `X != 0` hold for the call seen; and hence `x > 0`, `x >= 0`, and `x != 0` would make potential preconditions for `my_sqrt(x)`.

We can check for many more properties, such as relations between two arguments:
INVARIANT_PROPERTIES += [ "X == Y", "X > Y", "X < Y", "X >= Y", "X <= Y", ]
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
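To see this concretely for the single-argument properties above, here is a quick standalone check for `x = 5.0` (a plain `eval()` illustration of ours, not the chapter's machinery):

```python
x = 5.0
for prop in ["X < 0", "X <= 0", "X > 0", "X >= 0", "X == 0", "X != 0"]:
    instantiated = prop.replace("X", "x")
    # only x > 0, x >= 0, and x != 0 evaluate to True here
    print(instantiated, "->", eval(instantiated))
```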
Types can also be checked using properties. For any function parameter `X`, only one of these will hold:
INVARIANT_PROPERTIES += [ "isinstance(X, bool)", "isinstance(X, int)", "isinstance(X, float)", "isinstance(X, list)", "isinstance(X, dict)", ]
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
We can check for arithmetic properties:
INVARIANT_PROPERTIES += [ "X == Y + Z", "X == Y * Z", "X == Y - Z", "X == Y / Z", ]
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Here are relations over three values, using chained comparisons – a Python specialty:
INVARIANT_PROPERTIES += [ "X < Y < Z", "X <= Y <= Z", "X > Y > Z", "X >= Y >= Z", ]
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Finally, we can also check for list or string properties. Again, this is just a tiny selection.
INVARIANT_PROPERTIES += [ "X == len(Y)", "X == sum(Y)", "X.startswith(Y)", ]
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
### Extracting Meta-Variables

Let us first introduce a few _helper functions_ before we can get to the actual mining. `metavars()` extracts the set of meta-variables (`X`, `Y`, `Z`, etc.) from a property. To this end, we parse the property as a Python expression and then visit the identifiers.
import ast  # for parsing properties as Python expressions

def metavars(prop):
    metavar_list = []

    class ArgVisitor(ast.NodeVisitor):
        def visit_Name(self, node):
            if node.id.isupper():
                metavar_list.append(node.id)

    ArgVisitor().visit(ast.parse(prop))
    return metavar_list

assert metavars("X < 0") == ['X']
assert metavars("X.startswith(Y)") == ['X', 'Y']
assert metavars("isinstance(X, str)") == ['X']
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
### Instantiating Properties

To produce a property as invariant, we need to be able to _instantiate_ it with variable names. The instantiation of `X > 0` with `X` being instantiated to `a`, for instance, gets us `a > 0`. To this end, the function `instantiate_prop()` takes a property and a collection of variable names and instantiates the meta-variables left-to-right with the corresponding variable names in the collection.
import astor  # for converting ASTs back into source code

def instantiate_prop_ast(prop, var_names):
    class NameTransformer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id not in mapping:
                return node
            return ast.Name(id=mapping[node.id], ctx=ast.Load())

    meta_variables = metavars(prop)
    assert len(meta_variables) == len(var_names)

    mapping = {}
    for i in range(0, len(meta_variables)):
        mapping[meta_variables[i]] = var_names[i]

    prop_ast = ast.parse(prop, mode='eval')
    new_ast = NameTransformer().visit(prop_ast)

    return new_ast

def instantiate_prop(prop, var_names):
    prop_ast = instantiate_prop_ast(prop, var_names)
    prop_text = astor.to_source(prop_ast).strip()
    while prop_text.startswith('(') and prop_text.endswith(')'):
        prop_text = prop_text[1:-1]
    return prop_text

assert instantiate_prop("X > Y", ['a', 'b']) == 'a > b'
assert instantiate_prop("X.startswith(Y)", ['x', 'y']) == 'x.startswith(y)'
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
### Evaluating Properties

To actually _evaluate_ properties, we do not need to instantiate them. Instead, we simply convert them into a boolean function, using `lambda`:
def prop_function_text(prop):
    return "lambda " + ", ".join(metavars(prop)) + ": " + prop

def prop_function(prop):
    return eval(prop_function_text(prop))
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Here is a simple example:
prop_function_text("X > Y") p = prop_function("X > Y") p(100, 1) p(1, 100)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
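The same mechanism extends to properties over multiple metavariables; a quick illustration (expected results shown as comments):

```python
q = prop_function("X == Y + Z")
print(q(5, 2, 3))  # True:  5 == 2 + 3
print(q(5, 2, 2))  # False: 5 != 2 + 2
```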
### Checking Invariants

To extract invariants from an execution, we need to check them on all possible instantiations of arguments. If the function to be checked has two arguments `a` and `b`, we instantiate the property `X < Y` both as `a < b` and as `b < a` and check each of them. To get all combinations, we use the Python `permutations()` function:
import itertools

for combination in itertools.permutations([1.0, 2.0, 3.0], 2):
    print(combination)
(1.0, 2.0)
(1.0, 3.0)
(2.0, 1.0)
(2.0, 3.0)
(3.0, 1.0)
(3.0, 2.0)
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
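Note that `permutations()` rather than `combinations()` is essential here: an asymmetric property such as `X < Y` must be tried in both argument orders. A small contrast (illustration only):

```python
import itertools

print(list(itertools.combinations([1, 2], 2)))  # [(1, 2)] -- one order only
print(list(itertools.permutations([1, 2], 2)))  # [(1, 2), (2, 1)] -- both orders
```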
The function `true_property_instantiations()` takes a property and a list of tuples (`var_name`, `value`). It then produces all instantiations of the property with the given values and returns those that evaluate to True.
def true_property_instantiations(prop, vars_and_values, log=False):
    instantiations = set()
    p = prop_function(prop)

    len_metavars = len(metavars(prop))
    for combination in itertools.permutations(vars_and_values, len_metavars):
        args = [value for var_name, value in combination]
        var_names = [var_name for var_name, value in combination]

        try:
            result = p(*args)
        except:
            result = None

        if log:
            print(prop, combination, result)
        if result:
            instantiations.add((prop, tuple(var_names)))

    return instantiations
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
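The bare `except` in the loop is deliberate: a property such as `X.startswith(Y)` simply raises an exception on numeric arguments, and a raising property counts as not holding. For instance (an illustration of ours):

```python
invs = true_property_instantiations("X.startswith(Y)", [('x', -1), ('y', 1)])
print(invs)  # set() -- startswith() raises on ints, so no instantiation holds
```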
Here is an example. If `x == -1` and `y == 1`, the property `X < Y` holds for `x < y`, but not for `y < x`:
invs = true_property_instantiations("X < Y", [('x', -1), ('y', 1)], log=True) invs
X < Y (('x', -1), ('y', 1)) True
X < Y (('y', 1), ('x', -1)) False
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Instantiating the returned properties yields the short, readable form:
for prop, var_names in invs:
    print(instantiate_prop(prop, var_names))
x < y
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Likewise, with values for `x` and `y` as above, the property `X < 0` only holds for `x`, but not for `y`:
invs = true_property_instantiations("X < 0", [('x', -1), ('y', 1)], log=True) for prop, var_names in invs: print(instantiate_prop(prop, var_names))
x < 0
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
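String-valued arguments work just as well; a quick illustration (the variable names `s` and `t` are ours):

```python
invs = true_property_instantiations("X.startswith(Y)",
                                    [('s', "hello"), ('t', "he")])
for prop, var_names in invs:
    print(instantiate_prop(prop, var_names))  # s.startswith(t)
```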
### Extracting Invariants

Let us now run the above invariant extraction on function arguments and return values as observed during a function execution. To this end, we extend the `CallTracker` class into an `InvariantTracker` class, which automatically computes invariants for all functions and all calls observed during tracking. By default, an `InvariantTracker` uses the properties as defined above; however, one can specify alternate sets of properties.
class InvariantTracker(CallTracker):
    def __init__(self, props=None, **kwargs):
        if props is None:
            props = INVARIANT_PROPERTIES

        self.props = props
        super().__init__(**kwargs)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The key method of the `InvariantTracker` is the `invariants()` method. This iterates over the calls observed and checks which properties hold. Only the intersection of properties – that is, the set of properties that hold for all calls – is preserved, and eventually returned. The special variable `return_value` is set to hold the return value.
RETURN_VALUE = 'return_value'

class InvariantTracker(InvariantTracker):
    def invariants(self, function_name=None):
        if function_name is None:
            return {function_name: self.invariants(function_name)
                    for function_name in self.calls()}

        invariants = None
        for variables, return_value in self.calls(function_name):
            vars_and_values = variables + [(RETURN_VALUE, return_value)]

            s = set()
            for prop in self.props:
                s |= true_property_instantiations(prop, vars_and_values,
                                                  self._log)

            if invariants is None:
                invariants = s
            else:
                invariants &= s

        return invariants
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Here's an example of how to use `invariants()`. We run the tracker on a small set of calls.
with InvariantTracker() as tracker:
    y = my_sqrt(25.0)
    y = my_sqrt(10.0)

tracker.calls()
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
The `invariants()` method produces a set of properties that hold for the observed runs, together with their instantiations over function arguments.
invs = tracker.invariants('my_sqrt')
invs
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
As before, the actual instantiations are easier to read:
def pretty_invariants(invariants):
    props = []
    for (prop, var_names) in invariants:
        props.append(instantiate_prop(prop, var_names))
    return sorted(props)

pretty_invariants(invs)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
We see that both `x` and the return value have a `float` type. We also see that both are always greater than zero. These are properties that may make useful pre- and postconditions, notably for symbolic analysis.

However, there's also an invariant which does _not_ universally hold, namely `return_value <= x`, as the following example shows:
my_sqrt(0.01)
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
Clearly, `my_sqrt(0.01)` returns 0.1, and 0.1 > 0.01 – so `return_value <= x` does not hold for this call. This is a case of us not learning from sufficiently diverse inputs. As soon as we have a call including `x = 0.01`, though, the invariant `return_value <= x` is eliminated:
with InvariantTracker() as tracker:
    y = my_sqrt(25.0)
    y = my_sqrt(10.0)
    y = my_sqrt(0.01)

pretty_invariants(tracker.invariants('my_sqrt'))
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
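One inexpensive way to obtain such diversity is to feed in random inputs; a hedged sketch (using plain `random.uniform()` rather than the book's test generators; the lower bound 0.01 keeps `my_sqrt()` well-defined):

```python
import random

with InvariantTracker() as tracker:
    for _ in range(100):
        y = my_sqrt(random.uniform(0.01, 100.0))

# With inputs on both sides of 1.0, spurious invariants such as
# return_value <= x can no longer survive the intersection.
pretty_invariants(tracker.invariants('my_sqrt'))
```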
We will discuss later how to ensure sufficient diversity in inputs. (Hint: This involves test generation.)

Let us try out our invariant tracker on `sum3()`. We see that all types are well-defined; the property that all arguments are non-zero, however, is specific to the calls observed.
with InvariantTracker() as tracker:
    y = sum3(1, 2, 3)
    y = sum3(-4, -5, -6)

pretty_invariants(tracker.invariants('sum3'))
_____no_output_____
MIT
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
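Mined invariants can be fed straight back into the runtime checks from the first part of this section; a hedged sketch (assuming `precondition()` and `sum3()` from above; `guarded_sum3` is a hypothetical name, and `'a != 0'` is one of the invariants mined here):

```python
mined = "a != 0"                          # an invariant mined above
check = eval("lambda a, b, c: " + mined)  # turn it into a checkable predicate

@precondition(check)
def guarded_sum3(a, b, c):
    return sum3(a, b, c)

print(guarded_sum3(1, 2, 3))  # 6 -- the mined precondition holds
```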